STE WILLIAMS

Mirai Authors Escape Jail Time – But Here Are 7 Other Criminal Hackers Who Didn’t

Courts are getting tougher on cybercrooks than some might realize.

Image Source: United States Courts

Three individuals who admitted responsibility for creating and operating the highly disruptive Mirai botnet of 2016 have escaped jail time. Instead, they will now assist US law enforcement on cybersecurity matters.

On Sept. 18, a federal judge in Alaska sentenced Paras Jha, 22, of Fanwood, NJ; Josiah White, 21, of Washington, Pa.; and Dalton Norman, 22, of Metairie, La., to five years of probation and 2,500 hours of community service. The three also have to pay $127,000 as restitution for their crime.

Chief US District Judge Timothy Burgess cited the extraordinary cooperation the three individuals had extended to the FBI in several other major and ongoing cybercrime investigations as a reason for his “substantial departure” from sentencing guidelines.

The trio is certainly not the first to get off with what some would consider a light sentence, especially considering how disruptive Mirai was. But for every Jha, White, and Norman, there are many others who have ended up with substantial jail time. Here are seven criminal hackers who did not fare as well in court.

 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/mirai-authors-escape-jail-time---but-here-are-7-other-criminal-hackers-who-didnt-/d/d-id/1332873?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Owning Security in the Industrial Internet of Things

Why IIoT leaders from both information technology and line-of-business operations need to join forces to develop robust cybersecurity techniques that go beyond reflexive patching.

The Industrial Internet of Things (IIoT) — within companies and across the entire global IIoT ecosystem — is an intricately intertwined and negotiated merger of information technology and operational technology, or OT. OT systems are not only business-critical, they can be nation-critical, or life-and-death critical.

Every IIoT customer I speak to wants the strongest possible security. But who inside the customer’s organization will execute and own the process? In meeting after meeting with customers building IIoT capabilities, I encounter a natural but sometimes tense uncertainty between IT and OT/line-of-business (LoB) professionals when it comes to IIoT security. That uncertainty is itself a security vulnerability because it delays essential security deployment.

A recent Forrester survey of IT and OT/LoB leaders showed IT and OT managers evenly divided on whether IT or OT is responsible for security. As an alarming result of this standoff, reports Forrester, an unacceptably large number of companies — 59% — are willing to “tolerate medium-to-high risk in relation to IoT security.” I believe that’s wrong as well as dangerous.

Consider the differences between enterprise IT and OT:

  • Availability: IT considers 99.9% uptime acceptable, while OT requires 99.999% uptime. The difference translates to roughly 8.76 hours versus 5.25 minutes of annual downtime (a quick calculation follows this list).
  • System life: IT systems are refreshed, on average, every three to five years. OT systems, by contrast, last 10 to 15 years.
  • Patching: IT patching/updates can be done whenever updates are available, but OT patching/updates risk interrupting strategic, revenue-generating industrial operations.
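
The uptime figures above are simple arithmetic. A minimal Python sketch (illustrative only, and not part of the original article) turns the percentages into annual downtime:

```python
# Convert an uptime percentage into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_hours(uptime_pct: float) -> float:
    """Hours of downtime permitted per year at a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.9, 99.999):
    hours = annual_downtime_hours(pct)
    print(f"{pct}% uptime -> {hours:.2f} hours/year ({hours * 60:.1f} minutes)")

# 99.9%   uptime -> 8.76 hours/year (525.6 minutes)
# 99.999% uptime -> 0.09 hours/year (5.3 minutes)
```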

There are many other differences between IT and OT — such as varying approaches to the cloud — but all differences are subsumed by the universal need for the most resilient IIoT security available.

An approach I favor is helping industrial companies use the hard-won, long-fought lessons of IT to leapfrog to an advanced state of IIoT security, security that is expertly architected and deployed to meet OT’s differentiated requirements. If one thinks of OT systems as another form of data center — the heavily protected core of enterprise IT — there are some promising ideas one can adapt from decades of IT experience to provide new levels of IIoT security while honoring the specific needs of OT.

The Patching Conundrum
However, when it comes to patching (the process of updating, fixing, or improving a software program), a direct port of everyday IT practice to OT is not always feasible. Here, IT and OT speak different languages. For that reason, it is essential that leaders of the IIoT industry, on both the IT and OT sides, join together, think deeply, and work with greater imagination to develop robust cybersecurity techniques that are more agile and effective than reflexive patching.

The bottom line for OT: patches can create problems and sometimes make things worse, as we saw with the Meltdown and Spectre CPU vulnerabilities, whose early patches degraded system performance.

The hard truth is that the soft underbelly of the modern industrial economy is largely old OT machines. In the world of IT, if something is infected, the first instinct is to shut it down fast, and then patch it (or replace it). But in OT, often the opposite is true: keep it up and running. Some crucial OT systems have been on factory floors for 15 to 25 years or more and can’t be easily taken down and patched, even if an appropriate patch were available, because those systems may not have enough memory or CPU bandwidth to accept patches.

Finally, there is the relative complexity and fragility of OT systems compared with IT systems. IT systems can be taken down, patched, and started up again to deliver identical service. IT can run racks of identical servers, and if one goes down or burns out, the next one in line takes over without a hitch. OT systems, by contrast, are often highly orchestrated combinations of software and hardware that have “personalities.” Even when companies can take machines down for patching, the results can be unpredictable once they come back up: it is no longer quite the same system, and the patch can introduce wild cards that propagate through other elements of the system. In OT, unpredictability is not acceptable.

First in a series of articles.


 

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Satish joined San Jose-based ABB in February 2017 as chief security officer and Group VP, architecture and analytics, ABB Ability™, responsible for the security of all products, services and cybersecurity services. Satish brings to this position a background in computer … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/owning-security-in-the-industrial-internet-of-things/a/d-id/1332876?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

A ‘Cyber Resilience’ Report Card for the Public Sector

Government agencies are making great strides in defending themselves against cyberattacks, according to new research from Accenture. But technology alone won’t solve the problem.

Cyberattacks against government agencies are increasing in frequency, sophistication, and severity, demanding heightened vigilance. New research from Accenture finds that public service organizations experience, on average, 31 successful security breaches each year, often resulting in significant damage or the loss of high-value assets. And it only takes one successful cyberattack to create widespread damage, as demonstrated by the recent WannaCry and Petya malware attacks. Thankfully, government organizations are demonstrating success in defending themselves against attacks.

Our survey of 4,600 security practitioners (including 400 from government agencies) across 15 countries finds that government agencies today are preventing the majority (87%) of focused cyberattacks and that most understand the benefits of digital technologies for organizational and data security. Most respondents (83%) agree that new technologies such as artificial intelligence and machine learning are essential to achieving a sustainable level of cyber resilience, and nearly two-thirds (62%) plan to continue investing in these technologies.

While building capacity for wise security investments is a priority for public service organizations, technology alone will not be sufficient to defend against cyberattacks. Government agencies must look beyond their four walls for help, while also taking steps to identify and address internal threats.

Security Teams Are Finding Breaches Faster; Collaboration Is Critical
Government agencies are detecting security breaches faster than ever before. More than half (52%) of survey respondents say it takes them one week or less to detect a security breach. However, despite faster detection times, security teams are finding less than two-thirds (63%) of all breaches. To improve detection rates, teams must develop strategic and tactical threat intelligence tailored to their organizations, which will allow them to identify security risks and constantly monitor for anomalous activity at the most likely points of attack.

When asked how they learn about attacks that their internal security teams are unable to detect, government respondents indicate that most attacks are identified with the assistance of law enforcement, white-hat hackers, peers, or competitors. These findings underscore the importance of cross-sector collaboration.

Agencies Are Addressing Cybersecurity from the Inside Out
While cyberattacks by external actors continue to pose a serious threat, organizations should not ignore the enemy within. According to survey respondents, nearly three-quarters (72%) of the most damaging security breaches are the result of actions undertaken by internal actors such as employees. Many of these breaches result in sensitive information being published online accidentally or shared with unauthorized third parties. A previous survey of healthcare employees in North America found that nearly one in five (18%) say they would be willing to sell confidential data to unauthorized parties.

Organizations must take a proactive approach to technology deployments, while reinforcing security behaviors and enhancing existing security protocols to help employees cope with increasingly sophisticated cyberattacks. For example, strengthening email controls and passwords as well as utilizing stronger spam filters can prevent malicious correspondence from reaching employees and reduce the likelihood that they fall victim to phishing scams.

Organizations can build a strong security foundation by identifying high-value assets and hardening the security around them, and by ensuring high levels of security are deployed across the entire organization — not just around core corporate functions. Organizations must also pressure test their system’s resilience by behaving like an attacker so they can better understand their vulnerabilities.

Cybersecurity Investments Continue to Grow, but in a New Direction
The heightened state of cyber awareness within government is also helping to fuel investments. A majority (87%) of public sector respondents say their organization plans to increase security-related spending over the next three years. When asked which capabilities are most needed, respondents most often cite cyber-threat analytics (44%) and security monitoring (46%).

However, organizational spending patterns are shifting. The study identifies a growing focus on technologies that protect employee privacy (33%) and enhance customer security (32%). Spending in these areas will likely increase as new legislation emerges across the world to protect citizen data, adding further requirements on government organizations’ security practices.

Ideally, all cybersecurity investments should be overseen by a designated chief information security officer (CISO), a senior-level executive responsible for developing and implementing an information security program, which includes all procedures and policies designed to protect an organization’s communications, systems, and assets from internal and external threats. Government organizations must take immediate steps to develop the next generation of public service CISOs, who are capable of balancing security requirements with their organization’s operational risk appetite.


 


Ger Daly is Accenture’s managing director for defense and public safety. Mr. Daly leads Accenture’s defense, policing, customs and borders work with government clients globally. His defense industry experience spans large-scale enterprise resource programs (ERP), supply chain … View Full Bio

Article source: https://www.darkreading.com/cloud/a-cyber-resilience-report-card-for-the-public-sector-/a/d-id/1332878?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

SEC Slams Firm with $1M Fine for Weak Security Policies

This is the first SEC enforcement action cracking down on a violation of the Identity Theft Red Flags Rule, which is intended to protect confidential data.

The Securities and Exchange Commission (SEC) has issued a $1 million fine against a Des Moines-based organization for failing to implement sufficient security policies related to an incident that compromised personal data belonging to thousands of customers.

Voya Financial Advisors, Inc. (VFA), a broker-dealer and investment adviser, was charged with violating the Safeguards Rule and Identity Theft Red Flags Rule, both of which are intended to protect personal data and protect customers from identity theft. This marks the first time the SEC has enforced the Identity Theft Red Flags Rule with a penalty against an offending firm.

For six months in 2016, cyberattackers impersonated VFA contractors by calling the firm’s support line and requesting password resets. With the new passwords, the attackers were able to access personal data belonging to 5,600 VFA customers. The SEC found the attackers used this information to create new online user profiles and gain unauthorized access to account documents. Its order states that VFA failed to shut down the attackers’ access because of weaknesses in its security procedures, and that it also failed to ensure the security of contractors’ systems.

VFA has agreed to pay the $1 million fine and will consult an independent expert to evaluate its policies and procedures, and ensure future compliance with both rules, the SEC reports.

Read more details here.

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/risk/sec-slams-firm-with-$1m-fine-for-weak-security-policies/d/d-id/1332899?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Millions of Twitter DMs may have been exposed by year-long bug

Your private Twitter Direct Messages may have spilled over to a developer who was never meant to see them, thanks to a bug in one of the platform’s application programming interfaces (APIs).

It was very limited, and it was fixed lickety-split, Twitter said in an announcement on Friday. The bug doesn’t affect all your DMs; rather, it only involves messages and interactions with companies that use Twitter “for things like customer service,” the company said.

The buggy API was Account Activity (AAAPI): an API that allows registered developers to build tools that let businesses communicate with customers via Twitter. In some limited cases, under very specific circumstances, if you chatted with a business – say, an airline – or a Twitter account that happened to rely on a developer who used AAAPI to make the chat happen, your back-and-forth may have gone to another registered developer.

Likewise, if your business authorized a developer using the AAAPI to access your account, Twitter says the bug may have erroneously affected your activity data.

Twitter says that as far as it’s determined, it would take a “complex series of technical circumstances,” all happening at once, for the bug to have caused a leak to be sprung.

Twitter explains that the AAAPI sends data to registered developers who use that API based on their active subscriptions. The bug involved data being sent by Twitter to the wrong registered developer’s webhook URL.

When the bug was discovered two weeks ago, the microblogging platform shipped a fix within hours to prevent data from being unintentionally sent to the wrong developer.

Less than 1% of users

Though the bug was present for over a year, Twitter hasn’t at this point discovered any instances where DMs or protected tweets were actually delivered to the wrong developer. But neither can it “conclusively confirm it didn’t happen,” so it’s notifying the “less than 1%” of its 330 million users who may have been affected.

To trigger the bug, all of the things in this list had to be true during the relevant time period, which ran from May 2017 until within hours of Twitter discovering the issue on 10 September 2018 (a simplified sketch of how such a collision could arise follows the two lists below):

  • Two or more registered developers had active AAAPI subscriptions configured for domains that resolved to the same public IP.
  • For active subscriptions, URL paths (after the domain) had to match exactly across those registered developers.
  • Those registered developers had activity relevant to their subscriptions occur in the same six-minute time period (relevant because of a cache-like behavior).
  • Those registered developers’ subscribers’ activities originated from the same backend server from within Twitter’s datacenter.

If all those technical circumstances were in place, transmission of activities to the wrong webhook URL could have persisted until one of the following conditions was met:

  • For up to two weeks, OR
  • Until no relevant activity occurred for six minutes, OR
  • Until the IP address of the developer whose data was being misdelivered changed.
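
A purely hypothetical sketch (this is not Twitter’s code; the names and data are invented) of how a routing cache keyed only on resolved IP address and URL path could misdeliver webhook events under those conditions:

```python
# Hypothetical illustration of the misrouting: a delivery cache keyed on
# (resolved public IP, URL path) cannot tell two developers apart when their
# domains resolve to the same IP and their webhook paths match exactly.
import time

CACHE_TTL = 6 * 60  # the six-minute "cache-like behavior" window

# Invented example subscriptions: both resolve to the same IP, same path.
subscriptions = {
    "dev_A": {"ip": "203.0.113.10", "path": "/webhooks/twitter",
              "url": "https://a.example/webhooks/twitter"},
    "dev_B": {"ip": "203.0.113.10", "path": "/webhooks/twitter",
              "url": "https://b.example/webhooks/twitter"},
}

route_cache = {}  # (ip, path) -> (delivery URL, time cached)

def delivery_url(dev_id: str) -> str:
    """Return the webhook URL an event for dev_id would actually be sent to."""
    sub = subscriptions[dev_id]
    key = (sub["ip"], sub["path"])
    cached = route_cache.get(key)
    if cached and time.time() - cached[1] < CACHE_TTL:
        return cached[0]  # may belong to the *other* developer
    route_cache[key] = (sub["url"], time.time())
    return sub["url"]

print(delivery_url("dev_A"))  # https://a.example/webhooks/twitter (cached)
print(delivery_url("dev_B"))  # misrouted to dev_A's URL within six minutes
```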

Twitter’s still investigating the issue.

It’s contacting affected accounts directly via an in-app notice and on twitter.com. Twitter has also contacted its developer partners to make sure they’re “complying with their obligations to delete information they should not have,” it said.

Twitter emphasized that any developer who received such data unintentionally is one that’s registered with its developer program, which it’s been expanding in recent months to stamp out abuse and data misuse.

For example, in July, the company compelled all devs to register, limited them to 10 apps (though you could request permission if you needed more), imposed new rate limits for POST endpoints in order to cut down on spam posts, and said that it had kicked 143K bad apps off the platform between April and June.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qViwC-5t3W8/

Facebook scolds police for using fake accounts to snoop on citizens

Put down that “Bob Smith” fake account, put your hands in the air, and back off, Facebook told the Memphis Police Department (MPD) last Wednesday, brandishing its real-names policy.

In a letter to MPD Director Michael Rallings, Facebook’s Andrea Kirkpatrick, director and associate general counsel for security, scolded the police for creating multiple fake Facebook accounts and impersonating legitimate Facebook users as part of its investigations into “alleged criminal conduct unrelated to Facebook.”

This activity violates our terms of service. The Police Department should cease all activities on Facebook that involve the use of fake accounts or impersonation of others.

No sweat, the MPD said. We gave Bob the heave-ho before we got your letter. Fox 13 quoted a statement from public information officer Lt. Karen Rudolph:

We have received this letter; however, the account which was opened under the name Bob Smith was deleted prior to receiving this letter. No further comment will be made at this point pending ongoing litigation.

Facebook’s real-names policy is an attempt at maintaining “a community where everyone uses the name they go by in everyday life. This makes it so that you always know who you’re connecting with.”

… an idea that’s controversial, to put it mildly. For years, multiple groups have been telling Facebook (and Google, which dropped the policy four years ago) that using real names online leads to discrimination, harassment and worse, be it previously victimized women stripped of pseudonyms who were then contacted by their rapists, or Vietnamese journalists and activists whose identities were posted online after submitting legal documents to Facebook to prove they needed to use pseudonyms.

Illegal surveillance of legal activism?

In the case of Memphis, it’s police investigators who are being told to stop using fake names. Facebook’s letter was sent following a civil rights lawsuit filed by the American Civil Liberties Union (ACLU) of Tennessee that accused the MPD of illegally monitoring activists to stifle their free speech and protests.

The lawsuit claimed that Memphis police violated a 1978 consent decree that prohibits infiltration of citizen groups to gather intelligence about their activities. After two years of litigation, the city of Memphis had entered into a consent decree prohibiting the government from “gathering, indexing, filing, maintenance, storage or dissemination of information, or any other investigative activity, relating to any person’s beliefs, opinions, associations or other exercise of First Amendment rights.”

Before the trial even began over the ACLU’s lawsuit last month, US District Judge Jon McCalla issued a 35-page order agreeing with the plaintiffs, but he also ruled that police can use social media to look for specific threats: a ruling that, one imagines, would condone the use of fake profiles during undercover police work…

…but not the illegal surveillance of legal, Constitutionally protected activism.

A few months ago, criminal justice news outlet The Appeal reported that the ACLU lawsuit had uncovered evidence that Memphis police used the “Bob Smith” account to befriend and gather intelligence on Black Lives Matter activists.

According to the Electronic Frontier Foundation (EFF), Facebook deactivated “Bob Smith” after the organization gave it a heads-up. Then, Facebook went on to identify and deactivate six other fake accounts managed by Memphis police.

Relying on court filings in the ACLU case, former litigator Leta McCollough Seletzky wrote that Memphis investigators used the fake Bob Smith account to “cozy up to activists and access private posts.” They also sent uniformed and plainclothes officers to observe protests, noting the participants and taking photographs. They also tracked private events, including a “Black Owned Food Truck Sunday” gathering.

But do we believe that undercover cops shouldn’t drive unmarked cars, or infiltrate criminal networks? How is using a fake name different from other policing techniques that rely on stealth?

It’s easy enough to demand that the real-names policy be applied when the police illegally surveil a group of law-abiding activists. But that doesn’t make it a good policy across the board, as Native Americans, activists living under repressive regimes, or drag queens will tell you.

Police, in certain circumstances, are yet one more group who need to use fake names in their work. They really can’t flash blue lights when they open up Facebook accounts to investigate crooks, including burglars, thugs, muggers, child predators or drug dealers.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vK6UDI36p4M/

Domain flub leaves 30 million customers high and dry

The CEO of cloud software and services company Zoho was left begging Twitter users for help on Monday after his domain registrar effectively took the company offline, stranding millions of users.

The drama started at 8:22am PDT, when Zoho.com founder and CEO Sridhar Vembu took to Twitter with a complaint about Zoho’s domain registrar, TierraNet. The registrar had taken Zoho’s domain down, and he couldn’t reach its senior management to get it reinstated.

A domain registrar is the company that reserves a domain name for a client to use on the internet, and then keeps that record alive so that it continues to resolve to an actual IP address.

If a domain registrar decides to take down that domain name, it effectively removes the client’s online address from the domain name system (DNS), which is the web’s address book. That means that when you type the domain into your browser, the name simply fails to resolve and you see a “server not found” error rather than the website. The company’s computers may still be running, but you can’t reach them.

For Zoho, this was a big deal. The company is huge. It has 30 million users, and 5,000 employees worldwide. It provides cloud-based software solutions ranging from email to CRM, invoicing, IT and helpdesk software. Its customers range from HP to Hyatt Hotels. So when its site is not available, people notice. The complaints began appearing:

When Zoho customers heeded Vembu’s online complaint and began complaining to TierraNet, the registrar told them that it had taken down Zoho’s domain after receiving complaints of phishing attacks using Zoho’s email service.

In messages to Zoho users who were complaining about the outage, TierraNet’s support staff said that they had tried to contact the company to no avail. Vembu responded that the company had received three complaints, and had only one investigation pending.

In a blog post explaining the incident, Vembu alleged that this was the result of an automated script rather than a human decision, calling out TierraNet for not consulting further with it.

Somehow this automated algorithm decided to shut down the Zoho domain based on these 3 cases – without prior warning of the shutdown, or investigation into the traffic supported by this domain.

While Vembu has been actively apologizing to customers and calling out TierraNet on Twitter, the domain registrar did not reciprocate on social media. Its own Twitter account was last updated almost a year ago. It consists mostly of messages acknowledging its own service outages from 2015 and 2017.

Zoho quickly switched to Cloudflare as its domain registrar, which enabled it to get its domain re-listed in DNS records. However, because the computers that hold DNS records cache those records for a period of time to cut down on the number of requests they must make, it took a while for the new entry to propagate throughout the DNS system, and the lag prompted further complaints.
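
The propagation delay comes down to DNS time-to-live (TTL) values: resolvers keep serving a cached record until its TTL expires, so a change made at the registrar only becomes visible everywhere once those cached copies age out. A minimal sketch, assuming the third-party dnspython package, shows how to inspect a record’s remaining TTL:

```python
# Sketch: look up a domain's A record and its TTL to see how long resolvers
# may keep serving a cached (possibly stale) answer.
# Assumes dnspython is installed (pip install dnspython); the domain is only
# an example.
import dns.resolver

answers = dns.resolver.resolve("zoho.com", "A")
print("Addresses:", [rdata.address for rdata in answers])
print("TTL (seconds):", answers.rrset.ttl)
# A resolver that cached the old record keeps returning it until this many
# seconds pass, which is why registrar-level changes take time to propagate.
```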

It also meant that Vembu found himself, the cofounder and CEO of a massive online services company, giving tech advice online to help get the message out from Zoho’s official support team. He was explaining how to change DNS servers to point to Cloudflare and avoid the propagation lag.

Customers seemed pretty understanding on the whole, and many praised Vembu for owning the situation and being transparent on Twitter.

Nevertheless, it raises the question: why was the CEO of a massive online software company reduced to public begging messages, asking someone to help him get through to the provider that effectively controls his entire online presence?

Why was there not at least a backup domain that Zoho could have posted on Twitter while it resolved the issue? As Vembu said in one of his many contrite tweets, the company learned a valuable lesson on Monday.

He’s acting on it, though, taking steps to make sure that he isn’t caught out again. He concluded in his blog:

You have my assurance that nothing like this will ever happen again. We will not let our fate be determined by the automated algorithms of others. We will be a domain registrar ourselves.

That would go some way towards closing what was a wide-open hole in the company’s risk strategy, and should send other online companies running to check for single points of failure in their own setups.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7HtfxiZvbHk/

Microsoft is killing passwords one announcement at a time

Microsoft’s quiet campaign to abolish passwords reached another milestone yesterday with the announcement that Windows 10 and Office 365 users can now log in to Azure AD applications using only the Authenticator App.

The change is so simple it makes you wonder why passwords have seemed so fundamental for so long.

Currently, Windows 10 and Azure AD users log into their Microsoft accounts using an email address and password, plus (if it is turned on) a second verification step such as an SMS code or a code generated by the Authenticator app.

Now, after one final password login to enable the feature, all future logins happen by entering the username and approving a notification that pops up on the Android or iOS Authenticator app.

Approve that and the login is confirmed using the smartphone’s fingerprint reader, facial recognition or PIN – all without a password in sight.
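
A purely hypothetical server-side sketch of this kind of push-approval login (the helper objects here are invented for illustration and are not Microsoft’s Authenticator API) might look roughly like this:

```python
# Hypothetical sketch of a passwordless push-approval login flow.
# device_registry and push_service are invented stand-ins, not a real API.
import secrets

def passwordless_login(username: str, device_registry, push_service) -> bool:
    device = device_registry.lookup(username)        # previously enrolled phone
    if device is None:
        return False                                 # no enrolled authenticator

    challenge = secrets.token_urlsafe(32)            # one-time login challenge
    push_service.notify(device, challenge)           # pops up in the app

    # The app asks the user to approve with fingerprint, face, or PIN, then
    # returns the challenge signed with a key held on the device.
    response = push_service.wait_for_approval(device, timeout=60)
    if response is None:
        return False                                 # user never approved

    return device.verify_signature(challenge, response)  # no password involved
```

The point of the sketch is that the only secret is a key that never leaves the phone; the server just issues a challenge and checks the signed response.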

It’s very similar to Microsoft’s Windows Hello facial recognition, but without the need to own an expensive specialised camera. It’s also a bit like Google’s Prompt, which approves logins using push notifications, but only after the user has already entered their username and (of course) an account password.

Clearly, Microsoft has decided that the app residing on the smartphone is now ready to become the primary factor whereas Google is evolving in that direction but hasn’t yet decided to make the final jump.

The benefits of Microsoft’s new app authentication are twofold:

  1. Phishing attacks will be unsuccessful because access no longer depends on stealable passwords.
  2. While no more secure than the best types of multi-factor authentication, it is quicker (i.e. no codes to generate, or physical tokens to fumble with).

A possible downside is that it depends heavily on the first factor – the smartphone app – and smartphones aren’t always well secured from physical access if they’re stolen or lost.

The app asks the user to confirm each login using whichever security mechanism is being used by the smartphone itself. On an iPhone that would be Face or Touch ID, while on Android that would be Google’s less battle-hardened equivalents, or perhaps even a simple four-digit PIN code.

Shifting the weak point to the device

For sure these will improve over time but anyone who wants to turn on Authenticator access to their Microsoft account now should assess the security of their smartphone first. While it’s true that simple password and username access is even less secure, it could be argued that abandoning the password completely simply shifts the weak point to the device.

Another emerging approach that adds a greater level of security is to keep the authentication part of the process on a separate physical token such as the YubiKey, version 5 of which was released this week.

This adds FIDO2/WebAuthn support, an emerging standard that can also be used in a passwordless single-factor way. The main advantage of WebAuthn is that it supports lots of websites and not only Microsoft’s.
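
At the heart of FIDO2/WebAuthn is public-key challenge-response: the authenticator holds a private key, the server stores only the public key, and each login is a signature over a fresh challenge. A simplified illustration of that core idea (using the Python cryptography package; this is not the actual WebAuthn protocol, which adds origin binding, attestation, and signature counters) is sketched below:

```python
# Simplified challenge-response illustration of the idea behind FIDO2/WebAuthn.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator generates a key pair and hands the server
# only the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it after the user approves locally...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature (raises InvalidSignature on failure).
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("Challenge verified; no password or shared secret crossed the wire.")
```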

Whichever gains traction (and Microsoft also supports WebAuthn by the way), users could be entering a strange world where one factor starts becoming better than two for many people.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dPAdZDd7oo4/

How to thwart rogue staff: Tune in today to our live insider threat webcast

On 26 September 2018 at 10am PDT (11am MT, 6pm UK), we’ll have a studio full of experts lined up to talk about insider threats and how even the best organisations can suffer from occasional bouts of “bad employee syndrome”.

It’s clearly a challenge that’s not going to go away: as more companies develop mobile workforces, spotting bad practices and bad actors has become more difficult than ever.

During the hour-long broadcast, we’ll be exploring:

  • What is the new face of the insider threat – how do bad apples manifest themselves in today’s organisations?
  • What are the costs of doing nothing – how much damage is being done, day on day, and why is nothing being done?
  • What does a solution look like – how do best practices, tools and roles combine to mitigate the insider threat?
  • Where to start – what is the best approach to take an organisation from a denial state to a clear view?

The gig is hosted by our own Jon Collins and includes experts from LogRhythm and analysts from Freeform Dynamics. They’re all geared up to answer your questions so come along, come prepared and learn a lot. You can register here.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/25/the_evolution_of_the_insider_threat/

Canadian security boss ain’t afraid of no Huawei, sees no reason for ban

Canadian Centre for Cyber Security chief Scott Jones has told a parliamentary committee there’s no need for the country to cut Chinese comms giant Huawei out of its 5G rollout.

Speaking to the Canadian parliament’s Standing Committee on Public Safety and National Security earlier this week, Jones said the centre believes the country can test equipment and software for security vulnerabilities well enough that there’s no need to follow Australia and America’s lead with a blanket ban on Huawei.

In what looked like a dig at Australia’s decision to block the vendor from bidding into its 5G rollout, Jones said the Canadian government’s “very advanced relationship” with telcos is “different from [that of] most other countries”.

“We have a programme that is very deep,” he said, particularly in guaranteeing the resilience of “next-generation telecommunications networks”.


Last month Australia formalised a ban that covered both Huawei and ZTE. Prime minister Scott Morrison (treasurer at the time) and comms minister Mitch Fifield advised that it would not be possible to protect networks from Chinese government interference, via China-owned vendors, in the 5G world. The company has long been blocked from participating in Australia’s National Broadband Network, and earlier this year Oz took over construction of a Solomon Islands cable to keep Huawei away from the build.

In the US, the FCC has drafted a rule blocking carriers who receive universal service funds from using Huawei kit, something Huawei is trying to overturn.

Jones’ statement came in response to a question asking: “Five Eyes allies have come out against Huawei… many people are wondering why Canada would not…” (about 16:50:35 in the audio published here).

He responded: “We have a very well-established relationship with all the telecommunications providers in Canada, to raise that resiliency bar regardless of the vendor.”

Jones emphasised the importance of looking at network security as “an entire system” – increasing networks’ resilience wherever a threat might come from, defending the supply chain, buying products known to be secure, and using them securely.

“It’s really trying to address all of the risks, and not just one specific one,” he added.

He also noted that blocking based on country doesn’t fit with how the telecommunications industry works, where “almost everything” is manufactured “around the globe”.

Jones said Canada was in touch with Australia and the US and has explained its testing regime, which uses Huawei-funded “white labs” where products are tested for interception backdoors or “kill switches”.

As The Globe and Mail explained, Huawei already works under some constraints in Canada. The company is not allowed to bid into telcos’ core networks, is blocked from federal government contracts, and is not allowed to manage equipment from offshore locations. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/26/canadian_security_boss_says_theres_no_reason_to_ban_huawei/