STE WILLIAMS

Former IT manager used employer’s computer to view child abuse

A former IT manager, 38-year-old Jacob Raines, of Missouri, is heading to federal prison for stealing proprietary source code and accessing his former employer’s computer to view child abuse imagery.

Raines worked for nearly 10 years as IT manager for American Crane Tractor Parts in Kansas City, Kansas. He stole source code that his former employer said was worth over $5,000. It’s not clear whether Raines actually gave the source code to a competitor, but prosecutors say he certainly had the connections and means to do so.

According to a release from the Western District of Missouri US Attorney’s Office, Raines was sentenced on Wednesday to six years in federal prison without parole.

On May 23 2017, Raines pleaded guilty to one count of computer intrusion and one count of accessing a computer in order to view child abuse imagery.

Raines had resigned as IT manager from American Crane in March 2014. When the new IT manager took over, they removed Raines’s passwords, among other security changes. But while using the computer previously assigned to Raines, they noticed that somebody had logged into it remotely and copied files to an off-site server.

In the days leading up to his departure, Raines copied the proprietary source code and file folders to his remote servers.

Police executed a search warrant at Raines’s residence on April 2 2015, for evidence of the computer intrusion and theft of trade secrets. They found copies of the proprietary source code on his home computer, in addition to an enormous cache of child abuse material. Investigators found that Raines had, since November 13 2013, used peer-to-peer sharing software to look for the content.

He had more than 7,000 images and videos on a DVD. There were another 3,900 thumbnail images and 260 icon images of child abuse on the unallocated space on a hard drive of another computer. There were yet more – 6,000 images and 25 videos – on the unallocated space of a separate loose hard drive.

He could have potentially caused his former employer “significant financial harm,” the US Attorney’s Office said.

There was nothing “potential” about the harm that was done to those children.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FFXGONxNW-w/

Linux laptop-flinger says bye-bye to buggy Intel Management Engine

In a slap to Intel, custom Linux computer seller System76 has said it will be disabling the Intel Management Engine in its laptops.

Last month, Chipzilla admitted the existence of firmware-level bugs in many of its processors that would allow hackers to spy on and meddle with computers.

One of the most important vulnerabilities is in the black-box coprocessor – the Management Engine – which has its own CPU and operating system, and complete control over the machine. It’s meant to let network admins remotely log into servers and workstations to fix problems (such as a failure to boot).

The bugs – as security researchers discovered – allow rootkits and spyware to be installed on machines, where they could steal or tamper with information. So, perhaps unsurprisingly, several vendors – including Lenovo – have been quick to patch the bugs.

Denver, Colorado-based System76, meanwhile, has just banned the Management Engine outright.

In a blog post Thursday, the firm wrote: “System76 will automatically deliver updated firmware with a disabled ME on Intel 6th, 7th, and 8th Gen laptops. The ME provides no functionality for System76 laptop customers and is safe to disable.”

It will apply to customers running Ubuntu 16.04 LTS, Ubuntu 17.04, Ubuntu 17.10, Pop!_OS 17.10, or an Ubuntu derivative with the System76 driver installed.

Desktops are not affected by the ban – they’ll just receive ME patches “as they are available”.

The firm said the rollout would happen over time and customers will be notified by email prior to delivery.

“Disabling the ME will reduce future vulnerabilities and using our new firmware delivery infrastructure means future updates can rollout extremely fast and with a higher percentage of adoption (over listing affected models with links to firmware that most people don’t install).”

System76 did, however, note that Intel could change device functionality and prevent manufacturers and consumers from disabling the ME, so this may not last forever.

Intel has not responded to a request for comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/01/system76_bans_bugridden_intel_management_engine/

High Court judge finds Morrisons supermarket liable for 2014 data leak

Morrisons is responsible for the leak of staff personal details by an ex-employee, the High Court ruled today.

A group of 5,518 employees took the supermarket to court, with Mr Justice Langstaff of the High Court’s Queen’s Bench Division, sitting at Leeds Crown Court, ruling that those affected can claim compensation for the “upset and distress” the leak caused.

The March 2014 leak compromised nearly 100,000 Morrisons employees’ payroll data, including bank details and salaries. The information was published online, but was taken down within hours.

The man responsible, Andrew Skelton, was jailed for eight years in 2015. The attack was allegedly in response to Skelton being accused of dealing legal highs at the company’s headquarters in Bradford.

Mr Justice Langstaff ruled that Morrisons is “vicariously liable” (responsible for the actions of an employee) in Skelton’s case. This principle usually applies to offences such as harassment or libel. The High Court’s decision marks the first time this has been applied in a case of misuse of information.

Nick McAleenan of JMW Solicitors, the claimants’ lawyer, said he and his clients “welcome the judgment and believe that it is a landmark decision”.

Morrisons intends to appeal the decision, which means discussions of how much compensation the affected employees are owed will be postponed until its appeal is concluded. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/01/morrisons_data_leak_ruling/

‘Blocking and Tackling’ in the New Age of Security

In a pep talk to CISOs, the chief security strategist at PSCU advises teams to prioritize resilience in addition to security.

INSECURITY CONFERENCE 2017 – Washington, DC – Prioritizing resilience is the new normal for modern cybersecurity leaders, said Gene Fredriksen, chief security strategist for PSCU, in his closing keynote here at Dark Reading’s INsecurity conference this week.

“If we look at the recent breaches … the one thing we cannot afford to do is forget how to block and tackle,” he said to his audience of security pros. “We’re not just talking about security anymore; we’re talking about resilience.”

It’s no longer enough to be content that things are secure, Fredriksen explained. Systems have to be robust; they have to be able to withstand glitches in the infrastructure. “We need to look at developing systems that not just survive, but thrive,” he added.

This need is part of a “new normal” for CISOs. Unlike those who entered the role through networking or IT, CISOs of the future need to understand increasingly advanced and numerous actors and their motivations. “It’s gotten much bigger,” said Fredriksen of their concerns. CISOs need to network, talk with their peers, learn about new risks, and implement new controls.

“Traditional perimeter security doesn’t really cut it anymore,” he continued, or it’s so porous we can’t rely on it. It’s time to look at extensions outside the perimeter. As Fredriksen listed in his rules for successful cybersecurity, “evolution is always preferable to extinction.”

Fredriksen pointed to criminals’ uses for data analytics as an example of modern threats. Data from breaches like those at Equifax, social media platforms, and resources like press releases and company websites can give attackers sufficient information to take over a bank account: Social security numbers, account numbers, schools and mascots, family members, friends, pets, birthdate, job history, and hobbies are available to those who know where to look.

The Equifax breach is an example of what can happen when proper precautions aren’t taken. It doesn’t matter how big your company is, Fredriksen noted. Equifax has a 225-person security staff and it still took 138 days to issue a patch for the flaw leading to the breach, 78 days to figure out people were using their systems, and 117 days to notify the public.

Fredriksen shared a few additional security lessons framed in the context of the Equifax breach:

  • Visibility is key for detection and prevention: You cannot detect what you cannot see
  • Detection isn’t fast enough: One day is too long for an attacker to be in your system undetected
  • Secure data, not just the network: The root cause of the Equifax breach was a website vulnerability but data resided on the endpoint
  • Encryption is a must: It isn’t easy, and it takes time, but the benefits outweigh the risks
  • Secure vendor connections: This is your responsibility
  • We’re all connected: Data is linked; one breach can be leverage for the next and the next

Technology alone cannot make you secure, he continued, emphasizing the importance of people, process, and technology. His sentiments echoed those of Greg Touhill, the first US federal government CISO and now president of Cyxtera, who gave INsecurity’s opening keynote.

Fredriksen closed his presentation with a hint of optimism for business leaders. “The boards are getting very engaged,” he said of business involvement, adding that executives are “beginning to understand [security] is not just a technology issue.”

It’s critical for board members to understand the legal implications of cybersecurity and have access to cybersecurity expertise. Risk management should be given “regular and adequate time” on the board meeting agenda, and directors should set the expectation that management will establish an enterprise-wide risk management framework with adequate staff and budget.

If that sounds like a tall order, Fredriksen advised chatting with the CFO and CTO to learn how they discuss funding with the CEO. “Learn how things work in your company, and you can build that business case,” he said. If the CISO walks in with business acumen, people will listen.

“Show progress and get going,” he concluded.

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/risk/blocking-and-tackling-in-the-new-age-of-security/d/d-id/1330529?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Sallie Mae CISO: 4 Technologies That Will Shape IT Security

‘The world as we know it will vanish,’ according to Jerry Archer.

INSECURITY CONFERENCE 2017 – Washington, DC – IT security will undergo a major transformation in the coming years due to four waves of technology change, according to Jerry Archer, Sallie Mae’s chief information security officer.

In a keynote here this week at Dark Reading’s INsecurity conference, Archer said cloud computing is the first wave of technology changes affecting IT security. “We have a complex (IT) environment, so we abstract to make the complex simple,” Archer said. “But this abstraction creates gaps and that presents vulnerabilities.”

Although cloud computing is more secure than a company building its own distributed environment for its data, firms face a couple of problems in keeping their cloud secure. One is that companies need to remember to “turn on the security,” Archer said, pointing to breaches companies have suffered after misconfiguring their Amazon S3 buckets. Another is that they forget to keep their security “on” once they have initially taken that step, he said.

Cloud computing is also creating an environment where companies deploy software-defined perimeters, in which users must authenticate to the network based on risk-based calculations before they can connect. This reliance on behavioral analytics means working out who, what, when, and where in the cloud without the traditional visibility, he noted.

A second wave of technology that will affect IT security is artificial intelligence in networking, according to Archer. Companies will turn to AI to parcel workloads into smaller pieces, secured in lots of tiny containers – which Archer described as “rain” – and moved to policy-based computing clouds that perform specific tasks.

Additionally, the data will go from micro-segmentation to nano-segmentation, with the containers enabling agile security services, he said.

Massively Integrated Systems of Smart Transducers (MIST) will be the third technology wave, in which data is everywhere and moves continuously: “Things will come and go out of my orbit.”

He added it will be a return to a larger environment that companies will need to secure, after having taken the steps to simplify their IT environment.

The fourth technology wave is quantum computing, which has many unforeseen implications for IT security as computers become exponentially more powerful, Archer said. But one area already being discussed is quantum computing’s impact on encryption and its expected ability to break it, he added.

“The world as we know it will vanish,” Archer said. “This may happen in three years, five years, or 10 years. I don’t know when but it will happen.”

Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/mobile/sallie-mae-ciso-4-technologies-that-will-shape-it-security/d/d-id/1330531?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Deception: Why It’s Not Just Another Honeypot

The technology has made huge strides in evolving from limited, static capabilities to adaptive, machine learning deception.

Deception — isn’t that a honeypot? That’s a frequently asked question when the topic of deception technology arises. This two-part post will trace the origins of honeypots, the rationale behind them, and what factors ultimately hampered their wide-scale adoption. The second post will focus on what makes up modern-day deception technology, how the application of deception technology has evolved, and which features and functions are driving its adoption and global deployment.

Almost 15 years ago, Honeyd was introduced as the first commercially available honeypot and offered simple network emulation tools designed to detect attackers. The concept was intriguing but never gained much traction outside of organizations with highly skilled staff and for research. The idea was to place a honeypot outside the network, wait for inbound network connections, and see if an attacker would engage with the decoy.
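
To ground the comparison, here is a minimal Python sketch of what a low-interaction decoy of that era boiled down to: a listener that emulates nothing beyond a banner, records who connects, and waits. The class name and banner are invented for illustration; a real tool like Honeyd emulated whole network stacks and many services at once.

```python
import socket
import threading
from datetime import datetime, timezone

# Illustrative minimal low-interaction honeypot: any connection to this
# port is suspicious by definition, so we log it and serve a fake banner.
class MiniHoneypot:
    def __init__(self, host="127.0.0.1", port=0, banner=b"220 ftp ready\r\n"):
        self.banner = banner
        self.log = []  # list of (timestamp, remote address) tuples
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((host, port))
        self.sock.listen(5)
        self.port = self.sock.getsockname()[1]  # actual port if port=0

    def serve_one(self):
        # Accept a single connection, record it, send the decoy banner.
        conn, addr = self.sock.accept()
        self.log.append((datetime.now(timezone.utc).isoformat(), addr))
        conn.sendall(self.banner)
        conn.close()

    def close(self):
        self.sock.close()

if __name__ == "__main__":
    hp = MiniHoneypot()
    t = threading.Thread(target=hp.serve_one, daemon=True)
    t.start()
    # Simulate an attacker probing the decoy
    probe = socket.create_connection(("127.0.0.1", hp.port))
    print(probe.recv(64))
    probe.close()
    t.join(timeout=2)
    print(hp.log)
    hp.close()
```

The weaknesses discussed below follow directly from this design: the "service" is a static banner, so a human attacker can fingerprint it in seconds, and there is no depth of interaction from which to learn anything useful.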

Today’s attackers are more sophisticated, well-funded, and increasingly more aggressive in their attacks. Human error will continually result in mistakes for attackers to exploit. With breaches getting more severe, the population getting less patient, and the emergence of regulations and fines, in-network threat detection has become critical for every organization’s security infrastructure. So much so that FBR Capital Markets forecasts that the deception technology market as a detection security control will grow to $3 billion by 2019, three times its size in 2016.

The systemic problem is that organizations are overly dependent on their prevention infrastructure, leading to a detection gap once that attacker is inside the network. For the connected world we live in, it’s widely believed in the industry that it no longer works to focus only on keeping attackers out. The structure is also flawed when applied to insiders, contractors, and suppliers who have forms of privileged access. Alternatively, solutions that rely on monitoring, pattern matching, and behavioral analysis are being used as a detection control but can be prone to false positives, making them complex and resource intensive.

The concept of setting traps for attackers is re-emerging given its efficiency and the advancements in deception technology that have removed scalability, deployment, and operational functionality issues that previously had hampered the wide-scale adoption of honeypots. Consequently, companies across the financial, healthcare, technology, retail, energy, and government sectors are starting to turn to deception technology as part of their defense strategies.

Deception is still a fairly new technology, so it is not surprising that seasoned security professionals will ask, “Isn’t deception just a honeypot or honeynet?” In fairness, if you consider that they are both built on trapping technology, they are similar. Both technologies were designed to confuse, mislead, and delay attackers by incorporating ambiguity and misdirecting their operations. But that is where the similarity ends.  

Deception’s Evolution
Gene Spafford, a leading security industry expert and professor of computer science at Purdue University, originally introduced the concept of cyber deception in 1989 when he employed “active defenses” to identify attacks that were underway, designed to slow down attackers, learn their techniques, and feed them fake data.

The next generation of advancements included low-interaction honeypots, such as Honeyd, built on limited service emulations. Their principal appeals were the ability to detect mass network scanning and automated attacks (malware, scripts, bots, scanners), the ability to track worms, and low purchase costs. However, honeypot adoption was limited by a number of constraints and the associated management complexity, including the following:

  • Honeypots were designed to detect threats outside the network and were predominantly focused on general research rather than the more critical need for in-network detection.
  • Human attackers could easily fingerprint an emulated system and avoid detection.
  • These systems were not high-interaction, limiting the attack information that could be collected and any value in improving incident response.
  • Attackers could abuse a compromised system and use it as a pivot point to continue their attack.
  • Honeypots were not designed for scalability, were operationally intensive, and required skilled security professionals to operate.

Deception technology has made monumental strides in evolving from limited, static capabilities to adaptive, machine learning deception that is designed for easy operationalization and scalability. Today’s deception platforms are built on the pillars of authenticity/attractiveness, scalability, ease of operations, and integrations that accelerate incident response. Based on our own internal testing and from others in the emerging deception market, deception is now so authentic that highly skilled red team penetration testers continually fall prey to deception decoys and planted credentials, further validating the technology’s ability to successfully detect and confuse highly skilled cyberattackers into revealing themselves. 

Carolyn Crandall is a technology executive with over 25 years of experience in building emerging technology markets in security, networking, and storage industries. She has a demonstrated track record of successfully taking companies from pre-IPO through to …

Article source: https://www.darkreading.com/vulnerabilities---threats/deception-why-its-not-just-another-honeypot/a/d-id/1330506?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple’s rocky week with passwords in High Sierra [VIDEO]

Apple experienced a high-pressure bug report this week – a way to bypass the root password, no less!

Then there was a superquick fix, and a problem with the fix, and a fix for the fix

…so here’s what happened and what we can learn from it:

(Can’t see the video directly above this line? Watch on Facebook instead.)

Note. With most browsers, you don’t need a Facebook account to watch the video, and if you do have an account you don’t need to be logged in. If you can’t hear the sound, try clicking on the speaker icon in the bottom right corner of the video player to unmute.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6P0eGKTW8ok/

Protecting your data from ransomware

Well, there’s a surprise. The National Audit Office’s report into the WannaCry ransomware and its effect on the NHS came out in late October. It points the blame at – wait for it – the NHS. Despite warnings, trusts had not applied the basic patches necessary to avoid what ended up being an unsophisticated attack.

NHS Digital, the national provider of IT systems for UK public sector health and social care, had conducted voluntary cyber-preparedness assessments for 88 trusts before the ransomware hit. They all failed. “Trusts had not identified cyber-security as being a risk to patient outcomes, and had tended to overestimate their readiness to manage a cyber attack,” the report said.

How can you protect yourself against ransomware? It’s an urgent question as organizations face a clear and present danger. The Metropolitan Police has joined the NHS and the UK Local Government Association in fingering ransomware as the biggest cyberthreat facing the public sector in 2018. The threat is so great that the Met now has 300 officers looking at the issue, the Register reported.

Ransomware is now attacking desktop and mobile operating systems in two ways. It will either encrypt the data on a device and demand a ransom to descramble it, or it will lock up systems altogether, rendering the entire device inoperable, along with its data and applications. In both cases, the effect on enterprises or government bodies can be huge. Ransomware can bring organizations to a grinding halt.

Protect yourself

Protection begins at the endpoint, with proper patching. Still one of the most overlooked tasks (as evidenced by the NAO report), it’s also one of the most basic aspects of cybersecurity hygiene. We know that in many cases, poor patching isn’t the result of laziness, but is more to do with change management and testing policy.

Organizations may be nervous about updating software without thoroughly testing it across their infrastructure, forcing them to wait until they are able to test it themselves, or until they are confident that there are no reports of adverse effects from elsewhere.

In the interim, they can take advantage of ‘micro-patching’ systems now offered by some vendors, which protect software applications without making changes to the binaries. It’s not a permanent solution, but it can at least provide a stopgap until something more permanent can be done.

Most endpoint protection systems now feature ransomware protection, and Microsoft is also building its own anti-ransomware measures. Windows 10’s Fall Creators Update includes a feature called controlled folder access, which blocks access to specific folders from any applications other than those on an authorized list. It’s a form of whitelisting for folder access, and will go a long way towards helping protect your files. Researchers successfully tested the feature against the Locky ransomware.

Protect yourself some more

These are all great protections, but true protection against ransomware requires the same approach as protection against any serious cyberthreat: defence in depth. A survey of 832 IT pros by Druva this year painted a vivid picture of sustained and repeated attacks on organisations by malware writers employing a variety of vectors. It emphasised the need to defend in depth.

More than half of survey respondents said their organisation had been hit more than four times, a third of attacks targeted servers and 60 per cent the end point, while 70 per cent had targeted multiple devices.

Illustrating the varied nature of the attacks were the examples of University College London and hosting company Nayana in South Korea. The latter found 153 of its Linux servers had been infected by the Erebus ransomware variant. UCL suffered a sustained and damaging ransomware attack in 2017 after a user on its network was thought to have released the code contained in a phishing attack.

Multi-layered protection will secure your data in these kinds of scenarios, so that if endpoint or server anti-ransomware protection fails, you can still recover your data. Regularly scheduled backups are crucial.

The temptation is to assume that services replicating endpoint data to the cloud will automatically protect your data. Nope. If ransomware encrypts data on your hard drive, then the encryption will also be replicated to your cloud-based data store. Kaboom. There goes your cloud data.

It’s true that some cloud-based services like Dropbox offer versioning, but all it takes is ransomware designed to repeatedly encrypt files and the versioned files will blow up, too.

Here’s one company that lost thousands of files when ransomware-scrambled files were replicated to its cloud-based data store. Luckily, its cloud-based service provider had the good sense to back up its clients’ data, but things could have been far worse.

Simply backing up your data to your own network can be dangerous, as some ransomware strains are programmed to seek and scramble files on network drives. You could back up manually to removable media, but this becomes less attractive as the data volume and frequency grows. It also doesn’t provide a clear method for backing up mobile devices that may be travelling outside your office network, or for backing up data stored in cloud applications.

That last point should worry Office 365 users. Last year, Microsoft was hit with Cerber, a ransomware strain delivered via a phishing attack that hit a proportion of the firm’s 18 million users, locking up their files. It took Microsoft several hours to respond to this attack and block it, by which point the damage had already been done for many.

Cloud-based backup

Cloud-based backup is a potential solution here, providing regular backups online to something other than network drives. Its advantages include the ability to program high frequency snapshots, so that you can maintain a narrow recovery point objective should you need to restore after a ransomware attack. Some of these solutions can also be programmed to provide backups across multiple devices, including mobile devices that may be away from high-speed connections for periods of time.
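
As a rough illustration of the snapshot-and-retention idea, the sketch below copies a file into a new timestamped snapshot on each run and prunes everything beyond a retention window, so a ransomware-scrambled copy cannot silently overwrite your only good version. Function names and the retention policy are assumptions for this example, not any vendor’s API.

```python
import shutil
import time
from pathlib import Path

# Illustrative snapshot backup: each call creates a new, uniquely named
# copy instead of overwriting, and old snapshots beyond `keep` are pruned.
def take_snapshot(source: Path, backup_dir: Path, keep: int = 5) -> Path:
    backup_dir.mkdir(parents=True, exist_ok=True)
    # Nanosecond timestamp in the name: snapshots sort chronologically
    dest = backup_dir / f"{source.name}.{time.time_ns()}"
    shutil.copy2(source, dest)  # copy2 preserves timestamps/permissions
    # Keep only the `keep` most recent snapshots
    snaps = sorted(backup_dir.glob(f"{source.name}.*"))
    for old in snaps[:-keep]:
        old.unlink()
    return dest
```

In a real deployment `backup_dir` would live off the network (e.g. object storage reached over an API rather than a mounted drive), precisely so that the strains discussed below, which hunt for network shares, cannot reach it.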

It can also be far easier to test a cloud-based backup solution than it is to test restoration from removable storage, because the cloud-based data will be available online. You don’t have to locate, load and transfer the removable media and hope that the physical formatting is still good. This is all well and good, but it’s vital to ensure that your cloud backup service is equipped with proper encryption.

The move to cloud means data is stored on shared storage over which you have no physical control, so there is no chance to shred the disks when they are retired. This makes encryption of critical data essential. That means working with a cloud backup provider that cannot access your data, because you, not the provider, control the encryption key.
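
A minimal sketch of that “you hold the key” principle: the backup key is derived client-side from a passphrase the provider never sees. The parameter choices below are illustrative assumptions, not a vendor recommendation, and the actual cipher step (e.g. AES-GCM via a crypto library) is omitted.

```python
import hashlib

# Derive a symmetric backup key from a passphrase, client-side.
# The provider stores only ciphertext and the (non-secret) salt.
def derive_backup_key(passphrase: str, salt: bytes,
                      iterations: int = 600_000) -> bytes:
    # PBKDF2-HMAC-SHA256 -> 32-byte key suitable for a symmetric cipher
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, iterations, dklen=32)
```

The salt is random per backup set and can be stored alongside the ciphertext; only the passphrase must stay secret, so the provider can hold your encrypted backups without ever being able to read them.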

Organize your files

Once you’ve established a solid backup workflow, it’s time to establish your need-to-restore list. Look at how you’re organizing and tagging individual files, perhaps related to business processes or sensitivity. Is there an easy way to identify and gather all the files that you can’t live without, and include them in the backup process automatically? Is storing them in a specific location workable?

It may be prudent to look at types of data here – perhaps files created in specific formats, or with other key characteristics such as those created since a certain date, or by a particular person or group. This is where file metadata comes into its own. Make this easier by using a file tagging system, along with a complementary file discovery tool to gather and categorise your existing files.
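
A hedged sketch of that metadata-driven discovery step: walk a tree and gather files matching given extensions, optionally only those modified after a cutoff date, as candidates for the need-to-restore list. The default suffixes are example criteria invented for this sketch, not a recommendation.

```python
from datetime import datetime, timezone
from pathlib import Path

# Gather files under `root` whose extension is in `suffixes` and whose
# modification time is on or after `since` (a timezone-aware datetime).
def discover_files(root: Path, suffixes=(".docx", ".xlsx", ".pdf"),
                   since=None):
    matches = []
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix.lower() not in suffixes:
            continue
        mtime = datetime.fromtimestamp(path.stat().st_mtime,
                                       tz=timezone.utc)
        if since is None or mtime >= since:
            matches.append(path)
    return sorted(matches)
```

The same loop could feed a tagging database instead of returning a list; the point is that the backup set is defined by queryable criteria rather than by whichever folders people remember to use.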

Let’s not forget virtual machines in this equation. With ransomware growing increasingly smart and aware of server-based infrastructures, protecting both physical and virtual machines is increasingly critical.

Equally important is the capacity to root out any malware that has succeeded in penetrating your organisation. It’s one thing to recover from an attack but you don’t want the code knocking about your network, resident on end points or buried in your data from where it might spring back to life. You’ll therefore need a comprehensive search capability that lets you find and remove code from end points, cloud applications and backed up snapshots.

Finally, use a monitoring solution to ensure that this strict new file management regime you’ve put in place stays in place. Agent-based endpoint monitoring can help you check that files are being stored in the right place and therefore that the appropriate files are being backed up properly.

A critical part of this solution is anomaly detection – the capacity to spot things that are out of the ordinary. Of course, to achieve this you need to get a picture of what constitutes “ordinary”, so it’s important to map the behaviour of the data in your system. Once you have that map, you can spot anomalies such as large amounts of data changing on a device that might be down to encryption and therefore suggest an attack is underway. IT can use this information to start backing up and restoring data from the point when an attack began.
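
One common heuristic for that baseline, sketched below under the assumption that ransomware output is indistinguishable from random data: freshly encrypted files have near-maximal byte entropy, so a sudden jump in the entropy of many changed files is a useful anomaly signal. The threshold is an illustrative assumption, not a tuned value.

```python
import math
from collections import Counter

# Shannon entropy of a byte string, in bits per byte (0.0 to 8.0).
def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(data).values())

# Illustrative heuristic: plaintext documents usually sit well below
# 7.5 bits/byte, while encrypted data sits near 8.0. (Compressed
# formats also score high - a reminder that this is one signal to feed
# the baseline, not a detector on its own.)
def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    return shannon_entropy(data) >= threshold
```

Run against the first few kilobytes of each changed file, a spike in the fraction flagged by `looks_encrypted` is exactly the kind of “large amounts of data changing” anomaly described above.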

Ransomware is getting nastier, and more pervasive. So you have to get smarter, and more resilient. Don’t be like the 88 NHS trusts who were convinced that ransomware wasn’t a threat. The most destructive problems are the ones hiding in plain sight. By putting multi-layered defences in now, you’ll save yourself some serious headaches in the future.


SUPPORTED BY: Druva

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/30/protecting_your_data_from_ransomware/

Stop us if you’ve heard this one: Russian hacker thrown in US slammer for $59m bank fraud

A Russian hacker already facing a lengthy prison stay in the US has been sent down for another 14 years for heading up an “organized cybercrime ring” that racked up $59m in damages across America.

Roman Valeryevich Seleznev, aka Track2, the 33-year-old son of a Russian MP, was sentenced after being convicted of one count each of racketeering and conspiracy to commit bank fraud. Though each charge carries a 168-month sentence, Seleznev will serve the terms concurrently, along with the 27-year stretch he was given in April on separate fraud charges.

He will also have to pay restitution of $50,893,166.35 and $2,178,349 for the two latest convictions, this time in the courts of the Northern District of Georgia and the District of Nevada.

The sentencing comes after Seleznev copped to helping run an identity theft and credit card fraud ring through the Carder.su site. Seleznev and others were believed to have racked up more than $59m from trafficking in stolen identities and credit cards. They also conspired to hack and defraud credit card companies and payment processors.

“His automated website allowed members to log into and purchase stolen credit card account data,” US prosecutors said of Seleznev on Thursday.

“The defendant’s website had a simple interface that allowed members to search for the particular type of credit card information they wanted to buy, add the number of accounts they wished to purchase to their ‘shopping cart’ and upon check out, download the purchased credit card information.”

While most of the stolen card numbers were resold for about $20 apiece, others were used in the infamous 2008 ATM hijack that resulted in a total of $9.4m in cash being pulled from more than 2,100 cash machines around the world.

Seleznev was among the key figures nabbed in Operation Open Market, a year-long effort by multiple US law enforcement agencies that has so far netted 55 arrests and 33 convictions.

These won’t be the last charges Seleznev faces, either. The US DoJ notes that the convicted fraudster is also the defendant in another case pending in the Western District of Washington. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/01/roman_seleznev_track2_jailed/

Lawsuits Pile Up on Uber

Washington AG files multimillion-dollar consumer protection lawsuit; multiple states also confirm they are investigating the Uber breach, which means more lawsuits may follow.

It’s been quite a week for Uber as the lawsuits from its recent high-profile breach keep on coming. The popular ride-hailing company has been under fire ever since it was disclosed that the company took more than a year to notify consumers of a breach and allegedly paid hackers $100,000 to keep the attack quiet. The hack reportedly affected 57 million people worldwide and exposed the names and driver’s license numbers of some 600,000 drivers in the United States.

First, on Monday, the city of Chicago and Cook County filed a lawsuit asking the court to fine Uber $10,000 a day for each violation of a consumer’s privacy. The suit contends Uber took much too long to report the breach.

Next, on Tuesday, Washington State Attorney General Bob Ferguson filed a consumer protection lawsuit against Uber, asking for penalties of up to $2,000 per violation. The lawsuit alleges that at least 10,888 Uber drivers in Washington were breached, so the lawsuit could result in millions of dollars of penalties.
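To see why the Washington suit "could result in millions of dollars of penalties," a rough upper-bound calculation helps. This sketch assumes one $2,000 violation per affected driver, which is a simplification; the statute's actual per-violation counting may differ.

```python
# Rough upper bound on Washington's potential penalties against Uber.
# Assumption: exactly one $2,000 violation per affected driver.
drivers_breached = 10_888      # minimum figure alleged in the lawsuit
penalty_per_violation = 2_000  # maximum penalty sought per violation, USD

max_penalty = drivers_breached * penalty_per_violation
print(f"${max_penalty:,}")  # → $21,776,000
```

Even at the minimum driver count alleged, the ceiling is well over $20m, which is why the AG's office framed the exposure in the millions.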

On top of the two lawsuits from state and local governments, Uber has also been hit with two class-action lawsuits. Both cases were filed last week. The first, Alejandro Flores v. Raiser was filed in federal court in Los Angeles. The second lawsuit, Danyelle Townsend and Ken Tew v. Uber, was filed in federal court in San Francisco.

Multiple state governments also say that they are conducting investigations into the Uber breach. Dark Reading has confirmed ongoing investigations by the states of Connecticut, Massachusetts, Missouri, and New York.   

The lawsuit by the state of Washington was seen as significant because it was the first filed against Uber by a state government. Under a 2015 amendment to the state’s data breach law, consumers must be notified within 45 days of a breach, and the Attorney General’s office must also be notified within 45 days if the breach affects 500 or more Washington residents. Tuesday’s lawsuit was the first filed under the revised statute.

“Washington law is clear: When a data breach puts people at risk, businesses must inform them,” Ferguson said in a press release. “Uber’s conduct has been truly stunning. There is no excuse for keeping this information from consumers.”

Craig Spiezle, chairman emeritus of the Online Trust Alliance, says the Uber case may spark renewed calls for national data breach legislation. In the past, there has been a general consensus for such a measure because companies must grapple with the cost of handling the compliance requirements of 48 separate state data breach laws.

“The European Union has a data breach notification requirement of 72 hours,” says Spiezle, who worked closely with Attorney General Ferguson on the data breach law in Washington. “While three days is really not enough time, I think Washington’s 45-day law is very generous. I’ve actually been on the record calling for a notification period of 10 days.”

The last time the federal government talked seriously about national data breach legislation was in early 2015. At the time, the Obama administration called for a notification period of 30 days. Legislation sponsored that year by Sen. Tom Carper (D-Del.) and Sen. Roy Blunt (R-Mo.) would have required companies to notify federal agencies and consumers of a breach affecting more than 5,000 consumers. Few other details were released, such as which agency companies should report to first, the Department of Homeland Security or the FBI, and the issue slowly died as the 2016 election year morphed into 2017, the nation’s first under the Trump administration.

In response to this most recent Uber case, Sen. Richard Blumenthal (D-Conn.) last week called for the Federal Trade Commission to investigate the Uber breach and impose strict penalties. And Sen. Mark Warner (D-Va.) has expressed support for national data breach legislation. A spokesman for Sen. Warner would offer no new details and would only say national data breach legislation “continues to be a top priority” for the senator.

Efforts to communicate with Sen. John Thune (R-S.D.) were unsuccessful. Sen. Thune chairs the Senate’s Commerce, Science and Transportation committee, which could potentially play an important role in any national data breach legislation. 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/attacks-breaches/lawsuits-pile-up-on-uber/d/d-id/1330530?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple