STE WILLIAMS

The quantified employee: new ways to be watched at work

Is your employer watching you? “Of course,” you probably assume. But you might not know the half of it.

Last week, Bloomberg Businessweek hung a story on last year’s events at the Daily Telegraph, a UK newspaper. There, employees discovered mysterious “OccupEye” black boxes beneath their desks. Said devices tracked exactly when each desk was occupied – ostensibly to determine when heat or air conditioning should be provided. And, in the right environments, Bloomberg notes, devices like these do seem to promise significant energy savings. Bloomberg and Enlighted – one of OccupEye’s competitors – point to a New York company that has saved 25% in energy costs by installing 1,000 sensors in light fixtures.

Good reporters, however, are natural skeptics. And the Telegraph’s reporters – represented by the National Union of Journalists – managed to get OccupEye’s devices removed. But the overall trend is most assuredly towards more ubiquitous surveillance, especially where laws allow it.

The quintessential example, of course, is the US. There, as Workplace Fairness points out:

Employers can legally monitor almost anything an employee does at work as long as the reason for monitoring is important enough to the business. Employers may install video cameras, read postal mail and e-mail, monitor phone and computer usage, use GPS tracking, and more.

There are a few slightly tougher state laws — for example, in California and Maine. But, broadly speaking, Bloomberg only slightly oversimplified by observing: “Your boss can legally track you everywhere but the bathroom.”

In the US, employee outrage over all this seems, shall we say, muted. Last year, Pew polled Americans on how they would feel if an employer responded to workplace theft by implementing video surveillance with facial recognition technology, and then kept the footage for possible use in employee performance reviews. Some 54% of Americans found that acceptable; only 24% said it was flat-out unacceptable.

Given this, enormous investments are being made in new technologies aimed at gaining deeper insights into employee behavior – both anonymously via aggregated metadata, and potentially, linked to individuals. Some providers of the technology can be cagey about whether they will permit corporate customers to disable anonymization. Even so, as people analytics enthusiasts Josh Bersin, Joe Mariani, and Kelly Monahan admit, “the sheer bulk of data that sensors collect makes meaningful anonymization difficult. After all, it takes as little as four pieces of metadata to sufficiently identify an individual from digital exhaust.”
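The “four pieces of metadata” figure echoes research on the “unicity” of mobility records. A toy sketch (the data is invented for illustration, not drawn from any study) shows how a few quasi-identifiers shrink an “anonymous” dataset to a single person:

```python
# Toy "anonymized" dataset: each row is (badge zone, arrival hour,
# meeting count, floor). No names -- yet combinations are nearly unique.
records = [
    ("lobby",  9, 3, 2),
    ("lab",    9, 3, 2),
    ("lab",   10, 5, 3),
    ("lobby",  8, 1, 1),
    ("lab",    9, 4, 2),
]

def matches(dataset, *known):
    """Return the rows consistent with an observer's partial knowledge,
    given as (column index, value) pairs."""
    return [r for r in dataset if all(r[i] == v for i, v in known)]

# Two known attributes already narrow five people down to two ...
print(len(matches(records, (0, "lab"), (1, 9))))                  # -> 2
# ... and four pin down a single individual.
print(len(matches(records, (0, "lab"), (1, 9), (2, 3), (3, 2))))  # -> 1
```

The more columns a sensor network collects, the fewer rows any combination of values can match – which is why aggregation alone rarely guarantees anonymity.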

What kind of information can now be captured? According to Canadian Business, Humanyze’s microphone-equipped smart badges track employee movements, creating “a heat map of office activity”. That helps companies plan more effective office redesigns.

While Humanyze doesn’t track the content of conversations, it does track how often employees talk to each other – as well as each employee’s proportion of talking to listening.
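To illustrate the kind of metric involved – this is a hypothetical sketch, not Humanyze’s actual method – a talking-to-listening proportion can be computed from diarized (speaker, duration) segments:

```python
# Hypothetical sketch: a talk-to-listen proportion computed from
# diarized audio segments, i.e. (speaker, seconds) pairs.
def talk_listen_ratio(segments, person):
    talked = sum(secs for who, secs in segments if who == person)
    listened = sum(secs for who, secs in segments if who != person)
    return talked / listened if listened else float("inf")

# One meeting's worth of segments -- invented data.
meeting = [("alice", 30), ("bob", 90), ("alice", 60), ("bob", 120)]
print(round(talk_listen_ratio(meeting, "alice"), 2))  # -> 0.43
```

Note that nothing here needs the words spoken – timing and speaker identity alone are enough, which is exactly why “we don’t record content” is a weaker privacy promise than it sounds.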

Combined with big data technologies, Humanyze promises a whole raft of additional applications, such as better predictions of when valued employees are planning to quit, so companies can intervene to keep them. It’s hard to see how such an application can be kept entirely anonymous.

One Humanyze customer wondered if the next generation of people analytics hardware won’t track heart rates and body temperature “to better understand how employees manage stress”.

That customer wouldn’t be alone; a few months ago, Bloomberg reported that “companies including JPMorgan Chase and Bank of America have [explored] systems that monitor worker emotions to boost performance and compliance”.

Behavox already uses “emotional analysis of telephone conversations” to help generate “a worker’s overall behavioral picture,” Bloomberg notes. “When a worker deviates from established patterns – shouting at someone he’s trading with when previous conversations were calm – it could be a sign further scrutiny is warranted.”

It’s no surprise that emotional tracking might find an early application in trading rooms, where traders can easily be responsible for hundreds of millions of dollars in trades every day, and rogue traders have brought major financial institutions to ruin. But one can easily envision its use in any environment where high-stakes sales and decisions are being made.

Of course, at the very opposite end of the organization, it’s already there. As Technology Review reports, Cogito’s call center software analyzes the raw audio of conversations, “automatically assessing the dynamics”. TR quotes Cogito CEO Josh Feast:

Conversation is like a dance. You can tell whether people are in sync, and it turns out this is a much better measure than language.

Clearly, in the future, no customer service agent (or trader) will ever dance alone in the dark. Will you?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wZkvzCCBabE/

Where does the buck stop when there’s a security breach?

So your company network is compromised and there’s a security incident to handle. Who’s responsible – the IT specialists? The board? According to research from BAE Systems, each group mostly thinks the buck stops with the other.

The figures break down like this: a third of C-suite directors think the IT team is responsible for IT security, while 50% of IT professionals think it falls within the purview of the board. Call us old-fashioned, but we’d welcome a bit of co-operation.

So we asked some of our contacts, both in the UK and the US, what they thought.

Business bodies

The first thing to make clear is that everybody we asked was more comfortable with the idea of prevention than fixing a problem after it had caused damage. Oliver Parry, head of corporate governance at the UK’s Institute of Directors, was among them. He said:

As with other principal risks to a business, responsibility of outlining this [prevention] strategy should fall with the board. It is crucial that executives understand the importance of having non-executive directors with a digital background.

It doesn’t stop with the board, though, he added:

Lasting cybersecurity only comes from embedding good practice throughout the culture of an organisation, starting from the top. No system or person alone can prevent indefinitely the threat of a cyber-attack. With human error invariably the most obvious vulnerability, it is training and awareness that should be the focus of an organisation’s efforts, rather than pre-emptive work to ensure somebody else gets the blame.

Tom Thackray, director of innovation at the UK’s CBI, concurred:

From talking to businesses across the country, it’s clear that cybersecurity isn’t just an issue for the tech team, or one member of the board. It’s a cross-business, joint effort. Executives need to be able to ask the right questions about cyber security in order to get the best out of their teams, but ultimate accountability will differ depending on the size and type of organisation.

In the US, the National Association of Corporate Directors (NACD) directed us towards its freshly minted Director’s Handbook on Cyber-Risk Oversight, published last month. This points to a framework from the National Institute of Standards and Technology, developed under an executive order from President Obama, aimed at critical-infrastructure operators but open to voluntary adoption by the private sector. It is available here in its entirety, but the bottom line is that it takes a methodical, step-by-step, top-down approach, starting with risk assessment.

The handbook also has an extensive section on the relationship between the board and the chief information security officer (CISO). It offers detailed breakdowns of how to make this best practice happen:

Many board members now seek to establish an ongoing relationship with the CISO, and include the security executive in discussions about cybersecurity matters at full board and/or key committee-level meetings.

Team effort

Nobody is doubting the veracity of the BAE Systems data, but one feature of asking a polarised question about who’s responsible in a crisis is that you’ll get a one-or-the-other answer. Our totally unscientific findings, based on the small handful of people we spoke to, suggest that in the real world people take a more constructive, less us-and-them view of how to handle a security crisis.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/r0paqfTfoEM/

Beeps, roots and leaves: Car-controlling Android apps create theft risk

Insecure car-controlling Android apps create a heightened car theft risk, security researchers at Kaspersky Lab warn.

Boffins at the security software maker issued the warning after putting Android apps from seven (unnamed) car makers through their paces, uncovering a raft of basic security flaws in the process.

During recent years, cars have started actively connecting to the internet. Connectivity includes not only their “infotainment” systems but also critical vehicle systems, such as door locks and ignition, which are now accessible online.

The list of the security issues discovered by Kaspersky Lab’s boffins includes:

  • No defence against application reverse-engineering.
  • No code integrity check – important because its absence lets criminals incorporate their own code in the app and replace the original program with a fake one.
  • No rooting detection techniques. Root rights provide Trojans with almost endless capabilities and leave the app defenceless.
  • Lack of protection against app overlaying techniques. This helps malicious apps to show phishing windows and steal users’ credentials.
  • Storage of logins and passwords in plain text.

Upon successful exploitation, a hacker could gain control over the car, unlock the doors, turn off the security alarm and, theoretically, steal the vehicle. In each case the attack would require some additional preparation, such as luring the app’s owner into installing a specially crafted malicious app that would then root the device and gain access to the car application.

“The main conclusion of our research is that, in their current state, applications for connected cars are not ready to withstand malware attacks,” said Victor Chebyshev, security expert at Kaspersky Lab. “Thinking about the security of the connected car, one should not only consider the security of server-side infrastructure.”

“We expect that car manufacturers will have to go down the same road that banks have already gone down with their applications. Initially, apps for online banking did not have all the security features listed in our research. Now, after multiple cases of attacks against banking apps, many banks have improved the security of their products. Luckily, we have not yet detected any cases of attacks against car applications, which means that car vendors still have time to do things right,” he added.

More details on the research can be found in a post on Kaspersky Lab’s Securelist blog here.

The security of the apps compared unfavourably to comparable banking apps, according to third party experts.

Mike Ahmadi, global director of critical systems security at Synopsys, commented: “Banks are indeed more mature in their general approach to security, including the hundreds and often thousands of applications they must interface with on a daily basis. They have faced the [problem] of being a target for a much longer time than the automotive community has, and they take a very proactive approach, generally speaking, in addressing ongoing security issues.

“The automotive industry is still relatively new to both application management and security issues, comparatively speaking, and is certainly working hard to address issues as they arise. While the banking industry may be better prepared to address security issues, the automotive industry continues to learn how to manage the many security challenges it faces as their connected vehicles continue to proliferate,” he added. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/20/android_auto_app_insecurity/

Connected car in the second-hand lot? Don’t buy it if you’re not hack-savvy

Cars are smart enough to remember an owner, but not smart enough to forget one – and that’s a problem if a smart car is sold second-hand.

The problem is as simple as you could imagine: people shovelling apps and user services into cars forget that the vehicle nearly always outlives its first owner.

The global head of IBM’s X-Force Red penetration-testing team, Charles Henderson, created a flurry at RSA last week by relating how a connected car app could still access a car he traded in – two years after he’d sold it.

Without naming the machine’s maker, Henderson related the kinds of features beloved of high-end marques: “geolocation of the car, climate control, navigation control, it allowed me to remotely honk the car horn … and finally I could unlock the car.”

Henderson feels that none of that should have been possible years after he disposed of the car – especially as he says he ran a factory reset, reset his garage door opener, and traded the old car in through a factory dealership (which, as he has explained in interviews such as this one with CNN, should have revoked his access to the old car).

The situation is, Henderson says, a “catastrophic failure”, but it’s one that occurs all over Internet of Things products – cars, houses, light bulbs and the rest.

The problem is pretty obvious: much of the industry is treating products as consumables, and any attention that’s paid to security is focussed on the first buyer.

“The concept of access revocation only works if it’s implemented in a way that’s obvious for users. It must be intuitive”, he said.

Whether it comes from standardisation, regulation, or a consensus in the industry, Henderson says everybody – the dealer, the seller, and the next buyer – needs an obvious way to confirm access revocation.

“How can we expect the auto industry to do access revocation right when Fortune 500 companies don’t do it well internally?”

Also, and obviously, he added that the mobile phone provides at least one example of what the IoT industry has to do: implement a factory reset that actually works.

There’s a short version of his presentation in the video here.

+Comment: Henderson also suggested some kind of centralised platform is needed – to quote him, “identity management for devices is best served when it’s centralised.”

He’s probably right – but The Register can’t help wondering what that’s going to look like once a bunch of vendors have talked different vendors in different industries into implementing different identity management and access revocation solutions.

It’s also hard to imagine how to fulfil Henderson’s wish for customers who are well enough educated to know what to ask, let alone how to protect themselves.

As Henderson said: even Fortune 500 companies have trouble managing access and identity revocation properly.

“Educate end users” has been a staple of the IT sector for decades. If it was going to work, we’d surely have evidence by now. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/20/connected_car_in_the_secondhand_lot_dont_buy_it_if_youre_not_hacksavvy/

Google bellows bug news after Microsoft sails past fix deadline

Google’s Project Zero has again revealed a Windows bug before Microsoft fixed it.

Project Zero operates under a “once we tell you about a bug you have 90 days to fix it or the kitten gets it or we reveal it to the world” policy.

On this occasion, the bug allows attackers to read memory using EMF metafiles, an image format handled by the Windows Graphics Component GDI library (gdi32.dll), which applications use to render graphics. And once an attacker can read memory, things can get interesting.

Mateusz Jurczyk, the Google chap who found the bug and others like it in the past, writes that Redmond fixed similar messes he reported last year. But he also alleges that the fix for those flaws, MS16-074, didn’t completely address issues that allow access to memory. So he told Microsoft about the issue on November 16th, 2016, and waited. And waited. And waited until last week’s we-don’t-call-it-patch-Tuesday-anymore came and went because Microsoft needed more time to get a new patch dump just right.

At which point the 90-day policy kicked in and Google pulled the trigger, revealing the flaw to the world.

Microsoft doesn’t like it when this happens: back in November 2016 the company all-but-accused Google of giving criminals a helping hand by revealing a bug, while also saying the flaw in question wasn’t all that scary anyway.

The Register is yet to detect a response from Microsoft on this release. If we do … you know the drill [We’ll either update this story and/or write a new one – Ed]. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/20/google_project_zero_discloses_microsoft_bug_again/

Florida Man jailed for 4 years after raking in a million bucks from spam

A marketer who used stolen email accounts to trouser more than a million dollars by spamming people has been sent down for four years.

Timothy Livingston, 31, was handed the 48-month term after he pleaded guilty to counts of conspiracy to commit fraud in connection with computers and access devices, conspiracy to commit fraud in connection with electronic mail, and aggravated identity theft.

The sentence was handed down on Thursday by Judge William J Martini in federal court in New Jersey, where Livingston pleaded guilty to the charges last year.

Based in Boca Raton, Livingston owned and operated a marketing company called A Whole Lot of Nothing LLC, which specialized in sending bulk emails for clients.

“Livingston’s clients included legitimate businesses – such as insurance companies that wished to send bulk emails to advertise their businesses – as well as illegal entities, such as online pharmacies that sold narcotics without prescriptions,” the US Department of Justice said.

What the clients did not know (or chose not to ask) was how Livingston was able to spaff out so many emails. The marketer had an arsenal of botnet-controlled accounts and compromised servers he used to help send out the spam runs without being identified or detected by spam filters. He would then collect a commission every time one of his junk mail messages was converted to a sale.

Livingston’s codefendant, Tomasz Chmielarz of New Jersey, admitted writing the malware used to infect computers and build the botnets.

By the time Livingston was finally caught, it is estimated his spamming network was able to rack up around $1.35m in ill-gotten gains. He will have to forfeit all of that as part of his plea deal. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/18/four_years_for_13m_spam_heist/

After Election Interference, RSA Conference Speakers Ask What Comes Next

Election-tampering called ‘a red line we should not allow anyone to cross.’

RSA CONFERENCE — San Francisco — As discussion about possible American collusion with Russian interference in the 2016 US presidential election heats up in Washington, the events have also been a hot topic here. RSA Conference speakers have not only tackled recent hacking events specifically, but discussed how they exacerbate the weaknesses of an already fragmented, lightly regulated voting system with highly irregular security practices.

The fundamental questions: what comes next and why does it matter to cybersecurity professionals? 

Rep. Michael McCaul (R-TX), chairman of the House Homeland Security Committee, said during a keynote session Tuesday that he was first briefed on election-related attacks in the spring, and has “no doubt” Russians undermined the election.

“This is a red line we should not allow anyone to cross,” said Rep. McCaul. 

“We must continue to call out Moscow for election interference. …  And if we don’t, I am certain they will do it again,” he said.

McCaul also said that there must be a response to this behavior, and the “strategies should not include just returning fire.”

These thoughts were echoed by John P. Carlin of Morrison & Foerster LLP in a session called “Electoral Dysfunction” Wednesday. Until recently, Carlin was the US Department of Justice’s assistant attorney general for national security; he left the position in October. “I’m very concerned about repeated conduct” by nation-state attackers, said Carlin.

During Carlin’s tenure, DOJ developed a cybercrime “deterrence playbook” to discourage nation-state attacks on the US by ensuring there would be consequences for them. For deterrence to work, Carlin explained, the government would not only have to make it clear that it would take action in response to specific acts, but also make it clear that “we are going to take actions until the behavior stops.”

Michele Flournoy – founder and CEO of the Center for a New American Security, who served as Under Secretary of Defense for Policy from 2009 to 2012 – took aim at Russia and recent attacks specifically.

“We need to assess Russia with clear eyes,” said Flournoy during a session on the future of security and defense Tuesday. She explained that after the Cold War, Russia did not integrate with the global community as other members of the Eastern Bloc did, and that since Putin took leadership of the country a second time he has pursued a campaign “against democracy” and an effort to divide allies.

“We owe it to ourselves to investigate [these attacks] further,” Flournoy said, saying that we need to “really map the extent of contact between the Trump campaign and Russia.” 

(Later that day, the New York Times reported that members of the Trump campaign had repeated contact with Russian intelligence before the election. Some legislators, including Senate Foreign Relations Committee Chairman Bob Corker, a Republican, have since suggested that recently ousted national security adviser Michael Flynn should testify before Congress, with Corker telling MSNBC: “Maybe there’s a problem that obviously goes much deeper than what we now suspect.” President Trump has suggested the controversy is manufactured.)

How much of this really falls under the purview of cybersecurity, though? No evidence has been reported of voting machines themselves being exploited or attacked in the 2016 US presidential election. The hacks and information leaks that did occur were not particularly sophisticated from a technological standpoint.

Despite that, “it may eventually come to be seen as the biggest hack in history,” said Kenneth Geers, Comodo Senior Research Scientist and a NATO Cooperative Cyber Defence Center of Excellence Ambassador, in an interview with Dark Reading. Geers also spoke about the demonstrable connection between malware activity and significant political, socioeconomic events during a Comodo event here Monday and RSA presentations.  

Geers says one could “definitely draw a parallel” between Russian involvement in the US elections and the Ukraine election in 2014, because both included the hacking of political parties, doxing, and information operations in social media – like the creation of fraudulent accounts and the spread of propaganda – which are not always seen as part of the American definition of “information security.”

While attackers could focus their hacking efforts on e-voting machines themselves, Geers said, such attacks would be easier to detect than these other, subtler methods.

Carlin echoed this sentiment. “Think of how effective this was, and it did not attack the [systems we use to vote.]”

There are other, practical reasons attackers wouldn’t go after voting machines. As Mark Weber, vice president of labs at Coalfire, explained in the “Electoral Dysfunction” session, although vulnerabilities have been found in the machines before, exploiting many of them requires physical, or near-physical, access to the hardware. It’s therefore simpler “not to attack the infrastructure, but the things that access the infrastructure” – voter databases, for example.

These attacks nevertheless sow distrust in the democratic process itself.

In the same session, Pamela Smith, president of Verified Voting, said the 2016 election showed that the US vote auditing and recount process is “worse than we thought.” There are roughly 6,000 voting jurisdictions in the US, each with its own rules. Some of the jurisdictions called upon to do a recount had no voter-verified paper trails; others had policies allowing them to re-run their machines’ tallies instead of counting the paper votes; and others halted their recounts before they were completed.


Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and more.

Article source: http://www.darkreading.com/vulnerabilities---threats/after-election-interference-rsa-conference-speakers-ask-what-comes-next-/d/d-id/1328186?_mc=RSS_DR_EDT

Closing The Cybersecurity Skills Gap With STEM

As a nation, we should be doing more to promote educational programs that prepare today’s students for tomorrow’s jobs.

The growing number of cybersecurity threats and attacks expose the importance of engaging students in hands-on learning. Not only are cybersecurity threats increasing, they’re also becoming significantly more complicated.

Unfortunately, the number of skilled cybersecurity professionals isn’t keeping up. According to a report from Intel Security and the Center for Strategic and International Studies, 209,000 U.S. cybersecurity jobs went unfilled in 2015.


Educational institutions from grade schools to universities can correct this problem by broadening hands-on classroom learning to address the need for well-trained cybersecurity professionals.

Here are five ways we can begin closing the cybersecurity skills gap:

1. Integrate STEM Education in Grade School
Active, hands-on STEM (science, technology, engineering, and math) learning complements traditional learning by offering a way for students to apply textbook concepts to real-life problems. And there’s proof that it works: a study released by the Amgen Foundation and Change the Equation shows students want more tangible learning opportunities. Survey respondents said common teaching methods, such as teaching exclusively from a textbook, are less engaging than hands-on methods. The survey also found that hands-on learning, such as experiments and field trips, is the most effective way to engage students.

The survey also found the following:

  • 81% of students are interested in science, with 73% expressing an interest specifically in biology.
  • Students who are interested in biology classes identified their teachers and classes as the most influential to their career decisions.

On a national scale, the U.S. government aims to increase STEM awareness through programs such as the National Initiative for Cybersecurity Education, which lets teachers access a variety of resources to help them develop STEM-related curricula. By introducing and promoting cybersecurity and STEM education early on in the classroom, students learn to address real social, economic, and environmental problems, and seek necessary solutions.

2. Equip College Students with Cybersecurity Skills
The Comprehensive National Cybersecurity Initiative is a government program that identifies goals to create a more comprehensive, updated national cybersecurity strategy. A key component of this initiative is expanding cyber education and placing courses in K-12 schools to create technologically skilled, cyber-savvy students. However, putting this kind of program in place would require another national strategy like the science and math education initiative of the 1950s.

Even though there isn’t a nationwide program to promote cybersecurity in K-12 schools, one high school is preparing its students for a lucrative STEM career. King William High School in Virginia offers a four-year track to help students build the fundamental skills necessary for entry-level employment in the cybersecurity field. The program’s students can graduate from high school with industry-recognized certifications and valuable cybersecurity skills.

Universities across the country are also starting to take notice of the increasing interest in cybersecurity, and many computer science degree programs are offering cybersecurity as a specific concentration. Students studying computer science often focus on information assurance and computer security, specifically learning about designing systems and strategies that safeguard information. Typical course topics include computer security, digital forensics, and machine learning.

Through partnerships or internships, STEM students have the opportunity to work with industry partners to gain real-world experience. For example, the Department of Homeland Security has a Cyber Student Volunteer Initiative in place for summer internships. This program gives undergraduate and graduate students the opportunity to work alongside leaders in the DHS, gaining valuable experience from work projects, real-life scenarios, and mentoring from DHS cybersecurity professionals.

Many cybersecurity internships look for students enrolled in a cybersecurity-related field, in STEM or computer science. Additionally, any experience students can get (such as working as a teaching assistant) will help set them apart while applying for internship opportunities. If students aren’t sure where to find these internship opportunities, they can also consult their college’s career center.

3. Drive Awareness of Cybersecurity Jobs
Studies show most millennials aren’t aware that jobs in cybersecurity even exist. Government, businesses, and our education systems must collaborate and thoroughly train the future generation of cyber defenders. Because many STEM colleges now offer cybersecurity as a degree concentration, driving awareness of these programs is key to increasing the pool of skilled workers.

But it’s not enough to increase the awareness of these jobs; you must be able to attract the talent that you need. Along with attending career fairs, businesses should find ways to attract digitally savvy college graduates. Many recent graduates list flexible scheduling, ongoing education, and continuous feedback as important factors when deciding on a job offer. If companies can tailor their programs to reflect what potential employees are looking for, they could attract and keep top-tier talent.

Technology company CSRA recently opened its Integrated Technology Center in Bossier City, Louisiana, with the goal of helping the federal government fight cyber terrorism. The company boasts an internship program in which about 85% of its interns stay on for full-time jobs after graduation.

4. Instruct with Industry Tools and Technology
As new technology emerges, much of cybersecurity remains uncharted territory. However, implementing industry tools and technology into a student’s curriculum can be the solution we need to thwart new, unfamiliar threats.

To train students more effectively, many universities now offer cybersecurity labs where students get hands-on practice with industry-grade tools for comprehensive risk assessment, incident management, and encryption simulations.

5. Train the Current Cybersecurity Workforce
Industry, governments, academia, and nonprofits should work together to aggressively address the need for a skilled cybersecurity workforce. The U.S. Bureau of Labor Statistics says the demand for cybersecurity professionals will grow 53% by the end of 2018. To successfully prepare our workforce, we must upgrade our current cybersecurity professionals by providing on-the-job training and improving cybersecurity education programs.

Kyle Martin brings 11 years of storytelling experience to the content coordinator position at Florida Polytechnic University. In this role, Martin develops original content that showcases the university experience as a way to attract new students and faculty. He also lends …

Article source: http://www.darkreading.com/careers-and-people/closing-the-cybersecurity-skills-gap-with-stem/a/d-id/1328181?_mc=RSS_DR_EDT

Yahoo Explains Cookie Forgery Related To Two 2016 Breaches

Yahoo’s recent update on forged cookies is in relation to two, not three, security breaches announced last year.

Yahoo has provided further clarification on the cookie forging activity related to data breaches disclosed in September and December of last year.

Earlier this week, the company began to issue warnings to inform users of forged cookies used to steal their information. A Dark Reading report implied the forged cookies were used in a third incident; in fact, this activity was limited to the two 2016 breaches.

“Yahoo has connected some of the cookie forging activity to the same state-sponsored actor believed to be responsible for the data theft we disclosed on September 22, 2016,” a Yahoo spokesperson says.

The alerts sent this week are a continuation of Yahoo’s ongoing investigation into the cookie forging disclosed in December, not a new event. Both the September and December data breaches are related to the theft of user data in 2014.
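Yahoo has not published the technical details of the forging, but the general risk is easy to illustrate: if session cookies are signed tokens and an attacker obtains the server-side signing secret or the proprietary cookie-generation code, valid cookies can be minted for any account without ever touching a password. The sketch below is a minimal, hypothetical model using HMAC-signed cookies; it is not Yahoo’s actual scheme.

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-signing-key"  # hypothetical secret, assumed stolen in a breach

def make_cookie(user_id: str, key: bytes = SECRET_KEY) -> str:
    # Server mints a session cookie of the form "<user_id>.<hex HMAC of user_id>"
    sig = hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_cookie(cookie: str, key: bytes = SECRET_KEY) -> bool:
    # Server recomputes the signature and compares in constant time
    user_id, _, sig = cookie.partition(".")
    expected = hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# Legitimate flow: the server issues a cookie after login.
cookie = make_cookie("alice")
assert verify_cookie(cookie)

# An attacker holding the stolen key can forge a cookie for any account,
# no password required -- the server cannot tell the difference.
forged = make_cookie("victim_user", key=SECRET_KEY)
assert verify_cookie(forged)
```

This is why Yahoo invalidated the forged cookies rather than merely forcing password resets: a new password does nothing against an attacker who can sign his own sessions.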

Further, the company states, it believes the separate theft of user data in August 2013, which was disclosed on December 14, 2016, is likely unrelated to the incident announced on September 22.

Yahoo believes the investigation of this breach is in its final stages.

Shuman Ghosemajumder, CTO at Shape Security, says this update doesn’t add much to the overall story; Yahoo is simply continuing efforts promised in December.

What’s important to remember, he says, is that this credential spill creates the potential for millions of account takeovers on thousands of major websites, and it makes it easier for cybercriminals to grow the pool of compromised credentials far beyond the two billion spilled.

“Credential spills are one of the most widespread, yet misunderstood, security breaches,” says Ghosemajumder. Most people will focus on users’ Yahoo accounts, but the damage affecting those has been done. Yahoo will simply tell users to reset their passwords.

“The real issue now is that these passwords will be used to breach thousands of other websites unrelated to Yahoo, as cybercriminals use advanced automated tools to discover where users have used those same passwords on other sites, through credential stuffing attacks, the most common attacks on web applications and APIs today,” he explains.
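The mechanics Ghosemajumder describes can be shown with a toy model: credential stuffing simply replays leaked email/password pairs against unrelated sites, so any account reusing a spilled password falls. The credentials and site below are entirely hypothetical.

```python
# Toy model of a credential stuffing attack: replay leaked credentials
# against an unrelated site and record which accounts fall to password reuse.

leaked_credentials = [            # hypothetical spill from one breached site
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "correcthorse"),
    ("carol@example.com", "p@ssw0rd!"),
]

other_site_accounts = {           # hypothetical user table of an unrelated site
    "alice@example.com": "hunter2",       # reused password -> vulnerable
    "bob@example.com": "unique-pw-42",    # unique password -> safe
}

def stuff(creds, accounts):
    """Attempt each leaked pair; return the accounts that were taken over."""
    return [email for email, pw in creds if accounts.get(email) == pw]

compromised = stuff(leaked_credentials, other_site_accounts)
# Only the account that reused its spilled password is compromised.
```

In practice attackers run this loop through botnets and automation frameworks at massive scale, which is why per-user password resets at the breached site alone do not contain the damage.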

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: http://www.darkreading.com/attacks-breaches/yahoo-explains-cookie-forgery-related-to-two-2016-breaches/d/d-id/1328205?_mc=RSS_DR_EDT

At Least 70 Organizations Targeted In Sophisticated Cyber Surveillance Operation

Most of the targets are in Ukraine, though a few have been spotted in Russia and elsewhere, CyberX says.

At least 70 organizations across multiple industries, including critical infrastructure, scientific research, and media, have been hit in a sophisticated cyber-surveillance campaign conducted by threat actors with potential nation-state connections.

A majority of the victim organizations are based in Ukraine, though a handful of targets have also been spotted in Russia, Austria, and Saudi Arabia.

Security vendor CyberX uncovered the operation after discovering malware used in the attacks in the wild and then reverse engineering it.

The company has dubbed the campaign Operation BugDrop because one of the methods the threat actors use to collect data is eavesdropping on conversations via the victim’s PC microphone. The tactic is highly effective because a computer microphone, unlike a video camera, is almost impossible to block without physically disabling the hardware, CyberX noted in a blog post.

The focus of BugDrop appears to be capturing a range of sensitive information from targets: audio recordings of conversations, along with documents, screenshots, and passwords from victim systems, according to CyberX.

The targeting of the victims is similar to that of Operation Groundbait, a cyber surveillance campaign uncovered by ESET last May.

As with the Groundbait campaign, many of the victims of BugDrop are located in Luhansk and Donetsk, two regions that have proclaimed independence from Ukraine and are consequently classified as terrorist states by the Ukrainian government. Many of the tactics, techniques, and procedures used in the BugDrop campaign — including spear-phishing emails and malicious macros — are also similar to those used in Groundbait.

Even so, BugDrop appears to be a more sophisticated and better-resourced operation than Groundbait, says Phil Neray, vice president of industrial control security at CyberX.

For example, the operators of BugDrop are using DropBox to store data exfiltrated from victim systems, making the illegal activity harder to spot. “DropBox is a cloud-based service and it is very easy to upload data to it without having any firewalls or monitoring systems see anything suspicious or unusual,” Neray says. All data stored in DropBox is also encrypted.

The malware itself is stored on a free web hosting service, which makes it harder to track the threat actors behind it. In contrast, the operators of Groundbait hosted their malware on a command-and-control server on a domain they had created, thereby giving investigators a way to glean clues to their identity from the registration details.

Operation BugDrop also employs a sophisticated detection-evasion technique known as Reflective DLL Injection to run malicious code on victim systems, Neray says. The method involves loading malicious code directly into memory without using the standard Windows API calls, thereby bypassing security verification of the code. The same approach was used in the BlackEnergy attacks on Ukraine’s electric grid and in the Stuxnet attacks in Iran, he noted.
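Why this evades detection can be shown with a toy model (deliberately not actual injection code): security tools that enumerate a process’s registered module list never see code that was manually mapped into memory, which is exactly what reflective loading exploits. The class and names below are hypothetical simplifications.

```python
# Toy illustration of why reflective loading evades module-list scans.
# A normally loaded DLL is registered in the process's module list; a
# reflectively loaded payload exists only as an anonymous executable region.

class Process:
    def __init__(self):
        self.modules = []          # what loader-based (LoadLibrary-style) loading registers
        self.exec_regions = []     # all executable memory, registered or not

    def load_library(self, name):
        # Normal path: the OS loader records the module in its bookkeeping.
        self.modules.append(name)
        self.exec_regions.append(name)

    def reflective_load(self, name):
        # Reflective path: code is written straight into memory; the
        # loader's module list is never updated.
        self.exec_regions.append(name)

def module_list_scan(proc, ioc):
    # Naive defense: check only the registered module list.
    return ioc in proc.modules

def memory_scan(proc, ioc):
    # Deeper defense: inspect every executable memory region.
    return ioc in proc.exec_regions

p = Process()
p.load_library("kernel32.dll")
p.reflective_load("bugdrop_payload")

assert not module_list_scan(p, "bugdrop_payload")  # evades the naive scan
assert memory_scan(p, "bugdrop_payload")           # memory scanning finds it
```

The practical takeaway is that defenders need memory-level inspection, not just module enumeration, to catch this class of tradecraft.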

Another pointer to the sophistication and resources available to the operators of BugDrop is the amount and type of data being collected and presumably analyzed: between 2 GB and 3 GB of unstructured data is being uploaded to the DropBox accounts daily.

This means the operators of BugDrop must have a sizeable infrastructure for storing and decrypting the data, plus access to skilled human analysts to extract value from it, Neray says. “If you think about it, this is not like credit card data,” he says. “You need to have human analysts to look at the data.”

The organizational and logistical planning required to analyze unstructured data at this scale daily suggests nation-state-level capabilities, he says.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: http://www.darkreading.com/attacks-breaches/at-least-70-organizations-targeted-in-sophisticated-cyber-surveillance-operation/d/d-id/1328206?_mc=RSS_DR_EDT