
Security Forecast: Cloudy with Low Data Visibility

Businesses are moving more sensitive data to the cloud but struggle to monitor and manage it once it’s there.

A need for greater flexibility, speed, and convenience is driving more businesses to the cloud. Fewer than 25% had their applications, data, and infrastructure in the cloud two years ago, but 44% have them cloud-based today, and 65% expect to be cloud-based two years from now.

The data comes from a study by Google Cloud and the MIT SMR Custom Studio. Researchers polled more than 500 IT decision-makers and found security is also a key driver of cloud adoption: 44% of respondents moved to the cloud because of their increased confidence in security. Overall, 74% of participants are more confident in cloud security.

In Dark Reading’s upcoming 2017 Cloud Security Survey, 62% of IT leaders report moderate to heavy usage of cloud services and applications. Thirty-six percent believe the cloud is, or eventually will be, more secure than their on-premises IT environments. Twenty percent say they are moving more data to the cloud, partly because of its strong security capabilities.

A Change in Perspective
“It used to be that security was one of the things holding organizations back from the cloud,” says Rob Sadowski, trust and security marketing lead for Google Cloud. “What we’re starting to see, and what we saw in the research, is the script is almost flipping.”

While Sadowski admits the confidence is partly due to improved documentation, he says two-thirds of respondents attribute their confidence to direct experience in the cloud. The two most consistently cited reasons for moving to the cloud are “increased flexibility in business processes and vendor choices” and the “ability to integrate with new tools and platforms.”

For many, the transition is gradual. Many businesses start out by doing some of their test and development work in the cloud, says Sadowski. They learn how to build new applications in the cloud. Once they achieve a certain level of comfort, they think about other workloads.

“It could be bulk storage, file sync-and-share or collaboration, or sharing data and collaborating,” he says. “As you work with more organizations that are distributed around the country and around the globe, having that data in the cloud enhances their productivity.”

Over the past 5 to 10 years, the amount of data organizations generate “is exploding,” Sadowski continues. After they feel comfortable with testing and development in the cloud, businesses move more important data to understand its growth, apply analytics, and gain insight.

But what happens to all of that data once it’s moved to the cloud? Many aren’t so sure, and poor visibility could be putting their data at risk.

Data Visibility Remains a Challenge
The confidence boost doesn’t mean the cloud is completely secure, or that all companies are fully on board. In Google’s study, 63% of respondents who chose not to move data to the cloud cited security concerns. A survey by Threat Stack and the Enterprise Strategy Group found 31% of respondents are unable to maintain security as cloud and container environments grow, and 62% are seeking greater visibility into public cloud workloads.

“It is a concern for organizations,” says Sadowski of visibility, which is a challenge as personally identifiable information and healthcare records move to the cloud. Businesses may be responsible for disclosing the movement of data to the cloud because sensitive data types are often regulated.

Participants in a new SANS survey say they lack visibility, auditability, and controls to actually monitor everything in their public clouds. More than half (55%) say they are hindered from doing adequate forensic and incident response activities because they can’t access logs, or underlying system and application details, in cloud environments.

One-third of Dark Reading’s cloud survey respondents say visibility is a “mixed bag” and they have good visibility in some areas but none in others. One-quarter say they have good visibility overall with a few blind spots; 11% say “most of our data is invisible to us.”

Sadowski advises looking into auditability tools to have greater visibility into who is accessing data and what they are doing with it. This would let you set audit and edit permissions to ensure activity aligns with corporate policies, as well as query the data and make sense of it.
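
To make that concrete, here is a minimal sketch (in Python) of the kind of audit review Sadowski describes: scanning access records for activity that falls outside corporate policy. The record format, user list, and path prefixes are hypothetical, not any particular cloud provider’s schema.

```python
# Minimal sketch: flag audit-log records where a sensitive object was
# touched by someone outside the authorized group. Field names and
# policy values are hypothetical, not a real provider's log schema.
AUTHORIZED_USERS = {"alice@example.com", "bob@example.com"}
SENSITIVE_PREFIXES = ("hr/", "finance/")

def flag_policy_violations(audit_records):
    """Yield records that violate the (toy) access policy above."""
    for rec in audit_records:
        sensitive = rec["object"].startswith(SENSITIVE_PREFIXES)
        if sensitive and rec["user"] not in AUTHORIZED_USERS:
            yield rec

records = [
    {"user": "alice@example.com", "object": "hr/salaries.csv", "action": "read"},
    {"user": "mallory@example.com", "object": "finance/q3.xlsx", "action": "read"},
]

for violation in flag_policy_violations(records):
    print("Policy violation:", violation)  # flags only the second record
```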

He also emphasizes the importance of encryption to directly protect data. “Encryption is highlighted by many as something they care about, but it’s a difficult solution to implement in a lot of cases,” Sadowski notes.

Nearly 80% of Google’s respondents said it was somewhat or very important to manage the strength of their encryption, or to manage the encryption process itself. However, whether the company or the cloud provider controls the encryption keys is “a really big bone of contention,” says Michael Fuller, associate principal at consultancy The Hackett Group, in the report.
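
Client-side encryption is one way the customer, rather than the provider, can keep control of the keys. Below is a hedged sketch using the third-party Python `cryptography` package: data is encrypted before upload, so the provider stores only ciphertext. The key-handling arrangement is an assumption for illustration, not a description of any vendor’s service.

```python
# Sketch of customer-managed-key encryption: encrypt before upload so the
# cloud provider never sees the key. Requires `pip install cryptography`.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # stays with the customer (e.g., on-prem HSM)
f = Fernet(key)

plaintext = b"patient record 4711: ..."
ciphertext = f.encrypt(plaintext)  # upload *this* to the cloud bucket

# Later, after downloading the ciphertext back:
assert f.decrypt(ciphertext) == plaintext
```

Who holds `key` (the customer or the cloud provider) is exactly the bone of contention Fuller describes.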


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/cloud/security-forecast-cloudy-with-low-data-visibility/d/d-id/1330239?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Inmarsat Disputes IOActive Reports of Critical Flaws in Ship SATCOM

Satellite communications provider says security firm’s narrative about vulnerabilities in its AmosConnect 8 shipboard email service is overblown.

Two critical flaws in a shipboard satellite communication platform from British SATCOM firm Inmarsat allow threat actors to take control of the system and potentially attack other networks on a ship, IOActive warned Thursday in a report that Inmarsat disputes.

The vulnerabilities exist in Inmarsat’s AmosConnect 8 (AC8) shipboard email client service and cannot be fixed since the company has discontinued support for the platform, IOActive said in an advisory Oct. 26.

“The vulnerabilities pose a serious security risk,” IOActive said in the advisory. “Attackers might be able to obtain corporate data, take over the server to mount further attacks, or pivot within the vessel networks.”

Inmarsat itself described the report as over-the-top and incorrect. “The story that IOActive have been putting out is very misleading,” a spokesman for the company told Dark Reading. “The service their report focused on is no longer available and cannot be accessed by customers. The theoretical threat they identified would have been very hard to achieve,” he claimed.

Inmarsat’s AC8 platform is a satellite communication system that provides email, instant messaging, and Internet access for crewmembers onboard a ship at sea.

IOActive said it found a Blind SQL injection vulnerability and a backdoor account on AC8 that gives attackers a way to gain complete control of the server. The SQL injection error is present in the login form for the platform and would give attackers access to usernames and passwords stored in plaintext on the underlying server. The second vulnerability involves a backdoor account with full system privileges on the AmosConnect server that an attacker can access via a task manager tool using a hardcoded password in the system.
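
IOActive did not publish exploit code, but both bug classes are well understood. The sketch below is a generic Python illustration, using the built-in sqlite3 module as a stand-in database rather than AmosConnect’s actual code, of how a string-built login query enables injection and how a parameterized query closes the hole.

```python
# Generic illustration of SQL injection in a login form and the standard
# fix. sqlite3 is a stand-in; this is not AmosConnect's actual code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_vulnerable(name, password):
    # BAD: user input is pasted into the SQL text, so input such as
    # "' OR '1'='1' --" rewrites the query and bypasses the check.
    q = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(q).fetchone() is not None

def login_safe(name, password):
    # GOOD: parameterized query; input is bound as data, never parsed as SQL.
    q = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return conn.execute(q, (name, password)).fetchone() is not None

print(login_vulnerable("' OR '1'='1' --", "wrong"))  # True: login bypassed
print(login_safe("' OR '1'='1' --", "wrong"))        # False: rejected
```

The hardcoded backdoor password is a related class of mistake at another layer: a credential baked into the product that customers can neither rotate nor remove.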

The vulnerabilities that IOActive discovered are not directly exploitable over the Internet. An attacker would require access to a ship’s IT networks to take advantage of the vulnerabilities. But attackers who do gain access to the network could use the vulnerabilities to take control of the platform and use it to potentially hop on to other ship networks.

“There are several ways in which an attacker might be able to get access to that network and that highly depends on the architecture of the vessel,” says Mario Ballano, principal security consultant at IOActive and the author of the report issued today. “But typical ways might include WiFi cracking, via malware on BYOD devices, via malware on USB memory sticks, via other vulnerabilities in satellite equipment,” and other ways, he notes.

Typically, the different networks on a ship, such as the navigation systems network, industrial control systems network, IT network, and SATCOM network, are segmented from each other. But sometimes they are not, and AmosConnect could be exposed to another ship network, putting that network at risk as well.

But according to Inmarsat, AC8 is no longer in service. The company said it had begun to retire the platform even prior to IOActive’s report and had in fact informed customers the service would be terminated this July. “Inmarsat’s central server no longer accepts connections from AmosConnect 8 email clients, so customers cannot use this software even if they wished to,” the company claimed.

Inmarsat said that when IOActive informed it of the vulnerabilities in early 2017, the company issued a security patch even though the product was nearing end of life. IOActive, meanwhile, says it found the vulnerabilities in September 2016 and sent a vulnerability report to Inmarsat the following month, and that Inmarsat acknowledged the issues that November.

According to Inmarsat, the vulnerabilities that IOActive disclosed would also have been very difficult to exploit since they require direct access to a shipboard PC running the AC8 email client. “To exploit the flaws an intruder would first need to gain access to the ship and then to the computer. Remote access, while a remote possibility, would have been blocked by Inmarsat’s shoreside firewalls,” the company claimed.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/inmarsat-disputes-ioactive-reports-of-critical-flaws-in-ship-satcom/d/d-id/1330242?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mr. Robot eps3.2_legacy.so – the security review

We’re going back… all the way back to the Five/Nine hack.

Here’s something to get us properly in the mood, as often seen via the very-necessary closed captions for this show:

(brooding music)

There.

WARNING: SPOILERS AHEAD – SCROLL DOWN TO READ ON

 

The Attribution Trap

Since this episode mainly filled in the blanks on what happened to Tyrell for most of season 2 when he was curiously off-screen, there’s not a lot from a technical point of view to cover here. One theme that runs throughout this episode is that seemingly everyone surrounding fsociety was, in fact, explicitly working for Dark Army.

Cisco, Darlene’s networkingly-named boyfriend who she gave the femtocell to modify? He handed it over to Dark Army.

Tyrell? Writing Android malware for Dark Army.

Dom’s boss in the FBI? Getting his hands very bloody for Dark Army.

Leon, Elliot’s sitcom-loving buddy? Informing for Dark Army.

Keeping that in mind, we have more evidence than ever that the massive Five/Nine hack that is fsociety’s claim to fame was, in fact, facilitated behind the scenes entirely by Dark Army – one could even argue that they were solely responsible, not fsociety (but I’ll let you debate that in the comments).

Still, the revelations in this episode underline an important tenet in the murky world of cybercrime: Attribution is hard, a lot harder than people realize.

It’s tempting to want an open-and-shut case when a crime happens. It’s satisfying to point the finger at someone definitively to try to get closure when a hack occurs, but the uncomfortable reality is that correctly identifying the source of an attack can be nigh-impossible.

The reason is simple: It’s easy for skilled hackers to cover their tracks or completely misdirect.

Sometimes a group will take credit for an action it didn’t take; sometimes an attack is unleashed that’s (arguably) not even ready to be deployed; and sometimes – as we saw in this episode – it’s not even clear to the criminal actors involved in an attack who’s really pulling the strings.

This is why many in the information security field are skeptical of attribution claims when a big hack or malware attack occurs, and often why cybersecurity experts push back on legislative proposals for actions (like hack-backs) that hinge on attribution – it’s frighteningly easy to get attribution wrong.

Other notes

  • Seems to be a recurring theme, but we saw a decent amount of social engineering in this episode. Notably, Irving, our favorite social engineering car salesman from episode one this season, got Tyrell to trust him thanks to a tacky coffee cup and a well-told lie about fictional kids. He’s so good at his job it’s easy to forget how dangerous he is.
  • Very briefly we saw the name of the malware Tyrell wrote for the femtocell (which was the star of the show in season 2): android_knox_exploit.rb – likely this was malware targeting the vulnerable KNOX platform in certain models of Samsung Android phones.
  • We got two brief glimpses of Tyrell engaging in some good ol’ fashioned network mapping (of E-Corp infrastructure no doubt). They were probably the neatest hand-drawn network maps I’ve ever seen, though given how meticulous his character is I’m not surprised. I wonder if we’ll see the digital version he made later pop up again this season.
  • The web-enabled baby monitor Tyrell was using to see his child: was he viewing it through the OEM web portal, or did he hack into it? Anyone catch that detail?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OqoV9c_sMs0/

5 paths to a career in cybersecurity

October is National Cybersecurity Awareness Month (NCSAM) and this week’s theme is The Internet Wants You: Consider a Career in Cybersecurity.

We’ve already given you tips on how to get a job in cybersecurity, but we also wanted to prove that it’s possible, demonstrate some of the routes in, and highlight the myriad roles in this industry.

We asked a number of people working in different roles at Sophos how they made their way into cybersecurity.

1. Breaking the enigma of cryptography

Threat Researcher 2, Dorka Palotay

I did my Bachelor’s degree in Mathematics; Algebra and Number Theory were my favourite classes. But we had one particular lecture on the breaking of Enigma in World War II, and from that point on I knew I wanted to study Cryptography. So I gained my Master’s degree in Security and Privacy, specialising in Advanced Cryptography.

In the last semester I had to do an internship. I was looking for a place where I could deepen my knowledge of computer security and use my experience in cryptography. This is how I found Sophos and started working with ransomware. It was here I realised that I really enjoy reverse engineering and fighting against malware, so after my studies I stayed at Sophos and continued my work here.

2. The hacking bug

Senior Information Security Engineer, Luke Groves

I first became interested in Cybersecurity in my teens, if I remember correctly, right around the time I first watched the movie ‘Hackers’ (it’s still a great film!). I soon learnt that Cybersecurity wasn’t quite as flashy as Hollywood portrayed it, but I had the bug. I went on to study Computing at university and afterwards found a job in technical support at a security company I had never heard of called Sophos. That was nearly 14 years ago.

After a number of positions in support I landed a role in the security team, protecting Sophos. My current job as a Penetration Tester involves offensive security, where I ‘attack’ Sophos to try and improve our defenses. It really is great fun and one of the best things about Cybersecurity is that it is always changing – there is always more to learn.

3. Building a service desk from scratch

Senior Manager of IT Support, Nikki Cook

I didn’t take the college or university route but instead joined the Royal Air Force. My trade was Telecommunications, which later evolved into IT and Communications.

I served for just over 4 years in the RAF before becoming an IT Administrator, but it wasn’t what I expected so I began looking for something that would really interest me. My search led me to the position of IT Service Desk analyst. It was in this role that I started to notice I was a natural leader. I got itchy feet, wanting to progress and get stuck in where there were gaps in managing the team around me. I signed up with a recruitment agency who put me forward for a Team Leader position, which offered me the exciting opportunity of building a Service Desk team from scratch.

The service desk proved to be a success and I was promoted to Service Delivery Manager. My role evolved to managing SLAs, project rollouts, third-party provider relationships and budgets, but I felt my career there had reached its peak. Then came a call about a Service Desk Manager role at Sophos. Prior to this I’d heard of Sophos as an anti-virus software provider, but I have to say I wasn’t fully aware of the scale of what the company provided. I am so glad I took that call.

Now I manage the UK Service Desk, which supports internal colleagues with their IT issues. My team are great example setters and regularly run training sessions for the Global Team, I’m really proud of them.

4. Statistics meets computer power

Data Scientist, Hillary Sanders

When I was in college, I had a very nebulous vision of what I wanted to do with my life, despite my much clearer world views. Then I took a statistics class, and it was beautiful. I listened to lectures with a sense of rapture: finally, someone was describing the world in such a way that was elegant, true, and just made sense.

After that class, I tacked on a second major of Statistics and got involved in undergraduate research.  I worked with a graduate student to programmatically track changes between different versions of legislation, and in doing so learned how to program. That was when it really hit: statistics combined with the power of computers is just – well – ridiculously amazing. After college, I spent a few years as a data scientist in San Francisco, and eventually left to take some time off. For fun, I taught myself web development, built a recipe website, and in doing so learned about web security. That got me interested in software security in general, which helped me luck into my current job!

Now I work on the research data science team at Sophos, where we build deep learning models to detect never-before-seen malware, and it’s my favorite job so far. It’s incredibly interesting, the people I work with are fantastic and blocking malware is decidedly non-evil – a big plus for me.

5. Drawing success in IT

Acting Naked Security Editor, Mark Stockley

It’s been a long journey so I’ll focus on the hardest part – getting a foot in the door, any door, of a computer security company.

At school I was training for the rowing team two or three times a day and spent lessons trying to stay awake or looking for something to eat. I drew cartoons obsessively (apart from in art lessons, where displays of artistic talent or enthusiasm were strongly discouraged).

I spent three years at college, specialising in cartoon, comic strip and caricature so that I could get a job as a gym instructor.

After a year of pretending to know more than I did about pecs I did something radical: I had literally nothing that would interest an employer on my CV, so I made the CV itself interesting by turning it into a great big, hand-painted cartoon Ork.

It worked. I got an interview for a job illustrating textbooks and lied through my teeth about being able to use Corel Draw.

A few years before, I’d attended a college lecture by an incredibly grumpy man from a swanky design agency in London. He mentioned he built websites for people. At the time I’d thought, “I didn’t know you could get paid to do that. If somebody this unimpressive can do it, how hard can it be?”

Now I had a job, a desk… and a computer! This was my chance. I’d stay at the office until the cleaners kicked me out, teaching myself MacroMedia Director and HTML on the company computers.

By the time I’d been made redundant for the second time, at twenty-six, I’d taught myself HTML and JavaScript, how to program in Perl, and completed an Open University course in interface design and evaluation.

I decided not to be cowed by redundancy and made a list of everything I wanted from my next job. Top of the list was a job working for a computer security company.

I found a job advert for one on a recruitment site but by this time I’d been burned hard by box-ticking recruiters so I figured out who the company was and applied directly.

You’ve heard from us, but now we’d love to read your cybersecurity journeys in the comments below. And if you’ve had any funny BIT (before IT) roles, tell us on Twitter.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aAlw8jmaIqI/

Google wants you to hack Play Store apps, and it’s paying

Google’s had a rough summer, polluted-apps-wise.

Less than 24 hours after the launch of Android Oreo in August this year – the latest update to its mobile operating system – Google had to pull some 500 apps from its Play Store.

It’s not that all the apps themselves were malicious. Rather, they all used a software development kit (SDK) called Igexin that could, among other things, spy on victims by latching onto otherwise benign apps and downloading malicious plugins. But more to the point, a lot of people picked up a case of Yuck in Google Play. By the time Google scrubbed them, the apps had been downloaded more than 100 million times.

When SophosLabs dived into Google Play to see what sort of nastiness they could pull out, researchers found at least five types of Play Store malware in August 2017 alone, including spyware, banking bots and aggressive adware. Thousands of apps contained these malicious payloads and had infected millions of users.

Google Play isn’t the only Google marketplace that’s been having some trouble with dodgy third-party code.

Earlier in October, Google was also embarrassed when a fake adblocker – one that posed as the massively popular Adblock Plus – wound up sneaking past its security checks, weaseling its way into the Chrome Web Store, Google’s site for third-party Chrome browser add-ons.

The “adblocker” turned out not to be an adblocker at all. Rather, ironically enough, it was adware. It served ads. To people who wanted to block ads.

You can imagine Google gnashing its Googley teeth over that one. At the time, it said it had plans to improve the vetting of its browser extensions:

This app was able to slip through the cracks, but we’ve identified the reason and are addressing it.

More broadly, we wanted to acknowledge that we know the issue spans beyond this single app. We can’t go into details publicly about solutions we are currently considering, but we wanted to let the community know that we are working on it…

Of course, these problems aren’t unique to Google; they turn up everywhere vendors provide walled-garden access to apps, plugins, add-ons or whatever else they call the bits of somebody else’s code you can use to extend their products.

In most cases the security of a walled garden beats not having a walled garden, but keeping the bad stuff out is an ongoing and evolving struggle.

Google’s latest tactic to clean up Dodge is putting its money where its mouth is. It announced on Thursday that it’s launching a bug bounty program for qualifying vulnerabilities found in specific Play Store apps.

Google is partnering with HackerOne, a bug bounty program management website, to offer a $1,000 bonus to researchers who find qualifying vulnerabilities in participating popular Android apps.

The Google Play Security Reward Program page on HackerOne shows which apps are eligible. At this point, the list includes popular apps such as Alibaba, Dropbox, Snapchat, and Tinder. Google says that as more developers opt in, more apps will be listed.

In addition to third-party apps, Google is including its own apps.

Vineet Buch, director of product management for Google Play Apps and Games, said in an interview with Reuters that automatic software scans just can’t match a person’s ability to discover “a truly creative hack.”

Why should Google reach into its own pocket to pay for fixes to third-party apps? Because they’re mucking up the whole space, Buch said:

We don’t just care about our own apps, but rather the overall health of the ecosystem. It’s like offering a reward for a missing person even if you don’t know who the missing person is personally.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PUan4wx2VOk/

EU law bods closer to baking new ‘cookie law’ after battle

MEPs have today voted in favour of moving on with legislation that aims to give users more rights over websites that wish to track them.

The highly anticipated vote came after a week of political wrangling by centre-right MEPs, who have said that the ePrivacy regulation as it stands would “stifle innovation”.

The committee tasked with leading the policy’s progress through parliament passed the rules last week – 31 in favour, 24 against – but opposition MEPs then forced a full vote on the legislation in today’s plenary session of the European Parliament.

The latest vote was again a close-run affair, with 318 for, 280 against and 20 abstentions, but it gives parliament the mandate it needs to begin negotiations with the Council.

The move has been welcomed by privacy campaigners and pro-privacy MEPs, who believe the proposed rules – which will update a directive last amended in 2009 – are necessary to prevent companies from excessively snooping on users.

The proposals extend rules on telcos to over-the-top services like WhatsApp, give users the right to object to being tracked, and ensure that users who reject cookies must still be allowed access to the site.

But debate about the proposals has become an increasingly bitter battle between those who want increased privacy, and the advertising and telco industry.

Opponents argue that it will make it impossible for companies to make money from online services – although the rules relate to tracking, rather than simply online ad sales.

But EU lawmakers who favour the position – including Andrus Ansip, Commission veep for the digital single market – say it will give companies more flexibility while offering users more rights.

Paul Bernal, a privacy and IT expert at the University of East Anglia, told The Reg that the result suggested the parliament had resisted industry lobbying, saying it “has effectively seen through their arguments”.

He added that the position adopted by the Civil Liberties, Justice and Home Affairs committee (LIBE), and now the parliament as a whole, “may even be one of the strongest pro-privacy positions we’ve seen in law”.

“The old argument that privacy is dead has been blown out of the water. Clearly, privacy still matters to people a lot.”

However, some have argued that this means the legislation is not properly balancing consumer and company needs.

“The openness for cookies and avoiding the situation that all our consents should go via big browsers is key for the publishers, especially for small local websites,” said Michal Boni, MEP for the European People’s Party and a member of the LIBE committee.

He urged all parties, groups and institutions to work together on “improving the text, looking for the balanced solution…between innovation and respect of the citizens’ rights and expectations” as it enters the next stage of the EU policy-making trilogue. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/26/eu_lawmakers_move_closer_to_new_cookie_law_after_parliamentary_battle/

A Checklist for Securing the Internet of Things

IoT devices promise endless benefits, but they also come with serious security issues. Use this checklist to make sure your company stays safe.

Hollywood is known for portraying outlandish scenarios. This past summer, The Fate of the Furious depicted scenes in which a cybercriminal controlled thousands of connected cars from an aircraft to create a massive vehicle pile-up on the streets of New York City. While many of the foreboding scenes we see on the big screen will probably never come to life, the number of breaches associated with connected devices is on the rise.

From connected cars to smartphones, some sort of smart device or application links nearly every aspect of modern society. According to Gartner, there will be 8.4 billion connected “things” in use in 2017. Another study from PricewaterhouseCoopers found that more than half of enterprise leaders are not investing in an Internet of Things (IoT) security strategy.

Increasingly, company leaders are seeing the possibilities that IoT provides. A McKinsey report from July found that 92% of executives believe that the IoT will have a positive impact on business over the next three years. Still, many companies are struggling to fully embrace the IoT, in part due to security concerns.

In September 2016, a Mirai botnet launched one of the largest and most disruptive distributed denial-of-service attacks in history, which stalled service to popular websites such as Netflix. With more IoT devices being added each day, more ways to connect are being created and there are more ways for bad actors to exploit vulnerabilities.

And policymakers have recognized these risks. Recently, the U.S. Senate introduced the Internet of Things Cybersecurity Improvement Act of 2017. The bill takes steps toward enforcing stricter cybersecurity regulation for connected devices the government purchases. Similar steps to ensure the security of devices and applications should be taken by private sector enterprises.

Securing the IoT begins with identity management. Every new connected device has an identity that must be authenticated and authorized to protect the security of the device and the networks it touches.

Here’s a checklist for securing IoT:

1. Manage the Device Life Cycle
A company would never knowingly give a former employee access to current corporate data. Likewise, a company should never allow a device to stay on its network after access is no longer needed.

Throughout the life cycle of every device, enterprise IT security teams must manage not only who has access to the device but also what actions the device is allowed to perform at what time. When the device is no longer necessary, the connection should be terminated.
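
As a rough illustration of that discipline, the sketch below models a device registry in which only devices in an active state may connect, and retirement revokes access without erasing the record. The states and names are illustrative, not any specific product’s API.

```python
# Toy device life-cycle registry: a device may connect only while ACTIVE,
# and retiring it revokes access while preserving the audit record.
from enum import Enum

class State(Enum):
    PROVISIONED = "provisioned"
    ACTIVE = "active"
    RETIRED = "retired"

class DeviceRegistry:
    def __init__(self):
        self._devices = {}

    def enroll(self, device_id):
        self._devices[device_id] = State.PROVISIONED

    def activate(self, device_id):
        self._devices[device_id] = State.ACTIVE

    def retire(self, device_id):
        # Revoke rather than delete, so the history survives for audits.
        self._devices[device_id] = State.RETIRED

    def may_connect(self, device_id):
        return self._devices.get(device_id) is State.ACTIVE

registry = DeviceRegistry()
registry.enroll("thermostat-17")
registry.activate("thermostat-17")
assert registry.may_connect("thermostat-17")
registry.retire("thermostat-17")
assert not registry.may_connect("thermostat-17")
```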

2. Monitor Behavior
When it comes to connected devices, it isn’t always clear when a device is compromised. Today, nearly all employees have their smartphones with them at work. These personal devices are often unsecured and could become vulnerable due to malicious applications.

Using risk and behavior analytics, the enterprise can accurately and efficiently monitor how IoT devices are behaving in order to identify whether the device has deviated from its normal limits. Any deviation can promptly signal a compromised device.

We can learn from how the credit card industry addresses fraudulent activity across accounts. Once a transaction is deemed out of the ordinary for the customer’s general spending habits, the credit card company restricts access to the card. This entire process is based on behavioral analytics that determine the amount of risk associated with abnormal behaviors.
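
A toy version of that baseline-and-deviation approach is sketched below: learn a device’s normal traffic volume, then flag readings that stray too far from it. The three-sigma threshold and sample numbers are illustrative only; real systems use far richer behavioral features.

```python
# Toy behavioral baseline: flag a device reading that deviates more than
# three standard deviations from its learned normal traffic volume.
import statistics

baseline = [120, 115, 130, 125, 118, 122, 127]  # bytes/min, normal operation
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(reading, threshold=3.0):
    """True if the reading is more than `threshold` sigmas from the mean."""
    return abs(reading - mean) / stdev > threshold

print(is_anomalous(124))   # False: within normal limits
print(is_anomalous(9500))  # True: possible compromise, e.g. DDoS traffic
```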

3. Authorize Device-User Interaction
The nature of IoT devices encourages interaction between devices and users and between the devices themselves. But each of these interactions must be authorized. This means that security teams must be able to authorize not only which users have access to certain devices, but also authorize the actions those devices are facilitating.
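
A minimal sketch of such a check appears below: before a device carries out an action for a user, both the user-device pairing and the specific action must be on the permitted list. All names are hypothetical.

```python
# Toy device-user authorization table: each (user, device) pairing is
# allowed only an explicit set of actions. Names are hypothetical.
PERMISSIONS = {
    ("alice", "door-lock-1"): {"lock", "unlock"},
    ("bob", "door-lock-1"): {"lock"},  # bob may lock but not unlock
}

def authorize(user, device, action):
    """Allow the action only if this exact pairing permits it."""
    return action in PERMISSIONS.get((user, device), set())

assert authorize("alice", "door-lock-1", "unlock")
assert not authorize("bob", "door-lock-1", "unlock")
assert not authorize("mallory", "door-lock-1", "unlock")  # unknown pairing
```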

4. Authenticate Device Connections
When your family connects to your Wi-Fi router at home, every person uses the same password credentials to gain access. Under this premise, the network believes that every login is the same user.

When it comes to IoT devices, an automated authentication process must be in place to verify a unique identity for each device. In this past year’s Mirai botnet attack, default credentials were used to compromise the network and gain access. If security teams can’t distinguish between devices based on their identity, then they can’t accurately address threats and mitigate risks.
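
One widely used way to give every device a unique, verifiable identity is mutual TLS, sketched below with Python’s standard ssl module: each device carries its own certificate, and the server refuses any connection that cannot present one signed by the fleet’s CA. The file paths and port are placeholders; in practice, per-device keys would be provisioned at manufacture and individually revocable.

```python
# Sketch of per-device identity via mutual TLS: the server demands a
# client certificate signed by the fleet CA, so every connection is tied
# to one specific device. Certificate file paths are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.pem", keyfile="server.key")
context.load_verify_locations(cafile="device-fleet-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED  # no device certificate, no connection

with socket.create_server(("0.0.0.0", 8883)) as sock:
    with context.wrap_socket(sock, server_side=True) as tls_sock:
        conn, addr = tls_sock.accept()  # handshake verifies the device cert
        device = conn.getpeercert()     # identifies *which* device connected
        print("Device connected:", device.get("subject"))
```

Because each certificate names a single device, a compromised unit can be revoked individually, which is impossible with the shared default credentials Mirai abused.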

5. Govern User Permissions
Similar to human access, we need the ability to revoke device access and control the level of risk associated with any given device. This is done by controlling the levels of permissions that authorize users to access connected devices.

Governing user permissions is not a one-step process. Enterprises must be able to govern permissions in real time for security and legal purposes. The use of street cameras across the US has sparked a series of lawsuits over the security of the personally identifiable information stored in the cameras’ data. As IoT devices become more widely used, there will be an increased need for governance to ensure private information doesn’t get into the hands of the wrong people.

With some forecasts putting tens of billions of connected devices in existence by 2020, our approach to device security must evolve. Approaching IoT with identity in mind will make our connected world — and your enterprise — a safer place to be.


With more than 15 years of experience in security and identity management across roles in engineering and architecture, Naresh Persaud is responsible for CA Technologies’ security products. As a solution architect, Naresh has devoted much of his career to following the … View Full Bio

Article source: https://www.darkreading.com/iot/a-checklist-for-securing-the-internet-of-things/a/d-id/1330209?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why Data Breach Stats Get It Wrong

It’s not the size of the stolen data dump that is important. It’s the window between the date of the breach and the date of discovery that represents the biggest threat.

Earlier this month, Yahoo announced that it had drastically underestimated the impact of the data breach it reported in 2016. In December 2016, the company reported that, in addition to its previous breach of 500 million accounts, an additional 1 billion accounts had been compromised in a separate breach. Now, it believes that all of its accounts were compromised — affecting over 3 billion users.

How did Yahoo get it so wrong? And what do these revised breach numbers mean? To understand this, we need to examine the dismal science behind calculating the impact of data breaches.

In one out of four cases, third parties discover the breach, typically after being affected by it or seeing the data distributed on darknets. In other cases, internal investigations discover anomalous behavior, such as a user accessing a database he shouldn’t. Either way, it can be difficult to determine how much data was stolen. When a breach is discovered by third parties, the data that surfaces represents only a sample of what was exposed, and the attacker may have accessed additional data.

In breaches found by internal probes, seeing that an attacker accessed one file or database does not mean other resources weren’t accessed using methods that would appear differently to a forensic investigator (for example, logging in as one user to access one file, while using a different account to access another). There is a great deal of detective work — and estimation — required to describe the scope of a breach.

The round numbers that Yahoo provided illustrate this perfectly. Few consumer companies have sets of user information stored in neatly defined buckets of 500 million and 1 billion. At first, Yahoo likely found evidence to suggest certain data was accessed and had to extrapolate estimates from there. But there is always an element of estimation, or to put it another way, guesswork. Yahoo’s latest statement reads: “the company recently obtained new intelligence and now believes […] that all Yahoo user accounts were affected by the August 2013 theft.” Companies are compelled to make, and revise, educated guesses at each stage to demonstrate control of the situation, and to be as transparent and responsible as possible. So it’s not surprising that these figures often grow.

The Lag on Lag Time
Another challenge that affects a company’s ability to get these figures right is lag time. The breaches Yahoo reported happened years earlier. There was also a significant lag between breach detection and public notice at Equifax — nearly six weeks. Sophisticated attackers are not only adept at finding system vulnerabilities; they typically also have plenty of time to access data carefully and hide their tracks. Add in several years of buffer time for a company’s own analytics to atrophy for forensic purposes (e.g., regularly cleared log files), and detecting unauthorized access and estimating damage becomes even more difficult.

In many ways, lag time is the most critical problem. The days, weeks, and years that pass before breaches are discovered (if they are discovered at all) give attackers all the time they need to extract full value from the data they have stolen. In the case of stolen usernames and passwords, they are used in credential stuffing attacks to compromise millions of accounts at banks, airlines, government agencies, and other high-profile companies. Businesses are starting to follow recent NIST guidelines that recommend searching darknets for “spilled” credentials. But, by then, the original attacker has already used the credentials to break into accounts. At that point, the credentials are commoditized and increasingly worthless.
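
As a concrete example of that kind of check, the hedged sketch below queries the public Pwned Passwords range API, which accepts only the first five characters of a password’s SHA-1 hash, so the password itself never leaves your machine. It requires the third-party `requests` package.

```python
# Check whether a password appears in known public breach dumps via the
# Pwned Passwords k-anonymity range API (only a 5-char hash prefix is sent).
import hashlib
import requests

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("password123"))  # very large count: reject at registration
```

A nonzero result does not identify a victim; it only shows the password has already been spilled and should be rejected or reset.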

So, how should we think differently about data breach statistics? First, we need to remember that huge breaches may have already occurred but remain undiscovered. By definition, we can discuss only the breaches we know about. The more sophisticated the attacker, the greater the likelihood that it will take time to detect their breach. The second thing to remember is that a data breach is like a natural disaster, in that it has follow-on effects throughout the Internet ecosystem and our economy, enabling credential stuffing, account takeover, identity theft, and other forms of fraud. The indirect impact of data breaches is harder to quantify than the scope of the original breaches, and may outstrip the original breach in total harm by orders of magnitude.

The larger a data breach is suspected to be, the more attention it receives. But the scope of the problem is vast and hard to quantify; the projected numbers are just the tip of the iceberg in representing the risk consumers and business users face. It’s not the size of the stolen data dump that we need to focus on. It’s the window between the date of the breach and the date of discovery that represents the real danger zone. This is when cybercriminals are doing the most harm, using stolen data to break into more accounts, steal more data and identities, and transfer funds. The smart move for every corporate user or consumer is to create strong passwords, never reuse them across sites, monitor financial accounts, and be cautious with all data and shared online services.


Shuman Ghosemajumder is chief technology officer at Shape Security, a security company located in Mountain View, California. As one of the largest processors of login traffic in the world, Shape Security prevents fraud resulting from credential stuffing attacks, when breached … View Full Bio

Article source: https://www.darkreading.com/endpoint/why-data-breach-stats-get-it-wrong/a/d-id/1330227?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

30% of Major CEOs Have Had Passwords Exposed

One in three CEOs have had passwords leaked through online services where they registered with a corporate email address.

Thirty percent of CEOs at top global companies have had their passwords leaked through online services where they used their corporate email addresses to register, according to a new study by F-Secure. When a service is hacked and a leader’s password for the service is exposed, it increases the likelihood of targeted cyberattacks.

Researchers studied company email addresses for CEOs representing more than 200 of the biggest companies across ten countries. They discovered 81% of those leaders have had some form of personal information, such as email address, phone number, address, or birthdate, leaked through spam lists and exposed marketing databases.

The most common previously breached services associated with company email addresses were LinkedIn (53%) and Dropbox (18%). Countries with the greatest amount of CEO information exposed include the USA, the UK, and the Netherlands, all at 95%.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/informationweek-home/30--of-major-ceos-have-had-passwords-exposed/d/d-id/1330234?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hop on, Average Rabbit: Latest extortionware menace flopped

As the dust settles from Tuesday’s Bad Rabbit ransomware outbreak, it’s already clear that it is far less severe than the WannaCrypt and NotPetya infections from earlier this year.

Bad Rabbit claimed notable victims including the media agency Interfax and was largely contained in Russia and Ukraine, as previously reported.

According to ESET, 65 per cent of the victims are in Russia, 12.2 per cent in Ukraine. The nasty also hit some other Eastern European countries as well as Turkey and Japan.

Bad Rabbit spread from a network of compromised websites set up by the hackers in preparation for the attack. The dropper, which posed as a Flash Player installer, was downloaded by users when they visited infected websites through a drive-by download (a common hacker tactic). Carrier websites included argumentiru[.]com, which covers current affairs, news and celebrity gossip in Russia and its neighbours, among several others.

Bad Rabbit also attempted to spread to other machines on the same network using worm-like functionality.

Like NotPetya, Bad Rabbit made use of a custom version of the Mimikatz password recovery tool as well as SMB network shares to spread across machines on the same network.

Security experts found that Bad Rabbit did not use EternalBlue – the stolen and leaked NSA-created exploit previously abused by both NotPetya and WannaCry – to spread. Instead it relies on local password dumps, as well as a list of common passwords, in attempts to hop from an infected machine to other Windows PCs.

Once executed, the malicious code acted like traditional ransomware, encrypting files before demanding a ransom to decrypt them – a relatively modest 0.05 BTC (around $280).

Infection attempts ceased and the attacker infrastructure – both 1dnscontrol[.]com, the dropper delivery site, and the sites containing the rogue code – was taken offline around six hours after the ransomware began spreading, according to researchers at Cisco Talos.

Since the attack originated in Russia, by the time the US had woken up, Bad Rabbit had already been blocked by signature-based antivirus and identified by products that rely on generic or behaviour-based malware detection.

CrowdStrike’s analysis found that Bad Rabbit’s and NotPetya’s DLLs (dynamic link libraries) share 67 per cent of the same code, prompting speculation that the same group might be behind both attacks. This attribution is sketchy at best: Bad Rabbit is similar to NotPetya in that both are based on the earlier Petya ransomware, but major portions of the code appear to have been rewritten.

Recovery of infected machines might be difficult but not impossible. Some experts reason that the intent may have been disruption rather than the profit-making cybercrime associated with ransomware strains such as Locky.

“Bad Rabbit appears to be a disruption campaign designed to look like a ransomware campaign, similar to NotPetya and WannaCry,” commented Allan Liska, senior solutions architect at threat intel outfit Recorded Future. ®

Bootnote

The hackers behind the ransomware seem to be fans of Game of Thrones as the source code contains references to dragons from the popular TV series (Drogon, Rhaegal and Viserion). The as-yet unidentified crooks also allude to a human character, “GrayWorm”, as the product name for the .exe file.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/26/bad_rabbit_post_mortem/