STE WILLIAMS

Departing MI5 chief: Break chat app crypto for us, kthxbai

British spies are once again demanding that tech companies break their encryption so that life is made easier for state-sponsored eavesdroppers.

The head of the domestic spy agency, Sir Andrew Parker, demanded that companies such as Facebook compromise the security of their messaging products so spies could read off the contents of messages at will.

Although Sir Andrew linked this need to serious crimes such as terrorism, the principle of a technical backdoor is that once open to spies, it’s open to anyone who knows it exists.

Calling the world of encrypted messaging apps a “Wild West” that is “inaccessible to authorities”, Sir Andrew told ITV in a pre-recorded interview: “Can you provide end-to-end encryption but on an exceptional basis – exceptional basis – where there is a legal warrant and a compelling case to do it, provide access to stop the most serious forms of harm happening?”

In the interview, summarised by ITV itself as well as other news outlets, Sir Andrew also claimed that MI5 is not interested in the products of dragnet mass surveillance. He told the broadcaster: “We do not approach our work by population level monitoring – looking for, you know, signs of: ‘Out of this 65 million people, who should we, you know, look a bit more closely at?’ We do not do that.”

On a technicality, he may be right: that role is mainly reserved for GCHQ, which does the dirty work of automated spying on the entire population of Britain, as the Snowden revelations confirmed in 2013. Having “collected” everyone’s online conversations and trawled through them for snippets of interest, GCHQ passes the highlights to MI5 and overseas UK spy agency MI6.

The tension between frictionless reading of criminal suspects’ messages and protection of freedoms in the digital era is one where the English-speaking world outside the US has become angrier and angrier with American tech firms, which politely refuse to compromise their products. In Australia this public sector anger boiled over into outright denial of mathematics, with technically illiterate politicians convincing themselves that shouting “Make it so”, Star Trek-style, can create a technical means of letting police and spies read your messages whilst shutting out everyone else.

Current UK home secretary Priti Patel is firmly anti-encryption, with the social conservative having banged on about paedoterrorists shortly after her appointment last summer.

A GCHQ plan to silently add the government as an authorised “third user” to online conversations, whose sole merit was that some actual thought and technical knowhow had been put into it, was dismissed last year by an international coalition of tech companies and big infosec names. The main tension between privacy activists and state security agencies is that the latter prefer the ease of dragnet surveillance over applying for judicial permission to target individuals on a case-by-case basis. Privacy activists say a lack of per-case controls leads to innocents being wrongly caught up in surveillance.

MI5 was found by a secretive British spy court in 2018 to have been breaking the law for years. Thanks to the unique way in which MI5 is subject to the law, neither the agency nor any individuals associated with it were held accountable. The Investigatory Powers Tribunal’s (IPT) judges were all but falling over themselves to tell MI5 it would be walking free from court.

A year later the same court granted MI5 de facto immunity from the law, presumably to apologise for its previous public ruling. Judges drew a line between a newly devised legal “power” to commit crimes in direct defiance of the law and “immunity from prosecution.” Apparently one doesn’t equal the other, though even the IPT was too embarrassed to explain in its published judgment why that is.

Sir Andrew is stepping down in April, along with National Cyber Security Centre founding chief Ciaran Martin, whose service ends at some point this summer. Both their replacements will be appointed by the current government.

Sir Andrew Parker’s full interview is due to be broadcast on ITV’s Tonight programme tomorrow. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/26/mi5_chief_itv_interview/

5 Ways to Up Your Threat Management Game

Good security programs start with a mindset that it’s not about the tools, it’s what you do with them. Here’s how to get out of a reactive fire-drill mode with vulnerability management.

The basis of a good security program is a mindset that it’s not about the tools, it’s what you do with them. This mindset is most evident when critical vulnerabilities are released and everyone scrambles to mitigate exploitation.

Most recently, we saw this following the release of the latest critical Windows vulnerability (CVE-2020-0601 and others), which some folks have nicknamed CurveBall. The vulnerability affects the Windows CryptoAPI and how Windows handles elliptic curve cryptography (ECC) as part of that service. Microsoft also released fixes for two Remote Code Execution (RCE) bugs that are equally important.
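The core of CurveBall is a verifier that matches a trusted certificate by its public key while accepting attacker-supplied curve parameters, including the generator. The toy model below (plain modular exponentiation standing in for elliptic curve math; all names and numbers are invented for illustration, not taken from Windows code) shows why such a check is forgeable:

```python
# Toy model of the CurveBall (CVE-2020-0601) verification flaw.
# Real ECC is replaced by exponentiation mod a prime; the logical
# parallel is that a verifier which doesn't pin the generator can be
# satisfied by an attacker who supplies their own generator.

P = 2**127 - 1           # Mersenne prime modulus (toy scale, illustrative only)
G = 5                    # the standard, agreed-upon generator

def public_key(generator: int, private_key: int) -> int:
    """'Scalar multiplication' stand-in: generator ** private_key mod P."""
    return pow(generator, private_key, P)

def flawed_verify(trusted_pub: int, generator: int, private_key: int) -> bool:
    """Buggy check: compares public keys but trusts the caller's generator."""
    return public_key(generator, private_key) == trusted_pub

def fixed_verify(trusted_pub: int, generator: int, private_key: int) -> bool:
    """Correct check: the generator must be the pinned standard one."""
    return generator == G and public_key(generator, private_key) == trusted_pub

# A legitimate CA key pair using the standard generator G.
ca_private = 0xDEADBEEF
ca_public = public_key(G, ca_private)

# Attacker picks generator' = ca_public and private_key' = 1, so that
# generator' ** 1 == ca_public and the flawed check passes.
assert flawed_verify(ca_public, generator=ca_public, private_key=1)
assert not fixed_verify(ca_public, generator=ca_public, private_key=1)
```

The fix, in essence, was to stop trusting caller-supplied curve parameters for cached trusted certificates.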

It’s critical that companies get out of a reactive fire-drill mode and work toward cyber resiliency. Here are five recommendations for getting there.

Develop a VTM Strategy
One of the most important business strategies for a security program should be around vulnerability threat management (VTM). VTM strategies should include effective, timely, and collaborative reporting of actionable metrics. Avoid simple items such as the number of vulnerabilities on Windows systems and focus on meaningful items such as remediation rates of exploitable vulnerabilities on critical systems.

It’s important to keep in mind that VTM is a culture and an operational mindset. An effective VTM program should be implemented in concert with the larger security operations organization to mitigate threats and reduce the overall attack surface available to threat actors. It goes beyond scanning for vulnerabilities and telling IT ops to “not suck at patching.”

I recommend splitting your VTM strategy into two phases: detection and response. Detection aims to ensure effective, risk-based reporting and prioritized vulnerability mitigation by gathering all your data, validating the results, and applying business risk context. Automation can make this process easier. Further, using the Observe-Orient-Decide-Act (OODA) loop continually reduces the time it takes to locate and inform IT ops and development teams where corrective action needs to take place.
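As an illustration of the detection phase, here is a minimal risk-based prioritization sketch (the field names and weights are invented for this example, not taken from any particular scanner or from the article):

```python
# Minimal sketch of risk-based vulnerability prioritization. Field
# names and weighting factors are hypothetical.

def risk_score(finding: dict) -> float:
    """Weight raw CVSS by asset criticality and known exploitation."""
    score = finding["cvss"]                   # base severity, 0-10
    score *= finding["asset_criticality"]     # e.g. 1.0 normal, 2.0 critical
    if finding["exploit_available"]:          # a public exploit exists
        score *= 1.5
    return score

def prioritize(findings: list[dict]) -> list[dict]:
    """Order the remediation queue, highest risk first."""
    return sorted(findings, key=risk_score, reverse=True)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "asset_criticality": 1.0, "exploit_available": False},
    {"id": "CVE-B", "cvss": 7.5, "asset_criticality": 2.0, "exploit_available": True},
    {"id": "CVE-C", "cvss": 5.3, "asset_criticality": 1.0, "exploit_available": False},
]

for f in prioritize(findings):
    print(f["id"], round(risk_score(f), 1))
```

Note how the medium-severity bug on a critical, exploited asset outranks the raw CVSS 9.8: that is the difference between counting vulnerabilities and reporting actionable metrics.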

Response is where the rubber meets the road and where many of us pass the work to other business units to assist in applying patches or hardening systems. To that end, ensure the correct solution (mitigation or corrective action) is recommended by the VTM team and that the agreed-upon solution has been tested and won’t break production.

In deploying the solution, it’s critical that IT ops and development get prioritized patching and that we provide as few false positives as possible. Trust is earned through transparency and repetition, but it can be destroyed through bad data in an instant.

Know Your Inventory
Knowing where your assets are and who owns them is the basis of an effective and efficient VTM program. Inventory management is a common struggle, partially because VTM teams use a combination of sources to identify where assets live. There are widely available tools to automate and integrate inventory systems so you can avoid time-consuming inventory pulls or maintaining manual spreadsheets. I also recommend partnering with the leaders across your business lines to ensure that when new systems are spun up, the VTM program is effective.

Implement, Then Continually Improve
Don’t wait for the sky to fall to realize that you needed to practice. Just like any other part of an effective security organization, your VTM program should constantly improve. I’ve been a big fan of OODA loops for years.

They are highly effective when leveraged to continually improve an operational program where every initial Observation exits the loop with an Action to adjust the next Observation. If you’ve seen the same thing twice, you’re failing. Leverage cyclical processes to continually improve VTM operations and continually measure your own effectiveness.

Step Up Your Vendor Management
While we cannot simply run vulnerability scans or penetration tests against our vendors, we can put contractual obligations in place requiring vendors that have access to our sensitive data to secure it appropriately.

Rights to audit are key in any contract. I see many large financial institutions conducting audits on client programs. It’s a great way to validate how effective a program is, but keep in mind that it’s also very expensive to operationalize.

Finally, don’t be shy in working with your vendors. Build relationships with their security and IT organizations so that when a critical vulnerability is released, you know whom to call, and it’s also not the first time you have spoken.

Build a Professional Network
When I first entered the security field several decades ago, collaboration between security organizations in different companies was taboo. Today, it’s required. This sounds simple but is key: As a CISO or security leader, you must have an external network of peers to collaborate with. We must put egos aside and ask each other simple questions around the common problems we all face.

The release of new security vulnerabilities is only going to continue in the coming weeks and months. The most successful (and secure) companies will be able to look outside their network for actionable information and develop internal strategies to stay ahead of increasing threats.


Wayne Reynolds is an Advisory CISO for Kudelski Security, where he works with executives and program leaders to help businesses drive security programs to align with the business and maximize proactive threat mitigation to best serve the enterprise as a … View Full Bio

Article source: https://www.darkreading.com/5-ways-to-up-your-threat-management-game/a/d-id/1337087?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Kr00k Wi-Fi Vulnerability Affected a Billion Devices

Routers and devices with Broadcom and Cypress Wi-Fi chipsets could be forced to sometimes use encryption keys consisting of all zeroes. Now patched, the issue affected a billion devices, including those from Amazon, Apple, Google, and Samsung.

RSA Conference 2020 – San Francisco – A vulnerability in the way that two Wi-Fi chipsets handled network interruptions and encryption keys could have given attackers the ability to decrypt some of the network packets sent by more than a billion common wireless devices and routers — including those from Amazon, Apple, and Samsung, security firm ESET said at the RSA Conference on Wednesday.

Found in late 2018, the vulnerability — dubbed Kr00k and assigned CVE-2019-15126 — can force part of the wireless communication between devices to use all zeroes for the encryption key, allowing the attacker to eavesdrop on a limited amount of wireless data. The National Vulnerability Database assigned the vulnerability a base score of 3.1, which makes it low severity.

“If an attack is successful, several kilobytes of potentially sensitive information can be exposed,” Miloš Čermák, the lead ESET researcher into the Kr00k vulnerability, said in a statement. “By repeatedly triggering disassociations, the attacker can capture a number of network packets with potentially sensitive data.”

The vulnerability expands on research that ESET has conducted over the past 18 months on home Internet of Things devices. In October 2019, the company revealed that older Amazon Echoes and Kindles were vulnerable to the Key Reinstallation Attack, or KRACK — a 2-year-old issue that allows bad actors to perform a man-in-the-middle attack.

The latest attack affects far more devices — more than a billion, according to ESET, including older Amazon Echo devices and Kindles, Apple iPad mini 2, various older Apple iPhones, the Apple MacBook Air, and various Google Nexus smartphones. At least two models of the Samsung Galaxy are affected, as are Raspberry Pi 3 and the Xiaomi Redmi 3S.

In addition, Wi-Fi access points from Asus and Huawei are affected as well, potentially leaving some device communications exposed even after a client device has been patched, so long as the access point itself has not, ESET stated in its analysis of the issue.

“This greatly increases the attack surface, as an adversary can decrypt data that was transmitted by a vulnerable access point, which is often beyond your control, to your device, which doesn’t have to be vulnerable,” Robert Lipovský, an ESET researcher working with the Kr00k vulnerability research team, said in a statement.

Related to the KRACK vulnerability discovered in 2017, the vulnerability affects both WPA2-Personal and WPA2-Enterprise protocols. ESET first reported the vulnerability to Amazon more than a year ago, and with further research, discovered in July 2019 that the Cypress Wi-Fi chipset was the source of the security issue. In August, the company confirmed that the widely used Broadcom Wi-Fi chipsets were vulnerable as well.

The attack abuses an implementation flaw in the chipsets. When a Wi-Fi connection is lost — either because the device moves to a different network or due to interference — the device is “disassociated” from the network. The problem for the affected chipsets is that disassociation causes the encryption key to be zeroed out, after which any buffered data is sent using that all-zero key. By repeatedly disassociating the device from the network, an attacker can capture several data frames.
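To see why an all-zero key is fatal, consider this simplified stand-in for the real cipher (Wi-Fi actually uses AES-CCMP; a hash-based keystream is used here purely so the sketch is self-contained). An eavesdropper who merely assumes the zero key recovers the plaintext:

```python
# Simplified illustration of the Kr00k failure mode. Real Wi-Fi uses
# AES-CCMP; a SHA-256 counter-mode keystream stands in here so the
# sketch is stdlib-only. The point is the key, not the cipher.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Deterministic keystream derived from key + nonce + counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR stream cipher: the same function encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

ZERO_KEY = bytes(16)       # what the buggy chipset falls back to after disassociation
nonce = b"\x01" * 8        # illustrative nonce

frame = encrypt(ZERO_KEY, nonce, b"session cookie: secret")

# The attacker never learned any secret key -- they simply assume the
# all-zero key that every vulnerable device reverts to.
recovered = encrypt(ZERO_KEY, nonce, frame)
print(recovered)
```

Because the "key" is a public constant, confidentiality collapses for whatever frames sit in the transmit buffer at disassociation time, which is why the leak is limited to a few kilobytes per trigger.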

“Kr00k manifests itself after Wi-Fi disassociations, which can happen naturally, for example due to a weak Wi-Fi signal, or may be manually triggered by an attacker,” ESET’s Čermák said.

For more than a year, ESET has worked with device makers to create a fix for the issue and patch devices. Most users should be safe, but they should check that their devices have the most recent software updates, the company said. Even though the source of the bug is in the Wi-Fi chipsets, their behavior can be modified through firmware. Patches for all affected devices made by major manufacturers have been released, ESET stated.

“To protect yourself, as a user, make sure you have updated all your Wi-Fi capable devices, including phones, tablets, laptops, IoT smart devices, and Wi-Fi access points and routers, to the latest firmware version,” Lipovský said.

The company urged device manufacturers that may not yet have patched their devices to reach out to their chipset vendors for more information.



Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/kr00k-wi-fi-vulnerability-affected-a-billion-devices/d/d-id/1337151?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Next-Gen SOC Is On Its Way and Here’s What It Should Contain

The next-gen SOC starts with the next-gen SIEM, and Jason Mical of Devo Technology and Kevin Golas from OpenText talk about what capabilities are required, including threat hunting and greater automation, and how security professionals should exploit the tools.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/next-gen-soc-is-on-its-way-and-heres-what-it-should-contain/d/d-id/1337157?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Open Cybersecurity Alliance Releases New Language for Security Integration

OpenDXL Ontology is intended to allow security components to interoperate right out of the box.

The Open Cybersecurity Alliance, an industry consortium working to provide a common framework for security technology, has announced its first release, OpenDXL Ontology, a language for helping security tools interoperate with minimal custom integration.

OpenDXL Ontology, contributed by McAfee, is an open source language based on the Open Data Exchange Layer (OpenDXL), an open messaging framework. OpenDXL Ontology is available for collaboration and open development on GitHub. The underlying OpenDXL framework has already been adopted by more than 4,100 vendors and organizations as part of their security integration environments.




Article source: https://www.darkreading.com/open-cybersecurity-alliance-releases-new-language-for-security-integration/d/d-id/1337153?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Commonsense Security: Leveraging Dialogue & Collaboration for Better Decisions

Sometimes, good old-fashioned tools can help an enterprise create a cost-effective risk management strategy.

Mitigating risks related to security threats and vulnerabilities can be a tricky business. What do you prioritize? Where’s the cutoff in terms of how many tools and services you should use? What vulnerabilities might remain even after you’ve taken action?

There are also budget considerations, and for many organizations, a major shortage of available security skills to help address the growing number of threats. In fact, research from (ISC)² estimated a shortfall of more than 4 million cybersecurity experts worldwide, with 51% of respondents saying their organizations were at moderate to extreme risk because of the shortage.

To address these challenges effectively, we all need to take a more commonsense approach to security. Sometimes, honest dialogue and collaboration can help an enterprise create a cost-effective, real-world security posture. And sometimes, the commonsense answers are right in front of us, if only we take the time to look for them and act on them.

Let’s look at a few examples of how this approach works.

Assessing the Risks
A small startup company might have a budget that only supports $1 million per year for cybersecurity tools, services, and one dedicated security employee. But the team responsible for IT acknowledges that this approach will not be sufficient, given the growing security threats facing the company.

Rather than just making do with less and hoping for the best, the team takes a proactive, collaborative approach and explains the possible risks to the company’s senior leadership and board.

If the board assesses the situation and concludes that the risks are reasonable, it can approve the current strategy. Or it might say the risks are unacceptable and recommend doubling the budget and committing two staffers to security.

For example, I was on the board of a large local business in Oregon. Company officials were debating whether to restrict the corporate website to just local web traffic in order to reduce the risk of an attack.

However, when it was pointed out that local businesses were responsible for only half of the company’s overall revenue, officials agreed it did not make sense to restrict traffic. Instead, they devoted more budget to securing the company and its website.

Another example from my own company illustrates this point. We used to offer a “freemium” product, a free, limited version of our software that’s great for generating leads. But we soon realized we were putting too many resources into managing this portion of the business. We also saw that the security exposure was too great and the strategy could backfire, hurting our reputation.

We decided to discontinue the version, redirecting the budget to marketing to attract enterprise customers, and ended up with much better results.

In another instance, I witnessed a CIO and CTO team face a ransomware attack. Over the course of a few hours on a Sunday, many computers used by the research and development team were compromised. All the data on these systems was encrypted as a result of the attack.

The attacker left a readme text file on a user’s desktop, stating the files had been encrypted, and to decrypt, the user had to acquire a tool. That would entail sending an email including the user’s personal identification, receiving a free test for decrypting a few files and then being assigned a price to recover the balance.

After receiving instructions from the attacker on how to pay for the decryption tool and then making the payment, the user would receive it. The message ended with a warning: “Do not try to do something with your files by yourself. You will [break] your data!!! Only we [can] help you!”

The good news for that particular team was that the company had a practice of separating production systems from R&D computers, and it had all its data backed up to the cloud on an hourly basis.

These two basic but strong measures allowed the IT organization to ignore the attack and recover 90% of its data from backup in less than 24 hours. And because of the practice of separating production and R&D systems, the company’s production was not harmed in any way.

That’s how common sense works with cybersecurity. Protecting systems and data doesn’t have to be complicated or involve going through a long chain of command to get approvals. It’s often a collaborative process that involves clear explanations of the problems and how they can be solved.

Training’s Role
Providing training for employees so they can recognize and handle potential threats is critical, and gamifying the experience helps retention. For instance, cyber ranges simulate complex IT environments that provide hands-on experience with real-world scenarios. Through these, learners can be challenged to handle realistic threats with the same tools they would use on the job. This interactive training approach has proven to be a strong proactive solution for mitigating risks.

Better training can help organizations teach employees to avoid risks and traps, such as falling prey to phishing attempts, using unprotected external devices, and installing unsafe software.

There’s never been a more important time for organizations to practice commonsense security and emphasize collaboration among stakeholders. New threats and vulnerabilities are emerging all the time, yet many companies are grappling with limited security budgets and the ongoing cybersecurity skills gap.

By being proactive about security and fostering an open, clear dialogue about threats and how to address them, companies can better protect their information assets. It’s common sense.



Dr. Zvi Guterman co-founded CloudShare in 2007. He previously co-founded and served as CTO at Safend, a leading endpoint security company, and performed as a chief architect in the IP infrastructure group of ECTEL, a leading provider of monitoring solutions for IP, … View Full Bio

Article source: https://www.darkreading.com/risk/commonsense-security-leveraging-dialogue-and-collaboration-for-better-decisions/a/d-id/1337089?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Prevent an AWS Cloud Bucket Data Leak

Misconfigured AWS buckets have led to huge data breaches. Following a handful of practices will help keep you from becoming the next news story.

(image by conceptualmotion, via Adobe Stock)

When it comes to public cloud services, there’s Amazon’s AWS and then there’s everyone else. With an estimated 39% market share, it’s a rare cybersecurity professional who won’t encounter AWS at some point in their career. And there’s no question that protecting data in AWS begins with understanding bucket security.

By the simplest definition, a bucket is a place to put stuff. You can think of a bucket as a directory or a folder (depending on your point of reference), but there are two key things to know about any bucket you create. First, unlike many cloud resources, a bucket exists somewhere — you specify the geographic region where your bucket will sit. This is important if particular regulations and jurisdictions are important to you.

Next, there are a ton of ways to get stuff into and out of your bucket. This makes buckets incredibly useful — and also presents the most obvious security risks for bucket use.

Fortunately, from Amazon and a number of other sources scattered around the Internet, there are a handful of crucial practices that, if followed, will keep your data from sloshing out of its bucket and landing all over the web.

The cautionary tale

Recent cloud bucket data leak catastrophes like the Capital One breach show that both cloud users and cloud service providers like AWS have roles to play in security.

Capital One “gives us a very good example of how dangerous an improperly configured cloud environment can be,” said John Burke, senior IT security engineer for Iberia Bank, in a recent Dark Reading webinar. “They’ve been a great example of a company that is cloud-first in everything that they do … So they’re a good example of what you can do in the cloud, but they’re also a good example of what can happen if things go wrong. … The take-away from it is that you really have to know how all of this is configured in order to protect it correctly.”  

“Just because misconfigurations is a customer problem doesn’t mean that the [cloud service provider] vendors can’t play a role there,” says Cloud Security Alliance VP of Research John Yeoh. “Are they making sure that we have safe and secure default configurations? Are they giving us notifications? The bells should be ringing if we have sensitive information in a folder that’s publicly exposed.”

Fortunately, Amazon and other cloud platform providers are now doing some of the things Yeoh suggests. Here’s what you need to know about the opportunities and the pitfalls.

Public means public

If you set up your first Amazon bucket and simply use the default configuration, you will create a reasonably secure place to leave your data. By default, an S3 bucket is private: to allow anyone (or everyone) to access your data, you have to explicitly grant them permission to do so. (This was not always AWS’s default, but was an improvement made by the company for security purposes.)  

The thing is, if you want to grant access to an individual, you have to know a key piece of data about them — you have to know their user ID within the AWS environment. “Bob from accounting” is rarely, if ever, a valid user ID, so many users just set the access permission to “public” and hope that no bad people (or processes) will find and access their data. Most of us will have heard that hope is not a plan. It’s a darned poor access control list (ACL) as well.

There are levels of “public” access that can be granted — from simple read privileges all the way up to full bucket administration. It’s a sliding scale of badness from a security perspective, but even at its most basic the lesson is quite clear: Unless you really, truly, want anyone with a computing device to be able to at least see your data, don’t set the bucket access to “public,” and don’t allow anyone else to do so either.
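One way to audit for this is to inspect each bucket’s ACL for grants to the predefined “all users” groups. The sketch below runs the check against a hand-built dict shaped like the response boto3’s get_bucket_acl() returns, so no AWS credentials or real bucket are assumed:

```python
# Sketch: flag public grants in an S3 bucket ACL. The sample dict
# mimics the shape of boto3's get_bucket_acl() response; in practice
# you would pass that response in directly.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl: dict) -> list:
    """Return the permissions granted to the world (or to all AWS users)."""
    return [
        g["Permission"]
        for g in acl.get("Grants", [])
        if g["Grantee"].get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]

sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}

print(public_grants(sample_acl))   # any output here means the bucket is public
```

Anything this check returns, at any privilege level, is a spot on that sliding scale of badness worth a second look.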

“Modern authentication is not a simple switch to enable,” cautioned Burke. “So configuration of it must be done correctly or malicious actors are going to figure out a way to get around it.” He also recommended enabling multi-factor authentication for cloud instances and applications wherever it is available.

Policies, policies

There are policies that govern human behavior and policies that govern bucket behavior. Let’s begin by looking at the former and then discuss how those can have an impact on the latter.

One of AWS’s great virtues is the ease with which buckets and assets can be created and configured. This is one of its greatest vulnerabilities, as well, because individuals with no security training (and little security awareness) can create, configure, and manage buckets along with the data stored in them. Those individuals need policies to guide them toward safe digital shores.

One of the most important policies is this: Public is public, private is private, and never the twain should meet. In practice, this means that any objects (data) that will be made public should be in a public bucket. Any objects in a private bucket should themselves be private. Huge problems can arise when private and public mix and mingle because it’s all too easy to encounter the accidentally public object under those circumstances.

Next up should be a policy on who gets to set up buckets and put objects into those buckets. The key here is that individuals should only be allowed to move into an S3 bucket data for which they are responsible. The point isn’t that security should have an easy individual to blame should something go wrong, but that any individual creating and populating a bucket should feel a sense of responsibility for the data and its fate.

Finally, there should be policy-based reviews of the public/private labeling of both objects and buckets on a regular basis. There have been too many stories of buckets set to “public” for web application development or third-party partner access reasons, then left in that configuration after the project ended, to feel comfortable with the idea of leaving a configuration in an unexamined state. Mandatory status reviews at the end of projects and encounters, and on a regular calendar basis, will go a long way toward limiting the damage of unintentional public status.

Configuration matters

And then there are the questions of how individual buckets and their objects are configured. It begins with how access is configured and goes on from there. Depending on your application and organizational needs, access can be controlled through bucket policies, identity and access management (IAM) policies, and ACLs. Each has its place, but the goal should be to simplify configuration as much as possible. Keep public and private separate (see above), use ACLs when small populations need access, and make use of groups whenever possible. These steps will help keep the access configuration as uncomplicated as possible.
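A simple automated review of the first mechanism, bucket policies, is to flag Allow statements with a wildcard principal. This sketch parses a hypothetical policy document (the bucket name is invented); the JSON structure follows AWS’s IAM policy grammar:

```python
# Sketch: scan an S3 bucket policy document for statements that allow
# access to a wildcard principal (i.e., anyone on the Internet).
import json

def open_statements(policy_json: str) -> list:
    """Return Allow statements whose Principal is '*'."""
    policy = json.loads(policy_json)
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
            flagged.append(stmt)
    return flagged

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-bucket/*"},
    ],
})

for stmt in open_statements(policy):
    print("world-readable:", stmt["Action"])
```

A wildcard principal is sometimes intentional (static website hosting, say), so the output is a review queue, not an automatic verdict.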

When you have access figured out, encrypt the contents of the bucket. AWS offers both its own encryption and a system for managing external encryption keys. Whichever you choose, make sure that an access breach will return nothing more than useless data to the attacker.

And as with all data and application information, keep a log of changes and access. AWS offers both types of logging through its CloudTrail service. While a full transaction log could quickly get unwieldy for a large public bucket, change logs are a must for all buckets and transaction logs are critical for sensitive data stored in buckets. Whether you use an AWS service like CloudWatch or an external service to analyze the logs, having the ability to track changes and activities will go a long way toward limiting the damage that configuration challenges can cause.
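A minimal change-log review along these lines could scan CloudTrail records for permission-altering S3 calls. The event names below are real S3 API operations, but the sample records themselves are invented:

```python
# Sketch: flag ACL and policy changes in CloudTrail event records.
# Record shapes mirror CloudTrail's JSON; the sample events are
# hypothetical.
RISKY_EVENTS = {"PutBucketAcl", "PutBucketPolicy", "DeleteBucketPolicy",
                "PutBucketPublicAccessBlock"}

def flag_changes(records: list) -> list:
    """Return (eventName, user) pairs for permission-changing S3 calls."""
    return [
        (r["eventName"], r.get("userIdentity", {}).get("userName", "unknown"))
        for r in records
        if r.get("eventSource") == "s3.amazonaws.com"
        and r["eventName"] in RISKY_EVENTS
    ]

records = [
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject",
     "userIdentity": {"userName": "app-reader"}},
    {"eventSource": "s3.amazonaws.com", "eventName": "PutBucketAcl",
     "userIdentity": {"userName": "dev-intern"}},
]

print(flag_changes(records))
```

Routing hits like these into an alerting pipeline turns the change log from an after-the-fact forensic tool into an early warning for the unintentional-public problem described above.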

“There are some new companies out there now that are helping organizations to make sure that their AWS and cloud environments are secured and configured correctly,” said Burke. He also recommends a close look at cloud access security brokers (CASBs) for help. “CASBs will help you and the security team detect things like password spray attacks, anomalous activity, login attempts from foreign or hostile nations. They can also help identify other cloud applications, so if your employees are using cloud apps that may or may not be sanctioned…you can use a CASB to identify those.”

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, and INsecurity, among others.

Article source: https://www.darkreading.com/application-security/database-security/how-to-prevent-an-aws-cloud-bucket-data-leak--/d/d-id/1337093?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Taking a GPS tracker off your car isn’t ‘theft,’ court rules

A suspected meth dealer is off the hook for at least one of the charges he’s facing: that he “stole” the GPS device that police stuck on his car to track his movements.

That’s what the supreme court in the US state of Indiana ruled last week. On Thursday, Chief Justice Loretta Rush handed down an opinion with which four justices concurred: that affidavits accompanying warrants had failed to establish probable cause that the suspect – Derek Heuring – had stolen the tracking device placed on his SUV by police who suspected he was dealing methamphetamine.

The tracker had been streaming out Heuring’s location data for six days. Then, it abruptly stopped. For 10 days, police couldn’t track their target’s movements. A technician with the GPS manufacturer said that the “satellite was not reading,” which may have been caused by the device having been unplugged and plugged back in.

It needed a hard reset, the technician said, so a deputy tried to retrieve the device from Heuring’s Ford Expedition.

It wasn’t there. So police applied for warrants to search both Heuring’s home and his father’s barn, where they suspected that Heuring had put the GPS device. They argued that the disappearance of the small, inconspicuous, plain black plastic box meant that Heuring had stolen it.

While they searched for the tracker, police came across drugs, drug paraphernalia, and a handgun, according to court documents. Each time they found contraband, they sought and obtained warrants to search the house and barn for narcotics – and in the course of those searches, they found the GPS device. Heuring was arrested and charged with several offenses, but he moved to suppress the evidence, arguing that the initial warrant was issued without probable cause that evidence of a crime – the theft of the device – would be found.

Last year, a trial court denied Heuring’s motion, but the supreme court showed last week that it was more sympathetic to his lawyers’ rationale. Without probable cause for the initial warrant, any evidence that turned up in subsequent searches would have to be suppressed, the supreme court decided.

Removing a small, unmarked plastic box from your personal vehicle is most certainly not probable cause that you committed a crime, Heuring’s lawyers argued. How would their client have known what that device was or who it belonged to, and why would he have left it on his car? How would he be expected to know that the unmarked box belonged to the police?

Justice Steven David, from oral arguments that took place in November:

I’m really struggling with how is that theft.

What Rush wrote in her opinion:

We agree with Heuring. The initial search warrants were invalid because the affidavits did not supply probable cause that the GPS device was stolen.

Police were just going by “speculation” or “a hunch” that Heuring stole the device, she wrote, which “fall far short of showing probable cause.”

What the affidavits show, at most, is that Heuring may have been the one who removed the device, knowing it was not his – not that he knew it belonged to law enforcement.

The upshot: at least if you live in Indiana, the Hoosier state, your right to peel mysterious devices off your car is looking pretty healthy. From the opinion:

To find a fair probability of unauthorized control here, we would need to conclude Hoosiers don’t have the authority to remove unknown, unmarked objects from their personal vehicles.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FhoNPMhPXqg/

Switch to Signal for encrypted messaging, EC tells staff

Imagine that you work in government or at an NGO – both places that want to keep their communications private.

Understandably, given that governments these days use powerful spyware to surveil political activists, NGOs, and each other, you and your colleagues use an encrypted messaging app.

There’s a good chance that you’ve gone with WhatsApp, which has been a trailblazer in end-to-end encrypted messaging. As early as 2016, The Guardian was referring to the app as a “vital tool” to conduct diplomacy – an app with which diplomats could “talk tactics, arrange huddles, tweak policy – and send Vladimir Putin emojis.”

But given recent events, you have to wonder: what happens if holes develop in that supposed cone of silence?

Like, say, the stupidly simple social engineering hack that the UN said was used – allegedly by the crown prince of Saudi Arabia – to infect Amazon CEO Jeff Bezos’s phone with personal-message-exfiltrating malware, with one single click?

Or the zero-day vulnerability in WhatsApp that allowed attackers to silently install spyware just by placing a video call to a target’s phone? Or, as happened this past weekend, the way that WhatsApp and parent company Facebook shrugged off responsibility for private groups being indexed by search engines, thereby rendering them easy to find and join by anybody who knew the simple search string?

What happens, at least in the case of the European Commission (EC), is that you tell your staff to move over to Signal. Last week, Politico reported that earlier this month, the EC took to internal messaging boards to recommend moving to the alternative end-to-end encrypted messaging app, which it said “has been selected as the recommended application for public instant messaging.”

The EC didn’t mention WhatsApp by name. It didn’t have to. Security experts have been pointing out for a while why it’s a potential national security risk. Besides its recent and not-so-recent security flubs, there are the privacy issues that come with being swallowed up by Facebook. One of WhatsApp’s co-founders, Brian Acton, left the company after the Facebook acquisition, saying that Facebook wanted to do things with user privacy that made him squirm. In his words: “I sold my users’ privacy.”

As Politico notes, privacy activists favor Signal not just because of its end-to-end encryption. Bart Preneel, cryptography expert at the University of Leuven, told the news outlet that, unlike WhatsApp, Signal is open-source, which makes it easy to find security flaws and privacy-jeopardizing pitfalls:

It’s like Facebook’s WhatsApp and Apple’s iMessage, but it’s based on an encryption protocol that’s very innovative. Because it’s open-source, you can check what’s happening under the hood.

Signal is recommended by a who’s who of cybersecurity pros, including Edward Snowden, Laura Poitras, Bruce Schneier, and Matthew Green. “Use anything by [Signal’s developer] Open Whisper Systems,” as Snowden is quoted as saying on the app’s homepage, while Poitras praises its scalability.

Cryptographer Green says he literally started to drool when he looked at the code. WhatsApp is also built on Open Whisper Systems’ encryption protocol, but its client isn’t open-source, so it’s harder to spot when something goes awry. Another plus for Signal: unlike WhatsApp, it doesn’t store message metadata in data centers around the world, where it could expose users, nor does it back up messages to the cloud, where they would be further exposed to potential interception.

Sorry, WhatsApp, but you just don’t induce drooling among cryptographers.

Unlike WhatsApp, Signal is operated by a non-profit foundation – one that WhatsApp co-founder Brian Acton put $50 million into after he ditched Facebook – and is applauded for putting security above all else. Take, for example, the FaceTime-style eavesdropping bug that made headlines in October 2019: Signal had fixed it in both Android and iOS on 27 September – the same day it was reported.

It’s not just Signal’s reputation and WhatsApp’s problems that have pushed the EC into recommending that Signal become the private messaging app of choice – also motivating the Commission are multiple high-profile security incidents that have rattled officials and diplomats.

EC officials are already required to use encrypted email when exchanging sensitive, non-classified information, an official told Politico. The recommendation to use Signal mainly pertains to communications between EC staff and people outside the organization, the news outlet reported, and is a sign that diplomats are trying to bolster security in the wake of recent breaches.

The EC isn’t the only governmental body to dump WhatsApp in favor of Signal. As The Guardian reported in December 2019, the UK’s Conservative party switched to Signal following years of leaks from WhatsApp groups.

What’s ironic, of course, is that governments have been hounding companies to put backdoors into all of these products. While law enforcement agencies in multiple countries have been demanding an end to encrypted messaging they can’t penetrate, those same governments are increasingly turning to ever more reliable forms of encrypted messaging themselves.

What’s good for the gander isn’t quite up to snuff for the goose, apparently.

But while WhatsApp suffers in comparison to Signal, and while at least two government outfits have shed it in Signal’s favor, WhatsApp still matters. It’s one of the messaging apps at the heart of the encryption debate. Facebook, alongside Apple, has stood up to the US Congress to defend end-to-end encryption in the face of lawmakers telling the companies that they’d better put in backdoors – or else face laws that outlaw end-to-end encryption altogether.

As Politico reported, in June 2019, senior Trump administration officials met to discuss whether they should seek legislation to ban unbreakable encryption. They didn’t come to an agreement, but such laws are undeniably on the table.

That matters. Regardless of which messaging app the EC – or the Tories – switches to, they’re all liable to be outlawed if the world’s superpowers get their way and legislate backdoors into existence. As go WhatsApp and Apple encryption, so go Signal, Wickr, and every other flavor of secure IP messaging.

And, of course, so goes the stronger security that some government bodies are, ironically enough, moving to embrace.

Watch it, goose and gander, before you wind up cooking both yourself and your own sensitive communications.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mRJrv_YSfXw/

Rotherwood Healthcare AWS bucket security fail left elderly patients’ DNR choices freely readable online

A leak of 10,000 records at a Leicestershire care home provider exposed elderly patients’ wishes not to be resuscitated, detailed care plans and precisely how much councils paid for individual patients’ care.

Not only did Rotherwood Care Group, trading as Rotherwood Healthcare, leave an Amazon S3 bucket accessible to everyone on the internet, the company’s website privacy policy consisted solely of lorem ipsum placeholder text.

The leak came from an S3 bucket that was left unsecured. The Register was alerted to it by a security researcher who also informed his local branch of the GCHQ-sponsored Cyber Protect network.

When The Register contacted Rotherwood to ensure the open data was closed off prior to publication of this article, the company responded with lawyers’ letters.

Rotherwood Healthcare’s online privacy policy.

Lorem ipsum, sometimes known as lipsum, is default placeholder text used in design and publishing.

The unsecured S3 bucket appeared to be powering Rotherwood’s internal system, a CRM-style software suite that looks to be used to capture and store essential data about staff and patients alike.

Around 10,000 individual files were left exposed in the bucket. Among those were internal care plan audits. Prepared for care home staff to assess whether care plans themselves were fit for use, these documents not only included patients’ full names and health conditions but also their “DNACPR” (resuscitation) choices.

Scans of what appeared to be staff members’ passports and birth certificates were also in the bucket, along with job interview questions and answers.

Emails from local councils confirming exactly how much they were paying towards residents’ stays were also freely accessible to anyone visiting the bucket, along with details about patient-specific specialist medical equipment.

To its credit, the business closed off the bucket from public access within a day of being informed.

A Rotherwood spokesman told The Register: “We at Rotherwood Group take the protection of personal data very seriously. Once we became aware of a security issue affecting some data held on our cloud-based system, we took immediate steps to rectify it. We are not aware of any data misuse and we are continuing to investigate this matter, including liaising with the ICO.”

There is no excuse in this day and age for AWS buckets to be left unsecured. Amazon provides tools for detecting and closing off inappropriately opened buckets, yet plenty of companies have been caught out exposing sensitive personal and commercial information alike since the inception of public cloud storage.
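Among those tools is S3 Block Public Access, which exposes four per-bucket (or per-account) flags. As a sketch of what an audit looks like – the helper function and the sample response below are illustrative assumptions, not live data – a bucket should only pass if all four flags are enabled:

```python
# The four S3 Block Public Access flags; a bucket is only considered
# locked down if every one of them is enabled.
REQUIRED = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def is_locked_down(response: dict) -> bool:
    """Return True only if all four public-access blocks are enabled."""
    config = response.get("PublicAccessBlockConfiguration", {})
    return all(config.get(flag) is True for flag in REQUIRED)

# In practice the response would come from:
#   boto3.client("s3").get_public_access_block(Bucket=name)
# This sample has one flag disabled, so the bucket fails the audit.
sample = {
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": False,
    }
}

print(is_locked_down(sample))  # prints False
```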

One such example was Teletext Holidays last year, which was exposing call recordings that included credit card number tones generated from phone keypad inputs. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/26/rotherwood_healthcare_data_leak_10k_records_aws/