
Real-World Threats That Trump Spectre & Meltdown

New side-channel attacks are getting lots of attention, but other, more serious threats should top your list of concerns.

If you judged the severity of a security vulnerability by its number of mentions in the press and social media — a silly thing to do, by the way — side-channel exploits would seem to be the end of the computing world. But does the reality of the situation really match the hype?

Many side-channel attacks actually require both technical sophistication and patience. Speculative execution side-channel attacks like Spectre and Meltdown require quite a lot of each of those qualities. Other types of side-channel attacks, such as the recent page cache vulnerability, require less sophistication on the part of the attacker but are more easily thwarted via software updates.

The bottom line, security experts say, is that your organization is far more likely to face any number of other cyber threats today than a side-channel attack. The more severe candidates encompass common attacks and vulnerabilities, as well as risky user behavior – most of which have been responsible for real-world business losses. The new generation of side-channel vulnerabilities is still mostly – as far as we know – the stuff of research, not crime.

That doesn’t mean you should ignore side-channel threats. Mounir Hahad, head of Juniper Threat Labs, noted that the “latest side-channel attack is severe, in my opinion.”

With that caveat in mind, here is a rundown of threats that are more imminent than those splashy side-channel attacks.

(Where do you place Spectre, Meltdown, and their malicious kin in your hierarchy of threats? Which threat keeps you up at night? We’d love to know your thoughts — the comment section is open).


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/real-world-threats-that-trump-spectre-and-meltdown/d/d-id/1333663?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Fact and Fiction of Homomorphic Encryption

The approach’s promise continues to entice cryptographers and academics. But don’t expect it to help in the real world anytime soon.

The history of homomorphic encryption stretches back to the late 1970s. Just a year after the RSA public-key scheme was developed, Ron Rivest, Len Adleman, and Michael Dertouzos published a report called “On Data Banks and Privacy Homomorphisms.” The paper detailed how a loan company, for example, could use a cloud provider (then known as a commercial time-sharing service) to store and compute encrypted data. This influential paper led to the term “homomorphic encryption.”

What Is Homomorphic Encryption?
Homomorphic encryption describes any encryption scheme in which a particular operation on two ciphertext values yields an encrypted result that, when decrypted, matches the result of performing the same operation on the two plaintext values. Because the keys are needed only during the initial encryption and final decryption, complete privacy of the inputs and outputs is maintained during the computation process.

The purpose of homomorphic encryption is to allow computation on encrypted data. The process describes the conversion of data into ciphertext that can be analyzed and worked with as if it were still in its original form.
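
To see the property in miniature, consider textbook RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the two plaintexts. This is a toy sketch with deliberately tiny, insecure parameters invented for illustration, not anything resembling a production scheme:

```python
# Toy multiplicative homomorphism with textbook RSA. Tiny, insecure
# parameters for illustration only -- never use unpadded RSA in practice.

n, e, d = 3233, 17, 2753   # n = 61 * 53; d is the matching private exponent

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 9
c = (encrypt(a) * encrypt(b)) % n   # operate on ciphertexts only

# Decrypting the ciphertext product recovers the plaintext product.
assert decrypt(c) == (a * b) % n
print(decrypt(c))   # 63
```

Note that the private key is never touched between the two encryptions and the final decryption, which is exactly the privacy property described above.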

Why It Works (At Least in Theory)
Homomorphic encryption can be a significant asset to your business compliance and data privacy efforts. We all know the regulatory landscape changed dramatically in 2018. The EU’s General Data Protection Regulation (GDPR) came into force last May, the California Consumer Privacy Act (CCPA) is scheduled to be implemented on January 1, 2020, and as many as 40 other states are considering data privacy laws. GDPR was notable because of its penalty of 4% of global revenue for serious infractions, such as not having sufficient customer consent to process data. The CCPA is a “mini-GDPR” with unprecedented power for consumers to control the collection, use, and transfer of their own data.    

These new regulations came about as the result of significant breaches and abuses of data over the past few years. There is an undeniable data breach fatigue these days. But not every breach comes about as the result of outside attackers. There are other threat models to consider. Insider threats and privileged access account for about 35% to 60% of breaches, according to industry reports. Anthem and MyFitnessPal fell victim to this type of attack. Even using traditional encryption (encryption at rest and transparent data encryption, for example), database administrators have access to all of your data in the clear. They have access to the crown jewels.

Homomorphic encryption can also be a business enablement tool. It can allow cloud workload protection (“lift and shift” to cloud), cloud/aggregate analytics (or privacy preserving encryption), information supply chain consolidation (containing your data to mitigate breach risk), and automation and orchestration (operating and triggering off of encrypted data for machine-to-machine communication).

Where Homomorphic Encryption Falls Short
Homomorphic encryption originally slowed mathematical computations to a crawl. The earliest schemes were about 100 trillion times slower than computing on plaintext (not a typo). There has been significant performance improvement since then, but the latest figures put throughput at roughly 50,000 end-to-end transactions in a given time window, which is still too little for today's fast-moving world.

Furthermore, homomorphic encryption requires application modifications. You need prior knowledge of what type of computation will be performed (addition, multiplication, etc.). Businesses with less predictable or more free-form operations will have to rewrite or modify applications to make homomorphic encryption viable. Again, that’s not feasible for businesses at scale.
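
To make the constraint concrete, here is a minimal sketch of the Paillier cryptosystem, a well-known partially homomorphic scheme, again with insecure toy parameters (Python 3.9+ for math.lcm). Paillier supports addition only, so an application must know at design time that addition is the operation it needs:

```python
# Toy Paillier encryption -- additively homomorphic only. Tiny, insecure
# parameters for illustration; requires Python 3.9+.
import math, random

p, q = 11, 13
n, n2 = p * q, (p * q) ** 2
g = n + 1                             # standard choice g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse via pow(..., -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be coprime with n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 17
# Multiplying ciphertexts adds the plaintexts -- and addition is the ONLY
# operation the scheme supports, so it must be fixed when the app is built.
assert decrypt((encrypt(a) * encrypt(b)) % n2) == (a + b) % n
```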

Finally, there are still questions about the overall encryption strength (encryption entropy). To allow computation without decrypting the data and without access to keys, a homomorphic scheme must expose algebraic structure in its ciphertexts, and open questions remain about how that structure affects the strength of the encryption.

Putting It All Together
Homomorphic encryption is a long way off from real-world enterprise implementation, but there has been substantial progress in the areas of differential privacy and privacy preservation techniques. There are tools that can deliver homomorphic-like encryption without the inherent drawbacks that homomorphic encryption brings, so that businesses can mandate a higher security standard without actually breaking processes or application functionality.

Businesses should not have to sacrifice speed for security — it’s not a zero-sum dynamic. When done properly, security can actually accelerate your business. Eliminating that friction between security and business leadership builds trust between departments and creates better security outcomes.

The promise of practical homomorphic encryption continues to entice cryptographers and academics. Although it is a rapidly developing area to watch, in practice, its poor performance to date makes homomorphic encryption impractical to implement in enterprise environments in the near future.


Ameesh Divatia is Co-Founder and CEO of Baffle, Inc., which provides encryption as a service. He has a proven track record of turning technologies that are difficult to build into successful businesses, selling three companies for more than $425 million combined in the service … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/the-fact-and-fiction-of-homomorphic-encryption/a/d-id/1333691?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DNC targeted by Russian hackers beyond 2018 midterms, it claims

The Democratic National Committee (DNC) has filed a civil complaint accusing Russia of trying to hack its computers as recently as November 2018.

In its court filing, the DNC argues not only that the Trump campaign and several of its operatives colluded with Russia to steal electronic information, but that Russia was still attempting to hack DNC systems in the run-up to last year’s midterm elections.

The filing describes an alleged Russian cyberattack campaign that began in July 2015 and escalated to data theft after an April 2016 hack, when the Russians allegedly placed proprietary malware known as X-Agent on the DNC network. It claims that they monitored the malware in real time and collected data including keylogs and screenshots. Using malware called X-Tunnel, the hackers exfiltrated several gigabytes of DNC data over the following days to a computer located in Illinois leased by agents of the GRU, Russia’s military intelligence agency, it says.

Russian operatives then placed a version of X-Agent on a DNC server in June that year and hacked DNC virtual machines hosted on Amazon Web Services (AWS) in September to steal voter data, the filing also alleges.

The DNC filing also accuses Russia of an ongoing campaign against the Democrats following the election, dating back to Robert Mueller’s 2017 appointment as head of the special counsel investigation into possible ties between the Trump campaign and the Russian Federation. Russia used multiple fake social media accounts to discredit Mueller as corrupt, the filing alleges, citing reports prepared for the Senate Intelligence Committee.

The DNC accuses Russia of trying to hack the network of Democratic Senator Claire McCaskill, along with the networks of two other midterm candidates, in 2017. They allegedly spoofed notification emails from McCaskill to her staff, asking them to visit a page purporting to be the US Senate’s Active Directory Federation Services (ADFS) login page.

Spear-phishing attempts

In October 2018, the filing says that WikiLeaks released a list showing the location and operational details of AWS servers around the world.

A month later, it said that Russian operatives tried to hack DNC emails again:

In November 2018, dozens of DNC email addresses were targeted in a spear-phishing campaign, although there is no evidence that the attack was successful. The content of these emails and their timestamps were consistent with a spear-phishing campaign that leading cybersecurity experts have tied to Russian intelligence. Therefore, it is probable that Russian intelligence again attempted to unlawfully infiltrate DNC computers in November 2018.

The lawsuit lists several co-defendants including the Trump campaign, Wikileaks and the Russian Federation. Also included are Julian Assange, Donald Trump Jr., Jared Kushner, former campaign chair Paul Manafort, Republican lobbyists Roger Stone and Richard Gates, campaign operative George Papadopoulos, and several figures associated with the Russian Federation. The filing stops short of directly suing the President himself.

The lawsuit accuses Russia of a catalog of violations under various state laws, including trespass and conversion. It also says that Russia violated the Computer Fraud and Abuse Act, the Stored Communications Act, and the Digital Millennium Copyright Act (DMCA) via its hacking operations. Along with WikiLeaks and Assange, Russia also misappropriated trade secrets in violation of the Defend Trade Secrets Act, the filing adds.

The DNC is also suing Wikileaks, Assange, the Trump campaign and Trump’s associates for violating the Wiretap Act by snooping on the DNC’s electronic communications.

It accuses all defendants of violating the Uniform Trade Secrets Act, and the Racketeer Influenced and Corrupt Organizations Act (RICO), alleging cyber-espionage, theft of trade secrets, obstruction of justice, and witness tampering.

The DNC wants statutory and punitive damages, along with disgorgement of the financial gains the defendants earned from their violations. Perhaps more important is its request for a declaration that the defendants hacked its computers and used the information to influence the 2016 election.

This filing is the latest in an ongoing case that court records show began in April 2018. This amended complaint adds the information about the latest alleged hacks. Jared Kushner, WikiLeaks and Roger Stone are among several defendants that have moved to dismiss the case. Russia has also argued that the court does not have jurisdiction over it due to the sovereign equality of states, and that attempting to assert it in this case violates international law.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cuzS3Yk8n1s/

WhatsApp fights the spread of deadly fake news with recipient limit

As of July 2018, mob violence sparked by rumors – many about child abduction – spread virally on social media had led to 33 deaths and at least 99 injuries across 69 reported lynchings. The wave of violence tore through countries including Myanmar and Sri Lanka but was concentrated mostly in India.

At least 18 of those incidents were specifically linked to WhatsApp.

In an effort to limit the type of message forwarding that fuels such fake-news wildfires, in July WhatsApp launched a test in which it limited forwarding to five chats at a time in India, where people forward more messages, photos and videos than in any other country in the world.

WhatsApp also imposed a larger global limit of 20 chats. At the same time, WhatsApp removed a quick-forward button next to media messages in India, and it added a feature to more clearly label forwarded messages.

Now, the private-messaging app is taking those changes, including the lower limit of five chats per forwarded message, worldwide. On Monday, Victoria Grand, vice president for policy and communications at WhatsApp, said at an event in the Indonesian capital of Jakarta that the change went into effect immediately. Reuters quoted her:

We’re imposing a limit of five messages all over the world as of today.

WhatsApp’s head of communications Carl Woog told Reuters that starting on Monday, WhatsApp would roll out an update to activate the new forward limit. Android users will receive the update first, followed by iOS.

Woog told the Guardian that the makers of the app, which has around 1.5 billion users, settled on five “because we believe this is a reasonable number to reach close friends while helping prevent abuse.”

WhatsApp users could previously forward messages to 20 individuals or groups.

According to the Guardian, WhatsApp says the changes it launched in India reduced forwarding by 25% globally and more than that in India.

Critics have said that Facebook’s steps to deal with fake news by using third-party fact checkers have actually caused fake news to migrate to other platforms – most particularly, to WhatsApp.

In an op-ed published in the New York Times in October, Brazilian researchers Cristina Tardáguila, Fabrício Benevenuto and Pablo Ortellado called on WhatsApp to keep fake news from poisoning the country’s politics by restricting forwards and broadcasts (WhatsApp allows users to send a single message to up to 256 contacts at once, enabling small groups to carry out large-scale disinformation campaigns), and by limiting the number of users allowed in new groups.

WhatsApp’s forwarding mechanisms have also been blamed for helping disinformation campaigns in that they strip out senders’ identities: messages forwarded to a new recipient have previously been marked as forwarded in light grey text, but besides that subtle difference, they’ve looked just like an original message sent by a contact.

Critics have said that the design of the forwarding feature up until now has enabled fake news and rumors to spread virally, with little to no accountability. That lack of accountability is compounded by the fact that, as WhatsApp itself has made clear in its ongoing court battles with governments, the company can’t see the contents of users’ chats. One of its biggest selling points – end-to-end encryption – ensures that’s the case.

India has in the past threatened to hold WhatsApp accountable for fake-news inspired violence and, like other countries before it, has called on WhatsApp to enable the “traceability” of provocative or inflammatory messages when an official request is made.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fXwD_JbUbLk/

Bicycle-riding hitman convicted with Garmin GPS watch location data

A homicidal cycling and running fanatic known for his meticulous nature in tracking his victims has been undone by location data from his Garmin GPS watch.

Police in Merseyside, in northwest England, announced that a jury last week found Mark Fellows, 38, guilty of two gangland murders: that of “career criminal” John Kinsella last year and gang member Paul Massey in 2015. Fellows was sentenced to life in prison without parole.

Kinsella was gunned down on 5 May 2018 by a masked hitman on a bicycle who was wearing a high-visibility vest with yellow markings and black tape that CCTV cameras easily picked up.

Steven Boyle, 36, also found guilty in the killing of Kinsella, gave testimony against Fellows and acted as his spotter in the slaying, according to the Liverpool Echo. Boyle received a sentence of 33 years to life.

GPS watch

As the Liverpool Echo reported in December, during a search of Fellows’ home following Kinsella’s killing, police had seized a Garmin Forerunner 10 GPS watch. A prosecutor pointed out that the seized watch matched one Fellows had been wearing in photos taken during a road race – the Bupa Great Manchester Run – on 10 May 2015.

Investigators had taken the watch to Professor James Last, an expert in satellite-based radio navigation, to see whether the gadget had ever been to the area near Massey’s home.

It had. Professor Last testified that the Garmin had been near the victim’s home on 29 April 2015: almost three months before Massey was gunned down with a submachine gun. Prosecutors said that the trip was Fellows’s reconnaissance mission.

The watch mapped out Fellows’s journey and his escape route: a 35-minute journey from Fellows’s home to a church where he had lain in wait for Kinsella to take a walk with his pregnant girlfriend, then Fellows’s return path across a field towards woods and a railway line.

The Garmin also provided useful evidence regarding its wearer’s speed. Professor Last told the court that the Garmin wearer initially traveled around 12mph, suggesting that he was on a bike.
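
As a rough illustration of how such figures are derived, a watch's speed estimate is essentially distance over time between consecutive GPS fixes. The coordinates and timestamps below are invented for the sketch, not the trial evidence:

```python
# Inferring speed from consecutive GPS fixes -- invented sample data.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in metres."""
    r = 6371000.0                      # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (seconds, latitude, longitude) -- two hypothetical samples a minute apart
t1, lat1, lon1 = 0, 53.4800, -2.2400
t2, lat2, lon2 = 60, 53.4810, -2.2355

speed_mps = haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1)
print(f"{speed_mps * 2.23694:.1f} mph")   # ~12 mph: consistent with cycling
```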

Professor Last said that the Garmin wearer slowed down to about 3mph on the grassland area, consistent with walking. Besides the Garmin GPS data, Professor Last also examined a TomTom Start satellite navigation system that police found in a car, and that prosecutors say was strongly associated with Fellows.

The TomTom data – the “Tomtology report” – showed that it often set off from an area near Fellows’s home and visited two locations that the prosecutor said were of interest in the investigation: one close to the home of a man with a van that prosecutors said Fellows used, and another area in which a mobile phone tied to Fellows was used, as the Liverpool Echo reports.

Given that CCTV repeatedly caught footage of Fellows on a bike, clad in the luminous yellow markings and covered with the black tape of his high-visibility jacket, police had already suspected that they knew who killed Kinsella.

Kinsella’s killing had commonalities with Massey’s murder. But it was the Garmin data that tied the two together, the Echo reports: the location data provided “key evidence” in the Massey murder, said the local paper.

Other device-based convictions

Other convictions based on location data have included the pivotal Carpenter v. United States, which concerned a Radio Shack robbery and the privacy of the phone location data that got the robber convicted. In June 2018, the Supreme Court ruled it unlawful for law enforcement and federal agencies to access cellphone location records without a warrant.

The legal arguments in Carpenter have gone on to inform subsequent decisions, including one from last week in which a judge ruled that in the US, the Feds can’t force you to unlock your phone with biometrics, including your finger, iris or face.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0URTbhF4hTk/

Rogue websites can turn vulnerable browser extensions into back doors

When was the last time you checked the permissions asked for by a browser add-on?

It’s a blind spot: we might know that app permissions can be risky, but when it comes to extensions for browsers such as Chrome and Firefox, there is a tendency to worry only when someone discovers a malicious extension doing something it shouldn’t.

But it’s not only malicious extensions that can be a problem, as highlighted by a newly published study by Université Côte d’Azur researcher Dolière Francis Somé, which analyses the deeper-level APIs available to extensions.

Extensions can do things that websites can’t. Websites are protected and restricted by the Same Origin Policy (SOP), the layer that prevents websites on different domains from sharing data.

Somé was interested in whether a rogue website could bypass these basic SOP protections by exploiting privileged browser extensions, maliciously gaining access to user data, browsing history, or user credentials, or downloading files to the user’s storage.

Sure enough, after analysing 78,315 Chrome, Firefox and Opera extensions that used the WebExtension API using a mixture of static analysis and manual review, the answer in 197 cases was yes, it could.

All told, 171 of the 197 were Chrome extensions, which reflects the much greater number of extensions available for that browser rather than any inherent security advantage of Firefox or Opera, where 16 and 10 vulnerable extensions were found respectively.

Should we be worried?

Given the very small number of vulnerable extensions discovered, at first glance perhaps not. More than half of the vulnerable extensions had fewer than 1,000 installs each, with only 15% having more than 10,000 installs.

And yet many of these extensions were doing things that seem hard to justify, including 63 bypassing SOP, 19 executing code, and 33 Chrome examples that could even install other extensions.

Somé says that browser makers have been made aware of the extensions called out by the test, with Mozilla removing all of those named, Opera removing all bar two, and Google still in discussions about whether to remove or fix the Chrome ones (the full list can be found at the end of the research paper).

Solutions?

The easiest answer would be to stop extensions from communicating with web pages as they please, although this might also block legitimate actions.

Alternatively, extensions could (should?) be better vetted by browser vendors to check on their behaviour, while the extensions themselves could be forced to declare which websites they planned to interact with.

Concludes Somé:

Browser vendors need to review extensions more rigorously, in particular take into consideration the use of message passing interfaces in extensions.

The devil’s advocate might argue that the real problem is the whole extensions architecture, which is only now slowly being patched up.

In addition to being able to abuse APIs at a deeper level, many Chrome extensions have got into the habit of demanding high-level permissions during installation, such as the ability to “read and change all your data on the websites you visit.”

On the other hand, Google recently changed Chrome extensions’ permissions to limit them to specific sites defined by the user.

The best advice remains to install as few as possible and carefully check out the permissions they request.

Currently, this can be done on Chrome once an extension is installed via Extensions Details.

On Firefox, the permissions are listed when the user clicks the ‘Add to Firefox’ button, which many people miss.

For Opera, it’s Extensions Information.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Uq7meFBQ46E/

Get in the bin: Let’s Encrypt gives admins until February 13 to switch off TLS-SNI

If you’re still using TLS-SNI, stop: a year after a slip-up allowed miscreants to claim Let’s Encrypt certificates for domains they didn’t own, the free certificate authority has announced the final sunset of the protocol involved.

In January 2018, Let’s Encrypt discovered that validation based on TLS-SNI-01 and its planned successor TLS-SNI-02 could be abused. As we explained at the time: “A company might have investors.techcorp.com set up and pointed at a cloud-based web host to serve content, but not investor.techcorp.com. An attacker could potentially create an account on said cloud provider, and add a HTTPS server for investor.techcorp.com to that account, allowing the miscreant to masquerade as that business – and with a Let’s Encrypt HTTPS cert, too, via TLS-SNI-01, to make it look totally legit.”


The SNI extension to the TLS protocol lets a client specify which hostname it is trying to reach, something particularly important when a single IP address is serving a large number of websites. As we noted last year, the opportunity for abuse arises if the hosting provider doesn’t verify ownership of a domain.

Let’s Encrypt’s response at the time was to block TLS-SNI for new accounts. However, it decided to continue support for certificates already issued.

That’s going to end on February 13, 2019, the organisation has now confirmed.

In a blog post announcing the deadline, Internet Security Research Group executive director Josh Aas explained that anyone still using TLS-SNI needs to switch to DNS-01 or HTTP-01 as their validation mechanism.
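
For readers unfamiliar with the replacement, HTTP-01 has the certificate authority fetch a token from the domain being validated over plain HTTP. The sketch below shows the shape of that exchange; the token and account thumbprint are invented placeholders, and in practice an ACME client such as Certbot obtains and serves them for you:

```python
# The shape of an ACME HTTP-01 challenge responder -- a minimal sketch.
# TOKEN and THUMBPRINT are invented placeholders for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

TOKEN = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"          # hypothetical
THUMBPRINT = "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"     # hypothetical

class Http01Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The CA fetches /.well-known/acme-challenge/<token> and expects
        # the key authorization "<token>.<account-thumbprint>" in reply.
        if self.path == f"/.well-known/acme-challenge/{TOKEN}":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(f"{TOKEN}.{THUMBPRINT}".encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Port 80 is required by the spec (needs privileges to bind).
    HTTPServer(("", 80), Http01Handler).serve_forever()
```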

“We apologize for any inconvenience but we believe this is the right thing to do for the integrity of the Web PKI,” Aas concluded. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/22/lets_encrypt_gives_admins_until_february_13_to_switch_off_tlssni/

Stalk my pals on social media and you’ll know that the next words out of my mouth will be banana hammock

The phenomenon of “prescient Facebook advertising”, so beloved of conspiracy theorists who think social networks listen to your microphone, might instead simply be evidence of how good Facebook’s algorithms have become.

That’s one of the surprising hypotheses to arise from research by boffins from the University of Vermont and the University of Adelaide, published today in Nature Human Behaviour.

Speaking to The Register, co-author Lewis Mitchell of Adelaide University said the chief conclusion of the work he conducted with Vermont’s James Bagrow and Xipei Liu related to how well individuals can be predicted not from their own posts, but from those of their friends.

Analysing posts from as few as eight of your friends is enough to predict what you’ll write next, with up to 64 per cent chance of success.

Mitchell told El Reg the research looked at a question relevant in a post-Cambridge Analytica world: how effective is deleting your account in protecting privacy?

Not very, the trio found. By focusing on people’s networks of friends, their startling discovery was that close to 65 per cent of the time, with eight or nine friends’ feeds to analyse, they could predict the next thing you would say.

Mitchell summarised the research question as: “How well can a model do, at predicting the next word out of your mouth?”

To answer that, the group analysed more than 30 million Twitter posts from nearly 14,000 users, focusing on 927 “ego-networks”, each consisting of an “ego” (the person at the centre of the network) and their 15 most frequently mentioned Twitter contacts.

They mined that data to measure “entropy” (how much information is needed to make an accurate next-word prediction), perplexity (a measure of uncertainty about the “unseen” words), and predictability Π (the probability that an “ideal algorithm” – a statistical spherical cow, if you will – makes a correct prediction).

Mitchell said this provided an information-theoretic framework, but one in which the fundamental unit is words, rather than 1s and 0s.

As noted in the paper, the 64 per cent predictability (that is, 0.64 probability of a correct prediction) the researchers report isn’t quite a “64 per cent accuracy”. Rather, “this predictability provides a method-independent upper bound on prediction accuracy”.
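
To see how an entropy estimate turns into such an upper bound, Fano's inequality can be inverted numerically. The sketch below uses made-up inputs (an entropy rate of six bits per word over a 5,000-word effective vocabulary), not figures from the paper:

```python
# Turning an entropy-rate estimate into a predictability upper bound via
# Fano's inequality -- a sketch with made-up numbers for illustration.
import math

def binary_entropy(p):
    return -sum(x * math.log2(x) for x in (p, 1 - p) if x > 0)

def fano_gap(pi, h, vocab):
    # Fano bound: h <= H_b(pi) + (1 - pi) * log2(vocab - 1)
    return binary_entropy(pi) + (1 - pi) * math.log2(vocab - 1) - h

def max_predictability(h, vocab, tol=1e-9):
    """Largest correct-guess probability consistent with entropy rate h."""
    lo, hi = 1.0 / vocab, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fano_gap(mid, h, vocab) >= 0:   # bound still holds: push higher
            lo = mid
        else:
            hi = mid
    return lo

print(round(max_predictability(6.0, 5000), 2))   # ~0.6 with these toy inputs
```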

As few as eight or nine contacts is sufficient to reach that level of predictability, the paper stated.

The good news: that’s not a permanent state of affairs. If you deleted your account, the quality of inference that can be made according to your (former) friends’ posts will deteriorate over time.

“There’s a recency effect,” Mitchell said. “If you delete your account, the information will wash out, and the predictability about you from your friends’ posts will eventually decay.”

However, he said, the research clearly made the case that it’s misguided to overemphasise individual responsibility for privacy. Privacy is clearly a collective responsibility.

“The responsibility lies with FB and Twitter, the social media platforms – they’re the ones that have that network-level view, it’s something the network has to grapple with.”

Prescient advertisements

What’s this got to do with those advertisements that creep people out by seemingly intruding on their recent real-life conversations?

Mitchell said work like his group’s provides a theoretical underpinning to understand how good the “black box” of a social media algorithm can be.

“After this, I’m not surprised by the ability of Facebook to recommend ads as if they had the microphone turned on,” Mitchell told The Register.

He said people are so predictable from their friends’ posts, “you could conceivably build language models about the types of statements someone might make, and in principle use that for recommendations”. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/22/facebook_predictability_research/

Shadow IT, IaaS & the Security Imperative

Organizations must strengthen their security posture in cloud environments. That means considering five critical elements about their infrastructure, especially when it operates as an IaaS.

Shadow IT, the use of technology outside the IT purview, is becoming a tacitly approved aspect of most modern enterprises. Yet, with the vast adoption of software-as-a-service and infrastructure-as-a-service (IaaS) approaches, shadow IT presents increased security challenges that can create major risk. To further complicate things, because organizations aren’t centrally controlling these solutions and tools, their vulnerabilities often go undetected for far too long. If individuals and internal teams continue to introduce outside tools and solutions into their environments, enterprises will have to adopt a smart path to ensure they operate securely.

The evolution of shadow IT is a result of technology becoming simpler and the cloud offering easy connectivity to applications and storage. As this happened, people began to cherry-pick those things that would help them get things done easily. Internal groups began using Google Drive for team collaboration and storage; employees used their personal phones to access secured enterprise resources; development teams grabbed code from shared repositories. All of these use cases, and many more, are examples of finding and adopting usable, efficient, and cheap strategies to get things done.

A similar phenomenon is now common with platform-as-a-service and IaaS platforms (public clouds) — new development initiatives are taking place without being officially vetted by IT. Department-level initiatives are making speed and business goals the top priorities, and teams are applying complementary technology to support their needs. It’s empowering for development teams, and it fits both newer styles of development and legacy ones that use agile methodologies. The difference here is that an entire delivery model is being adopted, and it affects the infrastructure on which the organization operates. It’s also opening up potential access to the private data that’s created, stored, and transacted within that infrastructure.

Most things in life allow a level of forgiveness, but technology isn’t among them, and this is where shadow development efforts become problematic. All technology and its resulting activity must adhere to companywide security goals and requirements. This can’t be a case of asking for forgiveness after the fact, because slip-ups can lead to disastrous results.

Enterprises in which shadow IT exists must find a way to instill security as part of all IaaS development processes, but in a way that still allows for speed and agility. Developers dig speed, and they also like it when their technology efforts directly support business goals. It’s important to give them that without sacrificing security.

The issue requires creating a mindset around security that is embedded into the organization’s DNA, one that is accompanied by specific, actionable approaches that are baked into the overall security operations. That’s a tall order, but innovative organizations need to strengthen their overall security posture in cloud environments. To do that, however, requires that organizations consider five critical elements about their infrastructure, especially when it operates as an IaaS:

1. Don’t Neglect Development’s Need for Speed
Every organization that is technologically mature is simultaneously running three environments: development, quality assurance, and production. They each serve different aspects of the application continuum and are interdependent, but to maintain an adequate level of security, they must establish and adhere to specific security requirements. At some point, new applications and tools, ones brought into the organization through team-level and individual use, will find their way onto the dashboard of IT. There must be a set of requirements and best practices around security so these tools can be seamlessly added into the organization’s environment with as little disruption as possible.

2. Change Is the New Normal
What’s deployed in public clouds is ephemeral: Containerized applications or virtualized environments are brought up and down constantly and do not rely on fixed IPs and port numbers. Any ancillary shadow applications and tools that are tied to these resources must be monitored to ensure that their security controls coexist with those of the underlying cloud platform, especially as it changes quickly.

3. Respect the Complexity
The nature of what needs to be protected in the cloud is more complex and made of raw building blocks. Organizations adopt public clouds such as Amazon Web Services (AWS) in order to rapidly leverage innovative technologies available on demand without having to invest significantly in new skills. No two deployments on AWS are alike, and security solutions need to account for the fact that teams using shared repositories may be coming from different access points.

4. Prioritize Compliance
Although it may be a challenge to get teams to think about security when using shadow applications, invoking the need to be compliant can help. An organization often cannot operate when it is noncompliant with industry standards because rights and privileges are revoked, which can effectively kneecap the operational abilities of the company. Were this to happen, it could likely lead to companies completely banning shadow tools and services. Awareness of such draconian consequences can help drive home a message that security comes first.

5. Best Practices Are Best
As in all things, it’s best to stick to the simplest course of action. Just as you would with any organizational priority, instill a culture of security throughout the company and build best practices into the way teams and individuals operate. The more people know about the effects of poorly constructed versus properly run security operations, the better the chance they will adhere on their own to the right way of doing things.


Sanjay Kalra is Co-Founder and Chief Strategy Officer at Lacework, leading the company’s overall strategy for innovation, business development, channel, strategic partnerships, and customer success, drawing on more than 20 years of success and innovation in the cloud, … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/shadow-it-iaas-and-the-security-imperative/a/d-id/1333673?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Tim Cook demands a way for users to delete their personal data

Apple CEO Tim Cook continued to fight for data privacy last week.

Writing in an op-ed for Time magazine, Cook lambasted what he called the “shadow economy” of data brokers that silently collect our information online, package it and sell it to willing buyers.

One of the biggest challenges in protecting privacy is that many of the violations are invisible…

The trail disappears before you even know there is a trail. Right now, all of these secondary markets for your information exist in a shadow economy that’s largely unchecked – out of sight of consumers, regulators and lawmakers.

Cook made explicit demands on the government. He said that we need new federal standards for data collection and called on the Federal Trade Commission (FTC) to create a data-broker clearinghouse: a place where companies that collect and sell personal information would be required to register and which would give consumers a way to find exactly who collected what and to easily stamp it all out of existence:

We believe the Federal Trade Commission should establish a data-broker clearinghouse, requiring all data brokers to register, enabling consumers to track the transactions that have bundled and sold their data from place to place, and giving users the power to delete their data on demand, freely, easily and online, once and for all.

Right now, Cook said, there’s no federal standard protecting consumers. He called on the US Congress to pass “comprehensive federal privacy legislation” and reiterated four principles that he presented to a global body of privacy regulators last year:

First, the right to have personal data minimized. Companies should challenge themselves to strip identifying information from customer data or avoid collecting it in the first place. Second, the right to knowledge – to know what data is being collected and why. Third, the right to access. Companies should make it easy for you to access, correct and delete your personal data. And fourth, the right to data security, without which trust is impossible.

Cook didn’t name names, but you can read between the lines to see Facebook’s and Google’s privacy indiscretions throbbing away. He’s repeatedly called for privacy-friendly business models for the technology industry’s top firms.

For one, in a speech at the 40th International Conference of Data Protection and Privacy Commissioners, Cook last year warned regulators that the technology industry was building an “industrial data complex” that was getting out of control. He’s also criticized Facebook for having a business model that involves selling users’ data.

Cook’s subsequently been called out for hypocrisy. Alex Stamos, the former security chief at Facebook, shot back at Cook in October, pointing out that Apple’s privacy-centric approach isn’t evenly distributed. Apple’s practices in China are a prime example, he said in a Tweetstorm, calling on Apple to…

…come clean on how iCloud works in China and stop setting damaging precedents for how willing American companies will be to service the internal security desires of the Chinese Communist Party.

In July, Apple moved the storage of its iCloud data for China-based users to Tianyi, the cloud storage arm of state-owned telco China Telecom, raising privacy concerns.

Cook’s is just one voice among many that are clamoring for regulation. The most recent push to regulate big tech companies came from Senator Marco Rubio, who on Wednesday announced a new bill, titled the American Data Dissemination Act.

Such legislation may sound consumer- and privacy-friendly at first blush, but the devil’s in the details. Last week, the Information Technology and Innovation Foundation (ITIF) – a leading technology policy think tank supported by Google, Amazon and Facebook – suggested to lawmakers that any new federal data privacy bill should preempt state privacy laws and repeal the sector-specific federal ones entirely.

Putting the fox in charge of the hen house?

But the proposal would also override existing laws, some of which are legislative landmarks, such as the Health Insurance Portability and Accountability Act (HIPAA), the Family Educational Rights and Privacy Act (FERPA), and the Children’s Online Privacy Protection Act (COPPA), among others.

Senator Richard Blumenthal, who in the past has called for the FTC to investigate companies like Google for potential COPPA violations, said that trusting tech companies to write their own privacy regulations is like putting the fox in charge of the hen house:

If Big Tech thinks this is a reasonable framework for privacy legislation, they should be embarrassed. This proposal would protect no one – it is only a grand bargain for the companies who regularly exploit consumer data for private gain and seek to evade transparency and accountability.

Big tech cannot be trusted to write its own rules – a reality this proposal only underscores. I look forward to rolling out bipartisan privacy legislation that does in fact ‘maximize consumer privacy,’ and puts consumers first.

Amidst all this debate over privacy, Cook said, we can’t lose sight of “the most important constituency: individuals trying to win back their right to privacy.” From his article:

Technology has the potential to keep changing the world for the better, but it will never achieve that potential without the full faith and confidence of the people who use it.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ll6hM7GB6Q8/