
Barracuda Buys Bot-Battling Tech from InfiSecure

The intellectual property acquired will add to Barracuda’s bot-detection capabilities.

Barracuda announced today that it has acquired bot-detection technology from InfiSecure Technologies.

Barracuda acquired the intellectual property and certain other assets from InfiSecure, but did not acquire InfiSecure’s shares. InfiSecure will not continue as a separate company.

The technology will allow Barracuda to better detect bots that closely mimic human behavior, including bots written for specific applications and “low and slow” bots, which generate messages at a pace that echoes human typing rates rather than the rapid-fire rate possible with automation.
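
As a rough illustration of the kind of signal such detection can key on (this is an invented heuristic, not Barracuda’s or InfiSecure’s actual method), a “low and slow” sender can be flagged by how unnaturally regular its message timing is: human typing is bursty, while a throttled bot tends to be uniform. A minimal TypeScript sketch:

    // Hypothetical heuristic: flag senders whose inter-message gaps are
    // human-paced on average but far too uniform. Thresholds are invented.
    function looksLikeLowAndSlowBot(gapsMs: number[]): boolean {
      if (gapsMs.length < 10) return false; // too few samples to judge
      const mean = gapsMs.reduce((a, b) => a + b, 0) / gapsMs.length;
      const variance =
        gapsMs.reduce((acc, g) => acc + (g - mean) ** 2, 0) / gapsMs.length;
      const jitter = Math.sqrt(variance) / mean; // coefficient of variation
      // A human-like pace (150ms-2s between events) with near-zero jitter
      // suggests a bot imitating typing speed.
      return mean > 150 && mean < 2000 && jitter < 0.05;
    }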

InfiSecure’s technology is based on machine-learning functions that can profile applications to provide application-specific protection from bot attacks.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/barracuda-buys-bot-battling-tech-from-infisecure/d/d-id/1335513?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Android users menaced by pre-installed malware

How does malware find its way on to Android smartphones and tablets?

By some margin, it’s by way of Google’s Play Store, which despite repeated efforts to clean it up remains a recurring source of dodgy apps that sit somewhere between suspiciously misleading and downright malicious.

But according to a Black Hat presentation by Google Project Zero researcher Maddie Stone, there’s another route that’s nearly impossible for users to defend themselves against – malicious apps that have been factory pre-installed.

It starts with the sheer number of apps that now come with Android devices out of the box – somewhere between 100 and 400.

Criminals only need to subvert one of those, which has become a particular problem for cheaper smartphones built on the Android Open Source Project (AOSP) as opposed to the licensed ‘stock’ Google version that powers better-known brands.

Chamois botnet

She cited several instances encountered while doing her old job on Google’s Android Security team, including an SMS and click fraud botnet called Chamois which managed to infect at least 21 million devices from 2016 onwards.

The malware behind it proved harder to defeat than anticipated, in part because Google realised in March 2018 that on 7.4 million of those devices the infection had been pre-installed in the supply chain.

Google was able to reduce pre-installed Chamois to a tenth of that level by 2019 but, unfortunately, Chamois was only one of several supply chain security issues it uncovered.

Others involved 225 device makers that left diagnostic software offering backdoor remote access on devices, shipped modified Android Framework code that allowed spyware-level logging, or installed apps programmed to bypass Google Play Protect (GPP) security.

Some of this was inadvertent, a case of OEMs messing around with settings to make their lives easier, but it was dangerous enough for Google to assign the issue a CVE number and ship a software fix that outlawed the bypass in early 2019.

Supply chain complexity

The issue of supply chain malware has been rumbling away at a low level for some time, but this is the first time someone from Google has drawn attention to the issue in so much detail.

As Stone admits, stopping the problem is tougher than doing the same for rogue apps that make it on to the Google Play Store, because detection must happen at a lower level, beyond the reach of traditional security apps.

It’s also an inherent part of the complex OEM Android supply chain – contrast that with Apple, which controls the entire process for its iPhone.

With the cat now out of the bag regarding supply chain attacks on Android, Stone would like to see more third-party research into this software layer.

While a useful suggestion, this shouldn’t distract us from the fact that most users are still more likely to encounter bad apps in the one place many assume they won’t – Google’s Play Store.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vAwqo-IqEMU/

Chrome Incognito mode detection fix busted by researchers

Remember that Chrome update that stopped websites from detecting Incognito mode? Well, researchers claim to have found a way around it.

Chrome’s Incognito mode is supposed to let people use computers for browsing sessions without affecting that computer’s history or polluting the browser with session cookies. That means you can search for something on a computer without it showing up there, which is useful for everyone from victims of domestic abuse through to people searching for gifts.

People also discovered another use for incognito mode, though: getting past paywall sites. Incognito mode’s cookie blocking enabled people to start a fresh session with each visit. Visitors to metered paywall sites that provide a certain number of stories for free before demanding a subscription could effectively reset the meter each time they accessed the site.

Sites got wise to this and figured out a way to spot Chrome browsers in Incognito mode. In regular browsing sessions, Chrome uses the chrome.fileSystem API to read and write to the local filesystem. Google disabled that API in Incognito mode because those sessions are never supposed to leave data, cookies included, on disk.

Sites realised that they could check for the existence of chrome.fileSystem in visiting Chrome browsers. If they asked for it and got an error message, they knew the user was browsing in Incognito mode and could respond with a message telling the visitor to exit private browsing or subscribe.
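
The widely circulated version of that check probed the vendor-prefixed filesystem entry point. A minimal TypeScript sketch of the pre-Chrome-76 trick (the property access is cast loosely because the API is non-standard):

    // Legacy Incognito check: in old Chrome, the filesystem request
    // failed in Incognito mode and invoked the error callback.
    const requestFs = (window as any).webkitRequestFileSystem;
    if (typeof requestFs === "function") {
      requestFs(
        0 /* TEMPORARY */,
        1, // request a single byte
        () => console.log("Filesystem available: likely a normal session"),
        () => console.log("Filesystem error: likely Incognito")
      );
    }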

When Google launched Chrome 76, it stopped sites from using this technique by introducing a new kind of reader/writer for file streams. This writes to browser memory – where nothing persists – rather than to the local file system. It still looked like chrome.fileSystem to the querying website, meaning the site wouldn’t get the missing-API error message when probing for the API, and so couldn’t determine whether the visiting browser was in Incognito Mode.

Or, so Google thought – until now.

Security researcher Vikas Mishra noticed a loophole in the temporary storage that Chrome uses to hold website resources. He found that while non-Incognito windows are allowed a generous fraction of the device’s overall available storage, windows running in Incognito Mode get only 10% of the device’s memory, capped at 120MB.

On most modern devices, the temporary storage quota for a non-Incognito window comfortably exceeds 120MB. Therefore, he inferred that any window with a 120MB storage cap must be in Incognito Mode. He also helpfully provided a script that websites could implement.
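
A minimal sketch of that heuristic, using the standard navigator.storage.estimate() API (the 120MB cutoff mirrors the cap Mishra observed; it is not an official constant):

    // Quota heuristic: Incognito windows were capped at roughly 120MB of
    // temporary storage, far below a normal window's quota.
    async function quotaSuggestsIncognito(): Promise<boolean> {
      const { quota } = await navigator.storage.estimate();
      const capBytes = 120 * 1024 * 1024; // ~120MB, per Mishra's finding
      return quota !== undefined && quota <= capBytes;
    }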

Another researcher, Jesse Li, used a timing attack to detect Incognito Mode. He based it on the difference in speed of writing to memory rather than disk. He benchmarked write speeds by repeatedly writing large strings to the browser in both modes. The difference between the two benchmarks is large enough that he can tell which kind of storage system he’s writing to.
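
A toy version of the idea (not Li’s actual benchmark; localStorage stands in here for the storage being timed) looks something like this:

    // Timing probe: memory-backed storage (Incognito) completes repeated
    // large writes measurably faster than disk-backed storage.
    function benchmarkWritesMs(rounds: number = 50): number {
      const payload = "x".repeat(512 * 1024); // 512KB string per write
      const start = performance.now();
      for (let i = 0; i < rounds; i++) {
        localStorage.setItem("bench", payload); // overwrite each round
      }
      localStorage.removeItem("bench");
      return performance.now() - start; // compare against a disk profile
    }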

The problem is that the benchmark, which must be run each time a browser visits, takes “on the order of minutes or tens of seconds” to get enough data. Background processes may introduce noise that muddies the benchmarks, and if you’re using a system where the disk is memory (such as an OS run on a USB key), then it won’t work.

These are interesting exercises, but the first is easy to fix by adjusting a threshold, while the second is difficult to implement in practice. The real question here is how we got to a point where browser vendors and popular online publishers find themselves in an arms race to break browser-based privacy – and that involves a far longer discussion of the business models underpinning the web.

Surely there has to be a better model that works for everyone?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DYQIHCGdjmU/

Hacked devices can be turned into acoustic weapons

It’s bad enough that our devices can listen to us, whether it’s to use ultrasound to track us (even if we’re on an anonymous network) or whether it’s voice assistants picking up on our private conversations (including with human contractors listening in).

Now, PricewaterhouseCoopers (PwC) security researcher Matt Wixey brings us news of attacks that can make our devices’ embedded speakers scream at us, be it at inaudible, high-intensity frequencies or audible sounds at hearing-damaging volumes.

On Sunday at the Defcon security conference, he presented a talk on what he calls acoustic cyber-weapons.

Wixey, head of research at PwC’s cyber security practice, said that his experiments were done as part of his PhD research at University College London, where he delves into what he calls “unconventional” uses of sound as applied to security – including digital/physical crossover attacks that use malware to create physical and/or acoustic harm.

REALLY LOUD STUFF MAKES YOUR HEAD EXPLODE

If you aren’t already aware of how much damage given sounds can cause, in his slideshow for the Defcon talk, Wixey annotated a decibel chart from Survival Life to show what level of sound will cause…

  1. Your eyes to twitch – 100 dB, or somewhere between a chainsaw and a lawnmower.
  2. Your lungs to collapse/death imminent – 188 dB.
  3. Your bones to shatter and your internal organs to rupture – 194 dB.
  4. Instant death – 200 dB, or the sound of Windows XP starting up*.

(*I’m fairly sure the Windows XP reference is just a joke. But if you want to see what level of noise will cause your eardrums to rupture, check out this training manual from Purdue University.)

Wixey talked about how inflicting “aural barrages” can cause both psychological and physiological effects, ranging from neurasthenia, cardiac neurosis, hypotension and bradycardia to nausea, fatigue, headaches, tinnitus and ear pain, among others.

Wired quoted him:

I’ve always been interested in malware that can make that leap between the digital world and the physical world. We wondered if an attacker could develop malware or attacks to emit noise exceeding maximum permissible level guidelines, and therefore potentially cause adverse effects to users or people around.

If you keep melting your speakers, we won’t buy you more toys

Wixey told the BBC that he and his team used custom-made viruses, known vulnerabilities and other exploits to force a collection of devices to emit dangerous sounds for long periods of time.

Wixey didn’t specify which name brands they preyed on, but the devices included a $1,000 laptop upon which the team inflicted malware (remote and local), a $200 mobile phone that also got the remote and local malware treatment, a $50 Bluetooth speaker, a $200 smart speaker for which they exploited a known control-audio vulnerability, $400 headphones that were susceptible to multiple attack vectors, and other, even cheaper gadgets with embedded speakers.

It doesn’t really matter which brand names are susceptible to catching fire or burning a hole in your eardrums, since the susceptibility was pretty much brand-agnostic, Wixey said. Though we don’t know the brand names, we do know that many consumer devices do all sorts of things via ultrasound.

In September 2017, for example, scientists from China’s Zhejiang University proved it’s possible to control Siri, Alexa and other voice-activated programs by using inaudible ultrasound commands.

As the New York Times reported in May 2018, further research showed that such technology could be used to unlock doors, wire money or buy stuff online, simply by hiding commands in white noise played over loudspeakers and through YouTube videos.

At any rate, back to Wixey and the experiments he ran on his toys: he locked them into a soundproof container with minimal echo – an anechoic chamber – and then subjected each device to simple code scripts or slightly more complex malware he had written to run on it.

Some of his attacks leveraged known vulnerabilities in a particular device, which could be done locally or remotely in some cases, he said. Other attacks would require physical proximity or physical access. The results: after getting each device to play a particular tone for 10 minutes, one of them – a vibration speaker – vibrated so much that it kept falling over.

But it was the smart speaker that gave off the best fireworks: Wixey’s attacks caused it to melt. The speaker began to give off a burning smell, and further testing showed that it had been permanently damaged.

Wixey said that the manufacturers have all been informed, that they were responsive and cooperative, and that updates have been rolled out to fix the issues. He told Wired that he and his team are keeping the details close to the vest, given the ethics involved:

There are a lot of ethical considerations and we want to minimize the risk. But the upshot of it is that the minority of the devices we tested could in theory be attacked and repurposed as acoustic weapons.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/S9CML6xBVTQ/

Fake news doesn’t (always) fool mice

Mice can’t vote.

They can neither fill in little ovals on ballots nor move voting machine toggles with their itty bitty paws. That’s unfortunate, because the teeny rodents are less inclined than humans to be swayed by the semantics of fake news content in the form of doctored video and audio, according to researchers.

Still, the ability of mice to recognize real vs. fake phonetic construction can come in handy for sniffing out deep fakes. According to researchers at the University of Oregon’s Institute of Neuroscience, who presented their findings at the Black Hat security conference last Wednesday (7 August), recent work has shown that “the auditory system of mice resembles closely that of humans in the ability to recognize many complex sound groups.”

Mice do not understand the words, but respond to the stimulus of sounds and can be trained to recognize real vs. fake phonetic construction. We theorize that this may be advantageous in detecting the subtle signals of improper audio manipulation, without being swayed by the semantic content of the speech.

No roomfuls of adorable mice watching YouTube

Jonathan Saunders, one of the project’s researchers, told the BBC that – unfortunately for those who find the notion irresistibly cute – the end goal of the research is not to have battalions of trained mice vetting our news:

While I think the idea of a room full of mice in real time detecting fake audio on YouTube is really adorable, I don’t think that is practical for obvious reasons.

Rather, the goal is to learn from how the mice do it and then to use the insights in order to augment existing automated fakery detection technologies.

Saunders told the BBC that he and his team trained mice to understand a small set of phonemes: the sounds that humans make that distinguish one word from another:

We’ve taught mice to tell us the difference between a ‘buh’ and a ‘guh’ sound across a bunch of different contexts, surrounded by different vowels, so they know ‘boe’ and ‘bih’ and ‘bah’ – all these different fancy things that we take for granted.

And because they can learn this really complex problem of categorising different speech sounds, we think that it should be possible to train the mice to detect fake and real speech.

The mice got a treat when they interpreted the speech correctly, which they did up to 80% of the time. Maybe that’s not stellar, but if you combine it with existing methods of detecting deep fakes, it could be valuable input.

State of the art

As it is, both humans and machines do well at detecting fakes. The researchers conducted a small user study in which participants were asked to differentiate between short clips of real speech vs. fake ones. The humans did OK: our species’ median accuracy was 88%.

That’s close to the median accuracy of 92% for the state of the art algorithms evaluated for the challenge: algorithms that detect unusual head movements or inconsistent lighting, or, in shoddier deep fakes, spot subjects who don’t blink. (The US Defense Advanced Research Projects Agency [DARPA] has found that a lack of blinking, at least as of the circa August 2018 state of the technology’s evolution, was a giveaway.)

In spite of the current, fairly high detection rate, we need all the help we can get to withstand the ever more sophisticated fakes that are coming. Deep fake technology is evolving at breakneck speed, and just because detection is fairly reliable now doesn’t mean it’s going to stay that way. Difficult-to-detect impersonation was thus a “significant” topic at this year’s Black Hat and Def Con conferences, the BBC reports.

An error rate hovering around 10% not only lets through a deluge of online-delivered fakery; it also implies a fairly high false positive rate, with real news flagged as fake, the researchers noted.

And even with detection rates fairly high, be it via biological or machine means, convincing fakes are already out there. For example, experts believe that a family of dueling computer programs known as generative adversarial networks (GANs) were used to create what an AP investigation recently suggested was a deep fake LinkedIn profile of a comely young woman who was suspiciously well-connected to people in power.

Forensic experts easily spotted 30-year-old “Katie Jones” as a deep fake. But that didn’t keep plenty of well-connected people in the government and military from accepting “her” LinkedIn invitations.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DOZxZWFWABA/

Header aches in Firefox, Tor, Brave and Chrome as HTTP opens new security holes

The HTTP Alternative Services header can be abused to conduct network reconnaissance and attacks, to bypass malware protection services, and to foil tracking defenses and privacy assumptions, according to a paper scheduled to be presented at the WOOT ’19 security conference on Tuesday.

Back in March 2016, the Internet Engineering Steering Group approved the HTTP Alternative Services header as a proposed web standard for situations when a web server needs to send a client to another service.

There are a variety of legitimate reasons to do this: a web server may be overloaded with requests, may be undergoing maintenance, or may determine that another server is closer (and thus quicker to respond).

As Mark Nottingham, co-chair of the IETF HTTP and QUIC Working Groups, explained at the time, such redirection can be handled by DNS load balancing under short-lived HTTP/1.1 connections.

But DNS load balancing doesn’t work as well with HTTP/2, which is designed to maintain a persistent connection.

HTTP Alternative Services was designed as an alternative method to point requests elsewhere. It allows a web server to return a header that specifies another server as the host of its resources, in effect deputizing the stand-in to act as the Origin, the first-party source of content.

“The ability to redirect clients to use another server in a transparent, persistent fashion brings some obvious security concerns,” said Nottingham in his post.

A paper titled “Alternative (ab)uses for HTTP Alternative Services,” by boffins Trishita Tiwari, who co-authored the paper while at Boston University and is currently a cyber-security PhD student at Cornell University, and Ari Trachtenberg, professor of electrical and computer engineering at Boston University, makes these obvious security concerns more evident.

HTTP headers are a way for clients and servers to communicate metadata during a request-response interaction. An Alternative Services header, returned with a response to a client request, might look something like this:

Alt-Svc: h2="new.example.org:443"; ma=600

This tells the client that for the next ten minutes (600 seconds), it can connect to new.example.org on port 443 using the HTTP/2 protocol. The redirect information might also include an optional persist parameter indicating that the alternative service can be remembered after session and network changes.
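
For illustration, a header that also sets the optional persist parameter (defined in RFC 7838) would look like this:

Alt-Svc: h2="new.example.org:443"; ma=600; persist=1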

According to Tiwari and Trachtenberg, Google uses Alternative Services to advertise an alternate endpoint for serving content via its UDP-based QUIC protocol. Meanwhile Facebook, among other websites, detects Tor client browsers and uses Alternative Services to point users to onion hidden service endpoints. The protocol is supported by desktop browsers like Brave, Chrome, Firefox, Opera and Tor, as well as various mobile browsers.

The researchers found that the Alt-Svc header can be used to force a Firefox or Tor user to scan any TCP ports of any host accessible to the victim. They reported this vulnerability to Mozilla, which issued a fix for CVE-2019-11728 in its July release of Firefox 68 (which also covers Tor, based on Firefox ESR).

They also implemented this attack on Chrome and Chromium-based Brave using QUIC as an Alt-Svc endpoint. The researchers say they’ve disclosed this to Google and discussions about mitigations are underway; QUIC is currently hidden from Chrome users behind an experimental flag and must be activated by the user.

“The basis of this attack is the observation that if a website specifies an Alt-Svc header to a secondary host with an HTTP/2/QUIC endpoint, then browsers immediately try to initiate a handshake with the secondary host, without performing any checks on the host or port,” the researchers explain in their paper. “The secondary host could even be a private IP or localhost, and the port could be on the browser’s HTTP port blacklist.”
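
As a hypothetical sketch of the attacker’s side (not code from the paper; the host and port list are invented, and in practice the page would need to be served over HTTPS for the header to be honoured), a malicious server could rotate the advertised target across responses and infer open ports from the resulting connection behaviour:

    // Rotate Alt-Svc targets to probe a victim's internal network.
    import { createServer } from "http";

    const ports = [22, 80, 443, 3306, 8080];
    let next = 0;

    createServer((req, res) => {
      const port = ports[next++ % ports.length];
      // The visiting browser immediately attempts an HTTP/2 handshake
      // with this host:port, with no checks on either value.
      res.setHeader("Alt-Svc", `h2="10.0.0.1:${port}"; ma=60`);
      res.end("probe");
    }).listen(8443);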

Alt-Svc also provides a way to bypass the Safe Browsing system used by Brave, Chrome and Firefox. “[I]f a clean, whitelisted first-party specifies a black listed domain as its Alt-Svc, then the Safe Browsing checks are skipped and all content is loaded from the malicious domain,” the research paper explains, noting that the blocked site must present the certificate of the clean site, which would require collusion between the two.

The header also provides a way to avoid online site checking tools like VirusTotal, URLVoid, Sucuri and IPVoid, the researchers say, noting that security checks like Safe Browsing need to not only check the first-party domain but also the designated Alt-Svc domain before marking websites safe.

Distributed denial of service attacks are possible in Firefox and Tor because, unlike Chromium-based browsers, they do not remember broken endpoints. Thus it’s possible to use an iframe to create a reload loop that forces repeated TLS connection attempts, creating a denial of service attack.

What’s more, Alt-Svc can be used to track people despite privacy protections. “By specifying a unique Alt-Svc for each user, and observing subsequent user requests, an attacker could track a user both as a first-party website and a third-party iframe or image,” the paper explains.
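
For example (a hypothetical header, not one taken from the paper), a tracker could hand each visitor a unique alternative hostname and recognise the user whenever that endpoint is contacted again:

Alt-Svc: h2="u-7f3a92c1.tracker.example:443"; ma=86400; persist=1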

Along similar lines, network service providers can abuse Alt-Svc to extract otherwise unavailable web history data through the observation of resource loading.

In an email to The Register, Tiwari said Mozilla addressed the port scanning and DDoS vulnerability with a patch that reduced the surface area of the attack by preventing Alt-Svc connections to certain sensitive non-HTTP ports.

“While this does not entirely eliminate the underlying issue (as is often the case with patches for many side-channel attacks), this patch brought Mozilla’s exposure down to the same level as Chrome and Brave (which most would consider to be an acceptable level of exposure),” she said.

The Safe Browsing issues discussed in the paper have yet to be fully addressed. “I was unconvinced of the explanation that Google provided us on how they address this issue,” she said.

“We have been trying to communicate with them, but they haven’t been particularly responsive about the concerns that we raised about their mitigations. This is really unfortunate given that Safe Browsing is employed by not just Chrome, but also Firefox, Brave, etc. and so is relied upon by a lot of people.”

Tiwari said that the security implications mentioned in Nottingham’s 2016 blog post were incorporated into the Alt-Svc spec.

“The spec does attempt to address these issues, but the mitigations proposed there (i.e., clearing the Alt-Svc cache when the user clears their browser cache) are not strong enough,” she said.

“Browser vendors understand this and are now proposing much stronger mitigations like cache isolation (which should, in my opinion, be included in the spec so that it is not at the mercy of individual browser vendors to implement it – user tracking has become a rising issue, and it is high time that these RFCs start requiring cache isolation upfront).”

“The rest of the attacks we show in the paper stem from how the Alt-Svc spec was improperly implemented, so, in a sense, the remaining attacks weren’t fundamental design flaws, but rather flaws in the way browser vendors implemented the design,” she said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/13/header_banged_for_bafflingly_bad_behavior/

US insurers face SEC probe over web-access bungle that exposed ‘up to 885 million’ files

The US Securities and Exchange Commission is said to be investigating a US insurance company that allegedly left 885 million personal records accessible “without authentication to anyone with a web browser”.

As revealed by infosec journalist Brian Krebs in May this year, First American Financial Corporation was said to have leaked sequentially numbered documents including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts, and images of driving licences. The firm disabled serving of the files after being told of the leak.

Regarding the SEC’s investigation, Krebs cited a letter sent to Ben Shoval, the property developer who originally noticed the leak earlier this year, from the commission’s enforcement division. The letter asked Shoval to “immediately preserve, and voluntarily provide us with” any documents he had from the time of the data leak.

As we reported in May this year, the unsecured records were said to have dated back to 2003, which goes some way to explain the sheer scale of the allegations.

A class-action lawsuit (PDF) has also been under way since late May, with the lead claimant similarly alleging that First American was using sequential document numbers to display information to customers – potentially allowing anyone to change a digit or two of a URL of one insurance-related document to gain access to another belonging to a stranger.

The complaint claimed:

It took no computer sleuthing to uncover numbers that will pull personal data; First American’s document identification numbers were sequential. Follow that sequence 885 million times — 1, 2, 3, 4, and so forth — and you could access all 885 million.
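
To illustrate the class of flaw being alleged (the URL scheme here is invented, not First American’s actual one), sequential identifiers reduce document access to simple counting:

    // Hypothetical illustration of an IDOR via sequential document IDs:
    // nothing but the trailing number gates access to each record.
    const base = "https://title-insurer.example/docs/";
    for (let id = 1; id <= 5; id++) {
      console.log(`${base}${id}`); // each URL would return someone's file
    }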

In mid-July, First American issued a statement claiming that it had identified just 32 customers whose “non-public personal information” was “likely accessed without authorisation”, and offered them free credit-monitoring services. That was more than double its previous estimate that only 14 customers’ information had been accessed. We have asked the firm whether it will comment on the latest developments.

It is understood that the SEC investigation centres around a potential breach of securities (stock exchange and share trading) law.

Until relatively recently, the US’s breach reporting laws lagged behind enforcement regimes such as the EU’s, though some states such as California have stronger mandatory disclosure laws than most, as HSBC’s US arm found out the hard way in November.

Nonetheless, attacks by cybercriminals against banks are commonplace for the obvious reason. In February this year, the Bank of Valletta, Malta, pulled the plug on its entire internet access to thwart an attempted €13m cyberheist.

Closer to home, Tesco Bank was fined £16m by the Financial Conduct Authority in October 2018 after a 2016 hack saw £2.26m pinched from more than 9,000 hapless customers. ®

The class action suit is: David Gritz et al v. First American Financial Corporation

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/13/first_american_financial_corp_885m_data_breach_sec/

Moving on Up: Ready for Your Apps to Live in the Cloud?

Among the complications: traditional security tools work poorly or not at all in the cloud, and if a company screws up, the whole Internet will know.

Some people dread moving. Understandable. Others, however, get a real charge from the opportunity to purge, scale down, and take only what they need.    

As the digital landscape changes, organizations are doing some purging of their own as they move to the cloud. But that transition isn’t as easy as packing up dishes and linens, putting boxes on a truck, and heading off to a new destination.

Migrating to the cloud is challenging because not only must organizations determine what they will need in the cloud, but all of those applications must then be rebuilt in a new environment. All the while, the attack surface in need of defense is expanding, and the rules that worked in a network environment might not be enough in the cloud.

“A major factor complicating cloud security is that traditional security tools work poorly or not at all in the cloud,” says Dan Hubbard, CEO at Lacework. “Even if the most skilled cloud migration staff members recognize that security is important and necessary, businesses are often fuzzy about the details of cloud security.”

Moving at the Speed of DevOps
The speed at which businesses now operate has directly impacted the speed at which applications are developed and moved to the cloud. Because no one has the gift of hindsight to truly prepare for threats in this new environment, security teams have their work cut out for them.

It’s no surprise, then, that a recent report from Bitglass found that, as more organizations transition to the cloud, 93% of survey respondents are at least moderately concerned about their ability to use the cloud securely.

Historically, security has come in at what is often called the “far right” of the development process — meaning organizations often need to decide whether to slow down the business to fix a vulnerability or to sign off on the risk and allow the application to go into production, explains Chris Carlson, VP of cloud agent platform at Qualys.

“The opportunity is for security leaders to use new security tools that integrate easily and well with these new cloud IT development production tools to actually make Web applications in the cloud even more secure than they were when they were on-premise,” he says.

Rethinking Security in the Cloud
When migrating to platform-as-a-service (PaaS) offerings, organizations are looking at a reimplementation of functionality against cloud offerings, according to Dr. John Michener, chief scientist at Casaba Security.

“These can be quite secure, but there is a major potential gotcha: If they screw up within a corporation, the exposure may be corporate-wide. If they screw up access settings in the cloud, the exposure is likely to be Internet-wide,” Michener says.

In addition, reliable standards have yet to be established, says Dr. David Brumley, CEO of ForAllSecure and a professor at Carnegie Mellon University. People are still trying to determine the best ways to ensure policies are in place.

“The security principles that have traditionally existed on the network are still critical in the cloud, but the cloud exacerbates age-old problems,” he says. “Organizations still need to do access control and ensure the protection of data in the event that something does get in your system.”  

The most successful organizations are rethinking how they perform security, moving to a combination of security as a centralized governance and tooling organization with security distributed within the development teams, according to Hubbard. “With that, solutions need to fit into these modern deployments, straddling the needs of both security and application development,” he says.

If possible, organizations should first migrate to appropriate SaaS implementations, such as Office 365, Salesforce, and SAP HANA, Casaba Security’s Michener advises. “I would expect organizations to keep legacy apps in-house or migrate them to [infrastructure-as-a-service] instances that are equally as convenient and economical, but custom in-house apps would be the target for PaaS migration,” he says.

Keeping Up with the Business
In many organizations, applications were moved to the cloud before their security teams really knew what the threats in the cloud were. Now they’re playing catch-up while more applications migrate. While doing so, organizations need to remember that security is not simply “true” or “false,” Brumley says. Companies are migrating to the cloud to increase accessibility and iterate faster, but at the same time security becomes more difficult almost by definition.

To get ahead, it’s important to build defense in depth and have a strategy in place, says Kory Daniels, global director of iSecOps at Trustwave. “The organization should have a framework that allows an adaptive and agile mentality of identifying proper use cases,” he says.

“If we have the ability to identify the key things we want to protect, then we look at our ability to ensure we have the proper controls in place to lock down data to the best of our ability without inhibiting business,” Daniels says.

And because there’s no way of truly knowing what all the threats are in the cloud, “it’s important to have the people, process, and technology from detection and response to ensure that you can at least identify where the threats are being exploited,” he adds.

Image source: blackboard via Adobe Stock

Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibition’s security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM’s Security Intelligence. She has also contributed to several publications, …

Article source: https://www.darkreading.com/edge/theedge/moving-on-up-ready-for-your-apps-to-live-in-the-cloud/b/d-id/1335490?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

2019 Pwnie Award Winners (And Those Who Wish They Weren’t)

This year’s round-up includes awards in two new categories: most under-hyped research and epic achievement.

Image Source: Black Hat USA 2019

The annual Pwnie Awards recognize people and organizations for making a mark, one way or the other, on the information security industry.

The awards ceremony, held at the Black Hat USA security conference, bears little resemblance to the Oscars, Grammys, Emmys, or pretty much any other awards show. There’s no glitz or glamour. The dress code is strictly informal; shorts and a T-shirt are perfectly acceptable sartorial choices. Judges lightheartedly beatbox and/or thigh-slap the drumrolls, and the awards themselves recognize not just excellence in the field of information security, but also the more dubious distinctions and epic fails.

For those who win — in the excellence category, that is — the awards are both peer recognition and an affirmation of their contributions to the broader security community. For those selected for some of the less desirable Pwnies (lamest vendor response, for instance), the awards are often both a rebuke and a reminder to improve their acts.

This year, Pwnies were awarded in 10 different categories, including two new ones: Most Under-Hyped Research and Epic Achievement. Here’s a complete listing of the winners in each of the categories.

 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/2019-pwnie-award-winners-(and-those-who-wish-they-werent)/d/d-id/1335492?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

History Doesn’t Repeat Itself in Cyberspace

The 10th anniversary of the US Cyber Command is an opportunity to prepare for unknowns in the rapidly changing cybersecurity landscape.

Ten years ago, GPS on phones was just becoming available. Self-driving cars were secretly making their way into traffic, and most people hadn’t even heard of 3D printing. This was when the US Cyber Command was created to direct and coordinate cyberspace planning and operations to defend and advance national interests with domestic and international partners.

It’s an understatement to say things have changed a lot since 2009, especially the cyber landscape. Though the majority of its operations are classified, it’s not hard to imagine the Cyber Command has also gone through major changes over the past decade.

Anniversaries are usually an opportunity to reflect on the past and think about the future, but that’s tricky to do when most of the Cyber Command’s activities are essentially kept from the public’s eye. And while history is known to repeat itself, cyberspace — the epitome of constant change — bucks that trend. This secrecy, combined with the dynamic cyber landscape, makes it difficult to accurately predict what the next decade might bring for the Cyber Command and technology in general. (Seriously, who could’ve foreseen that a social media platform conceived by a broken-hearted student in a college dorm room would end up being a tool for skewing elections of a world superpower?)

After a recent (and rare) briefing at its new Joint Operations Center, a modicum of visibility emerged regarding the maturing Cyber Command’s new “defend forward” operating philosophy. With publicly announced plans to defend the 2020 elections from foreign interference, along with authorization to operate against overseas adversaries, it seems likely that the Cyber Command is stepping up its cyber warfare game, as it should. But will investment in its own technology infrastructure be commensurate with the risks it faces?

This 10-year milestone is exactly the right time to contemplate what may be said about the Cyber Command in 2029, and sentiment will hinge on technology decisions it makes in the near term. A decade from now, we’ll look back again across the entire cyber landscape to assess the efficacy of the command and many other federal agencies, especially as multicloud complexity increases and threats become increasingly hard to thwart.  

There are clues that point to what the future holds, and at least one thing comes into focus pretty clearly right now: risky behavior taking place in federal agencies across the board is a huge homegrown threat that the Cyber Command (and anyone conducting business online) cannot ignore.

A recent report revealed that digital transformation efforts of federal agencies are putting sensitive government data — your data — at risk. Nearly 70% of respondents in the report admit they’re not encrypting the data they’re supposed to be protecting. Even as agencies struggle with cloud complexity, the race for digitally transformative technologies is literally pushing security aside. And despite increases in data breaches and regulatory compliance, proper investment in data protection is low for agencies. Without a sea change, 2029 won’t mark a happy anniversary.   

Cyber Command’s work over the next 10 years will require an increasing level of interoperability of data and data-handling systems between federal agencies — something they’ve acknowledged. But without the most robust encryption security in place, data fusion that must take place between multiple federal agencies will continue to be risky and potentially expose secrets to adversaries who are also building up their own cyber forces, for good or evil.  

Cyber Command acknowledges it must focus on persistent innovation and rapid change. During opening remarks at a Cyber Subcommittee Hearing last year to review Department of Defense operational readiness, Senator Mike Rounds of South Dakota, a member of the Senate Armed Services Committee and chairman of the Cybersecurity Subcommittee, said cyber readiness issues revolve around several problems including “…the shortage of skilled, cyber-capable personnel” and concerns about being properly equipped with the right tools to respond to operational needs.

At a minimum, these pronouncements show Cyber Command recognizes the clear and present danger of not being prepared in the cyber theater of war. If 60% of federal respondents in the same report say they’ve been breached (with 35% in the past year alone), and only 30% are properly encrypting data, the wake-up call should be loud and clear: Investment in modern data solutions for modern architectures is critical to national and global security.

Data security professionals, federal or otherwise, face a ticking time bomb and must be constantly vigilant. Everyone — from the intern to the CEO — has data worth stealing and worth protecting. Without support and proper investment, the institutions they protect will remain at risk.

Nick Jovanovic has more than 18 years of experience as a technology expert with familiarity in a broad spectrum of data storage and security technologies. He is currently responsible for leading and growing the Thales CPL U.S. Federal sales team by providing federally …

Article source: https://www.darkreading.com/vulnerabilities---threats/history-doesnt-repeat-itself-in-cyberspace/a/d-id/1335419?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple