STE WILLIAMS

For sale: iPhone hacking tool, one previous (not very careful) owner

Cellebrite phone-cracking devices, beloved by law enforcement, are available at bargain-basement prices on eBay, so you can take a gander at all the devices that the police have presumably been able to squeeze for data.

Here’s a second-hand Cellebrite UFED device showing off its capabilities, courtesy of security researcher Matthew Hickey:

Hickey is cofounder of training academy Hacker House. He recently told Forbes that he’d picked up a dozen Cellebrite UFED devices for dirt cheap and probed them for data, which he found… in spades.

What surprised Hickey was that nobody bothered to wipe these things before dumping them onto eBay, he told Forbes:

You’d think a forensics device used by law enforcement would be wiped before resale. The sheer volume of these units appearing online is indicative that some may not be renewing Cellebrite and disposing of the units elsewhere.

Yes, you would think that a very expensive forensics device such as Cellebrite’s UFED – reportedly, brand-new models start at $6,000 – that’s used by law enforcement to crack the encryption on (older) iPhone models, as well as on phones from Samsung, LG, ZTE and Motorola, would be wiped before resale… on eBay, for prices starting at $100.

Forbes reports that these valuable devices, for which US federal agencies including the FBI and Immigration and Customs Enforcement (ICE) have been paying millions of dollars, can be found, used, on sale for between $100 and $1,000 a unit.

Some Cellebrite history

Cellebrite got a lot of attention during the FBI vs. Apple encryption battle, which got particularly loud after the San Bernardino terrorist attacks. We never found out for sure what tool the FBI used to break into the terrorist’s iPhone, though it was reported that Cellebrite offered to do the cracking.

An FBI source subsequently denied that the bureau used Cellebrite to get into the iPhone. A court decision in October 2017 ensured that the FBI’s secret iPhone hacking tool would stay under wraps.

Regardless of Cellebrite’s role or lack thereof in the San Bernardino iPhone cracking case, its forensics devices have been used to break into a whole lot of mobile phones.

What’s on these bargain-bin babies?

When Hickey probed the UFED devices for data earlier this month, he discovered that they contained information on what devices they’d been used to search, when they were searched, and what kinds of data they got at. Forbes reports that mobile identifier numbers, like the IMEI code, were also retrievable.

Hickey says he also found what looked like Wi-Fi passwords left behind on the UFEDs. They could have been those of the police agencies that used the devices, or perhaps they were those of independent investigators or business auditors, Forbes suggested.

There could be other, far more valuable data on the devices. Hickey hasn’t had success at extracting any of the software vulnerabilities that Cellebrite uses to slip past Apple and Google’s protections… yet. The encrypted keys to do so should be extractable, though.

Why are the UFEDs available now?

That’s an easy one: they’re available now because there are new models out, with updated software. As of a year ago, Cellebrite could reportedly crack every iPhone up to the then-latest version of iOS, 11.2.6.

“Fairly poor” security on the units

Hickey managed to get the residual data left on the older model UFEDs by retrieving admin account passwords for the devices and taking them over: something he could do because their security was “fairly poor,” he said. He also found it simple to crack the devices’ license controls by relying on guides he found in online Turkish forums.

A hacker with chops could get up to plenty of no-good that way. From Forbes:

A skilled hacker could unleash the device to break into iPhones or other smartphones using the same information, [Hickey] said. A malicious attacker could also modify a unit to falsify evidence or even reverse the forensics process and create a phone capable of hacking the Cellebrite tech, Hickey warned.

Cellebrite is not amused

Sources from the forensics industry showed Forbes a letter from Cellebrite in which it warned customers about reselling its hacking devices, given that they can be used to access individuals’ private data.

The UFEDs should be returned to Cellebrite so they can be properly decommissioned, but it’s looking like police, and/or others who’ve possessed the devices, are putting them up for sale to anybody and everybody, despite the fact that they haven’t been wiped clean of the sensitive data they contain.

Forbes reports that cybersecurity researchers are now warning that valuable case data and powerful police hacking tools could have leaked as a result of the unwiped gadgets being put up on the auction block.

But as far as Hickey is concerned, his second-hand Cellebrite units have a higher calling in store: he’s planning to rig them up to run the classic shooter Doom:

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zpXx04dyyLw/

Data-tracking Chrome flaw triggered by viewing PDFs

Researchers have spotted an unusual ‘trackware’ attack triggered by viewing a PDF inside the Chrome browser.

Security company EdgeSpot said it noticed suspicious PDFs, which seem to have been circulating since 2017, sending HTTP POST traffic to the tracking site readnotify.com.

The behaviour only happened when a user viewed a PDF using desktop Google Chrome – when opened in Adobe Reader the PDF’s behaviour returned to normal.

Data sent included the user’s IP address, the Chrome and OS versions, and the full path of the PDF on their computer.

While not the most fearsome-sounding exploit going, the design is similar to an attack discovered last April (CVE-2018-4993) designed to steal NT LAN Manager (NTLMv2) hashes via the Adobe and Foxit readers.

A second variant of this attack was later discovered by EdgeSpot in November, identified and patched as CVE-2018-15979.

Why would someone be interested in relatively innocuous data?

I’m speculating here, but one possibility might be to test the feasibility of using PDFs in this way in advance of a more significant campaign.

If so, it wasn’t a bad strategy for crawling under the radar in a way that would be harder to pull off when trying the same technique against Adobe Reader. Wrote EdgeSpot:

We decided to release our finding prior to the patch because we think it’s better to give the affected users a chance to be informed/alerted of the potential risk, since the active exploits/samples are in the wild while the patch is not near away.

What to do

Until the issue is patched, EdgeSpot’s recommendation is to view PDFs in an application other than Chrome, or even disconnect a computer from the internet when opening PDFs (Chrome on Android isn’t affected as opening PDFs on mobile devices is done through a separate app).

A possible alternative is to change Chrome’s default option of rendering PDFs in the browser so that instead they download for viewing in a separate application such as Adobe Reader. This is done via Settings > Advanced > Content settings > PDF documents, ticking the option Download PDF files instead of automatically opening them in Chrome.

Note that if you’re running Reader DC on Windows, it might also have installed a separate Chrome extension for opening PDFs. This doesn’t override Chrome’s PDF download/display settings, so it can be left enabled.

According to EdgeSpot, Google will fix the vulnerability in “late April”, presumably a reference to Chrome 74, due on 23 April (30 April on Chromebook).

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Co32SxV3joM/

The Momo Challenge urban legend – what on earth is going on?

Some ideas are so good at getting people to spread them that they go viral.

There doesn’t have to be any design, purpose or merit in an idea to make it spread. It doesn’t have to be good, interesting, helpful, useful or true, in fact it can even be a very bad, even harmful, idea. All it has to do to spread is trigger our urge to share it with others.

One way to do that is to trigger the deep, primal urge inside parents to protect their children (and the deep primal urge within online news outlets to scare parents for clicks). And, over the last week or so, that’s exactly what an idea called the Momo Challenge has been preying upon.

This article is about why you shouldn’t worry about the Momo Challenge, how we got here, and what we can usefully take away from this situation.

I’ll start by looking at what the Momo Challenge is and isn’t.

What is the Momo Challenge?

The Momo Challenge is a modern equivalent of a campfire-side horror story.

Its fifteen minutes of infamy began with a story about a “haunted” WhatsApp account with the name Momo and a very creepy picture of a woman’s distorted face for an avatar.

The avatar is actually a much-shared picture of a sculpture called Momo (Mother Bird), made by a special effects company and exhibited in the Vanilla Gallery in Tokyo, Japan. The photo is much more disturbing when it’s cropped to show only the Mother Bird’s human head, and that’s the picture most often associated with Momo.

Legend has it that users who attempted to contact the Spanish-speaking WhatsApp account were mostly ignored but occasionally rewarded with responses in the form of “insults … and disturbing images”.

In a July 2018 video called Exploring The Momo Situation, YouTuber ReignBot took a look at the challenge and concluded that, by mid-2018:

The Momo thing is much more akin to an urban legend right now … People are claiming what Momo is and what Momo does, but not that many people have actually interacted with the account. Finding screenshots of interactions with Momo is nearly impossible.

Viral ideas often start life in one form and only explode into our collective consciousness after mutating (perhaps just through endless retelling) into something more frightening, and that seems to be what’s propelled Momo too.

Over time the idea seems to have undergone two important mutations.

At some point what people mean by The Momo Challenge seems to have changed from a story about a WhatsApp account into a new name for a completely different urban legend called The Blue Whale Challenge.

The Blue Whale Challenge is a story about a game in which players have to perform acts of self harm, before winning the game by committing suicide.

In both cases it’s important to note that the phenomenon isn’t the game, which almost certainly never existed, but stories about the game, and stories about stories about the game.

The second, more recent mutation to the idea seems to have occurred in the last week: that the Momo Challenge is appearing in the middle of innocent YouTube videos about things kids like, such as Peppa Pig and Fortnite.

For parents like me, whose kids love watching YouTube videos, that’s a terrifying thought. But, like the previous incarnations of Momo, that terrifying thought is being triggered by hysterical stories and warnings about videos, not by actual harmful videos, for which there seems to be no evidence at all.

The one verifiably real thing in the whole Momo saga, which seems to have propelled the meme on its journey, is the unsettling picture of the Mother Bird’s head. It’s a creepy picture which, by itself, might be enough to scare children.

What should you do?

Please, now you know what Momo is, don’t spread the hoax, and think twice the next time you receive a similar warning. When situations like this occur it’s entirely understandable that people want to warn others, but it’s normally counterproductive.

For example, thanks to unfounded social media chatter and the ensuing wall-to-wall media coverage, children are now talking to each other about Momo in the playground, and the scary Momo picture is all over the internet (but not in this article – if you want to see it, take a look at Momo’s Know Your Meme page.)

Entirely because of those warnings there’s now a good chance your children will see the picture, and you should probably talk to them about what it is, and what it’s not, before they do.

All the attention that Momo is getting also increases the chance of copycats, or of scammers using it in social engineering attacks on kids or parents.

As a general heuristic, I recommend that you treat all unsolicited warnings about specific computer security threats as hoaxes, unless they come from reputable computer security organisations. And, I recommend you focus on doing the basics right rather than worrying about how to deal with specific threats.

You and your children are at some risk from a wide variety of online dangers all the time. There are too many to deal with on a case-by-case basis and getting cybersecurity right isn’t about doing one thing, it’s a process.

When it comes to your children, that means knowing what they’re doing online.

What that looks like is going to vary from one family to another, but here’s what we do in our house, with children under ten:

My children have time-limited access to a Mac laptop with parental controls enabled. If they want to use the computer they have to ask, and they have to say what they’re going to do. Their access to messaging is limited to email, which is restricted to classmates and, since they’ve only just started to use it, has to be done with a parent, so we can teach them the dos and don’ts.

Their favourite activity of all is looking at YouTube videos (almost always about Minecraft) but they are only allowed to look at videos by authors we have vetted and subscribed to, and they have to do it in a room with a parent in it.

If you’ve used different rules successfully, particularly with older children, I’d love to read about them in the comments below.

For more on the Momo Challenge, take a look at yesterday’s Naked Security Live video about Momo, embedded below.

(Watch directly on YouTube if the video won’t play here.)

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/e2y61jgh7Oo/

After last year’s sexism shambles, 2019’s RSA infosec event has upped its inclusivity game

RSA As San Francisco gets ready for its annual RSA Gabfest Conference, organisers appear to have got the message over inclusivity following last year’s fiasco, but they aren’t out of the woods yet.

When the 2018 event was announced, many in the security industry were shocked that precisely one non-male speaker had been booked – Monica Lewinsky. This proved hard for many to swallow, and the matter wasn’t helped by RSA’s initial response that the industry was male-dominated and they just couldn’t find good female speakers.

This was embarrassingly refuted when a group of volunteers, operating on a shoestring budget and working in their spare time, managed to get the OURSA conference up and running, with 14 senior women in the cybersecurity industry giving a range of talks. This practical demonstration seems to have shamed RSA into action and there is now a much more diverse lineup of speakers for this year’s shindig.

“They have made a lot of progress in gender parity this year,” Melanie Ensign – Uber’s head of security and privacy communications, and one of the OURSA conference organizers – told The Register.

“What’s missing is the overall acknowledgement of the environment and culture of the conference. When you think about culture, it doesn’t matter how many women are on the show’s stages if I’m getting harassed on the show floor. I know they can do better, they have the resources and means to do it.”

Someone else’s problem

One senior female security executive, who wished to remain anonymous, pointed out this isn’t all RSA’s fault. After all, the conference organisers auction off many of the keynote spots to the highest bidder and it’s up to the paying company to decide who they send.

“Companies think of it as RSA’s problem that there aren’t diverse speakers,” she told The Reg. “Everyone else is waiting for them to solve the problem because they don’t want to give up time on stage. That’s one of the reasons why RSA doesn’t have more female speakers.”

As she and others have pointed out, RSA isn’t really a security conference as such, but a sales bonanza where the security industry tries to shift kit. While the exhibition floor has been noticeably lacking in booth beefcakes and bimbos, there hasn’t been an RSA yet that this hack hasn’t heard tales of women being harassed on the show floor.


This isn’t just a problem for RSA, but one that bedevils many tech events. In recent years the adoption of strict codes of conduct has helped matters, but there’s still a lot of work to be done.

“RSA was a focal point for us last year, but we were also trying to demonstrate the value of diverse content and people to the entire industry,” said Alex Stamos, another key player in OURSA, currently recovering from his stint as Facebook’s CSO by serving as an adjunct professor at Stanford.

“It’s great that RSA is fixing things, but much of the infosec circuit seems to be moving backwards.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/01/rsa_sexism_kerfuffle/

Did you hear the one about Cisco routers using strcpy insecurely for login authentication? Makes you go AAAAA-AAAAAAAA *segfault*

Cisco has patched three of its RV-series routers after Pen Test Partners (PTP) found them using the hoary old C function strcpy insecurely in a login authentication function.

PTP looked at how the router’s web-based control panel handled login attempts by users, and found that it was alarmingly easy to trigger a buffer overflow by simply inputting long strings of characters into the login page, something which Cisco admitted “could allow an unauthenticated, remote attacker to execute arbitrary code on an affected device”.

The three routers affected – the RV110W, RV130W and RV215W – run “some form of embedded Linux” instead of Cisco OS, according to PTP’s definitely-not-pseudonymous blogger Dave Null. The flaw discovery was credited to Yu Zhang and Haoliang Lu at the GeekPwn conference, and T. Shiomitsu of Pen Test Partners, who worked independently and informed Cisco separately.

When following the RV130W’s login process at code level, PTP found that the router was placing the user-inputted password string into memory, ready for authentication against the saved password, using strcpy.

As El Reg reported years ago when a similarly worrying use of strcpy emerged in glibc: “strcpy is dangerous and an obvious target in an audit because it blindly copies the entire contents of a zero-terminated buffer into another memory buffer without checking the size of the target buffer. name can end up containing more bytes than hostname expects to hold, allowing a heap overflow to occur.”

Null from PTP elaborated: “If someone else has control over the source string, you are giving an external entity the capability to overwrite the bounds of the memory that you allocated – which might mean they can overwrite something important with something bad. In most exploitable cases, this will mean overwriting a saved return pointer on the stack and redirecting the execution flow of the process.”

He cheerily added: “Oh yeah, also, no PIE/ASLR in the binary.”

Cisco customers should check their routers are running the latest firmware versions, as follows:

  • RV110W Wireless-N VPN Firewall: 1.2.2.1
  • RV130W Wireless-N Multifunction VPN Router: 1.0.3.45
  • RV215W Wireless-N VPN Router: 1.3.1.1

A decade ago Microsoft banned the use of a superficially similar function, memcpy, from its code. PTP’s Null suggested latter-day C authors might want to switch to strlcpy instead, “a nonstandard function which takes a third length argument, and always null terminates”. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/01/cisco_cve_2019_1663_strcpy_login_authentication/

Data Leak Exposes Dow Jones Watchlist Database

The Watchlist, which contained the identities of government officials, politicians, and people of political interest, is used to identify risk when researching someone.

A data leak has exposed the Dow Jones Watchlist database, which contains information on high-risk individuals and was left on an unsecured Elasticsearch server sans password.

Watchlist is used by major global financial institutions to identify risk while researching individuals. It helps detect instances of crime, such as money laundering and illegal payments, by providing data on public figures. Watchlist has global coverage of senior political figures, national and international government sanction lists, people linked to or convicted of high-profile crime, and profile notes from Dow Jones citing federal agencies and law enforcement. 

The leak was discovered by security researcher Bob Diachenko, who found a copy of the Watchlist on a public Elasticsearch cluster sized 4.4GB. The database exposed 2.4 million records and was publicly available to anyone who knew where to find it – for example, with an Internet of Things (IoT) search engine, he explained in a blog post.

It’s important to note that data in the database, which has since been taken down, originated from public sources. Watchlist collects licensed and available news from publications around the world; a research team provides updates on listed individuals’ names and relations.

While it is public data, Diachenko warned that exposing Watchlist “could be reckless” given the nature of information it contains and the people included in it.

This isn’t the first time a misconfigured cloud server has put Dow Jones data at risk. In July 2017, a data leak exposed personal information of millions of customers. The culprit? An Amazon Web Services S3 bucket set to let any AWS Authenticated User download its data.

Read more details on the Watchlist leak here.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/data-leak-exposes-dow-jones-watchlist-database/d/d-id/1334006?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Encryption Offers Safe Haven for Criminals and Malware

The same encryption that secures private enterprise data also provides security to malware authors and criminal networks.

The same technology millions depend on to protect personal and confidential information — and that browsers highlight as crucial for secure browsing — is being used by threat actors to hide malicious payloads and criminal activity targeting corporations and individuals. And in many cases, organizations aren’t doing anything to find out precisely what’s going on inside their encrypted network tunnels.

Those are the conclusions reached in a pair of reports out just ahead of next week’s RSA Conference, in San Francisco.

Gigamon ATR issued the “July-December 2018 Crimeware Trends Report” with a subtitle promising to tell readers “How The Most Prolific Malware Traversed Your Network Without Your Knowledge.” Justin Warner, director of applied threat research at Gigamon, says the “how” is wrapped up in a simple statement: “What we discovered is you can’t detect that you can’t see.”

Criminal use of encryption is the subject of the Zscaler ThreatLabz report, “Zscaler Cloud Security Insights Report.” “Everyone knows that the world is going to encrypted tunnels for privacy, but with the advent of free certificate providers, bad guys are able to take advantage, too,” says Deepen Desai, vice president of security research and operations at Zscaler.

Gigamon’s research found that encryption is being used by several “classic” malware families, including Emotet, LokiBot, and TrickBot. In fact, according to the Gigamon report, two-thirds of the malware detected in the study period was one of these three types. The reason these malware families are still being used is simple, Warner says: They remain effective, and developing new malware is expensive.

“These threats are still succeeding. They’re still effective. They do a lot of work to evade. They do change up how they look, but, in general, they’re still using the same malware,” he explains. “It is expensive for an adversary to change up their entire operation, but our goal as professionals in the intelligence and research space is to force these threats to take on that cost. That is really how we as an industry will better dismantle them.”

Zscaler’s Desai says the three levels of certificate validation — domain validation, organization validation, and extended validation — leave room for criminals to obtain certificates for sites that appear legitimate but are not. In domain validation, for example, all individuals have to do is show they are the owner of a particular domain; no checking is done to make sure they have the legal right to the name.

“Attackers will register a new campaign, do an aggressive spam or malvertising campaign, then move on because the domain ends up in reputation block lists,” Desai says. According to the Zscaler report, in 74% of the sites that are blocked for security reasons, the certificate is short-term, valid for less than a year.

While free certificate authorities, such as Let’s Encrypt, were launched to allow legitimate sites to be protected by SSL/TLS, they have been used by malicious actors, as well, and in huge numbers. 

Desai is blunt about the consequences. “[As a result], we can no longer tell the users that the presence of a green padlock means you’re visiting a safe site because the bad guys can get certificates, as well,” he says.

According to the Zscaler report, 89% of the domains blocked on its networks for security reasons were encrypted with domain-validated certificates. The remaining 11% used organization-validated certificates, while no sites employing extended validation certificates were blocked.

While large enterprises see huge numbers of attacks, Gigamon’s Warner says these visibility-based security issues aren’t limited to big organizations. “These threats are not discriminatory — they’re targeting businesses of all sizes and across verticals. They aren’t picking any specific industry, and they aren’t picking a specific target,” he says.

The sites being attacked are getting hit by the legacy malware found by Gigamon, as well as an increasing amount of malware injected into the code of the Websites. “We’ve seen a lot of JavaScript skimmers injected into the page leveraging encrypted channels,” Desai says.

At RSA, Desai says there will be two paths of discussion regarding these issues: the SSL certification side and traffic inspection. “On the SSL certificate side, there are more and more organizations moving away from domain verification certificates and going to higher verification, but we’re still going at a slow pace,” he says.

Both Warner and Desai say more organizations must be willing to build in processes and technologies to look inside the encrypted tunnels. With no safety in the green padlock, seeing as much as possible seems a necessary step to greater network security.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/encryption-offers-safe-haven-for-criminals-and-malware/d/d-id/1334016?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security Pros Agree: Cloud Adoption Outpaces Security

Oftentimes, responsibility for securing the cloud falls to IT instead of the security organization, researchers report.

Businesses are embracing the cloud at a rate that outpaces their ability to secure it. That’s according to 60% of security experts surveyed for Firemon’s first “State of Hybrid Cloud Security Survey,” released this week.  

Researchers polled more than 400 information security professionals, from operations to C-level, about their approach to network security across hybrid cloud environments. They learned that security pros are not only worried – oftentimes they don’t even have jurisdiction over the cloud.

Most respondents say their businesses are already deployed in the cloud: Half have two or more different clouds deployed, while 40% are running in hybrid cloud environments. Nearly 25% have two or more different clouds in the proof-of-concept stage or are planning deployment within the next year.

Only 56% of respondents report network security, security operations, or security teams manage cloud security, while the remaining 44% report IT/cloud teams, application owners, or other teams outside the security division are in charge of security for the cloud. Tim Woods, vice president of technology alliances at Firemon, calls it “fragmented security” or “fragmented responsibility.” Business owners and DevOps teams often take over responsibility for the cloud.

“It’s not hard when you’re starting out,” says Woods of cloud security. “But as you deploy more apps in the cloud and cloud adoption grows, if you don’t have a process that surrounds that – especially around a common security policy – down the road we’ll hit bigger problems.”

Survey data indicates businesses are inadvertently driving complexity by adopting multiple, disparate point products on-prem and across public and private clouds. The complexity is compounded by a lack of integrated tools and training needed to maintain security across environments. A rush to deploy cloud-based services has surpassed the ability to protect them.

For example, researchers found 59% of respondents use two or more different firewalls. Of those using more than one firewall, 67% also use two or more public cloud platforms. Woods says it’s fairly easy to lose track of the myriad cloud services and applications in the enterprise.

“Depending on the organization’s size, you have a lot of people doing a lot of things across a lot of different market sectors and business areas,” he explains. “It’s no surprise they don’t have a good handle on their assets deployed in the cloud.” After all, he points out, adopting a cloud application or service “is as easy as swiping a credit card,” and many cloud-based tools are free.

The Move to DevSecOps
Some businesses recognized the problem of cloud security early on and have taken steps to address it by integrating existing teams and hiring cloud professionals, Woods explains. Many are starting with the development process, bringing together security and DevOps teams.

Nearly 44% of respondents and 46% of C-level respondents report the acceleration of DevOps has positively affected security operations. More than 30% say they’re part of the DevOps team, and more than 19% say they have a close, positive relationship with the DevOps team.

Still, work remains. Thirty percent of security pros surveyed say their relationships with DevOps are complicated, contentious, not worth mentioning, or nonexistent.

Woods says regulatory changes, such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act, are influencing the ways people bring security into the development process. Both mandate that security be built in by design. If an organization suffers an outage and data loss and cannot show that security was integrated by default, it will face higher penalties, he explains.

This realization has made its way to the C-suite, where people are realizing the need for better oversight around cloud security deployment. In many organizations, executives are recognizing regulatory changes and the reality of “pay now or pay later,” Woods says.

What’s Holding Back Non-Adopters?
Organizations hesitant to adopt the cloud are primarily holding back due to poor visibility, the survey shows. Forty-five percent cite lack of visibility, lack of training, and lack of control as the top three challenges to securing their public cloud environments. Oftentimes, the tools businesses have don’t provide proper visibility in a hybrid enterprise with multiple clouds and on-prem tools.

“Having good visibility into all that is critical if you’re going to manage it,” Woods says. “You can’t manage what you can’t see, and you can’t secure what you don’t know about.”

Many organizations are eager to move to the cloud but don’t want to put their information at risk. Data is the currency of many modern enterprises, he continues, and they worry about its safety. At the same time, they don’t want to risk being noncompetitive by staying stagnant.

Despite the challenges, Woods anticipates adoption will continue to accelerate. “One thing I don’t think is going to happen is I don’t think cloud deployment is going to slow down,” he says.

Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/security-pros-agree-cloud-adoption-outpaces-security/d/d-id/1334013?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security Experts, Not Users, Are the Weakest Link

CISOs: Stop abdicating responsibility for problems with users – it’s part of your job.

There are countless articles, conference speakers, panelists, and casual conversations among IT and security personnel lamenting that users are the weakest link in security. The claim is that no matter how well you secure your organization, it takes just one user to ruin everything. While there’s no doubt that a single user can take down one of these “experts’” networks, the problem lies not with the user but with the experts.

As I wrote in my previous column, user actions are expected and, most importantly, enabled by security staff. The problem with the expression “the users are the weakest link” is that it abdicates responsibility for stopping problems. Security professionals may believe that they did everything they could, but they’re really just giving up.

All a Part of the System
Here is what’s critical: Users are a part of the system. They are not accessories. They serve a business function that requires interaction with your organization’s computer systems. To determine that a part of the system — users — will always be insecure and there is nothing that you can do about it is a failure on your part.

Consider just about any other discipline within an organization. Accounting has processes in place to deal with the expected human actions involving financial mistakes and malfeasance. You do not hear CFOs declare that they can’t keep accurate financial records because users are the weakest link. COOs don’t say their organizations can’t run effectively because they have humans involved in operations. Any CFO or COO who made such a claim would rightfully be fired: they are responsible for their processes, humans are a critical part of those processes, and they must figure out how to manage those people effectively.

CISOs who cannot figure out how to effectively manage the humans using the systems they are responsible for protecting should be disciplined, if not fired, for failing to deal with a critical aspect of those systems. Just as systems must be designed to withstand expected external hacking attacks, they must be designed to withstand expected user actions.

One critical aspect is that security professionals seem to believe the solution to human mistakes — and remember, this doesn’t even address intentional malicious actions — is awareness training. But the reality is that although awareness training can be valuable, it is not perfect. Relying on a single imperfect countermeasure, and then proclaiming users the weakest link, is the real negligence.

Security professionals must realize that while awareness reduces risk, their job does not end there. First, consider that most awareness programs are poor: from experience, observation, and research, most are not achieving their goal of creating strong security behaviors. Even if they were, security professionals would still need comprehensive programs that implement the supporting processes and technical countermeasures, accounting for both inevitable user error and malicious actions.

However, instead of security professionals acknowledging that they have failed to account for expected user failings or malfeasance, they blame the user. That is unacceptable.

One of my previous columns described the need for a human security officer to address users from a comprehensive perspective. In short, you need a process in place that examines potential user failings by:

  • Identifying critical processes and the areas where users are most likely to cause damage.
  • Analyzing and improving those processes to remove user decision-making — or, where it cannot be removed, specifying how decisions should be made.
  • Implementing technology that removes opportunities for users to cause damage, along with technology that mitigates damage if those proactive measures fail.
  • Developing awareness programs that focus on teaching users how to make decisions and do their jobs according to the established processes.

Just as CFOs and COOs cannot simply declare the user the weakest link to justify failures in the processes they oversee, the CISO cannot blame users for failures in security processes. The user is an embedded component of organizational computer systems, and it is negligent not to put in place a set of comprehensive countermeasures to prevent, detect, and mitigate the anticipated failings of that component.


Ira Winkler is president of Secure Mentem and author of Advanced Persistent Security. View Full Bio

Article source: https://www.darkreading.com/careers-and-people/security-experts-not-users-are-the-weakest-link/a/d-id/1333916?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Dow Jones Watchlist of risky businesses exposed on public server

Yet more sensitive data has been left lying around in the cloud.

The Dow Jones Watchlist, which details purportedly dicey executives, their dicey buddies and their dicey businesses to aid organizations in their due diligence, was discovered in an Amazon Web Services (AWS)-hosted Elasticsearch database that somebody forgot to slap a password onto.

Independent security researcher Bob Diachenko last week reported finding a copy of the Watchlist on a public server, open for any and all takers.

All it took to find the unsecured database was a query on one of the publicly available Internet of Things (IoT) search engines.
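The article doesn’t detail Diachenko’s exact workflow, but the underlying weakness is easy to illustrate: an Elasticsearch node with no authentication answers a plain unauthenticated HTTP GET with its cluster banner — which is exactly what IoT search engines index. A minimal sketch in Python (the hostname is hypothetical, not the actual Dow Jones server):

```python
import json
import urllib.request

def probe_elasticsearch(host, port=9200, timeout=5):
    """Issue the same unauthenticated GET that IoT search engines send.
    An open cluster responds with its banner JSON; a secured one
    returns 401 or refuses the connection."""
    url = f"http://{host}:{port}/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)  # cluster name, version, tagline
    except Exception as exc:
        return {"error": str(exc)}

def is_open(banner):
    # An unsecured Elasticsearch node's banner ends with this
    # well-known default tagline.
    return banner.get("tagline") == "You Know, for Search"
```

Shodan-style engines effectively run this probe across the entire IPv4 space, so searching them for Elasticsearch banners surfaces every open cluster without ever touching the targets yourself — no password, no exploit, just an index lookup.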

The researcher reported his find to the Dow Jones security incident response team last Friday (22 February). Fortunately, the team was on it the same day, taking the database down and issuing this statement:

This data is entirely derived from publicly available sources. At this time our review suggests this resulted from an authorized third party’s misconfiguration of an AWS server, and the data is no longer available.

The exposed database contained 2.4 million records. A Dow Jones spokesperson told TechCrunch that an “authorized third party” was to blame for the exposure: in other words, it sounds like a paying customer put the records online without securing them.

Risky business

It might well have been information derived from publicly available sources, but that doesn’t mean it wasn’t sensitive data, conveniently pulled into one repository that includes people’s alleged criminal histories and possible terrorist links. The Watchlist’s names and connections are regularly updated by a Dow Jones research team.

This is a useful repository for businesses. If you’re a big bank, you’re a big target, and, for both legal and branding-linked reasons, you don’t want to do business with big old criminals – say, money launderers or terrorists.

That’s the sales pitch for the Dow Jones Watchlist: a watchlist of risky people, their relatives, people they’re close to, and businesses they’re associated with. It’s used by government agencies, and banks use it to determine whether to provide financing. From a Dow Jones sales brochure:

Doing business with the wrong person just once can result in steep financial penalties for your organization and legal proceedings against key executives. The ensuing scandal can cause irreparable damage to your corporate reputation.

Diachenko says that the records are indexed, tagged and searchable. They’re also more valuable than what you might stumble across on your own, he says, given that they’re vetted, having come from “premium and reputable sources.”

In the age of fake news and social engineering online it is easy to see how valuable this type of information would be to companies, governments, or individuals.

Dow Jones isn’t the only financial information giant to curate this type of risk list. Thomson Reuters, for example, has its World-Check: a list that, as of 2015, was used by 49 out of the world’s largest 50 banks to help them judge who to take on as clients, or whose accounts to shut down (with no requirement to disclose why). As the BBC points out, banks can be held responsible if their clients are involved in financing terror or money-laundering.

These lists aren’t without their critics. They’ve been criticized as being based on “flimsy evidence” and “fringe sources.” From a 2017 analysis of a 2014 copy of World-Check done by The Intercept:

[The analysis indicated] that many thousands of people, including children, were listed on the basis of tenuous links to crime or to politically prominent persons.

The database relied on allegations stemming from right-wing Islamophobic websites to categorize under “terrorism” people and groups like the Council on American-Islamic Relations, several mosques, and national and regional Islamic organizations.

TechCrunch reports that the exposed records in Dow Jones’s Watchlist vary “wildly,” with some including “names, addresses, cities and their location, whether they are deceased or not and, in some cases, photographs.” Diachenko also found dates of birth and genders. The profiles also had extensive notes collected from Dow Jones’s Factiva news archive and other sources.

Dow Jones has declined to publicly identify the customer responsible for the leak.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8Doe00fnACM/