STE WILLIAMS

Cybercriminals Think Small to Earn Big

The number of breaches increased 424% in 2018, but the average breach size shrank by a factor of 4.7 as attackers aimed for smaller, more vulnerable targets.

There were 12,449 new, authentic breaches and leaks in 2018, an increase of 424% from the year prior. But the average breach size was 216,884 records – 4.7 times smaller than in 2017.

The data comes from 4iQ’s “2019 Identity Breach Report,” which analyzed breaches throughout 2018 and found cybercriminals turning more sophisticated tools against poorly secured businesses. In 2018, the EU’s General Data Protection Regulation (GDPR) heightened security awareness around the world. Major corporations have reviewed their data collection policies and installed systems to protect themselves from attacks and noncompliance penalties.

Small and midsize businesses can’t afford to take the same measures, as they have little to no security budget and lack the skills needed to defend against cybercrime. Large companies have historically been prime breach targets, but hackers are learning they can build stores of identity attributes (email addresses, passwords, passport numbers, etc.) from small businesses.

Researchers report 14.9 billion raw identity records circulated on the Web in 2018, up from 8.7 billion in 2017. Researchers say that underscores the growing use of identity data for account takeover, business email compromise, and other criminal activity. Government agencies were the fastest-growing exposed sector in 2018, up 291% from 2017.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/cybercriminals-think-small-to-earn-big/d/d-id/1334136?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

5 Essentials for Securing and Managing Windows 10

It’s possible to intelligently deploy and utilize Windows 10’s many security enhancements while avoiding common and costly migration pitfalls.

With upward of 700 million devices running Windows 10, it’s the most rapidly adopted version of the operating system since Windows 95, proving the allure of its updated features, including security enhancements such as virtualization-based security, kernel isolation, and recursive data encryption. In fact, 85% of organizations had started their Windows 10 migration by the end of 2017, according to a Gartner survey.

But many are experiencing challenges: 21% of migrating users report software compatibility issues such as programs not working properly or at all. Today’s hybridized environments involve multiple operating systems across managed devices, bring-your-own-device, and other non-managed devices, where people tend to update to Windows 10 quickly, treating their machines like their mobile devices. Migration complexities for Windows 7 stragglers are compounded by pressure to rush the upgrade to meet Microsoft’s January 2020 end-of-life deadline.

When it comes to the security and manageability of Windows 10, there are five key essentials to assist the migration.

1. See everything, get smarter: It’s important to understand your environment, your hardware, and its compatibility with the OS. This also means going beyond the device itself to include intelligence around the applications or software on the device, looking at whether a certain application is being used by an individual, whether it needs to be migrated, and whether it will be compatible once migrated. All of this insight helps you assess risk and understand where your gaps are, and helps you plan for filling those gaps.
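A first pass at that inventory can be scripted. The minimal sketch below is an illustration (not part of the original article): it uses Python’s standard winreg module to list the applications registered under Windows’ standard uninstall registry keys. A real migration assessment would feed this kind of data into your endpoint-management or asset-intelligence tooling rather than a one-off script.

    import winreg

    UNINSTALL_KEYS = [
        r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
        r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
    ]

    def installed_apps():
        """Yield (name, version) for every application registered under the uninstall keys."""
        for path in UNINSTALL_KEYS:
            try:
                root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
            except OSError:
                continue  # key absent on this machine
            with root:
                subkey_count = winreg.QueryInfoKey(root)[0]
                for i in range(subkey_count):
                    try:
                        with winreg.OpenKey(root, winreg.EnumKey(root, i)) as entry:
                            name = winreg.QueryValueEx(entry, "DisplayName")[0]
                            version = winreg.QueryValueEx(entry, "DisplayVersion")[0]
                    except OSError:
                        continue  # entry without a display name or version; skip it
                    yield name, version

    for name, version in sorted(set(installed_apps())):
        print(f"{name}\t{version}")

Pairing an application list like this with compatibility data is what lets you decide which apps need to be migrated, replaced, or retired.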

2. Protections and controls: Let’s not forget the data that’s on the device. Organizations rely on access to that data; it’s often sensitive and needs protecting even as users are given the access they need to do their jobs. Organizations benefit from an intent-based approach that ties protections and access to the user persona and business purpose. Not only is it less wasteful — you’re not overbuying on hardware and software — but you also eliminate many security risks by factoring in who the user is and what the business needs.

But Windows 10 adds complexity and requires decision-making related to policies, configurations, settings, apps, and which services in the OS support your business intent. For example, Credential Guard (which separates login information from the rest of the OS) is attractive to most IT and security pros, with its hardened enclave away from the host OS. But Credential Guard relies on Defender ATP, which is problematic for those who prefer a third-party anti-malware vendor. Running multiple anti-malware tools erases any simplicity you were expecting, which confounds the decision process. This leads to a trade-off between business intent and Microsoft dependence.
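If Credential Guard does fit your intent, it’s worth verifying where it is actually configured and running rather than assuming the policy took effect. The sketch below is a hedged illustration, assuming a Windows 10 Enterprise endpoint that exposes the Win32_DeviceGuard WMI class (where a service ID of 1 denotes Credential Guard) and has PowerShell available:

    import json
    import subprocess

    # Win32_DeviceGuard lives in the root\Microsoft\Windows\DeviceGuard namespace;
    # in its SecurityServices* arrays, the value 1 denotes Credential Guard.
    PS_QUERY = (
        "Get-CimInstance -Namespace root\\Microsoft\\Windows\\DeviceGuard "
        "-ClassName Win32_DeviceGuard | "
        "Select-Object SecurityServicesConfigured, SecurityServicesRunning | ConvertTo-Json"
    )

    def _has_service(value, service_id=1):
        if value is None:
            return False
        return service_id in value if isinstance(value, list) else value == service_id

    def credential_guard_state():
        try:
            out = subprocess.run(
                ["powershell", "-NoProfile", "-Command", PS_QUERY],
                capture_output=True, text=True, check=True,
            )
            state = json.loads(out.stdout)
        except (OSError, subprocess.CalledProcessError, ValueError):
            return None  # PowerShell missing, class not present, or unexpected output
        return {
            "configured": _has_service(state.get("SecurityServicesConfigured")),
            "running": _has_service(state.get("SecurityServicesRunning")),
        }

    print(credential_guard_state())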

3. Monitoring progress and transition: The transition to Windows 10 is really a journey, and it won’t work at the flip of a switch. You need to look at all the rich data available to you throughout this journey, understanding where you are in the process, and watching for new variations as they come online. If a certain user brings in a new device, you must understand if it’s compatible with Windows 10 and with the applications the user requires.

4. Reduce complexity and risk: As migration nears completion, complexities are often introduced. For example: endpoints are like snowflakes. They are all composed of the same material, but they’re arranged in unique ways. If that set of attributes changes in any way — and this is inevitable — you need to maintain visibility and be quickly informed if changes have occurred. It may mean your security and risk posture is drifting toward more exposure.

I also recommend evolving the definition of “asset” and moving to align it with the way real-world security teams define this term within the endpoint domain, which is to encompass devices, data, users, and apps. We must be aware of the interplay between all four components because you could easily find yourself in a situation where controls may be in place and apps are all consistent, but a particular user is utilizing those tools and technologies differently from another. You have to monitor the entire environment on the endpoint to reduce complexity and risk associated with all of the variables. 

5. Don’t set it and forget it: It’s not enough to set and forget security controls. Devices experience a natural decay of security controls over time, and that decay is accelerated by the complexities and dependencies addressed above. It’s not just a matter of installing encryption; you need to make sure it’s active and that, if something does change on a device, you can bring it back to health. Once you work through the Windows 10 migration, think about how to keep your devices hardened with security controls that remain on them and stay healthy.
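As a small illustration of “make sure it’s active,” the sketch below asks the built-in manage-bde utility whether BitLocker protection is on for the system drive. It assumes a BitLocker-capable Windows edition and an elevated prompt; in practice this check would run from, and report to, your endpoint-management platform rather than ad hoc.

    import subprocess

    def bitlocker_protection_on(volume="C:"):
        """Return True/False for BitLocker protection on a volume, or None if it can't be queried."""
        try:
            # manage-bde needs an elevated prompt; its status output includes
            # 'Protection On' or 'Protection Off' for the volume.
            out = subprocess.run(
                ["manage-bde", "-status", volume],
                capture_output=True, text=True, check=True,
            )
        except (FileNotFoundError, subprocess.CalledProcessError):
            return None
        return "Protection On" in out.stdout

    state = bitlocker_protection_on()
    if state is None:
        print("ALERT: could not query BitLocker status; investigate this endpoint")
    elif not state:
        print("ALERT: BitLocker protection is not active on C: (control drift)")
    else:
        print("BitLocker protection is on")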

Windows 10 promises a lot, and a potentially big payoff after migration. Although the migration journey poses challenges for IT and security teams, it’s possible to intelligently deploy and utilize Windows 10’s many security enhancements while avoiding common and costly migration pitfalls. Ultimately, the goal is to reap the new OS gains and to sustain them over time.


Josh Mayfield is Absolute’s Director of Security Strategy and works with Absolute customers to leverage technology for stronger cybersecurity, continuous compliance, and reduced risk on the attack surface. He has spent years in cybersecurity with a special focus on network … View Full Bio

Article source: https://www.darkreading.com/application-security/5-essentials-for-securing-and-managing-windows-10/a/d-id/1334078?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

It Takes an Average of 3 to 6 Months to Fill a Cybersecurity Job

Meanwhile, organizations are looking at unconventional ways to staff up and train their workforce as technical expertise gets even harder to find.

As the demand for cybersecurity professionals continues to rise against the backdrop of a job candidate shortage, employers say only half of applicants (or fewer) actually meet the qualifications.

The new data from industry association ISACA also shows that finding and hiring qualified cybersecurity pros takes longer now: 32% of organizations say filling a position takes six months, up from 26% last year, and more than 60% of organizations say positions sit vacant for at least three months, up from 55% last year.

This steadily widening cybersecurity talent gap has forced organizations to consider nontraditional methods of hiring, retaining, and training their workforce. The biggest deficit is in the technical talent they need to hire, a trend highlighted by ISACA’s study and by one from Tripwire, both released last week at the RSA Conference in San Francisco.

“There’s a drought of technical people, and it’s been compounding over the years,” says Frank Downs, director of ISACA’s cybersecurity practice. “There aren’t enough cybersecurity pros, period, and there really aren’t enough technical cybersecurity professionals … we need people who can sit down and perform” the technical tasks, he says.

Among the high-demand positions: security engineer, SOC analyst, penetration tester, and cloud security engineer, according to security experts with knowledge of the job market. “I no longer need a firewall engineer: I need a cloud security engineer,” says Lamar Bailey, senior director of security research development at Tripwire.

Some 80% of the nearly 340 IT security pros surveyed by Tripwire say it’s getting harder to find skilled people to fill their open job positions. Plus, the necessary skillsets are changing, as security evolves to tackle the blend of enterprise, cloud, virtual, DevOps, and other technologies. Some 85% of them say their security teams are understaffed; 70% of the organizations in ISACA’s survey said the same.

Keeping positions filled also is getting harder. Skilled cybersecurity pros unsurprisingly are often lured away from their jobs for higher pay or promotions, so it’s difficult to keep a solid security team in place for long. “There’s a cannibalization of talent. Once it’s [talent] acquired, there are concerns around retention as other companies start reaching out and luring over” those staffers, ISACA’s Downs says.

Organizations also are training up members of their existing staff to meet the new demands. “And whether [they’re] training [an existing employee] or hiring somebody else, and outsourcing the firewall job to a third party … they don’t have enough people to run everything, and we’re seeing core products getting ignored—such as vulnerability assessments,” Bailey says.

Some organizations running vulnerability scans, for example, are not necessarily following through and applying the fixes and patches those tests find. “They’re spreading themselves too thinly” and struggle to prioritize patches, he says.

That’s where technical staffers come in, he says, to help analyze the actual risks to their networks in order to prioritize the fixes for specific flaws and machines.

But training up existing security staffers isn’t always so simple, especially for more advanced technical roles. “If you’re looking at a mid-level security pro who wants to get into higher-level [technical role], it’s an investment of a couple of years. It’s not like a five-day SANS class,” Bailey explains. “Firewall-1 to cloud architect is going to be a lot of training,” for instance, he says. And for some organizations, it’s difficult to justify this type of training budget-wise, he adds.

While cybersecurity programs are growing on the higher education side, many fail when it comes to providing potential cyber professionals with the necessary—and most in-demand—technical skills. “The problem is a lot of academic organizations don’t necessarily teach all aspects of security that make an individual technically proficient,” Downs says. “Academic organizations are still playing catch-up.”

Still missing from some programs are hands-on malware analysis and firewall configuration, for example, he says.

Ralph Sita, co-founder and CEO of online training firm Cybrary, says cybersecurity education and training doesn’t necessarily need to follow the traditional academic trajectory. “You don’t need to treat getting into this industry through an educational avenue like high school, college, and boom: you get a job,” says Sita. “You have to treat it like a trade: like an auto mechanic, HVAC technician, or a plumber” with hands-on skills training, he says. “You have to touch and use [security] tools.”

Purple Unicorns

In some cases, the next security technician at an organization could be an employee on the non-technical side of the house. Tripwire’s Bailey says some existing positions more naturally can transition to cybersecurity jobs—accountants and legal experts, for example. “Some of my best [quality assurance] engineers are accountants because they are detail-oriented and good with numbers,” Bailey says.

ISACA’s Downs, also an adjunct cybersecurity professor at the University of Maryland, Baltimore County (UMBC), says the typical cybersecurity job candidate today is someone changing professions. One of the students in his cybersecurity Master’s program last year was a former middle school teacher. “A lot of students have very transferable [skills]. If they have tenacity, that will transition really well,” he says, noting that the teacher in his class was his “star student” even among veteran IT pros looking to move to cybersecurity careers.

And while the hardest cybersecurity roles to fill are technical ones, the greatest missing skill among existing cybersecurity staffers is business acumen (nearly 50%), according to ISACA’s report. Some 34% of organizations say technical know-how is the biggest missing skill among their security teams.

“They want more technical people, and they’re now getting more choosy and want technical people who understand the business and can communicate that to the stakeholders,” Downs says. “They want a purple unicorn.”

The good news, though, is there’s a subtle yet slow shift under way in loosening some of the overly ambitious job requirements for entry-level cybersecurity positions, according to Cybrary’s Sita. “It’s out of necessity” to fill the jobs since there’s the Catch-22 of a security newbie not having all of the experience and certifications many of the entry-level jobs call for.

“I can’t ask for an entry-level network engineer with five years’ experience” anymore, he says.

Meanwhile, large tech and security firms such as IBM and Palo Alto Networks are offering their security teams training on Cybrary’s platform as a way to grow and retain their security staff as well as to help advance candidate prospects with the requisite training for employment.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/cloud/it-takes-an-average-of-3-to-6-months-to-fill-a-cybersecurity-job/d/d-id/1334135?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

John Oliver bombards the FCC with anti-robocall robocall campaign

Americans are fed up with robocalls, and John Oliver of Last Week Tonight wants to do something about it.

A do-not-call list and tools like call-blocking apps and caller ID exist to slow down incoming call spam, but they have barely made a dent in the flood of harassing phone calls most Americans receive on their phones, leaving them with no real recourse.

Unfortunately it just seems to be getting worse year after year – in 2018 alone robocall volume in the US increased by 56.8% to 48 billion calls, and the Federal Communications Commission (FCC) reports that about half the phone calls made to cell phones in the US in 2019 will be robocalls.

Enough’s enough of that, says John Oliver, comedian and host of TV show Last Week Tonight. He and his show are known for stunt activism to make a larger point about various societal and political ills in America.

Last Week Tonight has also gone after the FCC a few times in the past, notably in highlighting net neutrality and how it would affect the average internet user. The first time the show aired a net neutrality segment, the FCC’s website was DoSed into silence by angry viewers.

In the 10 March episode of Last Week Tonight, Oliver reported that 60% of the complaints registered with the FCC are about robocalls. So, in his show’s tradition, Oliver announced that he hopes to spur the FCC into real action and give it a taste of everyday Americans’ annoyance by subjecting the FCC commissioners to this message every 90 minutes:

Hi FCC! This is John from Customer Service. Congratulations! You’ve just won a chance to lower robocalls in America today. Haha… sorry, but I am a live person. Robocalls are incredibly annoying, and the person who can stop them is you! Talk to you again in 90 minutes. Here’s some bagpipe music.

So if robocalls are such a problem, what is the FCC doing about it?

In March 2018, FCC Chairman Ajit Pai was pleased when federal judges struck down a 2015 rule meant to put an end to robocallers, saying it was “an example of the prior [Obama-era] FCC’s disregard for the law and regulatory overreach.” But just this February, Pai warned telecommunications companies that the FCC would be forced to intervene if they can’t figure out a plan to stop robocalls this year.

Pai and the FCC are urging telecoms companies to implement better caller authentication, but the companies have pushed back, saying they need more time. It’s not clear what the FCC would do if the telecoms do not comply, aside from levying fines.

In the meantime, “it turns out robocalling is so easy it only took our tech guy literally 15 minutes to work out how to do it,” said Oliver, just before hitting a comically oversized button to trigger the start of his anti-robocalling robocalling campaign.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YmUCikdaMO4/

Email list-cleaning site may have leaked up to 2 billion records

The number of records exposed online by an email list-cleaning service in February may be far higher than originally reported, according to experts. The records available for anyone to download in plaintext from the breach at Verifications.io may have numbered closer to two billion.

Security researcher Bob Diachenko, who found the exposed data and worked on the breach investigation with research partner Vinny Troia, originally explained that on 25 February 2019, he discovered a 150Gb MongoDB instance online that was not password protected.

There were four separate collections in the database. The largest one contained 150Gb of data and 808.5 million records, he said in his blog post on the discovery. This included 798 million records that contained users’ email, date of birth, gender, phone number, address and Zip code, along with their IP address.

He then did some due diligence:

As part of the verification process I cross-checked a random selection of records with Troy Hunt’s HaveIBeenPwned database. Based on the results, I came to conclusion that this is not just another ‘Collection’ of previously leaked sources but a completely unique set of data.

Exposed MongoDB instances don’t always clearly indicate who uploaded them, but Diachenko’s research turned up a likely suspect: Verifications.io. This company, which has now taken down its website, offered what it called enterprise email validation services, along with free phone number lookup.

The service enabled mass emailers to clean their email lists, removing what it called ‘hard bounces’. This let those with large email lists verify which addresses are real. It also included services that removed:

Spamtraps or possible threats in your email list such as role accounts, botclickers, honeypots, and litigators.

Diachenko emailed the company and received a response which said:

We appreciate you reaching out and informing us. We were able to quickly secure the database. Goes to show, even with 12 years of experience you can’t let your guard down.

After closer inspection, it appears that the database used for appends was briefly exposed. This is our company database built with public information, not client data.

This week, cybersecurity company Dynarisk said that it had analysed the other three data collections and found far more records than Diachenko reported. It puts the data volume at 196Gb, and claims that there were two billion records there.

The company told The Register that the other collections were named Verified Emails, PyEmail, and EmailScrub. The latter contained the most extra data, at 6.3Gb. However, it wasn’t clear what specific information was in these collections.

Various press outlets are carrying both the 800 million and two billion record figures, but Troia has gone public on Twitter disputing Dynarisk’s claim, arguing that the original figure is the accurate one:

Whether 800 million or two billion, the risk to the users involved is significant, Dynarisk said:

The lists can be used to target the people on it with phishing emails and scams, telephone push payment fraud, and the data contains enough information to enable tailored scams aimed at key staff who could be targeted for CEO fraud or Business Email Compromise.

Australian security researcher Troy Hunt has uploaded the records that we know about for sure to HaveIBeenPwned, his site that documents email addresses compromised in security breaches. Roughly a third of the email addresses were new to his database, the service said on Twitter:

The latest upload also appears to have earned the site a depressing new record:

Have you been pwned?

What can you do if your email address shows up among the compromised Verifications.io addresses (or indeed any others) on HaveIBeenPwned?

The usual measures apply:

  • Immediately change any passwords common to multiple services, ensuring that each password is both unique and strong, and therefore very difficult to guess. How to pick a strong password. (A short script for checking whether a password already appears in known breach data follows this list.)
  • Change any other passwords you’re using that would be easy to guess (that includes dictionary words, obvious combinations of numbers and deliberate misspellings).
  • Use a password manager to keep track of these unique passwords. Why you should use a password manager. 
  • Turn on two-factor authentication (2FA or MFA) for your most sensitive accounts. What is 2FA and why you should care. 
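One practical way to act on the first point is to check a candidate password against Troy Hunt’s Pwned Passwords data using its k-anonymity range API, which means only the first five characters of the password’s SHA-1 hash ever leave your machine. Below is a minimal Python sketch (an illustration, not part of the original article), assuming the api.pwnedpasswords.com range endpoint the service documents:

    import hashlib
    import urllib.request

    def pwned_count(password):
        """Return how many times a password appears in Pwned Passwords (0 = not found)."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # k-anonymity range query: only the 5-character hash prefix is sent to the service.
        req = urllib.request.Request(
            "https://api.pwnedpasswords.com/range/" + prefix,
            headers={"User-Agent": "pwned-password-check-example"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode("utf-8")
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
        return 0

    print(pwned_count("password1"))  # a notoriously common password; expect a large count

A non-zero count means the password has already appeared in breach corpora and shouldn’t be reused anywhere.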

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/TltkEPwptlA/

Citrix admits attackers breached its network – what we know

On Friday, software giant Citrix issued a short statement admitting that hackers recently managed to get inside its internal network.

According to a statement by chief information security officer Stan Black, the company was told of the attack by the FBI on 6 March; since then it has established that attackers took “business documents” during the incident:

The specific documents that may have been accessed, however, are currently unknown. At this time, there is no indication that the security of any Citrix product or service was compromised.

No mention of when the attackers gained access, nor how long that had lasted. As to how they got into the network of a company estimated to manage the VPN access of 400,000 large global organisations:

While not confirmed, the FBI has advised that the hackers likely used a tactic known as password spraying, a technique that exploits weak passwords. Once they gained a foothold with limited access, they worked to circumvent additional layers of security.

If you’re a customer of Citrix, apart from the lack of detail, two aspects of the statement will have unsettled you: the idea that attackers could bypass “additional layers of security” at a major tech company and the fact that the company didn’t know about the compromise until the FBI contacted it.

Enter Resecurity

And there the story might have paused for a few days had a little-known company called Resecurity not made its own claims about what happened to Citrix.

In a blog, it said that the attack by an Iranian group called Iridium had stolen “at least” 6TB of sensitive data from Citrix, including emails and files.

On 28 December, Resecurity had given Citrix early warning that a breach, planned and organised to coincide with the Christmas period, had taken place.

Citrix was only one of 200 government agencies, oil, gas and tech companies targeted during the Iridium campaign, the blog said.

Separately, NBC News said it had spoken to Resecurity’s president, Charles Yoo, who told it that the attackers had gained access to Citrix’s network via multiple compromised employee accounts:

So it’s a pretty deep intrusion, with multiple employee compromises and remote access to internal resources.

What does this mean?

So far, Resecurity’s claims haven’t been confirmed, which means they should be treated with some caution until more details are released. It might (or might not) be significant that, so far, Citrix hasn’t denied them.

For Citrix customers, and the wider industry, the importance of this story is in the detail. For example, Resecurity claims the attackers found ways to bypass two-factor authentication (2FA) for “critical applications and services for further unauthorized access to VPN (Virtual Private Networks) channels and SSO (Single Sign-On).”

If accurate, how serious this is will depend on what type of 2FA is being talked about. If it’s OTP codes sent via SMS or generated by an app, that would fit with a number of compromises of this type of authentication reported in recent months.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4kV4ALix420/

Study throws security shade on freelance and student programmers

Security researchers often dump on users for their cruddy password practices. But what about the developers who write the code that’s supposed to keep our passwords safe?

…as in, what’s up with the developers who fail to properly encrypt/salt/hash, who use outdated password storage methods, who copy-and-paste code they found online (vulnerabilities and all), who leave passwords sitting around in plain text, or who don’t understand the difference between encryption and hashing?

Only a few studies have looked at how developers handle end-user password storage, even though that work is central to the security of those passwords. After all, reusing a password can have dire results for an individual, but a developer failing to hash and salt a password database can lead to a far more widespread problem.

One such study, from 2017-2018, used computer science students as lab rats to examine how developers deal with secure password storage.

Saving face

The results: they didn’t. Without explicit prompting, none of the students implemented secure password storage. When asked why they didn’t, many of them said they would have, if they’d been creating real code, for a real company, for a real project that would actually see the light of day, as opposed to writing for an academic study.

As it was, the students were told that the task was to create a university social networking website, but they knew no real data would be under threat if they made a mistake.

More recently, researchers from the University of Bonn decided to redo the experiment. This time, though, they’d use “real” developers, conceal that the work was a study, pretend instead that the code was for a real startup, and pay them around €100-€200 (USD $112-$225).

The results: there was no difference. Students and “paid” developers recruited from Freelancer.com seldom used secure password storage unless prompted, and even then they had misconceptions about how to do it. They also used outdated methods.

From the study:

Our sample shows that freelancers who believe they are creating code for a real company also seldom store passwords securely without prompting.

In addition, we found a significant effect in the freelancers’ acceptance rate between the €100 and €200 conditions for the prompted task and examined the effect of different payment levels on secure coding behavior. We saw more secure solutions in the €200 conditions, although the difference was not statistically significant. However, this result might be due to the small sample size and we believe this is worth following up in future work.

The not-real real-life project

For the recent study, the researchers changed the described task from a university social networking platform to a sports photo-sharing social network. To make it more believable, they created a web presence for the company, and they posed as company employees when they hired the freelance developers. They told the freelancers that they’d just lost their developer and needed help to finish the registration code.

They posted the project on Freelancer.com, stipulated that they needed Java skills, and offered €30-€100 (USD $34-$112), with an expected working time of 1-15 days. In the final study, they jacked that up to €100-€200.

Of the 260 developers the researchers approached, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL. They paid half of them €100 and the other half €200, in an effort to figure out whether paying more would buy more password security.

Then, the researchers created a playbook to make sure their interactions with the freelancers were consistent. For example, if a developer asked if he or she should store passwords securely, or if a certain method was acceptable, the researchers answered “Yes, please!” and “Whatever you would recommend/use.”

If a participant delivered a solution where passwords were stored in plain text in the database, the researchers replied, “I saw that the password is stored in clear text. Could you also store it securely?” Those participants were marked as having received a security prompt.

We deliberately set the bar for this extra request low, to emulate what a security-unaware requester could do; i.e., if it looked like something hashed, we accepted it.

The researchers said that the freelance developers took three days to submit their work, and that they had to ask 18 of the 43 to resubmit their code to include a password security system after they’d first submitted a project that stored passwords in plaintext.

Most of the developers who were asked to resubmit their code – 15 out of 18 – hadn’t been explicitly told that the user passwords should be stored securely. Out of that non-prompted group, one developer actually asked whether he should… but within three hours, before the researchers had replied, he had already handed in a plaintext project.

Misconceptions

Both students and freelancers suffered from some misconceptions, the researchers said, but not necessarily the same ones. While the students confused password storage security with data transmission security, some of the developers treated encoding as if it were a synonym for encryption.

Eight of the freelancers stored user passwords in the database using the binary-to-text encoding scheme Base64 – a way to represent arbitrary data as printable text so it can pass between systems, not a way to keep information secret from prying eyes. One of them argued that “the clear password is encrypted” and that “It is very tough to decrypt.” The developers were also confused by MD5, which is a hashing function, not a form of encryption.
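To see why Base64 provides no secrecy, note that reversing it requires no key at all, just the standard library. A minimal illustration in Python:

    import base64

    stored = base64.b64encode(b"hunter2").decode("ascii")
    print(stored)                    # 'aHVudGVyMg==' -- looks scrambled, but hides nothing
    print(base64.b64decode(stored))  # b'hunter2' -- anyone holding the string can reverse it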

In fact, of the password storage methods the developers chose to implement, only two – PBKDF2 and Bcrypt – are considered secure.
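For contrast, here is a minimal sketch of the kind of storage the researchers counted as secure: PBKDF2 with a random per-user salt, using only Python’s standard library. The iteration count is an illustrative figure chosen for this example, not a recommendation taken from the study.

    import hashlib
    import hmac
    import os

    ITERATIONS = 310_000  # illustrative work factor; tune to your hardware and current guidance

    def hash_password(password):
        """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random per-user salt."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, expected):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
        return hmac.compare_digest(candidate, expected)  # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("password1", salt, digest))                     # False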

Only 15 of the 43 developers used salting, which makes hashed passwords harder to crack by adding random data to each password before it is hashed. The study also found 16 examples of “obviously copied” code: code that had been copy-and-pasted from online sources rather than developed from scratch, and which could have been outdated or filled with bugs.

Don’t expect developers to know you need security

The lesson that the researchers came away with: keep your security expectations low.

Even for a task which – for security experts – is obviously security-critical, like storing passwords, one should not expect developers to know this or be willing to spend time on it without explicit prompting: ‘If you want, I can store the encrypted password.’

…but then again, there might be other takeaways from a study like this…

You get what you pay for

We can understand why this study may lead some developers to rage-throw their laptops out the window. How many programmers would interpret the lowball €100 project offer as a signal that the work would just be a placeholder, destined to be rewritten before it went live as part of the purported photo-sharing social network?

On the plus side of these study results, the “how many developers” question can be re-framed like this: only 43 of the 260 developers whom the researchers approached took up the job. That’s only 16.5%, and Naked Security’s Mark Stockley thinks that’s a good thing:

It’s reassuring that so few developers were prepared to take this on. Perhaps, instead of criticising the small number of developers prepared to work down to a price instead of up to a standard, we should applaud the silent majority that seem to have rejected an undeliverable brief out of hand.

That said, the research should act as a reminder to buyers that security rarely happens by accident: you have to make it important in your projects. It should also serve as a reminder to developers that clients often don’t know what to ask for, so if they don’t raise the issue of security with you, you need to speak up.

It’s as reasonable to assume that the 83.5% of coders who ignored the job realized that it was a security disaster and therefore that most coders have both knowledge and scruples, as it is to draw inferences from the desperate 16.5% who were prepared to do days of coding for $112.

For what it’s worth, the researchers wound up paying all the participants €200, in order to be fair.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7sGnQhimz24/

763M Email Addresses Exposed in Latest Database Misconfiguration Episode

MongoDB once again used by database admin who opens unencrypted database to the whole world.

In February, a security researcher named Bob Diachenko found a MongoDB instance containing four collections totaling 150GB of data, including approximately 763 million unique email addresses. The instance was openly available and the data inside was stored in plain text. The personally identifiable information (PII)-rich instance is the latest MongoDB database to be hit in a breach totaling millions of records.

In the blog post announcing the discovery, Diachenko detailed the kind of data found in the records as well as the database’s owner — Verifications.io. When informed of the data set’s availability, the company took the site down very quickly; as of this writing, it is not yet back online.

While the data exposed in this incident is remarkable for its size, it is merely the latest in a significant series of data breaches and exposures involving MongoDB. In a January blog post at Krebs on Security, Brian Krebs noted that tens of thousands of MongoDB databases had been hit with ransomware. Those databases that used no authentication were particularly susceptible to the ransomware attacks.

Also in January, Diachenko discovered another open MongoDB database filled with personal information from job seekers. It is, it seems, quite easy to configure a MongoDB database in ways that open the door to thieves and attackers.

And that is really the issue. MongoDB can be configured in ways that are quite secure, but a novice developer who simply takes the default settings at every step in building a database will create a data set with no protection at all. The number of MongoDB instances makes the likelihood of that insecurity fairly high; a quick Shodan search shows 67,864 MongoDB installs around the world, with most — a bit over two-thirds — in the US. China is next when it comes to MongoDB use, with just less than half the number of instances found in the US.
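For administrators wondering whether one of their own deployments falls into that exposed set, a rough probe is to check whether an unauthenticated client can list databases, something an instance with access control enabled should refuse. Below is a hedged sketch using the pymongo driver (the hostname, port, and timeout are placeholders for this example):

    from pymongo import MongoClient
    from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

    def allows_anonymous_access(host, port=27017):
        """Return True if an unauthenticated client can list databases on the target."""
        client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
        try:
            client.list_database_names()  # needs the listDatabases privilege when auth is on
            return True
        except OperationFailure:
            return False  # access control refused the request: good
        except ServerSelectionTimeoutError:
            return None   # unreachable or filtered; can't tell
        finally:
            client.close()

    print(allows_anonymous_access("db.example.internal"))  # placeholder hostname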

MongoDB is popular in the cloud, as well. That same Shodan search shows that Amazon.com has 9,016 MongoDB instances, Digital Ocean hosts 4,966, Tencent cloud computing hosts 3,918, Microsoft Azure 2,849, and Google Cloud 1,931.

What is to be done about securing MongoDB databases? The most direct answer would be for the default settings to change, but MongoDB’s status as an open source project makes that a process that is, at best, slow. The answer, instead, is in education for the admins and developers most likely to deploy MongoDB in their own instance. As Chris DeRamus, DivvyCloud’s CTO, wrote to Dark Reading in a statement, “We live in a world where data is king — collecting, storing, and leveraging data is essential to running just about any type of business you can think of. All the more reason organizations must be diligent in ensuring data is protected with proper security controls.”

MongoDB lists companies such as KPMG, Telefonica, and Eharmony as customers: It’s obviously possible to configure and administer a MongoDB database in a way that is secure and in compliance with multiple regulations. Unfortunately, it is quick, easy, and cheap to launch a MongoDB instance that is a gift to criminals and a nightmare for its owners and their customers.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/cloud/763m-email-addresses-exposed-in-latest-database-misconfiguration-episode/d/d-id/1334132?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Handmaid’s Tale or Man-made Fail? Exposed DB of ‘BreedReady’ women probably not as bad as it sounds

An unprotected MongoDB database of 1.8 million women in China has been taken offline after drawing media attention for the inclusion of a data field designating whether the women are “BreedReady.”

The database was spotted by Victor Gevers, a researcher based in the Netherlands who founded the GDI Foundation, a non-profit organization focused on improving online security.

Interpretations of the database field in Western media swiftly skewed towards the sinister, with The Daily Beast invoking Margaret Atwood’s dystopian book The Handmaid’s Tale and The Guardian framing the term in the context of Chinese government concern over falling birthrates and the gender imbalance arising from government policies and cultural biases.

In a Twitter conversation with The Register, Gevers said the exposed data has been taken offline thanks to the social media attention his post received. Presently, he doesn’t know who owns the data and without that, there’s no way to be certain what the “BreedReady” boolean field really means.

“We have talked to many people about this one and the majority thinks [it] literally means what it says,” he said. “But others say this could be a language barrier thing.”

Otto Kolbl, a researcher and doctoral student at the University of Lausanne in Switzerland who studies socio-economic development in China, warned against jumping to any conclusions. He suggested “BreedReady” might just be a Chinese developer’s bad English for “willing to have a baby,” which would not be out of place in a dating app.

A non-native speaker of English might not realize that “breed” tends to be used in the context of livestock and thus invites dehumanizing interpretations when applied to women. If the database key relied on less loaded terms and clearly reflected a woman’s self-submitted view on possible future interest in childbearing, rather than a possible third-party assessment, it’s doubtful the poorly secured database would have received much attention.


Another Twitter user with apparent knowledge of Chinese proposed an alternate explanation: The database contains open population data used for urban planning, possibly of women living in Beijing.

Gevers, who has been reporting exposed databases for several years, said China ranks second among the countries in terms of unsafe MongoDB usage, with almost 30,000 open databases. The US has the lead, with more than 45,000. Germany, Netherlands, and France fill out the top five.

Finding poorly secured databases has become a bit like shooting fish in a barrel, by which we mean commonplace rather than foolish and messy. Last week, security researchers found an email marketing company called Verifications.io had left several MongoDB databases holding more than 2bn email marketing records exposed. Expect more of the same in the foreseeable future. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/11/exposed_database_breedready/

3 Places Security Teams Are Wasting Time

Dark Reading caught up with RSA Security president Rohit Ghai at the RSA Conference to discuss critical areas where CISOs and their teams are spinning their wheels.

If a single adjective could describe the universal attendee experience at last week’s RSA Conference, it would probably be “overwhelmed.” There were nearly 750 exhibiting vendors overflowing many football fields’ worth of conference real estate, hundreds of conference talks, and tens of thousands of people thronging the event. As a result, it took most attendees a ton of work to sift through everything in order to mine the information and connections that actually offered them value.   

It’s pretty apt, too, as it offers an uncanny parallel to the existential experience of security leaders and practitioners out in the real world today. Their inboxes are flooded with vendor sales pitches, their security operations centers are deluged with alerts and false positives, and their emotional stress levels are at all-time highs. It certainly helps to explain the emphasis on career burnout, and even the organized yoga events, at RSAC this year.

But it’s going to take more than self-care to get security teams to the next level. It’s also going to take prioritization so that cybersecurity professionals can eliminate the wasteful activities in their professional lives and focus on the things that help them most efficiently tackle cybersecurity risks for their organizations.

At the show, we caught up with Rohit Ghai, president of RSA Security, to discuss the trends driving security leadership today. He believes that the most evolved executives are learning to prioritize by helping their organizations marry overall enterprise risk management with cybersecurity.

“People are realizing that standalone cybersecurity is overwhelmed, and in order to tip the balance, you have to apply business context to security so you can prioritize and focus on what matters most,” he said.

Additionally, he pointed to several key areas where cybersecurity leaders need to stop spinning their wheels.

Juggling Security Vendors 
Vendor fatigue is increasingly wearing on CISOs today, as the allure of acquiring best-in-class features has turned into an integration and vendor management nightmare for many. Right now organizations must sift through 4,700 different security vendors and systems integrators vying for their attention, according to figures from the Cyber Research Databank. More than eight in 10 midsize-business security leaders say it takes them and their staffs anywhere from 20 to 60 hours per week to procure, implement, and manage security products.

“I think they’re wasting a lot of time in integrating point solutions and dealing with this fragmentation in the industry,” Ghai said, “which is why an end-to-end strategy that brings in kind of the holistic view is the right way to approach it.”

Low-Priority Problems
The second area Ghai pinpointed as a security time sink is on low-priority problems and vulnerabilities. Most security professionals, he said, don’t have an “innate sense of what’s important” to their organizations.

“In a world where almost half of the cyber incidents go unhandled, what you want to make sure is the right half is getting addressed,” he said. “They don’t have that compass to tell them what is the right half, and they need business context for that. So that’s a clear area of waste.” 

This jibes with Deloitte’s most recent “Future of Cyber” report, released last week, which named prioritization of cyber-risks across the enterprise as the second-biggest challenge facing CISOs today. 

Manual Labor on Automatable Problems
Finally, Ghai said, the third-biggest area where cybersecurity teams are wasting their time is plugging away at manual processes where automation would make more sense.

“We have a cyber talent issue, and we’re still doing a lot of work that can be automated,” Ghai said. “I envision a SOC where humans are collaborating with machines together to advance the agenda. We need to free up the human analysts from the mundane tasks of cutting and pasting URLs.” 

CISOs are definitely coming around on this front. Approximately 58% of security decision-makers agree that machine learning and AI should help make the job of security professionals easier in the future. 


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: https://www.darkreading.com/risk/3-places-security-teams-are-wasting-time/d/d-id/1334127?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple