STE WILLIAMS

White-Hat Bug Bounty Programs Draw Inspiration from the Old West

These programs are now an essential strategy in keeping the digital desperados at bay.

Back in the Old West, sheriffs tacked up parchment “Wanted” posters offering cash bounties to help them catch lawless gunslingers like Billy the Kid and Butch Cassidy. Today, corporations and governments are paying high-dollar bounties to combat a new generation of Billy the Bots and Breach Cassidys on a far more expansive frontier — cyberspace.

These anonymous, modern-day outlaws hide behind the nicknames of the viruses they unleash on a wide range of targets, destructive malware with monikers like WannaCry and NotPetya. With so much at stake (the fast-growing cybercrime epidemic is projected to cost the world $6 trillion a year by 2021, according to Cybersecurity Ventures), these so-called “bug bounty” programs are now an essential strategy in keeping the digital desperados at bay.

Crucial to Cybersecurity Defense
Each day seems to bring new reports of unscrupulous hackers breaking into public and private sector computer systems, stealing sensitive data, compromising people’s privacy, and using ransomware to extort billions from victims across the globe. High-profile victims include Target, Uber, Anthem, Equifax, the FBI, and the National Security Agency.

Also toiling behind the scenes at keyboards far and wide, a legion of super-skilled white-hat hackers is sneaking into computer systems with an entirely different motive — keeping the world safer from their black-hat counterparts. This tendency to depict villains in black hats and heroes in white may be inspired by the old cowboy movies, but today it is integral to how we talk about the ongoing war on cybercrime.

Of course, the modern bug bounty is not a pouch of gold but substantial, sometimes six-figure cash rewards paid out to hackers who discover flaws and vulnerabilities in cybersecurity defenses. Though their work is largely out of the public eye, the white-hat specialists who participate in bug bounty programs are at the forefront of our cybersecurity defense system.

GM Calls Bug Bounties an “Essential Part of Our Security Ecosystem”
Like most major companies and organizations today, General Motors uses hackers and bug bounties to enhance its security. In 2016, GM began working with HackerOne, one of the leading bug bounty platforms, and since then more than 500 hackers have helped solve over 700 vulnerabilities. “Hackers have become an essential part of our security ecosystem,” says Jeffrey Massimilla, vice president of global cybersecurity at General Motors.

According to HackerOne, “We partner with the global hacker community to surface the most relevant security issues of our customers before they can be exploited by criminals.” Its exhaustive list of bug bounty programs includes such diverse participants as Facebook, Google and Microsoft; PayPal, LinkedIn and Match.com; eBay, AT&T and MIT; Starbucks, Tesla and Twitter. According to the company, HackerOne customers have resolved over 65,000 vulnerabilities and awarded over $26 million in bug bounties.

Bugcrowd, another leading player on the bug bounty frontier, counts many of the same companies on its Bug Bounty List, as well as Apple, Oracle and IBM; HubSpot, Reddit and United Airlines; Netflix, Craigslist and Salesforce. And Zerodium, a cybersecurity company that deploys “a global community of talented and independent security researchers,” is now offering bounties as high as $2 million for discovering vulnerabilities in Apple’s iOS mobile operating system.

Bug Bounty Success Stories
“Most hackers remember their first bug.” So begins a HackerOne article about computer security whiz kid Jack Cable, who discovered he could “send negative amounts of money to other bank account holders at a financial institution, effectively stealing money from their accounts.” The Chicago teen then proceeded to beef up his own bank account … by alerting the company and collecting a bounty.

Several years later, at age 17, he responded to a Pentagon bug bounty called Hack the Air Force, discovered 20+ vulnerabilities in one day, and earned a good-sized check as the program’s top contributor. “It’s been great to see hackers help improve the Air Force’s security and be recognized for their efforts,” said Cable, who had already been acknowledged for his ethical hacking efforts by Google, Yahoo, and Uber.

Here are several additional bug bounty success stories: 

  • The Pentagon: Hack the Air Force and Hack the Army, part of a larger Hack the Pentagon initiative, have led to the discovery of hundreds of vulnerabilities and resulted in hundreds of thousands of bounty dollars paid out to participating hackers. The Department of Defense has reportedly invested $34 million to build on its Hack the Pentagon successes.
  • Microsoft: The technology giant paid $260,000 to hackers as part of its Blue Hat security contest, with $200,000 going to a Columbia University doctoral student, Vasilis Pappas.
  • Facebook: The now-controversial social media giant’s bug bounty program has paid out more than $7.5 million since its inception, including $1.1 million in 2018, according to a recent report in Wired.

Finally, for an inside look at the life of an ethical hacker, here is a quick story and video in which successful bug bounty hunter Anand Prakash talks about his work getting paid for finding vulnerabilities at companies like Twitter, Uber, Facebook, and more.


Michelle Moore, Ph.D., is academic director and adjunct professor for the University of San Diego’s innovative, online Master of Science in Cyber Security Operations and Leadership program. She is also a researcher, author and cybersecurity policy analyst with over two … View Full Bio

Article source: https://www.darkreading.com/application-security/white-hat-bug-bounty-programs-draw-inspiration-from-the-old-west/a/d-id/1333803?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Photography site 500px resets 14.8 million passwords after data breach

Photography website 500px has become the latest online brand to admit suffering a serious data breach.

In an advisory, the company said it became aware of the breach last week. It estimates that the breach took place around 5 July last year.

This affected the majority of the site’s nearly 15 million users, who should shortly receive an email asking them to change their passwords as soon as possible.

Data stolen included names, usernames, email addresses, birth date (if provided), city, state, country, and gender. Also at risk:

A hash of your password, which was hashed using a one-way cryptographic algorithm.

The company hasn’t said which hashing algorithms were in use, beyond mentioning that any passwords hashed with the obsolete MD5 function were being reset.

The fact it was using MD5 at all is not terribly reassuring for reasons Naked Security has previously discussed at some length.
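For context, the difference between a fast, unsalted hash like MD5 and a deliberately slow, salted scheme is easy to demonstrate. The sketch below uses only Python’s standard library and is purely illustrative; it assumes nothing about how 500px actually stores credentials.

    import hashlib
    import os

    password = b"correct horse battery staple"

    # Fast, unsalted MD5: identical passwords always hash to the same value,
    # and commodity GPUs can test billions of MD5 guesses per second.
    weak = hashlib.md5(password).hexdigest()

    # Slow, salted PBKDF2 (also in the standard library): a unique random salt
    # defeats precomputed tables, and a high iteration count makes every guess
    # expensive for an attacker who steals the database.
    salt = os.urandom(16)
    strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

    print("md5   :", weak)
    print("pbkdf2:", strong.hex())

Storing the salt and iteration count alongside the PBKDF2 output lets a site verify logins normally while still slowing offline cracking to a crawl.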

A sliver of good news:

At this time, there is no indication of unauthorized access to your account, and no evidence that other data associated with your user profile was affected, such as credit card information (which is not stored on our servers), if used to make any purchases, or any other sensitive personal information.

Who is affected?

Everyone who had an account with 500px on or before 5 July 2018 may be affected by the breach. Users who joined after that will also have to change their passwords (which happens automatically the next time a user tries to log in), although they will receive notification to do this later than the bulk of affected account holders.

Anyone who reset their account password after 8am UTC (3am Eastern) on 12 February doesn’t have to reset it a second time.

If the same or very similar account password was used on any other sites, now would be a good time to change those too.

Why is 500px telling its users now?

Because earlier this week The Register got wind of a huge database of 617 million users circulating on the dark web, 14,870,304 of which appeared to be 500px’s.

500px said it learned of the breach on 8 February, which presumably was the day it was told that its data was part of this trove.

Several companies whose data was also part of the cache were already known to have been breached, while others were new and previously unreported.

Most of the sites had their user data breached in the last year, which underlines how often and easily cybercriminals are still finding their way past organisations’ defences despite the known risks.

Prevention is better than resets

Our advice is to check the list of sites mentioned in that story and, if you have an account, reset the password without delay.

Next, turn on multi-factor authentication in whatever form it’s offered. For 500px it’s SMS-based or app-based 2FA.
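For readers curious what app-based 2FA actually involves, it is usually a time-based one-time password (TOTP, RFC 6238). The rough sketch below uses the third-party pyotp package; the account name and issuer are made up for illustration, and nothing here describes 500px’s actual implementation.

    import pyotp

    # Enrolment: generate a per-user secret and hand it to the user's
    # authenticator app (normally as a QR code of the provisioning URI).
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="alice@example.com", issuer_name="ExampleSite")
    print("Scan this into your authenticator app:", uri)

    # Login: the user submits the six-digit code their app currently shows.
    totp = pyotp.TOTP(secret)
    submitted = input("Enter the code from your app: ")
    # valid_window=1 tolerates one 30-second step of clock drift.
    if totp.verify(submitted, valid_window=1):
        print("Second factor accepted")
    else:
        print("Invalid code")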

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dW6vYtHZlRw/

Chinese facial recognition database exposes 2.5m people

A company operating a facial recognition system in China has exposed millions of residents’ personal information online.

Shenzhen-based SenseNets is an artificial intelligence company that uses a network of tracking cameras to spot people and log their movements in its database. Unfortunately, the company exposed that information publicly online, allowing anyone to access it in plain text, it emerged this week.

Dutch cybersecurity researcher Victor Gevers found the vulnerable database online and tweeted about it.

The database housed records on over 2.5m people, including their gender, nationality, address, date of birth, photo, and employer. A lot of this was linked to their ID card number, which was also revealed in the database records. China maintains a compulsory national identity card system for residents.

SenseNets maintained a collection of trackers which logged whomever it identified in the database. This created over 6.6m logged entries in a single 24-hour window, Gevers revealed.

In another tweet, Gevers showed what appeared to be an abandoned location in Keriya, in China’s western Xinjiang region, with a tracking device installed.

Gevers works at the GDI Foundation, a Dutch non-profit dedicated to reporting internet security issues. According to CNET, Gevers had reported the issue to SenseNets in July.

Since Gevers went public with the breach online, the company has blocked access to the public database, he tweeted, adding that it may only have been blocked for requests originating outside China.

SenseNets’ website has displayed a default empty web server page for months, but in 2017 it explained in Chinese that:

Face recognition is performed on real-time video captured by HD cameras, which compares black and white lists, confirms identity, and implements alarm, tracking, and disposal functions.

Facial recognition is big business in China. Chinese citizens can now check in and clear security using facial recognition at Shanghai Hongqiao International Airport, and the Beijing subway has announced facial recognition plans to help streamline passenger flow, sparking privacy fears.

Chinese companies are also using the technology to allow citizens to do everything from paying with a smile to dispensing rationed toilet paper.

However, facial recognition technology in China also has its darker, more authoritarian side. Government agencies including its Ministry of Public Security are building a facial recognition system, nicknamed Skynet, that would eventually cover China’s entire population of well over a billion people. It will achieve 100% coverage in “key public areas” next year, according to official government documents.

Chinese police are already using augmented reality glasses to scan and recognise faces, enabling them to quickly identify and apprehend suspects. Schools are planning to use facial recognition to ensure that children attend class and check that they’re paying attention.

Perhaps most unsettling of all: China has also introduced facial recognition technology that tracks people’s movements and predicts how likely they are to commit a crime – Minority Report-style.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MH-sE9YZq38/

‘This collaboration is absolutely critical going forward’… One positive thing about Meltdown CPU hole? At least it put aside tech rivalries…

A panel of eggheads from Intel, the US government, and academia held court this week to figure how they can keep the likes of El Reg from spoiling their next major bug reveal.

The group met at the Churchill Club in San Francisco to reflect on 2018’s big security story – the Spectre-Meltdown CPU flaws – and ponder how it could be better handled going forward. Although chip designers were alerted to the vulnerabilities around June 2017, and operating system developers soon after, an action plan for disclosure was still being formulated the week before they hoped to go public on Tuesday, January 9, 2018. The Reg blew the lid off it on January 2, after hearing no response from vendors, forcing timetables to be torn up.

Among the board of brains were Intel government and policy director Audrey Plonk, Semiconductor Industry Association CEO John Neuffer, UC Berkeley Law Prof Deirdre Mulligan, and White House NSC bod turned Venable cybersec director Ari Schwartz.

The talk centered on the CPU speculative execution holes that sent chip designers back to the drawing board, and kernel and toolchain programmers back to their IDEs, to solve and come up with mitigations. Now one year past the big reveal, the panel pondered how they could have done things differently.

For Schwartz, the saga reaches back to 2014’s Heartbleed, the data-leaking OpenSSL bug that was Meltdown before Meltdown. At the time, he was working in the White House, and had to actually play up the risk of the bug until it got the right attention.

“When we looked at it we know this was very big,” Schwartz recounted. “The chief of staff to the President walked into our office, and said: I want to know everything about this.”

The crisis of Heartbleed seemingly trained the tech giants on how to handle mass disclosure and patching of major security holes that affect the entire industry. Companies would learn how to cooperate with one another and set aside competitive differences for the greater good.

Fast forward three years to late 2017, and researchers dotted around the world uncovered fundamental flaws in the way modern CPUs predicted which data or code would be needed next, flaws that could be exploited by malware to read memory that should be out of bounds – kernel memory or that of another application – and potentially steal passwords and other secrets.

Fixing the flaws would require the hardware and software vendors coming together and not only addressing the security shortcomings, but also coming to terms with various performance hits – some large, some small, some unnoticeable – that would result from the changes. Companies, some dealing with public open-source projects such as C/C++ compilers and the Linux kernel, would need to discreetly share intelligence and code to close the holes while minimizing performance hits and keeping the entire affair quiet.

It had to stay under wraps to prevent the development and distribution of malware that exploits the design weaknesses before folks had a chance to patch. In the end, no known software nasties targeting Meltdown and Spectre were spotted, perhaps because so much fuss was made over addressing the holes, and perhaps because the bugs are difficult to exploit compared to asking someone to open a booby-trapped webpage or document on Windows.


We also note that not everyone was included in the private Meltdown-Spectre discussions. While the secret party featured the expected faces of Intel, AMD, Arm, Red Hat, Microsoft, Amazon, Google, and other large companies, folks like the BSD crowd, smaller Linux distros, and some cloud providers were a tad hurt they were left out, or brought in with little notice. Intel et al hadn’t even warned US-CERT by the time we broke the news.

For Neuffer, a man whose job entails wrangling billionaire CEOs intent on destroying one another, the Meltdown-Spectre crisis brought out the best in companies that would normally be at each others’ throats.

“They are fiercely competitive, amazingly competitive, yet this was an example where they found a common cause,” Neuffer explained. “This kind of collaboration is absolutely critical going forward, and I’m sure as we attack more complicated causes in the future this collaboration will continue going forward.”

Yet, despite this hush-hush effort, El Reg couldn’t help but notice efforts to rewrite chunks of the memory management portions of the Linux and Windows kernel, lines of code that are so sensitive they shouldn’t be touched once they’re known to work. That indicated something big was afoot, something that had to be mitigated in part in software because the hardware was not enforcing its security as hoped. It started with changes made to the open-source Linux kernel.

“It wasn’t so much that the open source community did something wrong, people were doing their job, and people did their job and pieced parts together and found it out,” said Plonk. “That’s always a risk we take, but we have to work with the open source community.”

It did, however, give cause to reflect on how the big companies should handle confidentiality and non-disclosure agreements, specifically what to do when someone breaks them.

“If you have to work with that person, you have to find a way to make them have some skin in the process,” Schwartz suggested. “Maybe a donation to a charity that focuses on security.”

We can only hope the spirit of collaboration holds as the battle between AMD and Intel heats up.

After our January 2018 report, Intel management was incandescent with anger that an AMD engineer, in a patch quietly submitted to the Linux kernel mailing list in late December 2017, had revealed that there was a flaw only in Intel x86 CPUs and that it involved speculative reads from kernel memory by applications – aka Meltdown. However, that patch was in response to Intel’s private attempts to lump AMD in with its own vulnerabilities; AMD’s patch sought to make clear only Intel’s CPU cores were hit by Meltdown, and thus the kernel’s Meltdown mitigations, and associated slowdowns, shouldn’t be applied to AMD components. When the next big bug hits, we hope the pair of chip designers can put this feud aside.

There is also the question of patching. The panelists agreed that all of the awareness in the world will not help if admins and users are not applying the necessary updates to flaws and closing the vast majority of bugs attackers use to spread malware.

“One of the things that is important is to figure out why people don’t patch,” said Mulligan.

“The level of knowledge of people that are managing machines within businesses varies widely, so you want to think about where you’re positioning responsibility, and if there is liability, where it gets positioned.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/15/vulnerability_experts_blab/

Want to know what 2020 holds? Microsoft has a little something for you

Microsoft fired up the speculation machine last night by issuing a fresh build of Windows 10 to lucky skip-ahead testers: and it contains code from 2020’s Windows.

While greybeards still enjoy drawers filled with dusty CDs of beta code for Windows NT, handed out years before final Release To Manufacturing, this is a first since Windows 10’s release (unless one counts the leaked builds smuggled out through Redmond’s walls).

Windows Insiders are Microsoft’s army of volunteer testers, given access to early versions of Windows 10 in order to give the OS a thorough kicking on as many hardware configurations as possible. Until yesterday, “Skip Ahead” meant opting to jump beyond the next version of the operating system for ever shinier toys.

As such, anxious testers were wondering when 19H2 would put in an appearance, with 19H1 (likely called the Windows 10 April 2019 Update) nearing release. The “Skip Ahead” ring was briefly opened just over two weeks ago, and users have been impatiently awaiting their first glimpse of 19H2, likely the “October 2019 Update”, ever since.

Would it see the return of Sets? Might there be hints of a leaner, lighter Windows? Perhaps some more beatings with the dark mode stick that so many inexplicably enjoy so much?

Surprise! Those Skip Ahead users are actually getting a glimpse of a Windows world in the year 2020, thanks to build 18836.

In a posting woefully short on detail, Windows Insider supremo Dona Sarkar stated the bleeding obvious when she said: “some things we are working on in 20H1 require a longer lead time” without actually explaining why Skip Ahead testers are being asked to Skip Ahead quite so far into the future. She also said Insiders would get hold of 19H2 after 19H1 (the next release) is “nearly finished and ready”.

So that means the brave souls on Skip Ahead will get a downgrade to 19H2, right? Not so, according to Microsoft’s Brandon LeBlanc.

So, if you selected Skip Ahead, you won’t be seeing 19H2 (likely the October update). This could mean Microsoft intends to play with its Rings. Fast Ring users might become what was Skip Ahead and get 19H2 as 19H1 nears release while the currently neglected Slow Ring finally gets some fresh code to test.

Then again, LeBlanc was also the chap that cheerfully reckoned enough testing had been done with the October 2018 Update to bypass Release Preview and unleash the thing directly on users. That didn’t end so well.

So, pull up a chair and crank up the ol’ Speculation 3000.

Right now, the new build that contains the 2020 code has little visible in the way of new features, which is not unusual for a Skip Ahead build. The change does, however, seem to imply that a shake-up of the testing rings is imminent for Insiders. If so, it is a shame that those enthusiastic souls who opted to join Skip Ahead were not warned that they would be leaping two versions into the future rather than one.

Or perhaps, dare we say it, there is something so horrifying in 19H2 that 20H1 has been hastily shovelled out to give Microsoft more time to deal with it. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/15/winidows_2020/

Apple phone users targeted with hardcore porn and gambling apps

Apple’s easily abused Enterprise Certificate program isn’t just enabling snoopy Facebook and Google apps. It’s also being exploited by at least a dozen hardcore porn apps and a dozen gambling apps.

Last week, Facebook’s Research app – which paid people, including teens, to install a Virtual Private Network (VPN) app that planted a root certificate on their phones to get access to traffic from other apps – got the boot from Apple. The Research app was created under Apple’s Enterprise Certificate program, a way of creating non-App Store apps that are used for “specific business purposes” and “only for use by your employees” …not by consumers whose data Facebook was sucking up.

Within hours, Google found itself apologizing for doing something similar.

Now, it’s apparent how easy it is to use enterprise certificates to avoid the App Store’s content policies prohibiting apps that show “explicit descriptions or displays of sexual organs or activities intended to stimulate erotic rather than aesthetic or emotional feelings.”

According to Tech Crunch, the developers behind the gambling and porn apps have either passed what it calls Apple’s “weak” Enterprise Certificate screening process or piggybacked onto a legitimate approval.

Apple was swift to react when Tech Crunch broke the news about Facebook’s and Google’s “clear breach” of its certificate policies. After briefly revoking the companies’ certificates (for all apps, including those that were, per Apple’s policy, used by employees), Apple has over the past few days gone on a bit of an app-disabling spree. Some of the dozens of porn and gambling apps that Tech Crunch initially found have vanished in the process.

As of Tuesday, still-functioning porn apps included Swag, PPAV, Banana Video, iPorn (iP), Pear, Poshow and AVBobo, and the gambling apps still available included RD Poker and RiverPoker. As of Wednesday, Banana Video, for one, was still hanging in there.

How ‘iPorn’ et al. get enterprise certificates

All developers have to do to get an enterprise certificate is to fill out an online form, fork over $299, hand over an easily found D-U-N-S business ID number (Apple provides a tool to look it up) and business address, and use an up-to-date Mac. Tech Crunch’s Josh Constine even found these step-by-step directions on how to get an Apple enterprise app developer license.

Then, the developers sit back and wait for a call from Apple. It takes one to four weeks. The last step: lie to the Apple rep about plans to only distribute the apps internally.

Often, part of the ruse is for these violative apps to hide behind company names that obscure their real purpose: for example, Tech Crunch found such business names as Interprener, Mohajer International Communications, Sungate and AsianLiveTech. Constine says that he also came across what appeared to be “forged or stolen credentials to sign up under the names of completely unrelated but legitimate businesses.” From his report:

Dragon Gaming was registered to U.S. gravel supplier CSL-LOMA. As for porn apps, PPAV’s certificate is assigned to the Nanjing Jianye District Information Center, Douyin Didi was licensed under Moscow motorcycle company Akura OOO, Chinese app Pear is registered to Grupo Arcavi Sociedad Anonima in Costa Rica and AVBobo covers its tracks with the name of a Fresno-based company called Chaney Cabinet Furniture Co.

Apple will send the apps – and maybe their devs – packing

Apple wouldn’t explain how these apps are getting past its vetting to get into the Enterprise Certificate app program. Nor would it discuss whether it will change how it deals with its enterprise program, including whether it will in the future follow up to see if apps that get in are, or remain, compliant, or if it plans to change its admission process. It did, though, give Tech Crunch a statement about its plans to shut down such apps and potentially to ban the developers from building iOS products:

Developers that abuse our enterprise certificates are in violation of the Apple Developer Enterprise Program Agreement and will have their certificates terminated, and if appropriate, they will be removed from our Developer Program completely. We are continuously evaluating the cases of misuse and are prepared to take immediate action.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Z3GuTqaP3dY/

Use an 8-char Windows NTLM password? Don’t. Every single one can be cracked in under 2.5hrs

HashCat, an open source password recovery tool, can now crack an eight-character Windows NTLM password hash in less time than it will take to watch Avengers: Endgame.

In 2011 security researcher Steven Myer demonstrated that an eight-character (53-bit) password could be brute forced in 44 days, or in 14 seconds if you use a GPU and rainbow tables – pre-computed tables for reversing hash functions.

When developer Jeff Atwood said as much in 2015, the average password length was about eight characters, and there’s no indication things have changed much. With some 620 million stolen web credentials coming up for sale this week on a dark web market, now’s as good a time as any for a password review.

In a Twitter post on Wednesday, those behind the software project said a hand-tuned build of the version 6.0.0 HashCat beta, utilizing eight Nvidia RTX 2080 Ti GPUs in an offline attack, exceeded the NTLM cracking speed benchmark of 100GH/s (gigahashes per second).

“Current password cracking benchmarks show that the minimum eight character password, no matter how complex, can be cracked in less than 2.5 hours” using that hardware rig, explained a hacker who goes by the pseudonym Tinker on Twitter in a DM conversation with The Register. “The eight character password is dead.”
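The arithmetic behind that claim is easy to reproduce. Assuming each of the eight cards sustains roughly the 100GH/s quoted above (an assumption; the exact per-card rate was not published here), an exhaustive search of every eight-character password built from the 95 printable ASCII characters finishes in a little over two hours:

    # Back-of-the-envelope estimate for exhausting the 8-character keyspace.
    charset = 95            # printable ASCII characters
    length = 8
    rate_per_gpu = 100e9    # ~100 GH/s NTLM per card (assumed)
    gpus = 8

    keyspace = charset ** length                      # ~6.6e15 candidates
    seconds = keyspace / (rate_per_gpu * gpus)
    print(f"{keyspace:.2e} candidates, worst case {seconds / 3600:.1f} hours")
    # -> roughly 2.3 hours, in line with the 'under 2.5 hours' figure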

It’s dead at least in the context of hacking attacks on organizations that rely on Windows and Active Directory. NTLM is an old Microsoft authentication protocol that has since been replaced with Kerberos. According to Tinker, it’s still used for storing Windows passwords locally or in the NTDS.dit file in Active Directory Domain Controllers.
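Part of the reason NTLM falls so quickly is the construction of the hash itself: it is simply an unsalted MD4 digest of the password encoded as UTF-16LE, so each guess costs one cheap hash operation and identical passwords always produce identical hashes. A minimal sketch follows (note that MD4 support in hashlib depends on OpenSSL, and on OpenSSL 3.x it may require the legacy provider):

    import hashlib

    def ntlm_hash(password: str) -> str:
        # NTLM = MD4 over the UTF-16LE encoding of the password, with no salt.
        return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

    print(ntlm_hash("password"))  # 8846f7eaee8fb117ad06bdd830b7586c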

Processing arsenal

More robust hashing algorithms take longer to crack, sometimes orders of magnitude longer. As a point of comparison, when IBM was getting hash cracking rates of 334 GH/s with NTLM and Hashcat in 2017, it could only manage 118.6 kH/s with bcrypt and Hashcat. But, given a suitably short password, those attempting to crack hashed passwords can break out their wallets and pay cloud services for the necessary compute arsenal.
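The gap exists because bcrypt has a tunable work factor: each increment of the cost parameter doubles the work needed per guess, for legitimate logins and for crackers alike. A quick sketch using the third-party bcrypt package (chosen here purely for illustration; any bcrypt binding behaves the same way):

    import time
    import bcrypt

    password = b"hunter2"

    # Each +1 on the cost parameter doubles the time per hash.
    for cost in (10, 12, 14):
        start = time.perf_counter()
        hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=cost))
        print(f"cost={cost}: {(time.perf_counter() - start) * 1000:.0f} ms per hash")

    # Verification reuses the salt and cost embedded in the stored hash.
    assert bcrypt.checkpw(password, hashed)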

Tinker estimates that buying the GPU power described would require about $10,000; others have claimed the necessary computer power to crack an eight-character NTLM password hash can be rented in Amazon’s cloud for just $25.

NIST’s latest guidelines say passwords should be at least eight characters long. Some online service providers don’t even demand that much.

When security researcher Troy Hunt examined the minimum password lengths at various websites last year, he found that while Google, Microsoft and Yahoo set the bar at eight, Facebook, LinkedIn and Twitter only required six.

Tinker said the eight character password was used as a benchmark because it’s what many organizations recommend as the minimum password length and many corporate IT policies reflect that guidance.


“Because we’ve pushed the idea of using complexity (upper case letters, lower case, numbers, and symbols), it’s hard for users to remember individual passwords,” Tinker said. “This does, among other things, cause users to pick the minimum length allowed, so that they can remember their complex password. As such, a large percentage of users choose the minimum requirements of eight characters.”

So how long is long enough to sleep soundly until the next technical advance changes everything? Tinker recommends a random five-word passphrase, something along the lines of the four-word example popularized by online comic XKCD, “correcthorsebatterystaple.”

That, or a random password of whatever maximum length the site allows, generated by a password management app, with two-factor authentication enabled in either case.
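Generating such a passphrase takes only a few lines with the standard library’s secrets module. The word list path below is an assumption (most Linux and macOS systems ship one there); any large list of common words, such as the EFF diceware list, works equally well.

    import math
    import secrets

    # Load a word list; /usr/share/dict/words is assumed to exist on this system.
    with open("/usr/share/dict/words") as f:
        words = sorted({w.strip().lower() for w in f if w.strip().isalpha()})

    # Five words chosen with a cryptographically secure RNG.
    passphrase = " ".join(secrets.choice(words) for _ in range(5))
    print(passphrase)
    print(f"~{5 * math.log2(len(words)):.0f} bits of entropy")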

Via Twitter DM, HaveIBeenPwned admin Troy Hunt told The Register that while web apps are increasingly using better hashing algorithms than NTLM, like bcrypt, “I always make my passwords dozens of random characters generated by 1Password.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/14/password_length/

High Stress Levels Impacting CISOs Physically, Mentally

Some have even turned to alcohol and medication as their demands outpace resources.

A quarter of chief information security officers (CISOs) suffer from mental and health disorders as a result of tremendous and growing work pressures, a new survey shows.

Contributing to the strain are concerns about job security, inadequate budget and resources, and a continued lack of support from the board and upper management.

Domain name registry service provider Nominet recently polled 408 CISOs working at midsize and large organizations in the United Kingdom and United States about the challenges they encounter in their jobs.

A whopping 91% of the respondents admitted to experiencing moderate to high stress, and 26% said the stress was impacting them mentally and physically. A troubling 17% of the CISOs who took Nominet’s survey admitted to turning to alcohol and medication to deal with the stress, and 23% said their work was ruining personal relationships.

Nominet’s survey showed that 40-hour workweeks are a rarity among CISOs. Twenty-two percent said they are available on an around-the-clock basis, and nearly nine in 10 US-based CISOs said they don’t get a break from work of two weeks or more at a stretch.

The data is not surprising, says Jon Oltsik, senior principal analyst at Enterprise Strategy Group (ESG). “The demands of the [CISO] job are growing much faster than the resources available,” he says. Business executives are constantly asking CISOs to do more even as security leaders themselves often have to contend with understaffed teams, manual processes, and a patchwork of tools.

In a survey that ESG conducted, 70% of the respondents said a skills shortage was impacting their ability to properly protect their organizations against cyberthreats. About a quarter said the skills shortage had resulted in staff burnout and turnover, Oltsik says. “[As a result], many CISOs are leaving corporate jobs to become virtual CISOs where they have more control and flexibility.”  

Nominet’s survey data reflects several of these trends. More than half (57%) of the CISOs said a lack of resources is holding them back from implementing a more effective security posture, and 63% are having trouble recruiting the right people. Despite substantial increases in overall enterprise security spending in recent years, less than half (43%) said they have an adequate or very adequate security budget, and just 51% said they have the requisite technologies for protecting the enterprise.

Nominet CEO Russell Haworth says the constantly evolving threat environment is one major reason why CISOs feel they are under-resourced despite all the spending. “There are always new threats and threat variants, which drive the need for new defenses,” he says.

Enterprise datasets are typically massive, with huge volumes of traffic hiding tiny levels of malicious activity; for many, finding evidence of breaches and malicious activity remains a major challenge, he notes. “The largest resource deficit identified in the study was people,” Haworth says. A majority of CISOs identified the skills shortage as impacting their ability to find malware hidden on their network, he adds.

Top Management Disconnect
A continuing lack of top management support is exacerbating the situation. Nearly one in five CISOs said their board members are indifferent to the security team and view it as an inconvenience, and only 52% said their executive teams value the security organization from a revenue and brand protection standpoint. Likely as a result of such attitudes, 32% of CISOs told Nominet they are concerned they would lose their jobs in the event of a breach.

“Across the board, there was an overarching feeling amongst the CISOs questioned that, whilst their work is appreciated by senior management teams, it’s still yet to be seen as strategically valuable,” Haworth says. “We would expect that CISOs may have the highest stress levels among other senior technology executives, [considering] many CISOs feel the burden of protecting the entire organization is on their back.”

Gartner analyst Avivah Litan says the results of the Nominet study are another confirmation of the many challenges security organizations face. Enterprise organizations are under more attack and need to protect more data than ever before. But few are applying automation, artificial intelligence, and machine intelligence approaches in understanding the threats they face and in addressing them. A shortsighted emphasis on revenue generation and new customer acquisition over security at many places is often leaving CISOs in an untenable position, she notes.

“CISOs bear more responsibility than they should,” Litan observes. Though many are compensated well, “it is a very difficult career path for anyone,” she says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/careers-and-people/high-stress-levels-impacting-cisos-physically-mentally/d/d-id/1333888?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

From ‘O.MG’ to NSA, What Hardware Implants Mean for Security

A wireless device resembling an Apple USB-Lightning cable that can exploit any system via keyboard interface highlights risks associated with hardware Trojans and insecure supply chains.

During a month-long hiatus between jobs, Mike Grover challenged himself to advance a project he’d been working on for over a year: creating a USB cable capable of compromising any computer into which it’s inserted.

His latest iteration, the Offensive MG or O.MG cable, resembles an Apple-manufactured Mac USB-Lightning cable but incorporates a wireless access point into the USB connector, allowing remote access from at least 100 feet away, according to Grover. A video demonstration shows Grover taking control of a MacBook and opening up Web pages from his phone.

The cable takes advantage of a known weakness. To make keyboards, mice, and other input devices as easy to connect as possible, operating system makers have made computers accept the identification, through the Human Interface Device (HID) protocol, of any device plugged into a USB port. An attacker can use this weakness to create a device that acts like a keyboard to issue keystrokes, or like a mouse to issue clicks.

“What the user sees, or doesn’t see, depends a lot on their machine, how the cable is configured, and when the HID interface is initiated,” Grover says. “A competent attacker can make this invisible to the victim.”
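On the defensive side, one modest mitigation is simply noticing when a new keyboard-class device appears. The Linux-only sketch below uses the third-party pyudev package and is offered as an illustration, not something Grover or the researchers quoted here prescribe; it logs hot-plugged devices that the kernel classifies as keyboards.

    import pyudev

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    # Watch the input subsystem so devices show up as the kernel classifies them.
    monitor.filter_by(subsystem="input")
    monitor.start()

    print("Watching for newly attached input devices (Ctrl-C to stop)...")
    for device in iter(monitor.poll, None):
        if device.action != "add":
            continue
        # udev sets ID_INPUT_KEYBOARD=1 on anything presenting keyboard keys,
        # including a cable or flash drive that is emulating a keyboard.
        if device.get("ID_INPUT_KEYBOARD") == "1":
            name = device.get("NAME") or device.get("ID_MODEL") or device.sys_path
            print("New keyboard-class device:", name)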

The project is the latest demonstration of hardware Trojans — often called implants in the intelligence world — and the potential risk they pose to companies and their unwitting employees. In 2014, a team led by Karsten Nohl created BadUSB, a set of USB peripherals that could abuse the HID protocol to compromise systems or pose as an external hard drive to control a computer’s boot process.

Many of these attacks, including Grover’s latest O.MG cable, aim to fool the user into inserting the compromised device into their own computers. 

“The weakest point of any company’s security are the people, so any chance you have to interact with people is a good opportunity to breach their security,” says John Cartrett, the red team technical lead at security services firm Trustwave.

Such a hands-off approach is one reason why Grover pursued creating such a close twin to Apple’s cable, including creating his own miniature custom circuit boards. In the past, many malicious USB devices, such as Teensy and Rubber Ducky, were carried in by a red-team member to quickly infect systems or capture data by inserting the drive into the target computer. In 2013, a group of researchers from the Georgia Institute of Technology showed off a power charger for MacBooks that could compromise the systems.

Grover designed his cable to connect to a Wi-Fi hotspot and connect back through the Internet for instructions.

“The malicious cable can be both victim-deployed [and] attacker-controlled and updated while in possession of the victim,” he says. “Lots, but not all, of malicious hardware tends to be intended for the attacker to deploy. [With this], the attacker does not have to risk gaining physical access to a secure location if the victim will carry it in for you.”

Some security experts see the projects as a good teaching moment, but unlikely to be used very often in practice. While Cartrett admired Grover’s work on the cables, he thinks it’s unlikely his team would use it in their own engagements. 

“My guys go in with a small set of tools and they are initially tasked with getting a foothold,” he says. “We usually don’t use novelty cables like this or the chargers. We looked at some of them, but they are just too James Bond-ish.” 

‘Cottonmouth’-Inspired

There is a connection to intelligence work. The original idea for the wireless USB cable came from the documents leaked by Edward Snowden. Project “Cottonmouth” — as listed in the leaked Tailored Access Operations (TAO) catalog — is a USB hardware implant that provides a wireless bridge. 

Still, the decreasing costs of such custom hardware Trojans and implants, and public accessibility, could mean that they will proliferate. While Grover spent an estimated $4,000 to complete the cable, the actual cost in parts for each cable is about $30, he notes.

Companies can take steps to make sure they are harder targets of such hardware implants. Employee education can help foil some attacks, says Deral Heiland, IoT research lead at cybersecurity firm Rapid7. Workers, for example, should be taught to never insert any hardware into their computer not supplied by the company. In addition, whenever they step away from their system, they should lock it, he says.

“Lock your console, don’t wait for the one minute, lock your console every time you walk away from your keyboard,” Heiland advises. 

In addition, firms should pay more attention to their supply chains, making sure they are procuring hardware from reliable sources. Unfortunately, there are no sure-fire ways of detecting a hardware Trojan.  

“Supply chain security is hard,” Heiland says. “If you are buying from a reliable trustworthy source, you are going to have to trust those devices.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/from-omg-to-nsa-what-hardware-implants-mean-for-security/d/d-id/1333889?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mozilla, Internet Society and Others Pressure Retailers to Demand Secure IoT Products

New initiative offers five principles for greater IoT security.

Mozilla Foundation, the Internet Society, and eight other organizations have teamed up to push retailers to demand that Internet of Things manufacturers improve security in their devices. The initiative seeks to enlist retailers to use their greatest power — that of dropping products from distribution — to convince manufacturers that adhering to minimum security and privacy standards is good for business.

In an open letter to Target, Walmart, Best Buy, and Amazon, the Mozilla Foundation lists the IoT security features it sees as minimal requirements:

  • Encrypted network communications
  • Provisions for security updates
  • Strong passwords (including the ability to change passwords)
  • Vulnerability management (including a workable reporting/mitigation system)
  • Strong, understandable privacy practices.

The requirements are echoed in a blog post from the Internet Society that calls on consumers to carry these demands to their favorite retailers.

In a statement provided to Dark Reading, Jeff Wilbur, technical director of the Internet Society’s Online Trust Alliance, noted that connected devices today come with risks. “Consumer confidence is critical for this market to thrive and grow, yet many of today’s offerings are rushed to market with little consideration for basic security and privacy protections,” Wilbur said. “Fortunately, it’s a solvable problem if everyone from manufacturers and policymakers to leading retailers just work together to make smart devices safe for consumers, and we’re happy to join in the effort of the Mozilla Foundation to focus attention on this important issue.”

The Mozilla Foundation has developed a Web page of Valentine’s Day Gifts that may or may not meet all the security requirements laid out in the open letter. [Author’s Note: The individual products featured on the page may or may not be suitable for workplace viewing.]

The recommended requirements for these IoT devices are a subset of the Internet Society’s IoT Trust Framework, a broader set of requirements spanning four areas and incorporating dozens of security factors.

 

 



Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/iot/mozilla-internet-society-and-others-pressure-retailers-to-demand-secure-iot-products/d/d-id/1333890?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple