
The War for Cyber Talent Will Be Won by Retention, Not Recruitment

Six steps for creating a work environment that challenges, stimulates, rewards, and constantly engages employees fighting the good fight against cybercriminals.

When it comes to cybersecurity, there are two common truths any executive will tell you. First, there is a well-documented shortfall of 3 million workers in the industry — too many jobs for too few qualified workers. Second, to fill these jobs, we need to think outside the box and look to professionals who aren’t in the computer science and IT fields.

There are more articles than we can count that tangentially explore those two points. We need to move past discussing the problem and who might fill these jobs, and explore the deeper question of how we get people into them — and keep them there.

Right now, the supply of skilled workers falls far short of demand, so the negotiating power sits squarely with the workers: they can set their terms and choose among virtually any number of willing suitors. Consequently, how we attract talent and whom we recruit will remain an active area of focus. But how we retain these workers deserves equal or greater importance. Here are six steps to keep your cyber talent from running off to the next highest bidder.

Step 1: Stay competitive with compensation and benefits. This should go without saying: The best legacy cyber workers, and the smartest professionals who can be upskilled into cyber roles, can name their price. If the wages and benefits packages aren’t fair and competitive, they’ll find their next opportunity quickly.

Step 2: Have a well-defined hiring strategy. While there are more jobs than can be filled, there is no need to be reckless and hire for quantity over quality. Clearly articulate what your organization and team are looking for, and hire against those needs. This gives your hires a sense of purpose toward a specific goal instead of anonymity in some homogeneous group.

Step 3: Provide continuous education. Cybersecurity is a field that is changing by the hour. There are new threats, new advances in technology, new social and political ramifications, and new solutions to constantly stay in front of. By investing in education, you are equipping your new hires and current employees to be the best in their field and provide the best service and solutions to your clients.

Step 4: Redefine purpose. Once people are hired, it’s very easy to give them objectives and leave them to their own devices. While focusing on the objective is fine for short-term goals, in the long term new hires may begin to wonder what their purpose is on the team, what they are trying to achieve, and how their work affects the greater good. At the outset, work with employees to define their big-picture purpose, and continually redefine their objectives as the work changes. This lets your employees articulate how their positions affect the company and society. For instance, while the employment objective may be pinhole testing for system vulnerabilities, that employee’s bigger purpose is to discover weaknesses in a bank’s mobile app and build defenses against them, giving customers a safe and seamless mobile banking experience.

Step 5: Create an employee career map. Job security and the opportunity for growth are powerful motivators. But because cybersecurity practitioners are so coveted in the marketplace, it becomes crucial to show them their career trajectory rather than simply saying “you have a future with this company.” By creating an employee career map, you lay out a clear path for how they can succeed and grow organically within the organization.

Step 6: Utilize human resource analytics. HR analytics lets hiring managers see in real time what the team’s needs are, who has been hired, and where they came from; it also measures the ROI of employee programs and overall workforce performance, and identifies where the team is growing and where resources can be allocated. With this information, hiring managers can make informed decisions that help them hire the best people, reduce costly and morale-damaging turnover, and manage team resources properly.
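To make that concrete, here is a minimal sketch of the kind of turnover math an HR analytics dashboard might surface. The salary figures and the 1.5x replacement-cost multiplier are illustrative assumptions, not industry benchmarks.

```python
# Toy model of the turnover math an HR analytics dashboard might surface.
# All figures are illustrative placeholders, not industry benchmarks.

def replacement_cost(salary: float, multiplier: float = 1.5) -> float:
    """Estimated cost of replacing one employee (recruiting, ramp-up,
    lost productivity). The 1.5x multiplier is an assumption; tune it
    to your own historical data."""
    return salary * multiplier

def retention_program_roi(avoided_departures: int,
                          avg_salary: float,
                          program_cost: float) -> float:
    """ROI of a retention program: replacement costs avoided vs. cost incurred."""
    savings = avoided_departures * replacement_cost(avg_salary)
    return (savings - program_cost) / program_cost

# Example: a $150,000 program that keeps three engineers averaging $120,000.
print(f"ROI: {retention_program_roi(3, 120_000, 150_000):.0%}")  # ROI: 260%
```

Even a rough model like this makes the retention argument in terms executives understand: the cost of the program versus the cost of the departures it prevents.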

Recruiting the best talent is only the beginning. Where we’ll win both the battle and the war for talent is by creating an environment that challenges, stimulates, rewards, and constantly re‑engages our employees to fight the good fight against cybercriminals.

The views reflected in this article do not necessarily reflect the views of the global EY organization or its member firms.


Sundeep Nehra, Principal, Cybersecurity Leader, Financial Services Office, Ernst & Young LLP
As a Principal in the Financial Services Office, Sundeep leads the Integrated Cyber and Resiliency Risk practice. He advises clients on issues related to cyber, …

Article source: https://www.darkreading.com/risk/the-war-for-cyber-talent-will-be-won-by-retention-not-recruitment/a/d-id/1335281?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

CISA Warns Public About the Risks of 5G

Vulnerabilities include everything from physical risks through the supply chain to business risks.

5G wireless networks are coming, and the Cybersecurity and Infrastructure Security Agency (CISA) wants everyone to be aware of the risks that come with the enhanced capabilities 5G brings. A new notice from the agency points out the risks associated with 5G and illustrates them with an infographic showing the major components and points of vulnerability of the new technology.

The vulnerabilities and risks highlighted include everything from physical risks through the supply chain to business risks from lack of choice and competitiveness.

CISA points out that 5G is expected to be critical to the performance of the billions of Internet of Things devices due to come online in the coming years, but “…vulnerabilities may affect the security and resilience of 5G networks.”

For more, read here.


Article source: https://www.darkreading.com/mobile/-cisa-warns-public-about-the-risks-of-5g/d/d-id/1335318?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Russia Attempted to De-Anonymize Tor Browser: Report

An attempt to crack Tor was one of many projects hackers discovered when they broke into Russian intelligence contractor SyTech.

When hackers from the 0v1ru$ group breached the server of SyTech, a contractor for the Russian Federal Security Service (FSB), they stole approximately 7.5 terabytes of data, including descriptions of internal projects. One of those projects was an attempt to crack the Tor browser.

BBC Russia first reported on the breach, which occurred on July 13. The intruders replaced SyTech’s homepage with a “yoba face,” or a smiley common among Internet trolls, and they shared the wealth of information they discovered with other attack groups and journalists.

It’s unclear whether the attempts to de-anonymize Tor were successful, the report states; the experimental tactics seemingly relied mostly on luck. Tor lets people conceal their location and Internet use: when someone connects, their Internet service provider can tell they’re using Tor but not which sites they visit. The FSB can demand to know whether Tor is being used, but it wanted to learn more, so it attempted to detect which websites were being visited through the Tor browser.

The attempt to de-anonymize Tor was one of many projects discovered in the SyTech breach. Others included efforts to search email servers of major companies, collect data on social media users, and learn how Russia’s Internet interacts with external networks.

Read more details here.


Article source: https://www.darkreading.com/attacks-breaches/russia-attempted-to-de-anonymize-tor-browser-report/d/d-id/1335320?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bug Bounties Continue to Rise as Google Boosts its Payouts

Rewards for vulnerability research climbed 83% in the past year.

Bug bounties just got another boost.  

On July 18, Google announced it had raised its payouts for vulnerabilities found in its Web services, Chrome operating system, and Android software, including tripling the maximum baseline reward to $15,000 from $5,000 and doubling the maximum reward for a “high-quality report” to $30,000 from $15,000.

The company also bumped up its top reward — for a complete chain of exploits that results in code execution on a Chromebook — to $150,000.

Google is not alone. Other companies are raising their bounties too, or finding they need to in order to attract researchers. The average vulnerability payout increased by 83%, with critical vulnerability payouts reaching an average of $2,700, according to Casey Ellis, chief technology officer of vulnerability research crowdsourcing firm Bugcrowd.

“From a numbers standpoint, things are continuing to trend up and to the right in terms of the average severity of the issues and in terms of the incentives that are being used to attract those issues,” he says.

A decade ago, companies argued over the appropriateness of paying hackers and security researchers a reward for reporting vulnerabilities. Now, payouts for security issues regularly surpass $1,000 and often exceed $10,000.

In 2018, for example, ethical hackers made $19 million through HackerOne’s vulnerability-program management platform, compared to $11.7 million the prior year. Among those companies that launched their first bounty programs in the last year are Hyatt Hotels and Postmates, the company said. 

“We continue to see more bug bounty programs launching and, with that, increased hacker engagement, as some are motivated by higher bounty awards,” says Miju Han, director of product management for HackerOne.

Recent research suggests that bug bounties can help companies improve their security. In a paper presented at the Workshop on the Economics of Information Security in June, two researchers created a model showing two significant benefits of bug bounty programs: diverting certain types of hackers away from attacking their systems, and convincing attackers to cooperate with the company.

An important finding is that a bug bounty program only works to recruit white-hat hacker talent if the company also has an in-house security program aimed at protecting its assets. If the organization cuts back too much on security, then the bug bounty program will not be able to make up the difference. In addition, companies with valuable assets may not be able to dissuade hackers from going after their digital goods, the researchers found. 

“[T]he bug bounty program is not a one-size-fits-all solution,” Jiali Zhou and Kai-Lung Hui, both researchers from the Hong Kong University of Science and Technology, stated in the paper. “Firms do need to evaluate their own security environment, the value and vulnerability of their systems, and in-house protection strategies to make better use of bug bounty programs.”

Yet the reason behind the roughly annual doubling of average bounties (up 73% last year and 83% this year, according to Bugcrowd) is unclear. While platforms such as Microsoft’s Windows operating system and Google’s Chrome OS have been hardened over the years, and are thus much more difficult to plumb for system-compromising security issues, newer software frameworks have become targets for hackers and, thus, good candidates for bug bounties.

The end result is a marketplace that has not yet found its equilibrium point, or even neared it, Bugcrowd’s Ellis says.

“Supply and demand and making sure the marketplace is attractive enough and liquid enough to keep everyone happy and engaged is one part of it,” he says. “Apart from that, there is the idea that more critical issues continue to be more rare and more difficult to find and exploit.”

The latest boost to Google’s bounties is a sign of that, he says. 

$5 Million in Bounties

Google was among the first major companies to offer rewards for information on its vulnerabilities. The company, whose program started in 2010, has paid out more than $5 million to date for over 8,500 bug reports in its Chrome browser and operating system. In 2018, 51% of the vulnerabilities reported to Google were Web-based issues, Artur Janc, staff information security engineer at Google, said in a May 2019 presentation at the Google I/O developer conference.

“The majority of the vulnerabilities that we see at Google … are Web issues — flaws that allow an attacker to attack users who are logged into our services and extract or modify some of the data that they have,” Janc said.

Like Bugcrowd and HackerOne, Google is seeing an accelerating marketplace for vulnerability information. All three companies gave out their highest-ever totals in 2018, when Google alone awarded $3.4 million in bug bounties to 317 researchers for 1,319 different vulnerabilities. For comparison, the Google Vulnerability Rewards Program has paid out $15 million in total over the past 10 years.

Veteran technology journalist of more than 20 years. Former research engineer. Has written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/vulnerabilities---threats/bug-bounties-continue-to-rise-as-google-boosts-its-payouts/d/d-id/1335322?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FSB hackers drop files online

A hacking group that distributed files stolen from a Russian contractor to the media last week has published some of the documents online. After posting tweets taunting the Russian government, Digital Revolution exposed 170MB of files relating to secret projects on a file-sharing server.

The files were reportedly stolen from SyTech, a contractor working for the Federal Security Service (known by its Russian abbreviation, FSB), Russia’s primary security agency and the successor to the Soviet KGB. The hackers stole 7.5TB of data on projects that SyTech had developed for the Russian government, BBC Russia reported on Friday.

Most of SyTech’s work was conducted for Russian military unit 71330, according to the BBC report. This group handles signals intelligence as part of the FSB.

Projects reportedly detailed in the 7.5TB cache include Nautilus, a software product designed to scrape social networks including Facebook and LinkedIn for information on users. Another, called Nautilus-C, investigated the potential to deanonymize Tor, the onion routing network commonly used to surf the web anonymously and to access a dark web of anonymous sites.

The Nautilus-C project, begun in 2012, suggested populating the community of Tor relays with malicious servers that could intercept traffic and also serve up fake content. In 2014, Swedish researchers wrote about an active project to mount man-in-the-middle attacks using malicious relays and found several Tor relays using the same root certificate operating on a single Russian netblock.
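For the curious, the clustering signal those researchers used (many relays packed into one netblock) is easy to approximate against today’s Tor consensus. Below is a minimal sketch, assuming the stem library (pip install stem); matching relays by a shared TLS root certificate would require fetching each relay’s certificate, which is omitted here, and the threshold is an arbitrary illustration.

```python
# Sketch: flag /24 netblocks hosting suspiciously many Tor relays -- the
# same clustering signal the 2014 researchers used to spot malicious relays.
from collections import defaultdict

import stem.descriptor.remote

relays_by_block = defaultdict(list)
for entry in stem.descriptor.remote.get_consensus():
    # Group IPv4 relay addresses by their /24 netblock.
    block = ".".join(entry.address.split(".")[:3]) + ".0/24"
    relays_by_block[block].append(entry.nickname)

THRESHOLD = 10  # arbitrary cutoff for "suspicious" clustering
for block, names in sorted(relays_by_block.items(),
                           key=lambda kv: len(kv[1]), reverse=True):
    if len(names) >= THRESHOLD:
        print(f"{block}: {len(names)} relays, e.g. {names[:3]}")
```

A dense cluster isn’t proof of malice (hosting providers legitimately run many relays), but it is exactly the kind of anomaly that prompted a closer look at the certificates involved.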

The data was reportedly stolen by a hacking group called 0v1ru$. It shared the data with Digital Revolution, which had set up a secure digital drop on 2 May. Digital Revolution sent the data to BBC Russia and also posted the following message on 17 July, along with what appear to be screengrabs of documents from the leak:

[Translated] All of us, journalists, students and even pensioners, are under the supervision of the FSB. Join us, as well as $0V1ru, protecting our future! They will not drown our voices

Then, later that day:

[Translated] Hey, FSB, how are you doing with Onslaught-2? Maybe it would be worth changing the name of the project to Colander-1?

Yesterday, Monday 22 July, the group posted another message on Twitter, offering some of the files for download, along with a message on its website:

Thank you all for support in our struggle with the Kremlin’s lawlessness. Our movement is growing. We will continue to expose the projects, showing how our government trying to shove us all under the hood answered the FSB-related control.

We offer to your attention some documents that are shared with us hacker group 0V1ru$. Very grateful to them – guys justify our trust.

The 20 folders posted online Monday included several documents relating to Hope, a project conducted in 2013 and 2014 to visualise Russia’s connection to the outside internet. Another project, Tax-3, focused on the manual removal of selected people’s information from Russia’s Federal Tax Service.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nkr8ZrG8j1s/

Your Android’s accelerometer could be used to eavesdrop on your calls

Just because you don’t give an application access to your microphone doesn’t mean that it can’t listen to you. Researchers have created an attack called Spearphone that uses the motion sensors in Android phones to listen to phone calls, interactions with your voice assistant, and more.

When you install an Android app, it has to ask your permission if it wants access to your microphone so that it can listen to what you’re saying. However, the researchers discovered a workaround.

Most modern smartphones have accelerometers that are supposed to sense how quickly you’re moving. They’re useful for fitness apps, for example. Android apps don’t need permission to use the phone’s accelerometer, so the researchers used it as a listening device. The smartphone’s loudspeaker causes the device’s body to vibrate, and they were able to hijack the accelerometer to sample these vibrations.

The attack used a combination of signal processing and machine learning to convert the vibration samples into speech. The technique works whether the phone is lying on a table or held in the user’s hand, as long as the phone plays sound through the loudspeaker and not through an earpiece.

Motion sensor sample rates (the frequency at which they read data) are low, but the researchers still got good results. After cranking the sample rates up as high as possible, they claimed they could determine the remote speaker’s gender and identity with 90% and 80% probability respectively, from as little as one spoken word.
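The sample rate matters because of the Nyquist limit: a sensor polled at some rate can only capture frequencies below half that rate, which covers the pitch of a voice but not most of the higher-frequency content that makes words intelligible. Here is a minimal sketch of that band-limiting effect, assuming NumPy and SciPy; the 500Hz polling rate and the toy “speech” signal are illustrative assumptions, not figures from the paper.

```python
# Why the sample rate matters: a sensor polled at fs Hz only captures
# frequencies below fs/2 (the Nyquist limit). This sketch band-limits a
# speech-like signal to what a 500Hz accelerometer stream would retain.
import numpy as np
from scipy import signal

fs_audio = 16_000  # typical speech sample rate
fs_accel = 500     # assumed accelerometer polling rate
t = np.arange(0, 1.0, 1 / fs_audio)
# Stand-in for speech: a 120Hz fundamental (voice pitch) plus 1kHz formant energy.
speech = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Downsampling to the accelerometer rate discards everything above 250Hz.
accel_view = signal.resample_poly(speech, fs_accel, fs_audio)

freqs, psd = signal.welch(accel_view, fs=fs_accel)
print(f"Dominant frequency: {freqs[np.argmax(psd)]:.0f} Hz")  # ~120Hz survives
# Pitch-band features like this support gender/speaker classification even
# though the content that makes words intelligible is mostly gone.
```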

The researchers’ paper lists several possible attacks. Software could eavesdrop on a phone call, detecting the gender and possibly the identity of the remote party. It could also use speech recognition to understand what they’re saying.

The software could also use the technique to listen in on audio files, the paper warned, suggesting a sneaky commercial application:

Advertisement companies could use this information to target victims with tailor-made ads, in line with the victim’s preferences.

Finally, the motion sensors could listen to your digital voice assistant’s responses, to find out that you just asked it how to get to a local meeting place, for example.

An attacker would have to get the malicious software onto the phone, but given how many fake apps are already doing illegitimate things on users’ phones, this isn’t inconceivable. Alternatively, the attack code could run in JavaScript if the user was browsing a malicious site at the time, they added.

The most obvious countermeasure is to turn on permissions-based controls for the accelerometer, as Google has done for other sensors like the GPS. However, this would directly affect the usability of the smartphones, the researchers argued, and in any case, users don’t always take notice of permission notifications anyway.

What the researchers are describing here is a side-channel attack, and it isn’t the first. Earlier this month, another team of researchers highlighted several apps and third-party libraries that used alternative information gathered by the phone to infer things like location when they couldn’t get permission to use the GPS. In June, researchers at Cambridge University also found that sensors could uniquely identify individual phones.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XizfSFAizIQ/

Big password hole in iOS 13 beta spotted by testers

A security clanger has been spotted in the current beta version of iOS 13 which allows anyone to access a user’s stored web and app passwords without having to authenticate.

Affecting iOS 13 public beta 2, developer beta 3, and the iPadOS 13 betas, the issue appears to have surfaced first on Reddit, complete with a brief demo video, later expanded with commentary on the YouTube channel iDeviceHelp.

The issue can be reproduced by repeatedly tapping on the Website & App Passwords menu (Settings > Passwords & Accounts), which stores credentials used by the web autofill function.

Normally, tapping on this menu should prompt iOS to ask for Face ID or Touch ID authentication, which indeed it does if the user only taps a few times.

However, tapping 20 or more times in quick succession, while cancelling the authentication prompts at the same time, eventually gives access to the passwords. Once in, the passwords can be changed and shared with other devices.
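Apple hasn’t published a root cause, but the behaviour smells like a state-handling race: cancellations being processed against a queue of pending prompts and leaving a stale “authenticated” flag behind. Purely as a toy model of that bug class, and emphatically not Apple’s actual code, here is a sketch:

```python
# Purely illustrative toy model of the bug class -- NOT Apple's actual code,
# and Apple has not published a root cause. It shows how sloppy bookkeeping
# between queued prompts and cancellations can leave an auth gate open.
class PasswordVaultModel:
    def __init__(self):
        self.pending_prompts = 0
        self.unlocked = False

    def tap(self):
        # Each rapid tap queues another Face ID / Touch ID prompt.
        self.pending_prompts += 1

    def cancel_prompt(self):
        self.pending_prompts -= 1
        # Hypothesized flaw: a cancellation processed while other prompts
        # are still queued is mistaken for a successful authentication.
        if self.pending_prompts > 0:
            self.unlocked = True

vault = PasswordVaultModel()
for _ in range(20):       # 20 taps in quick succession...
    vault.tap()
vault.cancel_prompt()     # ...then a cancellation races the queue
print(vault.unlocked)     # True: the gate opens with no authentication
```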

Nick of time

The barriers to an attack are still quite high – an attacker would need physical access to an unlocked iPhone or iPad – but even by beta standards it’s still an unfortunate flaw to uncover.

One could argue that this is what public betas are for: finding flaws, both minor and serious. It’s also easy to imagine that a flaw this hard to trigger could have been missed and ended up in the final version of iOS 13, due for release to the public in September.

The next public betas of iOS 13 are said to be imminent, although it’s not yet clear whether Apple will have fixed the issue by then. If you’re one of the enthusiasts running public betas, this weakness will be one to check for when the next build appears.

On the plus side, when it does finally arrive, iOS 13 will feature a number of security tweaks, including telling users which apps are tracking them.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aZwDCfTPAro/

Programmer from hell plants logic bombs to guarantee future work

If you’ve spent any time working with computer programmers then you’ve probably been part of a project that, for one reason or another, just seems to have too many bugs. No matter what you do, you can’t make progress: there are always more bugs, more rework and more bugs.

At some dark moment, as frustration at the lack of progress gnaws away at you, you may wonder: what if the programmers are adding the bugs deliberately?

If that’s occurred to you then you can bet that the programmers, who tend to be an intelligent bunch, have let their minds wander there too. Mine certainly has. Like me, they will have noticed that the incentives often stack up in favour of mischief: the work is often thankless, the code unsupervised and the money only good for the length of the project.

Thankfully, most of us are too morally upstanding to go there, but every barrel has its iffy apple.

In this story the barrel bears a Siemens logo and our apple is contractor David Tinley, who recently pleaded guilty to one count of intentional damage to a protected computer.

According to filings in the United States District Court for the Western District of Pennsylvania:

TINLEY, intentionally and without Siemens’ knowledge and authorization, inserted logic bombs into computer programs that he designed for Siemens. These logic bombs caused the programs to malfunction after the expiration of a certain date. As a result, Siemens was unaware of the cause of the malfunctions and required TINLEY to fix these malfunctions.

The logic bombs left by Tinley were bugs designed to cause problems in future, rather than at the time he added them. He might have done this to avoid looking like the cause of the kind of grinding, bug-riddled non-progress I described at the beginning. Or perhaps he thought Siemens was less likely to give up on buggy code that’s been deployed than on code that’s still in development.

Law360 reports that he would fix the bugs by resetting the date the logic bombs were due to go off, and that his attorney argued he did this to guard his proprietary code rather than to make money.

It goes on to describe how Tinley was exposed after being forced to give others access to his code while he was on vacation. Siemens, it says, had to fix the buggy system without him in order to put a time-sensitive order through it.

According to court filings, Tinley worked as a contractor for Siemens for fourteen years, between 2002 and 2016, and ran his unorthodox income-protection scheme for the last two.

He faces sentencing in November.

What to do?

I suggest that if a contractor is refusing to let you see their code, or doesn’t trust you enough to give you access, that should raise a red flag for one of you. And if somebody is making themselves a single point of failure, you have a problem, even if they aren’t doing anything malicious.

In my experience, though, programmers are vastly more interested in fixing things than breaking them, and most projects have a plentiful enough supply of accidentally introduced faults.

That said, programmers and their code both get better with peer review, and modern development practices like continuous test and build cycles are designed to surface bad code as quickly as possible.
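One cheap control in that spirit: run the test suite with the clock shifted years into the future, so date-triggered sabotage detonates in CI rather than in production. A minimal sketch, assuming pytest and the freezegun library; the function, dates, and error message are invented for illustration.

```python
# Sketch: run tests with the clock shifted into the future so date-triggered
# sabotage detonates in CI rather than in production. Assumes pytest and
# freezegun (pip install pytest freezegun).
from datetime import date

from freezegun import freeze_time

def quote_order(total: float) -> float:
    """Example business function concealing a logic bomb."""
    if date.today() >= date(2026, 1, 1):  # the planted trigger date
        raise RuntimeError("spreadsheet engine fault")  # vague, deniable
    return round(total * 1.08, 2)

@freeze_time("2027-06-01")  # pretend it is two years from now
def test_order_quoting_survives_the_future():
    # Fails in CI today, exposing the bomb before any customer sees it.
    assert quote_order(100.0) == 108.0
```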

So, while I don’t think you should do either of those things to root out bad apples, there are good reasons to do them anyway, and if you do you’ll stand more chance of catching saboteurs.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8SVS8oDLc_U/

Lancaster Uni data breach hits at least 12,500 wannabe students

Lancaster University – which offers a GCHQ-accredited degree in security – has been struck by a “sophisticated and malicious phishing attack” that resulted in the leak of around 12,500 wannabe students’ personal data.

In a statement published yesterday evening, the university admitted that undergraduate applicant records for the years 2019 and 2020 had been accessed, along with the data of some current students.

Information accessed by whoever the hackers were – so far Lancaster has said nothing about this – includes names, addresses, phone numbers and email addresses.

The uni also mentioned fraudulent invoices “had been sent to some undergraduate applicants”.

Lancaster accepted 3,585 applicants for student places in the educational year 2018, the latest for which data is available. Over the past five years, the number of people accepted onto courses increased by around 100 to 200 people per year, meaning the latest data breach is likely to have affected around 3,700 successful applicants.

Of the 3,585 students accepted by Lancaster last year, 375 were from other EU countries and 575 were from non-EU nations.

Further statistics compiled by UCAS (PDF) show that 12,545 people applied to Lancaster in 2018 alone, with the number having been roughly stable for the preceding three years. On that basis, the recent data breach may have affected about 12,500 applicants.

No data is available from public sources on the number of non-EU applicants to Lancaster.

UCAS told The Register that these numbers do not include those who applied through Clearing, the process where wannabe students desperate to get on any degree course at all are matched up with empty places on under-subscribed courses.

“We acted as soon as we became aware that Lancaster was the source of the breach on Friday and established an incident team to handle the situation. It was immediately reported to the Information Commissioner’s Office,” said the university in a prepared statement.

We understand the university’s graduation week took place just last week. With A-level final results due to be published in a few weeks, the timing is rather bad. Ironically, Lancaster offers a master’s degree in cyber security – accredited by none other than GCHQ. El Reg trusts the intrusion wasn’t caused by students putting their newly learned skills to the test.

The university did not answer The Register‘s questions about how many people were affected by the breach, claiming that a police investigation means it is bound by some sort of code of omerta. This “blame the cops” strategy is a relatively common one for deflecting bad PR and attempting to minimise the impact of a data breach.

In the academic year 2017-18, the most recent year for which official statistics are available, the university had 14,210 enrolled students.

People who think they may have been affected by the breach have been urged by uni administrators to ring them on 01524 510 044 or email [email protected]. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/23/lancaster_university_data_breach/
