
NIST Issues IoT Risk Guidelines

A new report offers the first step toward understanding and managing IoT cybersecurity risks.

NIST has issued a new report intended to help managers understand and manage the risks that come with Internet of Things (IoT) devices throughout their life cycles.

The 34-page report, “Considerations for Managing Internet of Things Cybersecurity and Privacy Risks,” begins with basic definitions and critical issues, such as the operational difference between privacy and security. It goes on to address broader management considerations, including device access and management, and the dramatic difference between the security capabilities of conventional IT hardware and IoT systems.

NIST defines IoT risk and mitigation within a framework of three risk mitigation goals: protect device security, protect data security, and protect individuals’ privacy. Within each of these goals are two to five more specific risk mitigation areas, such as vulnerability management, data protection, and information flow management.

The report provides a series of tables listing security expectations IT managers may have for conventional IT devices set against the ways in which IoT devices may be challenged in meeting those expectations.  

While this report, the first in a series addressing the IoT, looks at higher-level considerations, NIST says future reports will go into greater depth and detail on related issues.

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/iot/nist-issues-iot-risk-guidelines/d/d-id/1335080?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Chronicle Folds into Google

Alphabet’s enterprise cybersecurity division will become part of the Google security portfolio.

Chronicle, the enterprise cybersecurity division spun out of Alphabet’s X — the search giant’s “moonshot factory” incubator — is being swallowed by corporate sibling Google to become part of Google Cloud. The move will make Chronicle part of the Google security portfolio, joining Google Cloud’s detection, incident management, and remediation services.

In a blog post announcing the move, Google Cloud CEO Thomas Kurian wrote that Chronicle’s VirusTotal and Backstory products are intended to add to the depth of the services Google can offer customers to secure their workflows and data in the cloud and on-premises.

Backstory, Chronicle’s first analytics product, was announced at the RSA Conference in March. It joined VirusTotal, a virus detection and protection service that became the company’s first commercial offering in 2018.

The combination of Chronicle’s analytics services and Google Cloud’s big-data expertise seems likely to position the new Chronicle as a competitor to Splunk, LogRhythm, ArcSight, and companies with similar analytics offerings.

The absorption of Chronicle follows Alphabet’s integration of smart-home component manufacturer Nest into Google. According to Kurian, the reason for the integration is straightforward. “With the trajectories of Chronicle and Google Cloud increasingly converging in response to customer needs, we want to bring these essential capabilities together for customers,” he wrote in the blog post.

Integration is expected to be completed by this fall.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/analytics/chronicle-folds-into-google/d/d-id/1335082?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Are heart electrocardiograms the next big thing in biometrics?

After fingers, the iris of the eye, ears and even lips, it was probably inevitable that someone would propose the human heart as the next big thing in biometric security.

Given that the heart’s electrical signals measured by electrocardiograms (ECGs) are already known to be individual to each person, this isn’t as far-fetched as it sounds.

But uniqueness isn’t the only requirement for authentication – the chosen method (in this case the heart’s ECG) must also be stable enough over time and practicable in terms of the equipment needed to measure it.

And while consumer-level ECG monitors can be bought quite cheaply, that doesn’t mean they are accurate and easy enough for a security application to use correctly.

As explained in A Key to Your Heart: Biometric Authentication Based on ECG Signals, researchers Nikita Samarin of the University of California, Berkeley, and Donald Sannella of the University of Edinburgh decided to put the idea to the test experimentally.

First, they twice collected ECGs from 49 healthy men and women over a four-month period, using a $99 home monitor and smartphone app setup.

Comparing the two readings, the researchers established that error rates over a short period of time – a single reading – were an encouraging 2.4%, a result better than most previous studies making the same measurement.

That’s also in line with the upper error rates of fingerprint readers:

The results presented in this work provide a positive perspective on ECG-based biometrics, by showing that individuals can be authenticated by using their ECG trace.

However, the authors acknowledge that ECG biometrics “degrade” or change over time, for which they suggest:

Improving the performance of ECG over longer periods of time could be done by synchronizing the stored biometric with the new signal after each successful authentication.

In other words, using the heart as an authentication mechanism is feasible but only if the subject re-enrols their ECG at regular intervals to counter natural changes.
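To make that re-enrolment idea concrete, here is a minimal Python sketch of how a verifier might compare a normalised ECG trace against a stored template and nudge the template toward each successfully authenticated reading. It is purely illustrative – the threshold, the update weight, and the function names are assumptions, not details taken from the paper.

    import numpy as np

    MATCH_THRESHOLD = 0.85  # hypothetical similarity cutoff
    UPDATE_WEIGHT = 0.1     # how strongly each accepted reading refreshes the template

    def similarity(template: np.ndarray, reading: np.ndarray) -> float:
        """Normalised cross-correlation between the stored template and a new trace."""
        t = (template - template.mean()) / template.std()
        r = (reading - reading.mean()) / reading.std()
        return float(np.dot(t, r) / len(t))

    def authenticate(template: np.ndarray, reading: np.ndarray):
        """Return (accepted, updated_template).

        On success, the template drifts toward the new reading - the
        'synchronise after each successful authentication' idea above.
        """
        if similarity(template, reading) < MATCH_THRESHOLD:
            return False, template
        updated = (1 - UPDATE_WEIGHT) * template + UPDATE_WEIGHT * reading
        return True, updated

The weighted update lets the stored template track gradual physiological change without keeping a history of raw readings, which is one way to counter the degradation the authors describe.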

That doesn’t rule out the idea but perhaps hints that ECGs might be appropriate for high-security environments when used in conjunction with other biometric identifiers such as fingerprints.

ECGs also face the same worries as any biometric security system, in that the data they collect represents a target that criminals are bound to be interested in stealing.

Once compromised, biometrics cannot be easily revoked, as they depend on persistent physiological or behavioral characteristics of an individual.

Adding someone’s ECG to the pool of stored biometrics would meet opposition from privacy campaigners, who might point out that the tech industry doesn’t exactly have a spotless reputation for defending valuable data – and that’s before considering potential abuses by governments.

Or perhaps biometrics are just an inevitable part of the dawning era of smart authentication and people should acclimatise themselves to risks that are offset by the benefit for cybersecurity.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yAhnfqBFIbk/

FTC crackdown targets operators behind 1 billion robocalls

The Federal Trade Commission (FTC) on Tuesday announced a big, fat crackdown in which it’s targeting operators responsible for one billion illegal robocalls. Or, using verbiage more appropriate to our current robocall nightmare, “a drop in the bucket.”

After all, according to the spam-call blocker company YouMail, there were an estimated 4.7 billion robocalls placed in the month of May alone. Still, any dent in that number is welcome.

The commission said that its joint crackdown – the Department of Justice (DOJ) is chipping in – is called “Operation Call It Quits.”

The crackdown involves 94 actions targeting operations around the country that are behind the never-ending e-blabber of robots offering services from scammers, including credit card interest rate reduction, get-rich pitches, and medical alert systems… or what the FTC calls the “tide of universally loathed pre-recorded telemarketing calls.”

The operation includes four new cases and three new settlements from the FTC alone, it said. The DOJ filed two of the new cases on the FTC’s behalf. In all, the defendants in these cases were responsible for making more than one billion illegal robocalls to consumers nationwide, the FTC said. These new cases bring the FTC’s count to a total of 145 cases against illegal robocallers and Do Not Call (DNC) violators.

The defendants

First Choice Horizon LLC is an example of the multiple robocalling rings being charged in this roundup. In that case, the FTC is charging six corporate and three individual defendants with illegally robocalling financially distressed consumers – often seniors – with bogus offers of credit card interest rate reduction services.

According to the complaint, the defendants allegedly told their targets that they could lower their credit card interest rates to zero for the life of the debt, saving themselves thousands of dollars… for a fee, of course. The scammers would then allegedly “confirm” their targets’ identities, getting them to divulge personal financial information, including their taxpayer and credit card numbers.

The FTC alleges that the defendants neglected to mention the hefty, additional bank or transaction fees the consumers would be hit with… or that they’d allegedly go on to open up credit cards on behalf of their targets without their knowledge or say-so.

Another robocaller ring that’s facing charges from the FTC is an outfit called 8 Figure Dream Lifestyle, which allegedly pitched fraudulent money-making schemes via robocalling. Other defendants are charged with spoofing caller ID information and using robocalls to drum up business leads. Besides the FTC’s efforts, 25 federal, state and local agencies have opened 87 cases against robocall offenders as part of Operation Call It Quits.

Meanwhile, in Congress…

While the FTC is doing its thing with hunting down robocallers, filing charges against them, and pushing for the development of technology-based solutions to block robocalls and combat caller ID spoofing, bipartisan legislation is still working its way through Congress.

In May 2019, the US Senate passed an anti-robocalling bill. It’s still waiting for the House to take it up, which the House might not do, given that it’s working on its own version, the Stopping Bad Robocalls Act (HR 946).

While we wait for something to happen in Washington, this is what we can all do to try to lessen our own, private robocaller nightmares:

What NOT to do?

Besides wrestling robocallers in the courtroom, Operation Call It Quits also has an educational component regarding how to stop unwanted calls. Though it might be old hat for many of us, it’s a must-read for anybody who still thinks that if they’re told to “press 1 to speak to an operator”, they should actually do that… in the hope, perhaps, of screaming some human ears off.

Here’s the advice, in a nutshell: Don’t press anything. Think of yourself as the person in the horror movie whom you always scream at to not go into that dark basement. Pressing any buttons (besides the one to hang up the call) is like rubbing “living human” pheromones onto yourself before you go into the dank robocaller basement.

The FTC says you should hang up without pressing any options, block the number, and then report it to the Commission. Andrew Smith, the Director of the Bureau of Consumer Protection at the FTC, said that pushing buttons is the most common mistake people receiving robocalls tend to make.

From remarks he made at a press conference:

Pressing numbers to speak to someone or remove yourself from the list will probably only lead to more robocalls.

Typically, the robocallers give you two options: Press one to speak with a customer service rep – or what’s better known as a scammer – or press two to be removed from their calling list.

In a word: Don’t! Punching any number just leads to more calls. Robocalling is a numbers game: the scammers place a massive number of calls, most of which go to dead or inactive numbers. If you press a button, it lets them know that they’ve not only hit a live number, but that they’ve hit on somebody willing to answer calls from unknown callers.

Be very, very quiet: just back out of that robocaller basement without the scammers getting a whiff of fresh meat, hang up, and block/report the number.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/l0BveVGlunY/

YouTube’s antics with kids’ data prompt call for FTC to force change

Earlier this month, people familiar with the matter told news outlets that the Federal Trade Commission (FTC) is nearing the end of an investigation into YouTube’s alleged failure to protect the kids who use the Google-owned service.

On Tuesday, one US senator, Edward Markey, and two consumer privacy groups sent letters to the FTC about the matter, urging it to do whatever it takes to determine whether YouTube has violated the law protecting children and, if so, to make the company shape up and stop the harms.

From Senator Markey’s letter:

Given the extensive evidence that YouTube is invading child users’ privacy, I urge you to take all necessary steps to hold YouTube accountable for any illegal activity affecting children that the company may have committed and, if violations are found, to require the company to institute new safeguards that will stop these harms from continuing.

In April 2018, a group of 23 child advocacy, consumer and privacy groups filed a complaint asking the FTC to make YouTube stop its allegedly illegal collection of, and profiteering from, children’s personal data.

The group urged the FTC to investigate the matter, given that the Children’s Online Privacy Protection Act (COPPA) outlaws the collection of data from kids younger than 13.

However, that’s exactly what happens to under-13s who use YouTube – the group’s complaint said that Google collects personal information including location, device identifiers and phone numbers, and tracks them across different websites and services without first gaining parental consent, as is required by COPPA.

Josh Golin, executive director of the Campaign for a Commercial-Free Childhood (CCFC), which is one of the groups that filed the complaint, said at the time that YouTube is being disingenuous when it talks about how kids use the platform:

For years, Google has abdicated its responsibility to kids and families by disingenuously claiming YouTube – a site rife with popular cartoons, nursery rhymes, and toy ads – is not for children under 13. Google profits immensely by delivering ads to kids and must comply with COPPA. It’s time for the FTC to hold Google accountable for its illegal data collection and advertising practices.

Markey called out one example of the kind of content that experts say fills the millions of YouTube channels clearly directed at children: Ryan ToysReview, which has over 19 million subscribers and bills itself as “Toy reviews for kids by a kid.”

That type of content doesn’t jibe with Google’s claims that the website isn’t intended for children, Markey said. Thus, because Google serves up content directed at kids, it’s subject to COPPA, he said. That means providing parents with clear notification – and gaining their consent – before collecting data on their kids.

What Markey wants the FTC to do:

  • Order Google to immediately stop collecting data on users it knows are under 13, and delete any data it’s collected on kids, even if those kids are now over the age of 13.
  • Set up a way to tell if a user is under 13, and deny them access until Google updates its processes to be compliant with COPPA.
  • Get rid of targeted marketing on the YouTube Kids platform, and tell users what data it’s collecting, what it’s doing with it, and who it’s sharing it with.
  • Subject Google to a yearly audit.
  • Keep Google from rolling out any new child-focused products or services until they’re approved by an independent panel that includes FTC-appointed experts on child development and privacy.
  • Require Google to conduct a consumer education campaign that warns parents that kids shouldn’t use YouTube.
  • Require Google to retain documentation about its compliance with any consent decree that comes out of this investigation.
  • Require Google to establish a fund to produce non-commercial, quality content for children.

We need to stop letting corporations feed on kids’ data, Markey said, lest those money-hungry machines turn our progeny into privacy-deprived, marketed-at-to-smithereens shopaholics:

Personal information about a child can be leveraged to hook consumers for years to come, so it is incumbent upon the FTC to enforce federal law and act as a check against the ever increasing appetite for children’s data.

The privacy groups that chimed in were the Center for Digital Democracy and the Campaign for a Commercial-Free Childhood. Also on Tuesday, both groups sent a letter to the FTC with a list of recommended penalties, including the deletion of user data on all children, civil penalties and “a $100 million fund to be used to support the production of noncommercial, high-quality and diverse content for children.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zfCeLvLM1-k/

Tesla 3 navigation system fooled with GPS spoofing

Cybersecurity researchers have fooled the Tesla Model 3’s automatic navigation system into rapidly braking and taking a wrong turn on the highway.

Israeli firm Regulus Cyber spoofed signals from the Global Navigation Satellite System (GNSS), fooling the Tesla vehicle into thinking it was at the wrong location. The spoofing attack caused the car to decelerate rapidly, and created rapid lane-changing suggestions. It also made the car signal unnecessarily and try to exit the highway at the wrong place, according to the company’s report.

The GNSS is a constellation of satellites that beam location information to earthbound receivers. It’s an umbrella term for the variety of regional systems in use, such as the US GPS system, China’s BeiDou, Russia’s GLONASS, and Europe’s Galileo.

Spoofing replaces genuine satellite signals with false ones to fool receivers. Regulus used this technique to attack Tesla’s Navigate on Autopilot (NoA) feature.

Introduced in October 2018, NoA is the latest development in Tesla’s ongoing autonomous driving efforts. It complements the existing Enhanced Autopilot feature that enabled cars to accelerate, brake, and steer within their lane. NoA introduced the ability to change lanes for speed and navigation, effectively guiding a car on the highway portion of its journey between onramp and offramp.

NoA initially requires drivers to confirm lane changes by flicking the car’s turn indicator, but Tesla lets them waive the confirmation requirement.

In its report on the test, Regulus said:

The navigate on autopilot feature is highly dependent on GNSS reliability and spoofing resulted in multiple high-risk scenarios for the driver and car passengers.

Regulus first tested the Model S, which doesn’t support NoA. It presented those findings to Tesla and quoted its response in the report. Tesla reportedly said that drivers should be responsible for the cars at all times and prepared to override Autopilot and NoA. It also reportedly dismissed the GPS spoofing attack, pointing out that it would only raise or lower the car’s air suspension system.

Regulus then tested NoA on the Model 3. This time, it went public with its findings.

Regulus chief marketing officer Roi Mit told Naked Security:

The purpose of this kind of reporting that we do is to create cooperation. But specifically, in the case of the Tesla 3, we decided to go public on this simply because it was a car that was already available to consumers. It’s already widespread around the world.

As manufacturers inch towards truly autonomous driving systems, GNSS spoofing represents a clear and present danger to drivers, he warned.

The researchers were able to spoof GNSS signals using a local transmitter mounted on the car, to stop it affecting other cars nearby. However, broader attacks extending over several miles are possible, and Russia has systematically spoofed GNSS signals to further its own interests, according to a recent report on the dangers of GNSS attacks.
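Tesla’s actual defences aren’t public, but one common class of countermeasure is a plausibility check: compare each new GNSS fix against what the vehicle could physically have travelled since the last fix, and discard fixes that imply an impossible jump. The Python sketch below assumes a generous speed ceiling; the constant and the function names are illustrative only.

    from math import asin, cos, radians, sin, sqrt

    MAX_SPEED_MPS = 90.0  # assumed ceiling, roughly 324 km/h; tune per vehicle

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two fixes, in metres."""
        r = 6_371_000  # mean Earth radius
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * r * asin(sqrt(a))

    def fix_is_plausible(prev_fix, new_fix, dt_seconds):
        """Reject any GNSS fix implying an impossible jump since the last one."""
        lat1, lon1 = prev_fix
        lat2, lon2 = new_fix
        return haversine_m(lat1, lon1, lat2, lon2) <= MAX_SPEED_MPS * dt_seconds

A check like this catches crude position jumps, though a patient attacker who walks the spoofed position away gradually would defeat it, which is why researchers also advocate cross-checking GNSS against inertial sensors and signal anomalies.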

Todd E. Humphreys, co-author of that report and associate professor at the University of Texas department of aerospace engineering and engineering mechanics, described the attack as “fairly distressing” to Naked Security yesterday. He added:

The Regulus attack could be done from 10km away as long as they had a line of sight to the vehicle.

Does this mean nightmare scenarios in which a terrorist could drive all the Teslas in a 10km radius off the road into concrete bollards, killing their owners? It’s unlikely. For one thing, NoA combines vehicle sensors with the GNSS data, and would only turn off the highway if it spotted road markings that indicated an exit. The Regulus attack was only able to force an exit from the highway onto an alternative ‘pit stop’ with exit markings.

A more realistic attack would involve misdirecting the Tesla’s navigation system to take the left-and-right turns it expected, but at unintended junctions, Humphreys told us.

The Tesla vehicle would have enough visual and radar smarts that it wouldn’t drive off the road into a tree. But you could definitely cause the vehicle to end up in a neighbourhood where the driver was not safe.

In fact, researchers at Virginia Tech recently covered this scenario in a paper entitled All Your GPS Are Belong To Us.

Tesla did not respond to our requests for comment yesterday.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/31jtzRTDM9Y/

UK’s MoD is helping itself to cops’ fingerprint database ‘unlawfully’, rules biometrics chief

The Ministry of Defence has been searching the police national fingerprint database without a “clearly defined lawful basis,” the UK’s biometrics commissioner has said.

In his annual report (PDF) filed today, Paul Wiles warned that inter-government searching of databases should be properly regulated.

“I continue to be very concerned about the searching by the Ministry of Defence into the police national fingerprint database without an agreed, clearly defined lawful basis.”

The MoD has been using the database to check whether fingerprints taken or found during military operations abroad matched persons known to the UK police or immigration authorities, or matched crime scene fingerprints held by the police.

Wiles said he has repeatedly challenged the MoD as to the legal basis on which the Defence Science and Technology Laboratory has gained direct access to and is searching the police’s fingerprint collections.

“I also wrote last year to the Permanent Secretary of the MoD seeking clarification on this issue. Over the last eighteen months the MoD has come up with a series of claims as to the legal basis of carrying out their searching through Dstl, none of which I have found convincing.”

He said: “There is nothing inherently wrong with hosting a number of databases on a common data platform with logical separation to control and audit access but unless the governance rules underlying these separations are developed soon then there are clear risks of abuse.”

The commissioner also referred to the controversy around the police matching of facial images in public spaces.

The Metropolitan Police have trialled biometrics technology at the Champions League Final and Notting Hill Carnival and South Wales Police have also more than dabbled with the tech.


Wiles added: “At its extreme it is raising the spectre of using facial scanning for mass police surveillance. That may be unlikely but [is] one that some countries are reported as developing.

“The sober point is that unless there are clear and publicly accepted rules governing the police use of new biometrics then damage could be done to public trust in policing and at a time when regard for some other public institutions is declining.”

He said: “In summary, we are seeing the rapid exploration and deployment by the police of new biometric technologies and new data analytics. Some of these will improve the quality of policing and will do so in a way that is in the public interest.

“However, some could be used in ways that risks damaging the public interest, for example by re-enforcing biases of which reinforcement is not in the public interest.

“If the benefits of these new technologies are to be achieved there needs to be a process that provides assurance that the balance between benefits and risks and between benefits and loss of privacy are being properly managed.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/27/mod_helping_self_to_fingerprint_dna_database_unlawfully/

Learn How Privacy Laws Can Be Used for Identity Theft at Black Hat USA

Attend Black Hat USA this summer and see how researchers are subverting the GDPR’s privacy rules and detecting deep fakes with machine learning.

Even the most well-meaning cybersecurity laws and procedures can be subverted by a sufficiently devious mind, and there’s no better place to learn how it’s done than Black Hat USA this summer. In fact, the event has a whole Human Factors track dedicated to how human decisions affect the security of your organization, and how engineering and technology can help.

GDPArrrrr: Using Privacy Laws to Steal Identities is a good example. It’s a 50-minute Briefing about how the General Data Protection Regulation’s “Right of Access” provision (which gives individuals the right to access their personal data) can be easily abused by social engineers to steal sensitive information.

After a survey of more than 150 companies, a security researcher will demonstrate that organizations willingly provide highly sensitive information in response to GDPR right of access requests with little or no verification of the individual making the request, providing one of the most reliable general phishing attack typologies to date.

In Deconstructing the Phishing Campaigns that Target Gmail Users you’ll get a rare inside look at Gmail telemetry to illuminate the differences between phishing groups in terms of tactics and targets. Then, leveraging insights from the cognitive and neuroscience fields on users’ susceptibility and decision-making, you’ll learn why different types of users fall for phishing and how those insights can be used to improve phishing protections.

And don’t miss out on Detecting Deep Fakes with Mice, a fascinating look at how researchers worked to train humans, mice, and machines to detect fake speech in “deep fake” videos, using a “deep fake” data set provided by Google. For machines, you’ll look at two approaches based on machine learning: one based on game theory (generative adversarial networks, or GANs), and one based on depth-wise convolutional neural networks (Xception). For biological systems, researchers gathered a broad range of human subjects, as well as mice, which don’t understand words but respond to the stimulus of sounds and can be trained to recognize real vs. fake phonetic construction. Come to this Briefing and learn who did best: the mice or the machines!

Further information on these cutting-edge Briefings and many more is available on the Black Hat USA Briefings page, which is live now with full details on this year’s schedule.

Black Hat USA returns to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/learn-how-privacy-laws-can-be-used-for-identity-theft-at-black-hat-usa/d/d-id/1335060?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Office 365 Multifactor Authentication Done Right

Why the ubiquitous nature of Office 365 poses unique challenges for MFA-based security and how organizations can protect themselves.

Attacks like password spraying, brute force, and phishing have targeted Office 365 cloud users for years. Most incidents share a common thread: access to the right combinations of usernames and passwords along with legacy authentication mechanisms like basic authentication.

Attacks targeting email accounts protected only by single-factor authentication, such as a password — even a “strong” password — have a higher probability of success. Against these odds, multifactor authentication (MFA) has become a necessary line of defense. Using strong factors discourages attacks by introducing an extra layer of authentication into the sign-in process.

To combat these attacks, enterprises and users have layered security through the enforcement of MFA for Office 365. (Disclaimer: Okta is a provider of MFA technology, along with many other security vendors.) While MFA is widely recognized as a trusted security measure and deployed by organizations to protect against cyber threats, the ubiquitous nature of Office 365 poses a unique challenge for MFA-based security: MFA can be bypassed, and the bypass does not require extraordinary sophistication. To mitigate the risk, organizations need to understand the MFA bypass techniques for Office 365 and take steps to ensure these two technologies can coexist to keep future attacks at bay.

Bypassing MFA Through Office 365
While MFA can provide effective protection, and many organizations have invested in MFA technology, not everyone has implemented the control effectively to protect access to Office 365.

While Microsoft Exchange does provide a mechanism for enforcing MFA using modern authentication — an umbrella term for a combination of authentication and authorization methods — modern authentication is not supported by every sign-in method Office 365 offers. In fact, only OWA and email clients built with the Azure Active Directory Authentication Libraries (ADAL) use the modern authentication flow; legacy clients use only basic authentication, which relies on a username and password alone, without requiring an MFA factor.

Possible scenarios that could potentially break or limit MFA enforcement on Office 365 include:

  • Legacy protocols, like POP and IMAP, which can only support basic authentication (illustrated in the sketch after this list).
  • Access protocols that support modern authentication, like Exchange ActiveSync, Exchange Web Services (EWS), MAPI, and PowerShell, but that can default to basic authentication.
  • Email clients built without ADAL/modern authentication support, which leaves some users signing in from legacy clients.
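The first item is easy to see in practice: a legacy IMAP login never asks for a second factor. Here is a minimal Python sketch using the standard library’s imaplib against the well-known Office 365 IMAP endpoint; the function name is hypothetical, and you should run something like this only against a tenant and mailbox you are authorized to test.

    import imaplib

    def basic_auth_still_open(user: str, password: str,
                              host: str = "outlook.office365.com") -> bool:
        """Return True if the mailbox accepts a basic-auth IMAP login.

        Note the single factor: nothing beyond a username and password
        is ever requested, which is the crux of the bypass.
        """
        try:
            conn = imaplib.IMAP4_SSL(host)
            conn.login(user, password)  # basic authentication only
            conn.logout()
            return True
        except (imaplib.IMAP4.error, OSError):
            return False

If this returns True for a mailbox in your tenant, basic authentication is still enabled there, and MFA enforcement can be sidestepped entirely over that protocol.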

In addition to those vulnerabilities, Okta researchers last year discovered that Microsoft’s Active Directory Federation Services (ADFS) can allow potentially malicious actors to bypass MFA safeguards, as long as they can successfully complete MFA for one account on the same ADFS service and have the correct password for the other users they target. After being notified about the vulnerability and independently validating it, Microsoft produced a patch to address it. However, for anyone who has not patched, the vulnerability persists.

Implications of Office 365 MFA bypass
The potential impact of an Office 365 MFA bypass is massive: Once attackers compromise Office 365 credentials, they can exfiltrate sensitive data. In cases where admin credentials are compromised, malefactors can gain the ability to scan email content across entire businesses, or create email forwarding rules to execute a phishing campaign targeting the employee’s peers, while remaining undetected. These attacks can incur significant economic, brand, and compliance losses: some estimates suggest it could cost up to $2 million for an organization to conduct a large scale email compromise investigation including legal, forensics, data mining, manual review, notification, call center, and credit monitoring costs.

An example from last year: a group of nine Iranian nationals connected to the Mabna Institute illegally gained access to sensitive data from universities, at least 36 US businesses and private companies, and government organizations through Office 365 — acquiring research that the US had banned access to in Iran.

What can you do?
At the core of enforcing MFA on Office 365 is disabling basic authentication. Exchange Online added support for disabling basic authentication through “authentication policies” that are created on Office 365 and applied to users, so security teams need to ensure these policies are in place. However, defining and maintaining Exchange policies can be problematic because of their reliance on PowerShell; there is no corresponding graphical user interface for easy configuration.

The issue becomes easier to navigate when using client access policies from identity providers, which can be an effective approach to ensure that only MFA-enforced access is allowed through. These policies govern access to Office 365 based on attributes like client type, network location, user group membership, and password-only versus password-plus-MFA sign-ins. However, applying these client access policies comes with trade-offs: native email clients on macOS and Android, as well as older Windows Outlook app versions (older than Outlook 2013), are not built with ADAL support, cannot use modern authentication, and hence will be prevented from accessing Office 365.
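As a conceptual illustration (not any vendor’s actual policy engine), the evaluation such a client access policy performs can be thought of as a small predicate over the sign-in attributes listed above. The attribute and function names in this Python sketch are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class SignIn:
        client_type: str        # "modern_auth" or "legacy"
        on_corp_network: bool
        in_allowed_group: bool
        mfa_satisfied: bool

    def allow_office365(s: SignIn) -> bool:
        """Conceptual client access policy: only MFA-backed modern auth gets in."""
        if s.client_type == "legacy":
            return False        # basic auth cannot carry an MFA claim
        if not s.in_allowed_group:
            return False
        # Require a second factor everywhere except the corporate network.
        return s.mfa_satisfied or s.on_corp_network

Real identity providers express this logic declaratively rather than in code, but the effect is the same: legacy clients are denied outright, and everything else must prove MFA (or originate from a trusted network) before reaching Office 365.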

With attackers increasingly targeting corporate email, we need to give this issue the attention it deserves. Office 365 is the tip of the spear, as it is widely used and often attacked. MFA can be a robust control in preventing email-based breaches, but that only matters if it’s implemented effectively. It is critical that IT admins and security teams ensure full MFA enforcement by investing in configuring, patching, and testing their Office 365 implementations for security flaws.


Yassir Abousselham is the chief security officer at Okta. As CSO, Yassir is responsible for upholding the highest level of security standards for both Okta’s business and customers. Prior to Okta, Yassir served as chief information security officer at SoFi, managing the …

Article source: https://www.darkreading.com/perimeter/office-365-multifactor-authentication-done-right/a/d-id/1335039?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

More Supply, More Demand: Cybersecurity Skills Gap Remains

Although the number of programs for training workers in cybersecurity skills has increased, as has the number of graduates, the gap between supply and demand for cybersecurity-skilled workers is essentially unchanged, leaving companies to struggle.

Although a variety of initiatives have worked to increase the supply of cybersecurity professionals, demand for cybersecurity workers has also increased, leaving the overall landscape in the jobs market unchanged, according to a report released on June 25 by Burning Glass Technologies, a workforce analysis firm. 

The result: For every opening in cybersecurity-related professions, there are only 2.3 employed workers to pull from, almost exactly the same as three years ago. In the general US job market, the situation is not nearly so dire, with an average of almost six employed professionals who could fill each job opening. 

The shortfall exists despite efforts to increase training. The number of programs for training workers in cybersecurity skills grew by a third between 2013 and 2017, and the number of graduates increased by over 40%; yet the number of job postings requiring cybersecurity skills grew 94%, says Will Markow, author of the report and manager of client strategy for Burning Glass Technologies.

“The problem is that the demand growth has been so strong and the evolution of skill requirements has been so rapid that we still have not seen a significant budge in the ratio of supply-to-demand for cybersecurity workers,” he says.

The shortage of skilled cybersecurity workers has been a common theme over the past decade and continues to affect businesses. While a variety of organizations — from manufacturers to municipalities and from financial institutions to small businesses — are facing significant damages from attacks, almost three-quarters of organizations have a shortage in cybersecurity workers, according to a survey published in May by the Information Systems Security Association (ISSA).

In many ways, the problem is the fast-evolving nature of the discipline — it remains nearly impossible to hire a worker trained in the exact mix of skills that a business believes it needs, Candy Alexander, executive cybersecurity consultant and president of ISSA, said in a statement. 

“Organizations are looking at the cybersecurity skills crisis in the wrong way: it is a business, not a technical, issue,” she said. “Business executives need to acknowledge that they have a key role to play in addressing this problem by investing in their people.”

Businesses need to focus on regularly training workers and allowing them to advance their skill sets. While 93% of cybersecurity practitioners cite a need to keep up their skills, about two-thirds of workers say their daily duties make it impossible for them to do so, according to the ISSA’s report.

Such training is necessary, says Burning Glass’s Markow. The skills required of cybersecurity professionals are in a constant state of flux. Four years ago, the most in-demand skills included Python, HIPAA, risk management, and internal auditing, showing that most companies had a compliance-focused approach to security. Cybersecurity jobs that required expertise in information-systems management, information assurance, or Sarbanes-Oxley compliance were the hardest to fill. 

Changing Landscape
In 2019, the picture has changed. Companies are focused on specific security disciplines, with public-cloud security, Internet of Things security, and cybersecurity strategy topping the list of in-demand skills, according to Burning Glass’s latest report.

“We’re now seeing a shift toward managing risk associated with cyber threats and adopting risk management frameworks,” says Markow. “Not just checking off a box, but helping the organization understand where they are facing the most critical risks and benchmarking the tolerance for those risks.”

Automation remains a key skill demanded by companies. Job postings requiring automation skills grew 255% over the past six years, compared with 133% for risk-management skills and 94% for cybersecurity in general. The most in-demand skills within automation are Python, Perl, Java, and Splunk, according to the Burning Glass report. Even four years ago, the company’s 2015 report noted that knowledge of the Python programming language had become a top job skill, suggesting that the move toward automation was already under way.

For cybersecurity workers who can develop automation skills, the payoff can be significant: roles that include an automation component command an average salary premium of $14,000.

For companies, the choice is a trade-off, Markow says. 

“If you want to automate all of these skills and those processes, you have [a] significant premium to do so,” he says. “Employers have to determine whether they are willing to have the trade-off of paying less for the workers they do not have to hire by paying more for the workers that they do hire.”

The report also sheds light on a common trend. The shortage in cybersecurity workers has driven many companies to use managed and professional security services, raising the demand for — and cost of — those professionals. The result is a self-supporting feedback loop with higher salaries offered by professional services driving cybersecurity professionals into those positions, leaving companies with few options but to use such services to eliminate the gap. 

“Many workers will be scooped up by professional services firms, and then they will support a broad range of firms who cannot bring talent in-house, either because there is not enough talent or the talent is too expensive,” Markow says. “It’s not an uncommon scenario. We see that across many industries. It’s always a challenge when there is a very small workforce to prevent those workers from becoming concentrated in professional services firms.”

For companies looking to pull away from a reliance on professional firms, the report offers some advice. Companies should identify internal employees who may have interest in being trained in security. In addition, they need to focus on their cybersecurity pipeline, developing relationships with local universities and programs that train workers in cybersecurity disciplines.

“It is a very thorny problem without any easy solution,” Markow says. “But companies that start with a general understanding of the key demands of the industry, how that is evolving over time and into the future, can set their expectations and start on the path to developing talent.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/careers-and-people/more-supply-more-demand-cybersecurity-skills-gap-remains/d/d-id/1335071?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple