
Commercial Spyware Uses WhatsApp Flaw to Infect Phones

A single flaw allowed attackers – thought to be linked to a government – to target human rights workers and install surveillance software simply by placing a call. The victims did not even have to answer.

A previously undiscovered flaw in the WhatsApp messaging application allowed an attacker to target human rights activists and lawyers by compromising mobile phones and installing commercial-grade spyware just by making a call, Facebook and independent researchers stated on Tuesday.

A variety of government agencies, security companies, and digital rights activists warned WhatsApp users of the seriousness of the issue, although users have been protected since the Facebook subsidiary blocked the attack vector on its network late last week, the company said in a statement. WhatsApp briefed several human rights organizations on the attack over the past few days.

“We believe a select number of users were targeted through this vulnerability by an advanced cyber actor,” the company said. “The attack has all the hallmarks of a private company that reportedly works with governments to deliver spyware that takes over the functions of mobile phone operating systems.”

The attack shows the dangers of zero-day vulnerabilities, which are often sold to private companies and government agencies. The current exploit appears to be part of a spyware program called Pegasus, developed by Israeli cyber-offense firm NSO Group and sold to governments for surveillance purposes. The NSO Group, and other offensive tool providers, incorporate exploits for undiscovered security issues into their attack tools to give their customers the ability to hack into the technology used by targeted citizens and companies. 

The University of Toronto’s Citizen Lab, a digital-rights research group, warned that evidence suggests that a human-rights lawyer was targeted by the attack over the weekend. On May 14, both the UK National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency warned users to upgrade to the latest version of WhatsApp.

“This new type of attack is deeply worrying and shows how even the most trusted mobile apps and platforms can be vulnerable,” Mike Campin, vice president of engineering for security firm Wandera, said in a statement. “While this attack is based on a previously identified exploit known as Pegasus, the fact that it has been repackaged into a form that can be delivered via a simple WhatsApp call has shocked many.”

WhatsApp is considered a fairly secure yet easy-to-use messaging application, so many activists, journalists, and dissidents use it to protect their communications. It remains unclear how many people were targeted in the latest attack.

The Financial Times, which broke the story on May 13, noted that WhatsApp has been targeted successfully in the past with attacks that allowed specially crafted text messages to force affected phones to download the attacker’s spyware. WhatsApp has not yet estimated the number of people affected.

By all accounts, WhatsApp acted quickly and blunted the impact of the attack, but parent company Facebook gave very few details of the vulnerability or what happened. A vulnerability report published by the company on May 13 consisted of two sentences:

“A buffer overflow vulnerability in WhatsApp VOIP stack allowed remote code execution via specially crafted series of SRTCP packets sent to a target phone number,” the company stated, adding a second line listing the affected versions.

WhatsApp uses the secure real-time transport control protocol, or SRTCP, to establish connections between clients and allow for audio and video calls. In this case, the code used to handle incoming data had a buffer overflow vulnerability, says John Kozyrakis, staff research engineer at Synopsys.

“Buffer overflow bugs are very common in code that parses incoming packets of complicated protocols due to the large attack surface,” he says, adding that by crafting a series of malicious SRTCP packets and sending them to a client identified by a phone number, an attacker could exploit the vulnerability. “The client is going to process these malicious packets and, due to the buffer overflow bug, it will allow the attacker to also execute arbitrary code on the user’s device.”
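
The class of bug is easy to illustrate. Below is a minimal, hypothetical Python sketch of the length validation a packet parser needs; the real WhatsApp code is native C/C++, and omitting exactly this kind of check is what lets an oversized or malformed packet overwrite memory beyond a fixed-size receive buffer. The packet format and field names here are invented for illustration only.

```python
import struct

MAX_PAYLOAD = 1024  # hypothetical size of the fixed receive buffer

def parse_packet(data: bytes) -> bytes:
    """Parse a toy packet: a 2-byte big-endian length field, then the payload.

    A vulnerable C parser typically trusts the attacker-controlled length
    field and copies that many bytes into a fixed-size buffer; the two
    checks below are the validation such code omits.
    """
    if len(data) < 2:
        raise ValueError("packet too short to contain a header")
    (declared_len,) = struct.unpack("!H", data[:2])
    payload = data[2:]
    if declared_len != len(payload):   # check 1: length field must match what arrived
        raise ValueError("declared length does not match received bytes")
    if declared_len > MAX_PAYLOAD:     # check 2: payload must fit the buffer
        raise ValueError("payload larger than receive buffer")
    return payload
```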

An analysis by network security firm Check Point Software Technologies gives more details of the attack and how Facebook patched the WhatsApp application.

Despite the attack, security professionals continue to describe WhatsApp as a secure messaging application.

“WhatsApp remains one of the most safe and secure messaging clients, as far as security features are concerned, like encryption, message authentication, integrity, (and) replay protection,” Kozyrakis says. “[Another messaging app called Signal] uses the ZRTP protocol instead of SRTCP and may be considered more safe, while other messenger apps less so.”

WhatsApp’s quick response has likely cost the attackers, Bob Rudis, chief data scientist at vulnerability management firm Rapid7, said in a statement.

“This means they ‘burned’ the exploit—that is, wasted a valuable exploit on a campaign—since it’s now widely known and will get lots of attention and be patched by users pretty quickly,” he said. “These exploits tend to not be cheap so unless they really did get to their intended victims and find whatever they were looking for, this was a potentially big fail on their part.”

 


Article source: https://www.darkreading.com/attacks-breaches/commercial-spyware-uses-whatsapp-flaw-to-infect-phones/d/d-id/1334713?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Resolution Requires Cybersecurity Training for Members of Congress

A bipartisan resolution would mandate IT and cybersecurity training for all members of Congress, their staff, and employees.

A new resolution in the US House of Representatives would require all members of Congress and their staff members to receive annual training in information technology and cybersecurity.

The Congressional Cybersecurity Training Resolution, co-sponsored by Reps. Kathleen Rice (D-NY) and John Katko (R-NY), requires the chief administrative officer of the House to conduct the training. The two sponsoring representatives said that it is imperative that members of the House be able to recognize cyber intrusions and that the responsibility can best be met if they undergo the same sort of training now required of their staff members.


Article source: https://www.darkreading.com/vulnerabilities---threats/resolution-requires-cybersecurity-training-for-members-of-congress/d/d-id/1334716?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Website Attack Attempts Rose by 69% in 2018

Millions of websites have been compromised, but the most likely malware isn’t cryptomining: it’s quietly stealing files and redirecting traffic, a new SiteLock report shows.

Websites suffer an average of 62 serious attack threats per day each, which adds up to roughly 376 million attack attempts daily, according to a new study of more than 6 million websites worldwide.

“Even though the number seems a little small, 62 attacks is still a pretty big number,” says Monique Becenti, product and channel marketing specialist at SiteLock, which published the study in a report today.

Those attacks weren’t concentrated in ransomware and cryptomining malware, but in such “classic” techniques as backdoors, shells, and JavaScript files. The JavaScript attacks are notable because they tend not to directly attack the website, but to hijack visitor traffic and send them to alternate, illegitimate destinations.

The report details both the types of stealthy attack seen in 2018 and the risk factors for compromise, which boil down to site complexity, site popularity, and site composition, or the software or CMS used to build the site.

According to the report, sites built with one of the three leading CMS platforms — Drupal, Joomla, and WordPress — are from 1.6 to 2.2 times more likely to be infected with malware than the average site. The issue, though, is not as simple as a problem with vulnerable CMS platforms, according to Becenti.

“Core files are starting to update a lot faster, as far as checking the security vulnerabilities,” she says of the major CMS platforms. “However, one of the primary culprits I feel we have to be worried about are plug-ins and themes.”

Becenti notes that the three major CMS platforms are much more diligent than they once were about patching vulnerabilities and sending updates in a timely fashion. The hundreds or thousands of third-party plug-ins that add functionality to the core platforms, though, are where many vulnerabilities are introduced — and where many of those vulnerabilities remain for months or years without being patched.

“Website attack attempts per day grew by 59% from January 2018 to December 2018,” according to the report.

In terms of the total number of sites surveyed, the report says that approximately 1% of all sites are infected with malware at any given time — meaning roughly 17.6 million websites worldwide are struggling with an infection on any given day. And of those infected sites, 50% had at least one backdoor, 48% had at least one shell script, 47% had at least one file-hacker, and 46% had at least one malicious evaluation request.

And SiteLock said cryptomining malware was found on just 2% of infected sites.


Article source: https://www.darkreading.com/vulnerabilities---threats/website-attack-attempts-rose-by-69--in-2018/d/d-id/1334714?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Buffer the Intel flayer: Chipzilla, Microsoft, Linux world, etc emit fixes for yet more data-leaking processor flaws

Intel on Tuesday plans to release a set of processor microcode fixes, in conjunction with operating system and hypervisor patches from vendors like Microsoft and those distributing Linux and BSD code, to address a novel set of side-channel attacks that allow microarchitecture data sampling (MDS).

These side-channel holes can be potentially exploited by malicious software or rogue users already on a vulnerable machine to extract information, such as passwords and other secrets, from memory it is not allowed to touch.

MDS provides a way to expose sensitive data held in a processor’s internal structures, such as its store buffers, fill buffers, and load buffers. The various MDS techniques, developed by some of the same researchers who revealed the Spectre and Meltdown flaws last year, provide a link between memory side-channel attacks and the transient execution attacks exemplified by Meltdown.

Intel’s patch dump coincides with the expected release of two research papers by computer scientists – summarized at cpu.fail and zombieloadattack.com – detailing how the vulnerabilities arise from speculative execution, the shortcut modern processors take by executing instructions before they’re needed, which has opened new avenues of attack. The vulnerabilities appear to be limited to Intel hardware; the researchers say they were unable to replicate any of their attack primitives on Arm or AMD-designed processors.

Chipzilla maintains the vulnerabilities being disclosed today are difficult to exploit outside of a laboratory environment.

MDS describes a way to sample snippets of data as opposed to grabbing it all at once; it’s more like eavesdropping on privileged communications than cracking a safe. As a result, it’s not easy to target specific data or differentiate valuable information from background noise.

To make such attacks more efficient, an attacker might seek to have the targeted app running on the same physical core as the malware, on an adjacent hyperthread, so the malicious code can run load and flush operations repeatedly.

The chipmaker has classified three of the relevant CVEs as medium severity and the fourth as low severity, with CVSS scores ranging from 3.8 to 6.5. The company contends its recent model chips have hardware mitigations for MDS in place.

“MDS is already addressed at the hardware level in many of our recent 8th and 9th Generation Intel Core processors, as well as the 2nd Generation Intel Xeon Scalable Processor Family,” an Intel spokesperson told The Register in an emailed statement.

“For other affected products, mitigation is available through microcode updates, coupled with corresponding updates to operating system and hypervisor software that are available starting today.”

Spectre/Meltdown redux?

The researchers who identified the flaws argue that hardware fixes for the Meltdown vulnerability implemented in Whiskey Lake and Coffee Lake CPUs are not enough and that software-based isolation of user and kernel space – which comes with a performance hit – needs to be enabled even on current processors.

Intel insists that recent steppings of its Whiskey Lake and Coffee Lake CPUs incorporate all the necessary hardware changes. However, the company acknowledges the microcode fixes may cause a performance hit in some circumstances for some workloads.

Chipzilla is expected to provide benchmark figures with its disclosure, but based on a discussion with Intel personnel, The Register understands that the microcode mitigations may cut chip performance in the WebXPRT 3 benchmark by about 3 per cent and in the Fio benchmark by 8 to 9 per cent.

So in short, the latest Whiskey Lake and Coffee Lake CPUs have mitigations built in; earlier processors will need to install microcode fixes. Operating systems and hypervisors need to be updated to work with the microcode updates to ensure they function properly; these patches are rolling out today from Microsoft, Apple, Google, Linux distributions, and others.
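
Admins who want to verify that a particular Linux box has picked up the new kernel patches can check sysfs, where patched kernels report MDS status alongside the existing Spectre and Meltdown entries. A minimal sketch, assuming a Linux system; on kernels that predate the fixes the file simply will not exist:

```python
from pathlib import Path

# Kernels carrying the MDS patches report mitigation status here; the file is
# absent on kernels built before the fixes landed.
MDS_SYSFS = Path("/sys/devices/system/cpu/vulnerabilities/mds")

def mds_status() -> str:
    if not MDS_SYSFS.exists():
        return "no MDS entry: kernel likely predates the patches"
    return MDS_SYSFS.read_text().strip()

if __name__ == "__main__":
    print("MDS:", mds_status())
```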

The following flaws, which as we stressed above depend on local code execution, are slated to be addressed:

  • Microarchitectural Store Buffer Data Sampling (MSBDS) – CVE-2018-12126
  • Microarchitectural Fill Buffer Data Sampling (MFBDS) – CVE-2018-12130
  • Microarchitectural Load Port Data Sampling (MLPDS) – CVE-2018-12127
  • Microarchitectural Data Sampling Uncacheable Memory (MDSUM) – CVE-2019-11091

The vulnerabilities are described in two papers: Store-to-Leak Forwarding: Leaking Data on Meltdown-resistant CPUs, by Graz University of Technology researchers Michael Schwarz, Claudio Canella, Lukas Giner, and Daniel Gruss; and ZombieLoad: Cross-Privilege-Boundary Data Sampling, by Michael Schwarz, Moritz Lipp and Daniel Gruss from Graz University of Technology, Daniel Moghimi from Worcester Polytechnic Institute, Julian Stecklina and Thomas Prescher from Cyberus Technology, and Jo Van Bulck from KU Leuven.

We got there first, says Chipzilla

According to Intel, this research touches on techniques first identified internally by Intel researchers Ke Sun, Henrique Kawakami, Kekai Hu and Rodrigo Branco and reported independently by academic researchers. The company intends to release its own white paper on these issues aimed at developers.

In an email to The Register, Daniel Gruss said: “We reported LFB [line fill buffer] leakage to Intel in March 2018, they acknowledged it, and we continued to explore it. We found the store-to-leak attack and reported it in January 2019, then we continued and found the ZombieLoad attack and reported it to Intel in April 2019.”

While Intel sponsored the researchers at TU Graz and KU Leuven, it did not disclose its findings or work with the academics on the techniques being disclosed, according to Gruss.

The Store-to-Leak Forwarding paper describes the store buffer as a microarchitecture element that turns a stream of store operations into serialized data and masks the latency from writing the values to memory. It stores data asynchronously so the CPU can do out-of-order execution. The operations for reassembling everything in the right order make Meltdown-like unauthorized memory reads possible.

The researchers describe a technique called Data Bounce that can access supposedly inaccessible kernel addresses. “With Data Bounce we break KASLR (Kernel address space layout randomization), reveal the address space of Intel SGX enclaves, and even break ASLR (address space layout randomization) from JavaScript,” the paper says.

Data Bounce is also invisible to the operating system; it doesn’t involve a syscall and doesn’t trigger an exception.

The paper also describes a technique called Fetch+Bounce for monitoring kernel activity through the store buffer and translation lookaside buffer (TLB). A third technique combines speculative execution with Fetch+Bounce to leak arbitrary data from memory.

“Speculative Fetch+Bounce is a novel way to exploit Spectre. Instead of using the cache as a covert channel in a Spectre attack, we leverage the TLB to encode the leaked data,” the Store-to-Leak Forwarding paper explains. “The advantage of Speculative Fetch+Bounce over the original Spectre attack is that there is no requirement for shared memory between user and kernel space.”


The second paper, ZombieLoad, exploits the logic of the processor’s fill-buffer. It’s a transient-execution attack that exposes the values of memory load operations on the physical CPU, without respecting process barriers or privilege levels.

The attack, the researchers say, leaks user-space processes, CPU protection rings, virtual machines and SGX enclaves. “We demonstrated the immense attack potential by monitoring browser behaviour, extracting AES keys, establishing cross-VM covert channels or recovering SGX sealing keys,” the ZombieLoad paper explains. “Finally, we conclude that disabling hyperthreading is the only possible workaround to mitigate ZombieLoad on current processors.”

According to Gruss, the researchers also discovered that the line-fill buffer can be used to bypass Foreshadow mitigations, though that’s not detailed in either paper.

Intel disagrees about the need to disable hyperthreading, and says it plans to add additional hardware defenses to address these vulnerabilities into future processors. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/14/intel_sidechannel_vulnerability/

Why AI Will Create Far More Jobs Than It Replaces

Just as spreadsheets and personal computers created a job boom in the ’70s, so too will artificial intelligence spur security analysts’ ability to defend against advanced threats.

Teaching a machine to think like a human is the promise of artificial intelligence (AI). Using that narrow definition, it naturally follows that AI’s future could ultimately include the idling of countless millions of workers who are gainfully employed today.

These concerns about job loss are logical and unavoidable, but in my opinion, they are as unfounded as they are provocative. While someday in the distant future AI systems may start to approach the holy grail of emulating the thought process and analytical capabilities of a human, today’s capabilities put AI squarely in the category of a beneficial, time-saving tool rather than a human replacement.

Beginning with the Industrial Revolution and continuing into modern times, machinery has replaced workers. Automated looms disrupted the textile industry, and mass production disrupted the automobile industry. Desktop computing and word processing cut short many stenographers’ careers, and other tools such as email and voicemail have imperiled letter carriers and administrative assistants.

But AI is different. AI enables the creation of work products that often cannot be replicated by any number of able-bodied humans. How many workers would it take to approximate the search capabilities of Google’s AI algorithms? What personal shopper is so gifted that he or she could build bundles and offers with the targeted appeal that Amazon does a thousand times per second? What linguist is well-versed enough to understand the native tongue of nearly any soul on this earth?

The question is not how many people will lose their job as a result of AI, but how can people learn to use AI as a tool to improve the quality of the human existence?

Birth of the Computer Gives Rise to the Spreadsheet
Many people are surprised to learn that the word “computer” is not a new term but first appeared in the early 17th century to describe throngs of low-paid workers who toiled into the night performing repetitive calculations. This miserable task was ultimately laid to rest in the late 1970s with the invention of VisiCalc, the first modern spreadsheet. In fact, before the commercial introduction of spreadsheets for personal computers, people avoided the tedium of creating forecasts, budgets, or other computation-rich numerical models. The spreadsheet was a new tool. It made these numerical models easier to create and easier to modify. The entire concept of “what-if” analysis was essentially spawned by the advent of spreadsheets.

Did this result in large job losses? Obviously not. It made financial analysts and others more efficient and more productive. They had to learn how to use this new tool and had to evolve how they did their jobs, but in the end what the spreadsheet really did was to elevate their thinking, allowing them to spend more time on real problems, analysis, what-if scenarios, and the decisions facilitated by their results versus the manual, laborious effort of endless calculations.

The same concept applies to AI. AI is a tool that people need to learn how to use and how to apply to what they’re already doing. It can improve efficiency and productivity and relieve us of some of the laborious, tedious aspects of security analytics.

And just as spreadsheets created new jobs for people who became experts at using the new tool, AI will also create new jobs — jobs focused on applying AI to security, improving AI techniques to do a better job, and on maintaining these new tools and the underlying AI technology, including tuning and data collection.

To be more precise about the new opportunities, we should separate AI-birthed jobs into two categories:

  • Jobs for engineers who are knowledgeable about AI and apply core AI capabilities across a wide range of fields, and
  • Jobs for people who aren’t necessarily experts in AI but use AI-empowered applications where another developer has applied AI to a specific field — for example, security analysts who use AI-driven security tools.

A plethora of new jobs will be created for those with expertise in applying core AI technology to new fields and applications. Experts will be needed to determine the best type of AI to use for a particular application (deep learning, expert systems, machine learning), develop and train the models, and maintain and retrain the systems as needed. In fields such as security, where vendors have empowered security software with AI, it’s up to users — the security analysts — to understand the new capabilities and put them to the best possible use, just as their predecessors did with spreadsheets.

Where the AI Jobs Will Be
What AI does very well is to classify, predict, and automate repetitive tasks. Currently, for example, enterprise security analysts are faced with seemingly endless alerts about anomalous events or suspicious activities taking place within their organization. An estimated 99% of the time, these alerts are benign, false positives, according to Lastline’s analysis of customer network traffic. Though I’ve never done it myself, I have been assured that sifting through false positives is one of the least agreeable and lowest-productivity jobs ever conceived. No product is built nor problem solved, no profit is earned, and 99% of the time one’s effort is totally for naught.
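
To make that concrete, here is a deliberately tiny sketch of the kind of classifier that can take a first pass over alerts. The feature names and data are entirely synthetic; the point is only that separating the 99% of benign alerts from the handful worth an analyst’s time is a classification problem machines handle well.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors for alerts: [bytes_out, unique_dest_ports, after_hours, signature_score]
# All values are synthetic, purely to show the shape of the triage problem.
X = np.array([
    [1_200,    1, 0, 0.10],   # routine traffic
    [900,      2, 0, 0.20],
    [250_000, 40, 1, 0.90],   # suspicious exfiltration pattern
    [180_000, 35, 1, 0.80],
])
y = np.array([0, 0, 1, 1])    # 0 = false positive, 1 = escalate to an analyst

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

new_alert = np.array([[2_000, 1, 0, 0.15]])
print("escalate to analyst" if clf.predict(new_alert)[0] else "auto-close as benign")
```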

Education is another well-recognized field where AI is creating new jobs. Currently, across the US, the top two positions in the list of academic openings are for security and machine learning experts. Universities simply don’t have enough people and can’t find professors to hire and to teach these critically important subjects. AI-powered job growth will grind to a halt and ultimately be frozen in place if higher education capacity is constrained any further.

To conclude, AI presents a tremendous opportunity for enterprising people. Employees have the opportunity to dive into a new field and abstract their job to a new, higher level of analysis and strategic value. Employers need to support these moves, allow time and flexibility, and generally stay open to employees reinventing themselves and their jobs as they embrace new technologies that will pay handsome dividends.

People won’t be replaced by AI and security analysts and educators won’t lose out to AI-driven security software. AI is here to stay and will be a powerful tool for improving our collective ability to defend against advanced threats. AI must be understood — its capabilities and limitations — and embraced as a new tool that already has had and will continue to have profound impact on security and other fields. But it’s not something to be feared as a job killer. Nothing could be further from the truth.


Article source: https://www.darkreading.com/careers-and-people/why-ai-will-create-far-more-jobs-than-it-replaces/a/d-id/1334635?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Effective Pen Tests Follow These 7 Steps

Third-party pen tests are part of every comprehensive security plan. Here’s how to get the most from this mandatory investment.

There’s little debate about whether penetration tests should be part of a comprehensive cybersecurity plan. It’s critical that defensive systems be tested by real-world pros so vulnerabilities and weaknesses can be found and corrected.

Instead, the question is how to get the most from the investment.

In all but the rarest cases, a pen test means having a third party explore the strength of an organization’s security. Many of the keys to effectiveness have been repeated as business wisdom so often they’ve become cliché: Know what you want, know the group you’re hiring, communicate clearly, write it down, and have a plan for what you’ll do with the results.

[Hear John Sawyer, director of red team services at IOActive, present Getting the Most Out of Penetration Testing and Red Teaming at Interop 2019 next week]

With each of these points, and the others on this list, factors specific to third-party pen tests need to be considered. This list, cherry-picked from conversations, conference panels, Internet articles, and personal experience, includes the basics about what an organization needs to think through before launching a third-party pen test. What other factors should be on this list? Let us know in the Comments section, below.


 


Article source: https://www.darkreading.com/threat-intelligence/effective-pen-tests-follow-these-7-steps----/d/d-id/1334697?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Missing in Action: Cybersecurity Professionals

Just as every organization security team’s needs are unique, so are the reasons for the shortage of candidates for open positions. Here are five strategies to help you close the gap.

For the past several years, security operations center (SOC) teams have consistently reported that one of the biggest obstacles they face is the lack of qualified candidates for open positions. With the increasing volume and sophistication of threats facing organizations, this problem has evolved from an inconvenience to a full-blown epidemic.

According to (ISC)2 research, the shortage of cybersecurity professionals is currently close to 3 million globally and is expected to increase in the years to come. Given the increasingly digital-first orientation of the under-30 population, why is the security community experiencing this crisis of candidates, and more importantly, how can we close the gap?

Education
One of the most important elements used to evaluate candidates is their level of education. Over the past few decades, higher education has gone from being a luxury only a few could afford to an absolute requirement for gainful employment. Unfortunately, even though the job market has emphasized higher education as table stakes for most knowledge-based positions, not all forms of education are viewed as equal.

Typically, a college education is associated with a degree from a four-year institution, which many still view as the only true form of academic achievement. However, when recruiting for information security positions, hiring managers should look beyond these institutions for candidates, and also consider technical and trade schools as talent pools.

Historically, universities have focused on teaching more theoretical concepts. While these are important for developing critical thinking skills, they can leave some graduates underprepared for a hands-on career in information security. Trade schools, on the other hand, have always emphasized hands-on experience, which can prepare graduates to hit the ground running in their new careers. By focusing exclusively on graduates from four-year institutions, organizations are not only shrinking their talent pool but may also miss out on qualified candidates who may not have had the inclination or financial resources to acquire a traditional bachelor’s degree.

Experience
Education and experience go hand in hand. Every organization wants to ensure their staff has the tools to perform at a high level. However, many new graduates lack real-world experience, which may prevent quality candidates from applying to fill open positions. 

If experience is non-negotiable, organizations should consider partnering with universities, colleges, and trade schools to create programs that will produce graduates with the skill sets they desire. Through internships and work-study programs, organizations can train their prospects in their proprietary processes and procedures, and assess which possess the requisite tools to be brought on full-time, while students gain real-world work experience necessary to become gainfully employed.

Professional Burnout
One of the most overlooked factors contributing to the shortage of qualified information security professionals is burnout. The lack of staff within security teams and the firehose volume of incidents has forced existing personnel to take on increasing responsibilities and workloads. Often, this will lead to professional burnout and cause morale to drop. Together, low morale and burnout can lead to career train wrecks. It also explains why more individuals are not opting to join the security workforce. However, the industry as a whole can do something about it by taking a few simple steps:

Mentorship
Creating a mentorship program within the organization that partners junior security professionals with senior peers provides two important benefits. Junior security analysts gain the experience they need to become more effective team members, while senior staffers get the additional support they desperately need to keep from burning out. In addition, this partnership will improve camaraderie and help boost morale.

Mentorship programs also benefit the organization by demonstrating to potential applicants that they will be valued, and have the opportunity to work in a team-oriented environment. With competition for qualified candidates at an all-time high, creating a culture that promotes professional development through on-the-job training and mentorship will attract applicants who are interested in a long-term career with the organization.

Automation
By its very nature, automation helps people do more with less. In SOCs, the automation of business processes can help free up security staff to not only perform more value-added tasks but also allow for a better work-life balance, and reduce the risk of the previously discussed burnout syndrome.

Automation can be used across a wide range of repetitive, early-stage functions such as prioritizing security events and vulnerability assessment findings, creating incident response workflows, and tuning security rulesets.
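
As a deliberately simplistic sketch of what automating that early-stage prioritization can look like, the rule below ranks scanner findings by severity and exposure. The field names and weights are invented for illustration and would be tuned to an organization’s own environment.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cvss: float           # base score reported by the scanner
    asset_critical: bool   # is the host business-critical?
    internet_facing: bool  # is the host exposed to the internet?

def triage_score(f: Finding) -> float:
    """Toy prioritization rule: start from CVSS, boost exposed and critical assets."""
    score = f.cvss
    if f.internet_facing:
        score += 2.0
    if f.asset_critical:
        score += 1.5
    return min(score, 10.0)

findings = [
    Finding("intranet-wiki", 9.8, asset_critical=False, internet_facing=False),
    Finding("payments-api", 7.5, asset_critical=True, internet_facing=True),
]
for f in sorted(findings, key=triage_score, reverse=True):
    print(f"{f.host}: work-queue priority {triage_score(f):.1f}")
```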


Article source: https://www.darkreading.com/risk/missing-in-action-cybersecurity-professionals/a/d-id/1334641?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Feds hook ELECTRICFISH, new Windows malware from North Korea

The FBI and Department of Homeland Security have identified (Malware Analysis Report AR19-129A) a new strain of malware from North Korea, the latest in a long line of cyber attacks from the country.

The Windows malware, dubbed ELECTRICFISH, sets up a tunnel between a machine on the victim’s network and the attacker’s system, enabling the attacker to receive network traffic from the victim.

Once it has a foothold, it then tries to connect to a source IP address within the victim’s network, and a destination address owned by the attacker. The attacker can also configure a proxy to act as an intermediary between the infected computer and the destination IP, avoiding the need for authentication to get outside the victim’s network. The US CERT advisory says:

If a connection is made to both the source and destination IPs, this malicious utility will implement a custom protocol, which will allow traffic to rapidly and efficiently be funneled between two machines.
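
The advisory does not publish the custom protocol itself, but the underlying idea of “funneling” traffic is simply a two-way byte relay between two connections. A generic, hypothetical sketch of that primitive follows; the addresses are invented, and this is the plumbing any tunnel or proxy uses, not ELECTRICFISH’s actual code.

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes in one direction until either side closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def funnel(inside_addr: tuple, outside_addr: tuple) -> None:
    """Connect to both endpoints and relay traffic between them."""
    inside = socket.create_connection(inside_addr)
    outside = socket.create_connection(outside_addr)
    threading.Thread(target=pump, args=(inside, outside), daemon=True).start()
    pump(outside, inside)

# Example with invented addresses:
# funnel(("10.0.0.5", 445), ("203.0.113.9", 8080))
```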

How to avoid infection

Aside from keeping their antivirus signatures up-to-date, the advisory recommends:

  • patching operating systems and restricting permissions to install and run unwanted software;
  • thinking twice before opening email attachments, and being cautious when using removable media;
  • for admins, disabling file- and printer-sharing services, or at least using strong passwords or Active Directory authentication if leaving them on.

HIDDEN COBRA

This isn’t the first advisory that the DHS has issued concerning North Korean hackers. It has a whole codename dedicated to the country’s online shenanigans: HIDDEN COBRA. The most recent advisory it issued before this one, on 10 April 2019, covered a malicious executable file that collected information about the victim’s machine and sent it back to the attacker’s IP addresses.

MITRE associates HIDDEN COBRA with other names that have surfaced in the press in relation to North Korea: the Lazarus Group, Guardians of Peace, ZINC and NICKEL ACADEMY.

The group, active since at least 2009, has been blamed for the 2014 attack against Sony Pictures Entertainment, and for the WannaCry ransomware.

The list of DHS advisories on North Korea includes reports of remote administration tools (RATs), Trojans, worms, and DDoS botnets.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wfsHg1d2r44/

Windows 10 brings password-free access another step closer

Microsoft hammered another nail in the password’s coffin by winning a certification for Windows Hello that will make it easier for people to log into Windows machines. 

Windows Hello is the authentication system in Windows 10, and Microsoft introduced it to wean us off password-based access. It enables machines with the right hardware reader or camera to scan your fingerprint or face to access Windows 10 and your Microsoft account. You can also use it to access third-party services.

This month, the company earned FIDO2 certification for Windows Hello. By becoming a FIDO2 certified authenticator, Microsoft has just enabled 800 million Windows 10 users to use a hardware security key with Windows Hello’s password-free system.

FIDO aims to make logins easier and more secure

To understand why this is important, we need to dig into FIDO, which stands for Fast IDentity Online. The FIDO Alliance is an industry group backed by large tech players that aims to make logins easier and more secure. 

Since the FIDO Alliance started in 2013, it has released three specifications. The first, announced in 2014, was the Universal Authentication Framework (UAF). That standard focused on using biometrics like your fingerprint for password-free authentication.

The second standard was Universal Second Factor (U2F). This let people authenticate themselves using hardware devices like USB keys that you could plug into your computer, or near-field communication (NFC) devices that you could tap on a hardware-based reader. Google and Yubico developed this technology for two-factor authentication, meaning you’d use it as an extra layer of protection on top of your regular password.

Ideally, though, we’d like to do away with passwords altogether. That’s where FIDO2 comes in. It uses a protocol called Web Authentication (WebAuthn), which takes the digital key stored on your USB or other hardware key and delivers it directly to the web application you want to access.
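
The full WebAuthn exchange has more moving parts (origin binding, attestation, signature counters), but the core principle is public-key challenge-response: the private key never leaves the authenticator, and the relying party only ever stores and checks the public half. Below is a stripped-down sketch of just that principle using the Python cryptography library; both key halves live in one process here purely for illustration.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator generates a key pair; only the public key
# is ever sent to the server.
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# Login: the server issues a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it with the on-device private key...
assertion = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature against the stored public key.
try:
    registered_public_key.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
    print("login accepted: no password or shared secret ever crossed the wire")
except InvalidSignature:
    print("login rejected")
```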

What this means for you is that if you have a hardware key, a browser, and a web application all supporting FIDO2, you’ll be able to log into your web applications without trying to remember your pesky passwords. 

Microsoft initially announced support for FIDO2 in November 2018. Then, you could use your hardware key with the Edge browser to log into your Microsoft account on the web. Windows Hello already allowed you to use your face or fingerprint (with a suitably equipped device) to log into your computer and Microsoft account. 

Hello password-less web

This month’s announcement now means you can log into your Windows 10 machine and Microsoft account using your hardware key and Windows Hello. That will please Windows Hello users who don’t have a camera for facial recognition or a fingerprint reader for scanning. Not all Windows 10 users are Windows Hello users, but this development makes it easier for more Microsoft users to adopt the system and move away from password-based access altogether.

It also adds more support for a standard that will help us move away from the password altogether. WebAuthn is an official standard after the W3C ratified it in March 2019, so the consensus for FIDO2 is strong. FIDO2 is also backward-compatible with UAF and U2F, meaning that people who’ve already invested in those systems don’t lose out.

Not all web applications support FIDO2, but things look promising because developers can turn on support using a simple JavaScript API call. 

Firefox users win too

The company also announced today it would let Firefox users log into their Microsoft accounts using FIDO2, with Chrome support to follow soon. So if you’re not an Edge fan, you can still access your Microsoft goodies that way.

There are risks with FIDO2. You could lose your hardware key, and if someone steals it, they can theoretically log in as you. I say ‘theoretically’ because there are mitigating steps you can take to avoid this, such as making a backup key and using a hardware key with built-in fingerprint recognition. It’s certainly more secure than relying entirely on a password that someone halfway across the world can steal, and it’s more convenient to use.

Does this mean the end of the password as we know it? No. This probably won’t happen for years, given the inertia inherent in thousands of online applications and services. But support from Microsoft, with its massive user base, is a step in the right direction. 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SlMS3cUq0pA/

White label SOS panic buttons can be hacked via SMS

A widely used panic alarm handed out to at least 10,000 elderly people in the UK can be remotely controlled by sending it simple SMS commands, researchers at Fidus Information Security have discovered.

The alarm – a small plastic pendant device with an SOS button in the middle – connects to 2G/GPRS cellular networks, which means it can be used anywhere without the need for an intermediary base station and provides a live status feed.

As well as being able to locate the wearer via GPS, it can also detect whether the wearer has taken a fall and comes with a microphone and speaker for two-way communication should an emergency be detected.

On the face of it, a potentially life-saving device, but also one whose unnamed maker doesn’t appear to have factored in even basic security.

Alarming oversights

The extent of the oversight is eye-opening, frankly. Armed with the phone number of the installed SIM (numbers are handed out in batches, meaning you can infer a range by knowing only one of them), the Fidus researchers were able to send the device documented SMS commands to do the following:

  • Call the device and have it answer, creating a “glorified wiretap” that can’t be detected.
  • Remove emergency contacts.
  • Disable GPRS, motion alarms, and fall detection.
  • Power off the device.
  • Remove any set PIN number.
  • Retrieve GPS data to work out where the wearer is located.

Fidus tested the theory by contacting real devices to see how many of the guessed phone numbers would respond, receiving replies from 7%, or 175 of the 2,500 numbers tested:

So this is 175 devices being used at the time of writing as an aid for vulnerable people; all identified at a minimal cost. The potential for harm is massive, and in less than a couple of hours, we could interact with 175 of these devices!

It should have been possible to prevent communication by setting a PIN number but it appears that many didn’t have one set, rendering the security useless.

However, even had one been set, Fidus discovered that it was possible to bypass this by issuing a factory reset with no authentication needed.

White label IoT

As a Chinese-made ‘white label’ device (one branded by numerous third parties), it doesn’t have an obvious name that would make it easy to identify.

In the UK, it’s offered under the following product brands – and probably many others – usually distributed by local councils:

  • Pebbell 2 – HoIP Telecom
  • Personal Alarm GPS Tracker with Fall Alert – Unforgettable
  • Footprint – Anywhere Care
  • GPS Tracker – Fall Alarm – Amazon/Tracker Expert
  • SureSafeGO 24/7 Connect ‘Anywhere’ Alarm – SureSafe
  • Ti-Voice – TrackIt24/7

Could the security flaws be fixed?

According to the researchers, for new devices that have yet to be sent to customers, yes. All that’s needed is a unique code printed on each device that would be required to make configuration changes.
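
A minimal sketch of how such a per-device code could be enforced in the pendant’s SMS handler is shown below; the code value and command format are hypothetical, invented purely to illustrate the fix the researchers describe.

```python
# Hypothetical firmware-side check: configuration commands are only honoured
# when they begin with the unique code printed on that specific device.
DEVICE_CODE = "A7F3-91QX"   # invented example of a per-device code

def handle_config_sms(message: str) -> str:
    parts = message.strip().split(" ", 1)
    if len(parts) != 2 or parts[0] != DEVICE_CODE:
        return "ignored: missing or incorrect device code"
    command = parts[1]
    # ...dispatch to the real command handlers (set PIN, add contact, etc.)...
    return f"accepted: {command}"

print(handle_config_sms("A7F3-91QX STATUS"))   # accepted
print(handle_config_sms("RESET"))              # ignored
```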

For the ones already out there, it is almost certainly too late, short of replacing them:

Any local authorities that are supplying these devices or employers who are using them to keep their workforce safe should be aware of the privacy and security problems and should probably switch to another device with security built from the ground up.

Fidus said it had contacted suppliers to point out the device’s risks, which resulted in some of them considering a recall. Others, however, failed to respond.

Because it doesn’t connect directly to the internet and uses SMS, perhaps its makers assumed it would be safe from remote attack. Internet of (insecure) Things laziness strikes again.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BOTi-dL5utQ/