
Fewer Vulnerabilities in Web Frameworks, but Exploits Remain Steady

Attackers continue to focus on web and application frameworks, such as Apache Struts and WordPress, even as the number of vulnerabilities in them declines, according to an analysis.

The number of vulnerabilities in major web-application frameworks has declined since peaking most recently in 2016, but attackers have remained focused on exploiting weaknesses in the software platforms, according to an analysis published by cybersecurity firm RiskSense on March 16.

The result is that while major frameworks such as Apache Struts and platforms such as WordPress have seen fewer overall vulnerabilities, the weaponization rate climbed to 8.6% in 2019, exceeding the 3.9% rate for the National Vulnerability Database as a whole. The data suggests that although the groups and organizations responsible for maintaining the frameworks have become better at securing the code, attackers remain focused on finding ways to use the shrinking pool of security bugs to compromise web application servers, says Wade Williamson, a researcher with RiskSense.

“Web application frameworks are the last piece of code that people pay attention to,” he says. “But they are Internet-facing, there are a lot of them, and they are easy to find once they are out there.”

The findings suggest that companies should take stock of their web application frameworks from a security standpoint. The typical website is scanned by automated attacks targeting exploitable vulnerabilities dozens of times a day, past research has shown.

Because developers typically do not maintain the framework itself, and producing patches for web application frameworks can sap a great deal of developer productivity, selecting the right platform for a company's web applications is extremely important, Williamson says.

“No matter how good of a developer you are, if there is a vulnerability in your framework, your application is going to be vulnerable,” he says. “As a developer and an organization, choosing a framework is a big deal — it is what the security of your apps will rely on.”

While the rate of exploitation — or weaponization, as RiskSense calls it — has increased, the absolute number of exploits has not risen by much. The increase in the rate of weaponization is more due to the drop in vulnerabilities in the frameworks overall — a positive sign.

However, WordPress, Apache Struts, and Drupal — along with their parent languages PHP and Java — continue to have the highest rates of weaponization, Williamson says. 

“We have been seeing very different types of problems in the past five years versus the past 10, but even as that changed, the problems with weaponization were still in the same spots,” he says. “The hot spots remained the same.”

It’s not just a measure of a framework’s popularity or age, he adds. Apache Struts, for example, is declining in popularity but has had a significant number of vulnerabilities.

“I think Apache Struts is one of the first frameworks that I, as a developer, would consider moving away from,” he says. “It is not just about who has the broadest footprint, because the attackers are still very active in investigating certain frameworks, even as their popularity goes down.”

Python frameworks have become very popular, yet both the number of vulnerabilities found in leading options such as Django and Flask and their weaponization rates have been very low.

JavaScript has also become increasingly scrutinized by researchers, with many more vulnerabilities discovered. But so far, only one issue in the Node.js framework has been exploited in the past five years, according to RiskSense data.

However, web application frameworks have evolved over time, as have the vulnerabilities that attackers have found. In 2010, cross-site scripting, input validation, and permission errors topped the list of reported security issues. In 2019, the top three issues were input validation, information exposure, and access control. Cross-site scripting has fallen to the fifth most exploited issue.
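To make those categories concrete, here is a minimal, hypothetical Python sketch (the function and path are illustrative and not tied to any particular framework or to the RiskSense report) of the allowlist-style input validation that topped the 2019 list: untrusted input is checked against a strict pattern before it is used to build a file path.

```python
import re

# Allow only short alphanumeric identifiers; anything else is rejected
# rather than passed along to the filesystem, database, or template layer.
ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def load_report(user_supplied_id: str) -> str:
    """Return the path of a report, but only for well-formed identifiers."""
    if not ID_PATTERN.fullmatch(user_supplied_id):
        raise ValueError("invalid report id")   # fail closed on bad input
    return f"/var/reports/{user_supplied_id}.pdf"

if __name__ == "__main__":
    print(load_report("q1-2020"))                # accepted
    try:
        load_report("../../etc/passwd")          # path traversal attempt
    except ValueError as err:
        print("rejected:", err)
```

The point of the pattern is that the application decides what good input looks like, rather than trying to enumerate every bad input an attacker might send.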

From a vulnerability standpoint, Python-based and JavaScript-based frameworks seem to have the fewest vulnerabilities and the fewest weaponized vulnerabilities, and perhaps those frameworks should be increasingly considered, Williamson says.

“Upgrading frameworks is kind of a pain and risky for developers because as you move from version to version, you have to maintain your changes,” he says. “So, to me, the choice of framework is one of risk and the level of maintenance you can tolerate.”


Article source: https://www.darkreading.com/application-security/fewer-vulnerabilities-in-web-frameworks-but-exploits-remain-steady/d/d-id/1337321?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hellman & Friedman Acquires Checkmarx at $1.15B Valuation

The private equity firm will buy Checkmarx from Insight Partners, which will continue to own a minority interest.

Private equity firm Hellman & Friedman will acquire application security company Checkmarx from Insight Partners at a $1.15 billion valuation, the companies reported today. Insight Partners, which acquired Checkmarx in 2015 for $84 million, will continue to own a minority interest.

Tel Aviv-based Checkmarx was founded in 2006 by CTO Maty Siman, who continues to lead the organization with CEO Emmanuel Benzaquen. Its technology provides static and interactive application security testing, software composition analysis, and application security training and skills development so businesses can better detect vulnerabilities in their software. Checkmarx has more than 700 employees and reports more than 1,400 customers in 70 countries.

Today’s acquisition, which aims to further drive the company’s growth, arrives at a time when businesses are placing stronger focus on secure software development. “As cybersecurity threats continue to intensify, we strongly believe that embedding security early in the software development lifecycle is critical,” said Hellman & Friedman partner Tarim Wasim in a statement on the news.

Read the full release here.


Article source: https://www.darkreading.com/application-security/hellman-and-friedman-acquires-checkmarx-for-$15b/d/d-id/1337322?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Five Indicted on Romance and Lottery Fraud Charges

Fraudsters allegedly targeted elderly victims, ultimately wringing more than $4 million from their bank accounts.

Five men from Connecticut have been indicted on 11 counts stemming from their alleged involvement in a romance and lottery fraud that took more than $4 million from victims. According to the indictment, the fraud schemes began in August 2015 and continued until the time of the arrests last week.

The criminals are said to have targeted the elderly, ultimately taking more than $1 million from a single victim. As part of the schemes, victims either sent checks, money orders, or gift cards to addresses in Connecticut, or they wired money to bank accounts controlled by the five perpetrators.

The individuals indicted are Farouq Fasasi, Rodney Thomas, Jr., Montrell Dobbs, Jr., Stanley Pierre, and Ralph Pierre. Fasasi, Thomas, Dobbs, and Ralph Pierre were arrested on March 10. Dobbs has been released on bond, while Stanley Pierre is still being sought by law enforcement.

All of the defendants are charged with one count of conspiracy to commit money laundering, a charge that carries a maximum term of imprisonment of 10 years. The indictment also charges Fasasi and Thomas with one count of conspiracy to commit mail and wire fraud and one count of mail fraud, each of which carries a maximum term of imprisonment of 20 years. In addition, Fasasi, Dobbs, Stanley Pierre, and Ralph Pierre are charged with one or more counts of money laundering, which carries a maximum term of imprisonment of 10 years on each count.

Read more here.


Article source: https://www.darkreading.com/attacks-breaches/five-indicted-on-romance-and-lottery-fraud-charges/d/d-id/1337325?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Privacy in a Pandemic: What You Can (and Can’t) Ask Employees

Businesses struggle to strike a balance between workplace health and employees’ privacy rights in the midst of a global health emergency.

The balance between employee health and privacy rights is difficult to strike, especially at a time when organizations are making critical decisions based on health-related information.

Collecting and sharing information is necessary but must be done with employees’ privacy in mind. Many businesses are curious to know what they can ask employees without violating any privacy laws, says Christine Lyon, privacy partner at Morrison & Foerster LLP. What health-related inquiries are acceptable? Can employers require a doctor’s note or medical exams?

“The interesting aspect of this is there aren’t straight-line answers,” Lyon explains. “Even legal analysis changes as the facts evolve.” As an example, Lyon points to the increasingly common question of whether businesses can take temperatures at work. This typically is considered a medical exam and is prohibited under the Americans with Disabilities Act (ADA), the Equal Employment Opportunity Commission (EEOC) states in guidance related to pandemics.

However, as COVID-19 continues to spread across the United States, the Centers for Disease Control and Prevention (CDC) has begun to recommend employers take temperatures. Daily “health checks,” which include screening for temperature and respiratory symptoms, have been encouraged in CDC guidance for Santa Clara County, California, and Seattle-King, Pierce, and Snohomish counties, Washington.

“It’s challenging for employers because there’s no clear-cut answer,” Lyon says. The CDC may recommend taking temperatures but doesn’t suggest what to do if someone has a fever. It’s one of many areas in which businesses should proceed with caution. If an office visitor has a high temperature, the company likely would not turn that person away. Instead, she says, it would likely call the person the visitor had planned to meet and say they’ll schedule a phone call.

“Keep as much confidentiality as possible,” she says. “What is the information that we really need to know?” This concept, she says, also applies to storing health-related information. Many employers are collecting minimal health data, including the temperatures they record. If you’re keeping temperature data, it’s considered a medical record and confidentiality rules will apply.

Privacy rules and regulations differ by company, industry, and state. As a result, it’s difficult to provide detailed guidance on what employers should do. Modern privacy and data protection laws, like the European Union’s General Data Protection Regulation and the California Consumer Privacy Act, don’t prevent businesses from recording certain information, says Bart Willemsen, research vice president at Gartner. For example, employers must record data necessary to determine if salaries are being paid, or information related to the workplace physician providing treatment to an employee. However, health-related data must be treated differently.

The Do’s and Don’ts of Health-Related Questions
“Health information is information of a sensitive nature, a special category of data,” Willemsen continues. “Every person has the right to not share such information — but they can share metadata.” Employers can collect data related to insurance payment (for example, if something happens in the workplace). They can also record employees’ adjusted work environments, if they start to work remotely. But employers are not doctors, he emphasizes, and they should not assume the position of collecting detailed health data unless under specific circumstances. 

So, what can employers ask their employees to ensure a safe workplace without violating privacy rules? Lyon says it’s “generally fine” to ask if they have been experiencing cold or flulike symptoms, especially if there is a pandemic. The CDC states employees who fall ill with flulike symptoms during a pandemic should leave the workplace. Companies can ask about the expected duration of absence if an employee calls out sick; however, they can’t ask why.

“Though it’s important to know how long an employee may be absent, it is not for the employer to inquire in detail after why that absence is a fact,” Willemsen adds. People do not have to share the details of their illness unless it has direct influence on their job function (for example, if they are a healthcare worker). It’s fine if they want to volunteer that information, but even if they do, employers should refrain from recording and processing the data they share.

Employers should be careful with pointed questions about specific illnesses and diagnoses. Questions like “Have you been tested for coronavirus?” and “Do you have any medical conditions that make you susceptible?” are crossing the line into ADA territory, says Lyon. “An employer has to show a justification for asking those sorts of questions,” she continues. If an employee returns from travel, the company may ask if they are returning from a country with a known outbreak, even if the travel was personal and the employee does not have symptoms.

Doctor’s notes can also be tricky. The CDC suggests companies do not require a note to validate illness or return to work because in times like these, “healthcare provider offices and medical facilities may be extremely busy and not able to provide such documentation in a timely way.”

If a company wants to verify someone is fit to return to the office, they may ask for a note saying as much because it doesn’t disclose a specific condition, Lyon explains. However, if a company wants a note stating an employee has tested negative for a particular condition, such as coronavirus, that ventures into dangerous territory.

Companies are encouraged to record only health-related information that is factual, and the minimum amount of information necessary. This data should only be shared with employees on a “need-to-know” basis and used as anonymously as possible, Willemsen says. It should be stored securely and only for as long as it is necessary. If it must be disclosed, it should only be shared with external parties as mandated by law — for example, with local health agencies.

Lyon suggests businesses establish a centralized place where employees can view information about what is and isn’t appropriate. “Make sure these questions are going to the right people so managers aren’t on their own for what they can and can’t ask,” she explains. Creating a list of frequently asked questions for managers and employees can be helpful in times like these.


Article source: https://www.darkreading.com/endpoint/privacy-in-a-pandemic-what-you-can-(and-cant)-ask-employees/d/d-id/1337326?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Senate bill would ban TikTok from government phones

On Thursday, two US senators introduced a bill that would ban all federal employees from using the Chinese singing/dancing/jokey platform on government phones.

The bill comes from Senators Josh Hawley (R-MO) and Rick Scott (R-FLA). It would expand on current TikTok bans from the State Department, the Department of Homeland Security (DHS), the Department of Defense (DoD), and the Transportation Security Administration (TSA).

The bans have been put in place due to cybersecurity concerns and possible spying by the Chinese government.

A statement from Hawley:

TikTok is owned by a Chinese company that includes Chinese Communist Party members on its board, and it is required by law to share user data with Beijing. The company even admitted it collects user data while their app is running in the background – including the messages people send, pictures they share, their keystrokes and location data, you name it. As many of our federal agencies have already recognized, TikTok is a major security risk to the United States, and it has no place on government devices.

TikTok’s many attempts to smooth it all over

TikTok has tried to soothe US fears about censorship and national security risks, including a reported plan to spin TikTok off from its parent company.

In November 2019, Vanessa Pappas, the general manager of TikTok US, wrote that data security was a priority, reiterating what TikTok has repeatedly claimed: that all US user data is stored in the US and that TikTok’s data centers are located “entirely outside of China.”

That and other attempts to allay concerns came after the US opened a national security review of TikTok owner Beijing ByteDance Technology Co’s $1 billion acquisition of the US social media app Musical.ly in 2017. ByteDance combined Musical.ly with a Chinese app called Douyin and put it under a new brand: TikTok. As of November 2019, the Committee on Foreign Investment in the United States (CFIUS) was probing the app for possible national security risks.

In October 2019, Senators Tom Cotton and Chuck Schumer had written to Acting Director of National Intelligence Joseph Maguire, asking that the intelligence community please look into what national security risks TikTok and other China-owned apps may pose.

The senators pointed out that TikTok has been downloaded in the US more than 110 million times.

As far as Chinese censorship goes, TikTok has denied ever having been asked by the Chinese government to remove content and said it “would not do so if asked. Period.”

But how, Cotton and Schumer asked, would we even know if that were true? As it is, there’s no legal means to appeal a content removal request in China, they pointed out. Instead, we’re dealing with China’s “vague patchwork of intelligence, national security, and cybersecurity laws [that] compel Chinese companies to support and cooperate with intelligence work controlled by the Chinese Communist Party.”

TikTok declined multiple invitations to testify at hearings preceding the introduction of this bill, Hawley said. During a recent Crime and Terrorism Subcommittee hearing entitled “Dangerous Partners: Big Tech and Beijing,” the senator had announced that he’d be introducing the bill. TikTok declined Hawley’s invitation to testify at that hearing, just as it did with the Subcommittee’s hearing in November.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JZyyYvFBZdk/

Open source bugs have soared in the past year

Open source bugs have skyrocketed in the last year, according to a report from open source licence management and security software vendor WhiteSource.

The number of open source bugs held steady at just over 4,000 in 2017 and 2018, the report said, more than double the pre-2017 figures, which had never broken the 2,000 mark.

Then, 2019’s numbers soared again, topping 6,000 for the first time, said WhiteSource, representing a rise of almost 50%.

By far the most common weakness enumeration (CWE – a broad classifier of different bug types) in the open source world is cross-site scripting (XSS). This kind of flaw accounted for almost one in four bugs and was the top for all languages except C. This was followed by improper input validation, buffer errors, out-of-bound reads, and information exposure. Use after free, another memory flaw, came in last with well under 5% of errors.
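For readers unfamiliar with why XSS keeps topping these counts, the short Python sketch below (using only the standard library's html.escape; the function names are illustrative and not drawn from the WhiteSource report) shows the root pattern: untrusted text concatenated into HTML is executed by the browser unless it is escaped first.

```python
from html import escape

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable pattern: untrusted text is concatenated straight into markup.
    return f"<p>{comment}</p>"

def render_comment_safe(comment: str) -> str:
    # Escaping turns <, >, &, and quotes into entities, so injected markup
    # is displayed as text instead of running as script in the browser.
    return f"<p>{escape(comment, quote=True)}</p>"

if __name__ == "__main__":
    payload = '<script>alert("xss")</script>'
    print(render_comment_unsafe(payload))  # script tag reaches the page
    print(render_comment_safe(payload))    # rendered inert as &lt;script&gt;...
```

Modern template engines do this escaping by default, which is one reason the flaw clusters in older or hand-rolled code.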

WhiteSource had some harsh words for the National Vulnerability Database (NVD), which it said contains only 84% of known open source vulnerabilities. It added that many of these vulnerabilities are reported elsewhere first and only make it into the NVD much later.

It also criticised the common vulnerability scoring system (CVSS), which was launched in 2005 and was recently upgraded to 3.1. It said that the system has changed the way it scores bugs over time, tending towards higher scoring. WhiteSource complained:

[…] how can we expect teams to prioritize vulnerabilities efficiently when over 55% are high-severity or critical?

FIRST, which organises CVSS, didn’t reply to our request for comment but we will update this article if they do.

Expect to see the number of bugs rise over time, predicted the report. It pointed to GitHub’s recently announced Security Lab as a key development in open source bug reporting. GitHub, which hosts many open source products, has an embedded disclosure process that will encourage project maintainers to report vulnerabilities, it said.

The 2017 bug spike isn’t specific to open source, which happens to be WhiteSource’s focus here. We saw a corresponding spike in general bugs as reported in the CVE database in 2017. However, the number of overall bugs reported as CVEs actually dipped below 2017 levels last year.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ex96Deh5Pk4/

Report calls for web pre-screening to end UK’s child abuse ‘explosion’

A UK inquiry into child sexual abuse facilitated by the internet has recommended that the government require apps to pre-screen images before publishing them, in order to tackle “an explosion” in images of child sex abuse.

The No. 1 recommendation from the Independent Inquiry into Child Sexual Abuse (IICSA) report, which was published on Thursday:

The government should require industry to pre-screen material before it is uploaded to the internet to prevent access to known indecent images of children.

While most apps and platforms require users (of non-kid-specific services) to be at least 13, their lackluster age verification is also undermining children’s safety online, the inquiry says. Hence, recommendation No. 3:

The government should introduce legislation requiring providers of online services and social media platforms to implement more stringent age verification techniques on all relevant devices.

The report contained grim statistics. The inquiry found that there are multiple millions of indecent images of kids in circulation worldwide, with some of them reaching “unprecedented levels of depravity.”

The imagery isn’t only “depraved”; it’s also easy to get to, the inquiry said, referring to research from the National Crime Agency (NCA) showing that child exploitation images can be found within three clicks when using mainstream search engines. According to the report, the UK is the third greatest consumer in the world of the live streaming of abuse.

The report describes one such case: that of siblings who were groomed online by a 57-year-old man who posed as a 22-year-old woman. He talked the two into performing sexual acts in front of a webcam and threatened to share graphic images of them online if they didn’t.

How do we stem the tide?

The NCA has previously proposed that internet companies scan images against its hash database prior to being uploaded. If content is identified as a known indecent image, it can then be prevented from being uploaded.
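In code, the proposal amounts to a lookup against a blocklist before the upload is accepted. The Python sketch below is a deliberately simplified illustration that assumes an exact-match list of SHA-256 hashes supplied by an authority; production systems rely on perceptual hashes such as PhotoDNA, which tolerate resizing and re-encoding in a way a cryptographic hash cannot, and those algorithms are not public.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of hashes of known illegal images, distributed to
# the provider by an authority. Real deployments use perceptual hashes
# (e.g. PhotoDNA), not plain SHA-256; this is an exact-match simplification.
KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder entry; a real list would come from the authority
}

def sha256_of_file(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def screen_before_upload(path: Path) -> bool:
    """Return True if the file may be uploaded, False if it matches the blocklist."""
    return sha256_of_file(path) not in KNOWN_BAD_HASHES
```

In principle the same check could run on the user's device rather than the provider's servers, which is the end-to-end encryption trade-off discussed below.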

Apple, Facebook, Google, Dropbox and Microsoft, among others, automatically scan images (and sometimes video) uploaded to their servers. The NCA says that, as it understands it, they only screen content after it’s been published, thereby enabling abusive images to proliferate.

The thinking: why not stop the images dead in their tracks before the offense occurs?

One reason: it can’t be done without disabling the end-to-end encryption in WhatsApp, for example, or other privacy-minded services and apps, according to Matthew Green, cryptographer and professor at Johns Hopkins University. Green explains that the most famous scanning technology is based on PhotoDNA: an algorithm developed by Microsoft Research and Dr. Hany Farid.

PhotoDNA and Google’s machine-learning tool, which it freely released to address the problem, have a commonality, Green says:

They only work if providers […] have access to the plaintext of the images for scanning, typically at the platform’s servers. End-to-end encrypted [E2E] messaging throws a monkey wrench into these systems. If the provider can’t read the image file, then none of these systems will work.

Green says that some experts have proposed a way around the problem: providers can push the image scanning from the servers out to the client devices – i.e., your phone, which already has the cleartext data.

The client device can then perform the scan, and report only images that get flagged as CSAI [child sexual abuse imagery]. This approach removes the need for servers to see most of your data, at the cost of enlisting every client device into a distributed surveillance network.

The problem with that approach? The details of the scanning algorithms are private. Green suspects this could be because those algorithms are “very fragile” and could be used to bypass scanning if they fell into the wrong hands:

Presumably, the concern is that criminals who gain free access to these algorithms and databases might be able to subtly modify their CSAI content so that it looks the same to humans but no longer triggers detection algorithms. Alternatively, some criminals might just use this access to avoid transmitting flagged content altogether.

Cryptographers are working on this problem, but “the devil is in the [performance] details,” Green says.

Does that mean that the fight against CSAI can’t be won without forfeiting E2E encryption? As it is, the inquiry is recommending fast action, suggesting that some of its recommended steps be taken before the end of September – likely not enough time for cryptographers to figure out how to effectively prescreen imagery before it’s published, as in, before it slips behind the privacy shroud of encryption.

The inquiry’s report is only the latest of a string of scathing assessments of social media’s role in the spread of abuse imagery. According to the report, social media companies appear motivated to “avoid reputational damage” rather than prioritizing protection of victims.

Prof Alexis Jay, the chair of the inquiry:

The serious threat of child sexual abuse facilitated by the internet is an urgent problem which cannot be overstated. Despite industry advances in technology to detect and combat online facilitated abuse, the risk of immeasurable harm to children and their families shows no sign of diminishing.

Internet companies, law enforcement and government [should] implement vital measures to prioritise the protection of children and prevent abuse facilitated online.

The UK and the US are on parallel paths to battle internet-facilitated child sexual abuse, though, at least in the US, privacy advocates view recent political moves as ill-disguised attacks on encryption and privacy. The EARN IT Act is a case in point: now making its way through Congress, the bill has been introduced by legislators who’ve used the specter of online child exploitation to argue for the weakening of encryption.

One of the problems of the EARN IT bill: the proposed legislation “offers no meaningful solutions” to the problem of child exploitation, as the Electronic Frontier Foundation (EFF) says:

It doesn’t help organizations that support victims. It doesn’t equip law enforcement agencies with resources to investigate claims of child exploitation or training in how to use online platforms to catch perpetrators. Rather, the bill’s authors have shrewdly used defending children as the pretense for an attack on our free speech and security online.

You can’t directly compare British and US legal rights. But at least in the US, legal analysts say that the EARN IT Act, which would compel internet companies to follow “best practices” lest they be stripped of Section 230 protections against being sued for publishing illegal content, would violate the First and Fourth Amendments’ protections of free speech and against unreasonable searches, respectively.

Private companies like Facebook voluntarily scan for violative content because they’re not state actors. If they’re forced to screen, they become state actors, and then they (generally; case law differs) legally need to secure warrants to search digital evidence.

Thus, as argued by Riana Pfefferkorn, Associate Director of Surveillance and Cybersecurity at The Center for Internet and Society at Stanford Law School, forcing scanning could actually lead, ironically, to court suppression of evidence of the child sexual exploitation crimes targeted by the bill.

How would it work in the UK? I’m not a lawyer, but if you’re familiar with British law, please do add your thoughts to the comments section.

Naked Security’s Mark Stockley saw another wrinkle in the inquiry’s recommendations about prescreening content: It reminded him of Article 13 of the European Copyright Directive, also known as the Meme Killer. It’s yet another legal directive that critics say takes an “unprecedented step towards the transformation of the internet, from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users.”

The directive will force for-profit platforms like YouTube, Tumblr, and Twitter to proactively scan user-uploaded content for material that infringes copyright… scanning that’s been error-prone and prohibitively expensive for smaller platforms. It wouldn’t make exceptions, even for services run by individuals, small companies or non-profits.

EU member states have until 7 June 2021 to implement the new reforms, but the UK will have left the EU by then. As the BBC reported in January, Universities and Science Minister Chris Skidmore has said that the UK won’t implement the EU Copyright Directive after the country leaves the EU.

How about the inquiry’s call for web pre-screening? Will it make it into law?

If it does, we’ll let you know.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ELFeXrr3UeQ/

Microsoft patches wormable Windows 10 ‘SMBGhost’ flaw

What’s the difference between a scheduled security update and one that’s out-of-band?

In the case of the critical Windows 10 Server Message Block (SMB) vulnerability (CVE-2020-0796) left unpatched in March’s otherwise bumper Windows Patch Tuesday update, the answer is two days.

That’s how long it took Microsoft to change its mind about releasing a fix after news of the remote code execution (RCE) flaw leaked in now-deleted vendor posts and word spread to customers. It even gained a nickname – ‘SMBGhost’ – in honour of its elusive status.

It wasn’t simply that word had slipped out about an unpatched flaw but the seriousness of the flaw itself, with one of the leaked advisories describing it as ‘wormable,’ in other words able to spread rapidly from machine to machine without user interaction.

Seeing double

To a lot of people, that sounded eerily similar to the wormable SMBv1 vulnerability exploited by the global WannaCry and the NotPetya attacks in 2017.

The SMB protocol is widely used for network file sharing and for connecting printers, so the possibility of a repeat alarmed admins. As Microsoft said:

To exploit the vulnerability against an SMB Server, an unauthenticated attacker could send a specially crafted packet to a targeted SMBv3 Server.

(There’s more on possible exploit scenarios in the detailed analysis from SophosLabs.)

After initially suggesting partial workarounds – disabling SMBv3.1.1 compression on servers and blocking port 445 using firewalls – Microsoft has now issued a patch, KB4551762.

That’s good news because blocking port 445 would be a last resort, as it’s used by other parts of Windows plus things like Azure file storage. The workarounds also did little for desktop computers.
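For admins who did fall back on the firewall workaround, a quick external reachability test can confirm that TCP port 445 really is filtered. The snippet below is a generic check, not Microsoft guidance; the hostname is a placeholder, and the result only reflects whether the port answers from wherever the script is run.

```python
import socket

def port_open(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, filtered, or timed out: treat all as "not open".
        return False

if __name__ == "__main__":
    host = "fileserver.example.com"  # placeholder: a host you administer
    state = "reachable" if port_open(host) else "blocked or filtered"
    print(f"TCP 445 on {host}: {state}")
```

A blocked port from the outside does not remove the need to patch, since the flaw can still be reached from inside the network.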

The issue only affects 32/64-bit Windows 10 and Server versions 1903 and 1909 because earlier versions don’t support the affected SMBv3.1.1.

It’s unlikely the flaw is being exploited in real-world attacks yet, but that could change at any time. There are bound to be some servers that won’t receive the patch in the coming weeks.

Those will be at risk of a serious compromise. The solution is to make haste and patch now.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OqgpMudTrew/

4 Ways Thinking ‘Childishly’ Can Empower Security Professionals

Younger minds — more agile and less worried by failure — provide a useful model for cyber defenders to think more creatively.

Hackers are becoming increasingly bold, brazen, and cunning. To defend our connected world against the threat of increasingly mischievous, imaginative, and reckless hackers, cybersecurity experts must also learn to embrace “childish” qualities such as creativity, fearlessness, and natural curiosity.

As a mother of two, I have long observed that many of the qualities effortlessly expressed by my children are shared by the best cyber defenders in our company. Here are four important lessons white-hat hackers can learn from them.

Throw Out the Box
We are often advised to think outside the box to deliberate in new and creative ways, but for children there is no “box” — and perhaps the supposition that there is a box to begin with is what really boxes us in.

Cybersecurity is a constant game of cat and mouse. What’s secure today is at risk tomorrow as black-hat hackers continually find increasingly imaginative ways to threaten our connected world. To win this battle of the minds, cyber defenders must recapture their inner creative selves.

In a NASA study that tested the creativity of 4- to 5-year-olds, 98% of participants scored as “creative geniuses.” Testing on the same group every five years revealed that this percentage decreased as children matured. When the same test was carried out on adults, a mere 2% scored at creative genius level.

By challenging themselves to think beyond conventional constraints, cyber defenders can stay one step ahead of attackers. This means that even after securing an asset with a newly conceived architecture, there can be no room for complacency — defenders must constantly reconfigure solutions to solve problems they may have never even conceived before. 

Be a Sponge
Kids are constantly learning new things, both consciously and unconsciously — take language acquisition as a prime example. Cybersecurity professionals must figure out how to adopt this absorbent quality.

Why? Because cybersecurity as a profession requires a deep understanding of a wide array of technological fields in order to defend any given organization, whatever its specialty. From information technology to operational technology, networking to mobile phones to the Internet of Things, industry professionals must be constantly up to date with developing technologies and methodologies. Hackers themselves are well-versed in a world of knowledge outside of the cyber realm — and defenders must be as well.

To take an obvious example from the world of biology, some of the most sophisticated malware has been modeled on the way real viruses spread their natural destruction. A deep familiarity with the epidemiology of spreading pathogens could go a long way toward immunizing against computer viruses in the cyber realm.

Daydream!
As kids, we spend much time daydreaming, but as adults we are under immense pressure to stay focused. Ironically, this drive is not necessarily the path to productivity.

Though daydreaming may feel like a waste of valuable time, studies show that it is actually a mentally engaging activity that can lead to creative connections and insights. Researchers at UC Santa Barbara found that people whose minds wander are generally the most creative and best problem solvers due to their ability to work on one task while processing other information and making connections among ideas.

It may sound counter-intuitive, but by slowing down, letting your mind wander, and letting ideas sink in, cybersecurity professionals will see a difference in their ability to connect the dots in a world fraught with chaos. Carving out time to daydream during your hectic work schedule might actually be just what you need.

If at First You Don’t Succeed…
Parents encourage their children to try new things and not fear failure. But too rarely do adults live according to their own advice.

A perfect example is the marshmallow tower challenge, in which a team is tasked with building the tallest possible tower using only spaghetti, tape, and string to support a marshmallow. Among the most successful at this task are young children, who outperform most adults with their tall, creative structures. While adults often execute what they believe is the single best solution, children use what designers call “the iterative process” — starting with the marshmallow on top and building successive prototypes down below. This method gives them constant feedback about what works and what doesn’t: In other words, they try and fail until they succeed.


Article source: https://www.darkreading.com/careers-and-people/4-ways-thinking-childishly-can-empower-security-professionals/a/d-id/1337245?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DDoS Attack Trends Reveal Stronger Shift to IoT, Mobile

Attackers are capitalizing on the rise of misconfigured Internet-connected devices running the WS-Discovery protocol, and mobile carriers are hosting distributed denial-of-service weapons.

Distributed denial-of-service (DDoS) attacks remain a popular attack vector but have undergone changes as cybercriminals shift their strategies. Today’s attackers are turning to mobile and Internet of Things (IoT) technologies to diversify and strengthen their DDoS campaigns, research shows.

Researchers with A10 Networks, which tracked nearly 6 million DDoS weapons in the fourth quarter of 2019, today published “DDoS Weapons and Attack Vectors” to share the trends in today’s DDoS landscape. These include the weapons being used, locations where attacks are launched, services exploited, and techniques attackers are using to maximize damage caused.

DDoS weapons are distributed around the world; however, the bulk of attacks start in countries with the most Internet connectivity. China is the origin of the highest number of DDoS attacks, with 739,223 starting in the country. The United States is second, with 448,169, followed by the Republic of Korea (440,185), India (268,864), Russia (253,609), and Taiwan (199,656).

The SNMP and SSDP protocols, long the top sources for DDoS attacks, continued to take the top spots in the fourth quarter with nearly 1.4 million SNMP weapons and nearly 1.2 million SSDP weapons tracked. The next one was a surprise: Researchers saw a sharp spike in WS-Discovery attacks, which rose to nearly 800,000 to become the third most common source of DDoS.

A10 Networks attributes this change to the growing popularity of attackers leveraging misconfigured IoT devices to amplify their campaigns. As part of this trend, called “reflected amplification,” attackers are focusing on the rising number of Internet-exposed IoT devices running the WS-Discovery protocol. WS-Discovery, a multicast UDP-based communications protocol, is used to automatically detect Internet-connected services. It does not perform IP source validation, researchers note, so it’s easy for attackers to spoof a victim’s IP address. Doing this resulted in the victim being flooded with data from nearby IoT devices, they say.

Reflected amplification has been “highly effective,” they note, with more than 800,000 WS-Discovery hosts available to exploit and observed amplification reaching 95x. These attacks have reached a massive scale and account for the majority of DDoS attacks, researchers say. Most of this inventory has been found in Vietnam, Brazil, the US, the Republic of Korea, and China.
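Administrators who want to know whether their own devices could be drafted into such attacks can probe them directly. The Python sketch below sends a single WS-Discovery Probe to UDP port 3702 on a host you administer and reports the size of any reply. The probe format assumes the 2005/04 draft of the protocol that most exposed devices implement, the hostname is a placeholder, and a silent host is not proof the service is disabled, since UDP gives no negative acknowledgement.

```python
import socket
import uuid

# Minimal WS-Discovery Probe (2005/04 draft), the message many exposed
# devices answer; the namespaces below are from that draft specification.
PROBE = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
               xmlns:wsd="http://schemas.xmlsoap.org/ws/2005/04/discovery">
  <soap:Header>
    <wsa:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</wsa:Action>
    <wsa:MessageID>urn:uuid:{uuid.uuid4()}</wsa:MessageID>
  </soap:Header>
  <soap:Body><wsd:Probe/></soap:Body>
</soap:Envelope>""".encode()

def ws_discovery_responds(host: str, timeout: float = 3.0) -> None:
    """Send one Probe to UDP/3702 on a host you administer and report the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(PROBE, (host, 3702))
        try:
            data, _ = sock.recvfrom(65535)
            print(f"{host} answered a {len(PROBE)}-byte probe with {len(data)} bytes")
        except socket.timeout:
            print(f"no reply from {host} (UDP, so this does not prove the service is off)")

if __name__ == "__main__":
    ws_discovery_responds("camera.internal.example")  # placeholder host you own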

As more IoT devices connect to the Internet, and the growth of 5G drives network speed and coverage, researchers anticipate attackers will continue to find ways to leverage the IoT. DDoS-for-hire services will make it even simpler for any attacker to launch a destructive attack.

DDoS is also going mobile, researchers found, noting the popularity of DDoS weapons hosted by mobile carriers “skyrocketed” toward the end of 2019. The top source of reflected amplification attacks, they noted, was Guangdong Mobile Communication Co. Brazilian mobile company Claro was a top source of malware-infected drones.

They also looked at trends around autonomous system numbers (ASNs), which identify collections of IP address ranges under a single entity or government, hosting DDoS weapons. The top ASNs hosting DDoS weapons also included Guangdong Mobile Communication Co. and Chinanet, as well as Korea Telecom, aligning with countries that also host a high number of DDoS attacks.


Article source: https://www.darkreading.com/iot/ddos-attack-trends-reveal-stronger-shift-to-iot-mobile/d/d-id/1337318?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple