
What’s the best approach to patching vulnerabilities?

New research shows that most vulnerabilities aren’t exploited, and those that are tend to have a high CVSS score (awarded on the basis of how dangerous and how easy to exploit a vulnerability is). Not surprisingly, then, the most dangerous and easily exploited flaws are the ones attacked most often.

What’s more surprising is that there’s apparently no relationship between the proof-of-concept (PoC) exploit code being published publicly online and the start of real-world attacks.

The numbers: the researchers counted 4,183 unique security flaws exploited in the wild between 2009 and 2018. That’s less than half of the 9,726 flaws for which exploit code had been written and posted online.

Those numbers come from a study in which a team of researchers from Cyentia, Virginia Tech, and the RAND Corporation took a look at how to balance the pluses and minuses of two competing strategies for tackling vulnerabilities.

What’s the best way to herd cats?

Fixing every vulnerability would get you great coverage, but that’s a lot of time and resources spent sealing up low-risk flaws. It would be more efficient to concentrate on patching just the high-risk vulnerabilities, but that approach leaves organizations open to whatever flaws they didn’t prioritize.

How do you know which vulnerabilities are worth fixing? The researchers set out to answer that by combining data collected from a multitude of sources with machine learning, building and then comparing a series of remediation strategies to see how each performs on the tradeoff between coverage and efficiency.

The team’s white paper, titled Improving Vulnerability Remediation Through Better Exploit Prediction, was presented Monday at the 2019 Workshop on the Economics of Information Security in Boston.

The researchers used a list of all security flaws, scores, and vulnerability characteristics extracted from the National Institute of Standards and Technology’s (NIST’s) National Vulnerability Database (NVD). They also drew on data about exploits found in the wild collected by FortiGuard Labs, with further evidence of exploitation gathered from the SANS Internet Storm Center, Secureworks CTU, Alienvault’s OSSIM metadata, and ReversingLabs metadata.

Information about written exploit code came from Exploit DB, Contagio, Reversing Labs, and Secureworks CTU, as well as from the exploitation frameworks Metasploit, D2 Security’s Elliot Kit, and the Canvas Exploitation Framework.

A crucial point: the researchers made what they considered a significant change to, and expansion of, earlier modeling. For a vulnerability to be counted in their models, a prediction that it was likely to be exploited wasn’t good enough; it had to have been exploited for real, in the wild.

From the white paper:

Notably, we observe exploits in the wild for 5.5% of vulnerabilities in our dataset compared to 1.4% in prior works.

They found that the 4,183 security flaws exploited between 2009 and 2018 were a small fraction of the roughly 76,000 vulnerabilities discovered in total during that period.

While that works out to be “only” about 5.5% of vulnerabilities being exploited in the wild, “only” one in 20 vulnerabilities being exploited is still considerably more than the roughly one in 70 (1.4%) reported “in prior works”.

The best strategy?

The research looked at three strategies for prioritising vulnerabilities: using the CVSS score, patching bugs with known exploits and patching bugs tagged with specific attributes such as “remote code execution”. The researchers also created a machine learning model for each strategy to see if it could outperform simple, rules-based approaches.

For people following a strategy based on CVSS scores, the researchers reckoned the best combination of coverage, accuracy and efficiency was achieved by patching anything with a CVSS score of seven or more:

…a rule-based strategy of remediating all vulnerabilities with CVSS 7 or higher would achieve coverage of slightly over 74% with an efficiency of 9%, and accuracy of only 57%. This appears to be the best balance among CVSS-based strategies, even though it would still result in unnecessarily patching 31k (76k total – 35k patched) unexploited vulnerabilities.
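To make those coverage and efficiency figures concrete, here’s a minimal sketch in Python of how the three metrics can be computed for a rules-based strategy such as “patch everything with CVSS 7 or higher”. This is not the researchers’ code; the dataset, field layout and threshold are illustrative assumptions.

    # Minimal sketch (not the paper's code): scoring a rules-based remediation
    # strategy such as "patch everything with CVSS >= 7". The dataset below is
    # illustrative; each entry is (CVSS score, was it exploited in the wild?).

    def score_strategy(vulns, cvss_threshold=7.0):
        patched = [v for v in vulns if v[0] >= cvss_threshold]
        exploited = [v for v in vulns if v[1]]
        true_pos = sum(1 for score, was_exploited in patched if was_exploited)
        true_neg = sum(1 for score, was_exploited in vulns
                       if score < cvss_threshold and not was_exploited)
        coverage = true_pos / len(exploited)            # exploited vulns we actually fixed
        efficiency = true_pos / len(patched)            # patching effort spent on real threats
        accuracy = (true_pos + true_neg) / len(vulns)   # correct patch/ignore decisions overall
        return coverage, efficiency, accuracy

    vulns = [(9.8, True), (7.5, False), (7.2, False), (6.1, False), (5.0, True), (4.3, False)]
    print(score_strategy(vulns))   # -> (0.5, 0.333..., 0.5) for this toy data

Lowering the threshold pushes coverage up but drags efficiency down, which is exactly the trade-off the paper quantifies.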

Interestingly, the rules-based approaches built on CVSS ran the CVSS-based machine learning model close, with the model performing only “marginally better” than a strategy of patching everything at or above a given CVSS score.

When looking at strategies based on the availability of exploit code in one of three exploit repositories – Elliot, Exploit DB, and Metasploit – the researchers found that your choice of repository matters:

…the ​Exploit DB ​strategy has the best coverage, but suffers from considerably poor efficiency, while Metasploit performs exceptionally well in efficiency (highest out of all the rules-based approaches), with considerable reduction in coverage, and drastically smaller level of effort required to satisfy the strategy

Unlike the CVSS-based strategy, the researchers found that their machine learning model following a “published exploit” strategy achieved a significantly better balance of coverage and efficiency than a rules-based approach.

For the final “reference tagging” strategy, the researchers patched bugs that had been tagged with one of 83 different keywords, then looked at the efficacy of a patching approach based on each one. None stood out to the researchers as an effective approach, and all were outperformed by a “reference tagging” machine learning model:

Overall, like the other rules-based strategies, focusing on individual features (whether CVSS, published exploit, or reference tags) as a decision point yields inefficient remediation strategies. This holds true for all of the individual multi-word expressions.

And better than all those individual strategies, whether rules-based or driven by machine learning, was a machine learning model that used all the available data, they said.

The researchers think their work might be used to improve the CVSS standard and by bodies that issue threat and risk assessments, such as the Department of Homeland Security. It could even be used, they suggested, in the Vulnerability Equities Process that determines whether vulnerabilities should be disclosed to the public or kept secret and used in offensive operations.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/-muJIFpoyEU/

Action required! Exim mail servers need urgent patching

Researchers have discovered another dangerous security hole hiding in recent, unpatched versions of the popular mail server, Exim.

Uncovered in May 2019 by security company Qualys, the flaw (CVE-2019-10149) affects Exim versions 4.87 to 4.91 inclusive running on several Linux distros; the last of those versions was released as far back as 15 April 2018. The next release, version 4.92, fixed the problem when it shipped on 10 February 2019, although the software’s maintainers didn’t realise that at the time.

The lowdown: anyone still running a version dating from April 2016 up to earlier this year is vulnerable. Versions before that might also be vulnerable if EXPERIMENTAL_EVENT is enabled manually, Qualys’s advisory warns.

The issue is described as an RCE, which in this case stands for Remote Command Execution, not to be confused with the more often-cited Remote Code Execution.

As the term implies, what that means is that an attacker could remotely execute arbitrary commands on a target system without having to upload malicious software.

The attack is easy from another system on the same local network. Pulling off the same from a system outside the network would require an attacker to…

Keep a connection to the vulnerable server open for 7 days (by transmitting one byte every few minutes). However, because of the extreme complexity of Exim’s code, we cannot guarantee that this exploitation method is unique; faster methods may exist.

Remote exploitation is also possible when Exim is using any one of several non-default configurations itemised in the Qualys advisory.

What to do

The first stop is to check impact assessments issued by individual distros, for example Debian (used by Qualys to develop the proof-of-concept), OpenSUSE, and Red Hat. Users of Sophos XG Firewall, which includes Exim, should read Knowledge Base article 134199.
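For a quick local check, the version string is a reasonable starting point. The sketch below is an illustrative helper, not something from Qualys or the Exim project: it parses the output of exim -bV and flags the 4.87–4.91 range. Bear in mind that distros often backport fixes without bumping the version number, so treat a match as a prompt to consult your vendor’s advisory rather than proof of exposure.

    # Illustrative check, not from the Qualys advisory: flag local Exim versions in
    # the CVE-2019-10149 range (4.87 to 4.91 inclusive). Distros often backport
    # fixes without changing the version string, so confirm against your vendor.
    import re
    import subprocess

    def exim_version():
        out = subprocess.run(["exim", "-bV"], capture_output=True, text=True).stdout
        match = re.search(r"Exim version (\d+)\.(\d+)", out)
        return (int(match.group(1)), int(match.group(2))) if match else None

    version = exim_version()
    if version is None:
        print("Could not determine Exim version")
    elif (4, 87) <= version <= (4, 91):
        print("Exim %d.%d is in the vulnerable range - update to 4.92 or later" % version)
    else:
        print("Exim %d.%d is outside the 4.87-4.91 range" % version)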

As Qualys points out, exploits for the flaw are likely to follow within a matter of days. In that scenario, hackers would scan for vulnerable servers, potentially hijacking them. Clearly, this is a flaw admins will want to patch as soon as possible.

Unfortunately, if the slow patching of another serious flaw revealed in February (CVE-2018-6789) is anything to go by, a rapid rollout is unlikely. That, too, was a vulnerability discovered retrospectively, affecting all Exim versions going back to 1995.

As of June, Exim’s market share is 57% of mail servers polled, which makes it the internet’s number one platform with over half a million servers. For criminals, that’s a lot of servers to trawl through for easy targets.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CDtaGi3qTaY/

Praise the lard! Police hook up with Microsoft to school us on National Phish and Chip Day

Today is National Fish and Chip* Day, and tech giant Microsoft has wasted no time wading in with the police to school the UK about phishing scams.

In what must be one of the industry’s most tenuous links, the City of London Police has urged internet pond life to “mullet over” before falling victim to a phishing email on National Fish and Chip Day. Cod-dit?

Even the pun lovers here at Vulture Central thought that was below sea level.

Who are we kidding? We just wish we’d come up with it first instead of plumbing the depths for ever more convoluted ways of torturing the word “Huawei”.

Fish/phish punning aside, the message is important. The UK’s Action Fraud said over a quarter of a million folk had dropped it a line between April 2018 and March 2019, mostly regarding emails purporting to be from a well-known brand. Other users said they’d netted phisher folk on phone calls or text messages.

Pointing to figures showing that not-so-koi Brits handed over more than £19m to fraudsters over the last year, Microsoft joined City of London Police to tell users to bass-ically contact a brand directly rather than being reeled in by a message that is merely housed in a legitimate-looking shell.

The kindly software giant also modestly nodded at the $1bn bait it shells out every year for research into cybersecurity. Herring about that, its customers might pond-er the amount of time their PCs spend downloading patches for the company’s software.

As well as research, the Windows giant discreetly waved a hand in the direction of its soon-to-be-defunct Edge browser, which topped the security charts back in 2017 for warding off phishing websites. Carp-e diem, right?

At the time, Edge blocked more than 90 per cent of fishy links, with Chrome swimming some way behind. That incarnation of Edge is, of course, due to be replaced by something shinier based on Chromium in the very near future.

For its part, Google would remind you about the Safe Browsing idea it’s floating around, aimed at spotting slippery websites.

Sadly, a look at Google’s statistics shows that its service alone has spotted thousands of new unsafe sites per week over the last year. By March, the count for phishing sites was heading toward the 1.5 million tide-mark. On the bright side, better security has seen the number of out-and-out malware sites sink.

Still, getting the message out to that special salmon in internet land to be wary of scams by dipping into the Fish and Chip Day lard bucket is surely not just floundering their resources for the halibut like some sort of cybersecurity minnow. The Reg would also suggest that, come July, readers should be on the lookout for phishing scams themed around hardcore prawn. ®

* For those confused, “chip” in the UK refers to those delicious bags of starchy grease available from fried food emporiums up and down Blighty. Not to be mixed up with the American interpretation, known as “crisps” on this side of the pond.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/07/microsoft_police_phishing/

The Minefield of Corporate Email

Email security challenges CISOs as cybercriminals target corporate inboxes with malware, phishing attempts, and various forms of fraud.

Employee email accounts are prime targets for fraudsters, phishing attacks, and malware delivery, as hackers assume someone will click, download, or respond and fall into their traps.

The threats causing concern are far from new — in fact, they’re among the oldest. As Cisco researchers point out in a new report, “Email: Click with Caution,” spam is 40 years old and phishing is over 30. Over time they’ve grown in prevalence and sophistication, joined by malware and fraud as criminals find new ways to cause problems for security practitioners.

“It seems never-ending sometimes, the kinds of threats that arrive through email,” says Ben Munroe, director at Cisco Security, who adds that “the fact that it’s still a problem for us … is incredible. Things are not getting better and in fact, things are getting worse.”

Incredible, yes, but as researchers note in the report, email’s structure is “an almost ideal format” for scammers. Employees are forced to read every message, make judgment calls about what they receive, and decide what to open, click, download, and respond to. The right amount of social engineering can manipulate recipients into accidentally letting attackers in.

More than half (56%) of CISOs say defending against user behavior is “very” or “extremely” challenging, Munroe says, citing Cisco’s 2019 CISO Benchmark Study. Only 51% think they’re doing an “excellent” job managing employee security. Users often get a bad reputation, but it’s tough when cybercriminals have an advantage and their targets don’t know what’s coming.

“It’s not fair for everyone to criticize users for doing what users have to do,” he says. Employees have to answer emails, even if an email from their “boss” came from a hacker. “It isn’t fair to always say users are not sophisticated — attackers are extremely sophisticated,” Munroe adds.

Here, we take a closer look at the emailed threats arriving in their inboxes.

To Click or Not to Click?
Wendy Nather, head of Advisory CISOs at Duo Security, now part of Cisco, says attackers’ efforts are going in two directions. One is technological: trying to exploit new methods and plant new payloads. The other is psychological, encompassing social engineering efforts. “You have to defend both on the technical side and the social engineering side,” she emphasizes.

Credential theft is one of the most common email-based scams, Cisco researchers found. Cybercriminals send emails from spoofed addresses (a fake Microsoft address, for example), and users who click a link are redirected to a fake webpage where they’re prompted to enter credentials. Those go to the attacker, who may use them to log in to other Microsoft services.

Cisco cites an Agari report that says 27% of advanced email attacks are launched from compromised accounts — up from 20% in the last quarter of 2018. Microsoft accounts aside, attackers also target popular cloud-based email services like Gmail and G Suite.

Business email compromise (BEC), which caused $1.2 billion in losses in 2018, is another threat to watch. Researchers note attackers don’t typically use compromised accounts for these; instead, two-thirds of BEC scams still use free webmail accounts and 28% use registered domains. BEC messages are often personalized; one in every five includes the name of the recipient.

Digital extortion, in which attackers try to convince their victims they have footage of them accessing an adult film site or other content, is another lucrative move for attackers. Munroe notes how advancements in technology have changed the game in extortion: “the improvement in hardware, allowing all of us to have a camera; the improvement in bandwidth, allowing us to stream video” have also driven techniques for malicious activity. Researchers do point out profits from extortion have fluctuated along with the value of Bitcoin over time.

Malware Delivery
Email is still a reliable vector to deliver malware, though attackers have been forced to change their tactics as their victims learned the risk of clicking .exe files. Now, malware is more likely to be served indirectly via documents with less suspicious file types or malicious URLs in messages.

Between January and April 2019, binary files made up only 2% of all malicious attachments. Attackers now gravitate toward .zip files, which account for just over a quarter of attachments. JavaScript files make up 14.1% and, researchers note, attackers are using them more frequently; JavaScript is now the third most-common malicious extension, behind .doc (41.8%) and .zip (26.3%).

“Throw PDF documents into the mix, and more than half of all malicious attachments are regularly used document types, ubiquitous within the modern workplace,” according to the report.

Malware itself has also evolved, Munroe points out. “I’d say in terms of what they’re distributing — the idea of malware or what you could call modular malware — having landed on a host or an endpoint, it’s able to detect a number of different parameters about where it is or what to do.” Is it in a low-cost environment? Valuable environment? Is there high bandwidth?

“That idea is certainly quite worrisome and given email is a prevalent attack vector, it’s certainly concerning for CISOs,” he adds.

What’s a CISO to Do?
The challenge of email security lies in its commonality. “It’s the mundane nature of email, especially trying to trick people into clicking things they spend all day clicking,” says Nather.

Some CISOs recognize different users have different risk profiles, she continues, and they adjust their security practices accordingly. Human resources employees, who have to open attachments all day, or financial analysts, who are prime BEC targets, are some examples. “They’re putting more security controls around them, isolating them more, hardening their systems more,” Nather explains. Some employees have different external email addresses.

In the report, researchers explain how to spot a phishing attempt. A few telltale signs:

  • Sense of urgency: If it demands immediate action, act with caution and confirm with the sender.
  • Illegitimate-looking URL: Phishing URLs often look unusual. If the destination is hidden within link text, hover over the link for a closer look (or automate the comparison, as in the sketch after this list). Hesitant? Don’t click.
  • Unusual file types: Most business emails rely on the same few file types. If you don’t recognize it, don’t open it.
  • Spelling mistakes and blurred logos: Emails that seem careless may be signs of a phish.
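That “hover over the link” advice can be automated: compare the domain a link claims to point at with the domain its href actually targets. The sketch below is a rough illustration using only Python’s standard library; the heuristic is our own assumption, not something from the Cisco report.

    # Rough illustration (our own heuristic, not from the Cisco report): flag links
    # whose visible text names one domain but whose href points somewhere else.
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkAuditor(HTMLParser):
        def __init__(self):
            super().__init__()
            self._href = None
            self._text = []
            self.suspicious = []   # (visible text, actual href) pairs that don't match

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href:
                text = "".join(self._text).strip()
                shown = urlparse(text if "://" in text else "http://" + text).hostname or ""
                actual = urlparse(self._href).hostname or ""
                # Suspicious if the text looks like a URL but the link goes elsewhere
                if "." in shown and shown.lower() != actual.lower():
                    self.suspicious.append((text, self._href))
                self._href = None

    auditor = LinkAuditor()
    auditor.feed('<p>Verify your account at <a href="http://evil.example.net/login">www.microsoft.com</a></p>')
    print(auditor.suspicious)   # [('www.microsoft.com', 'http://evil.example.net/login')]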



Article source: https://www.darkreading.com/threat-intelligence/the-minefield-of-corporate-email/d/d-id/1334903?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Learn the Latest Hacking Techniques at Black Hat Trainings Virginia

At Black Hat’s upcoming Trainings-only October event you’ll have opportunities to get up to speed on the newest hacking tricks for operating systems and cloud providers.

Come spend two days honing your cybersecurity skills at Black Hat Trainings in Virginia, an October event offering some of the most practical, hands-on courses in the business.

Get up to speed on Python hacking in two days flat by attending Python Hacker Bootcamp – Zero to Hero, a Training designed to teach you hacker programming methodology. Instead of learning formal programming practices that you might never use, this course focuses on core concepts taught through information security-centric projects.

Hands-on labs accompany each lecture to help you focus on solving commonplace and real-world security challenges. The labs have been designed to apply to both attackers and defenders. The entire bootcamp is designed to be fun, practical, and fast-paced.

If you’re more interested in getting inside the minds of cloud hackers, sign up for Astute Hunting in the Cloud – Bring The Thunder! This two-day Training is a great opportunity to get your hands dirty and find the hackers hiding within the systems of top cloud computing providers.

With a focus on AWS and Azure, you will discover the tactics, techniques, and procedures (TTPs) needed to hunt threats in your cloud environment. You’ll get inside the mind of a cloud hacker, see the vulnerabilities, and understand what clues attackers often leave behind.

Advanced Infrastructure Hacking – 2019 Edition is a fast-paced version of the original four-day class, concentrated down into two efficient days of training and demos.

This course focuses on the vulnerabilities of operating systems and covers a wide variety of neat, new and ridiculous techniques to compromise modern OSes, networking devices and everything in-between. While prior pentest experience is not a strict requirement, familiarity with both Linux and Windows command line syntax will be greatly beneficial for attendees.

These cutting-edge Black Hat Trainings and many more will be taking place October 17 and 18 at the Hilton Alexandria Mark Center in Alexandria, Virginia. From infrastructure hacking to incident response, there’s a course for hackers and security pros of all experience levels, so register today.

Article source: https://www.darkreading.com/black-hat/learn-the-latest-hacking-techniques-at-black-hat-trainings-virginia/d/d-id/1334897?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

End User Lockdown: Dark Reading Caption Contest Winners

Phishing, cybersecurity training, biometrics and casual Fridays. And the winners are …

Dark Reading reader Nick Walker (aka ntwalk) earns the top honors and a $25 Amazon gift card for his password-cracking play on words in the winning caption.

When not penning cartoon captions, Nick, of Lowell, Arkansas, works as a security analyst at J.B. Hunt Transport Services, Inc.

Coming in as a close second to win a $10 Amazon gift card is Mark Bartel, a software engineer from The Dalles, Oregon, with the clever, “I told you we needed to pass that cybersecurity training class.”

Many thanks to everyone who entered the contest and to our loyal readers who cheered the contestants on. Also a shout out to our judges, John Klossner and the Dark Reading editorial team: Tim Wilson, Kelly Jackson Higgins, Sara Peters, Kelly Sheridan, Curtis Franklin, Jim Donahue, Gayle Kesten, and yours truly.

If you haven’t had a chance to read all the entries, be sure to check them out today.



Article source: https://www.darkreading.com/application-security/end-user-lockdown-dark-reading-caption-contest-winners/a/d-id/1334881?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The FBI is sitting on more than 641m photos of people’s faces

Toss the FBI’s massive facial recognition (FR) databases into the wash, add recommendations about privacy laws that the government watchdog GAO (Government Accountability Office) laid out back in 2016, and set the dial to three years later.

Do you get a shrunken collection of people’s faces at the end of the spin cycle?

No, you get a puffed-up database of hundreds of millions of FR photos that the FBI can get at without warrants or reasonable suspicion of wrongdoing.

On Tuesday, the GAO said that the FBI’s FR office can now search databases containing more than 641 million photos, including 21 state databases.

That’s up quite a bit from the 412 million images the FBI’s Face Services unit had access to at the time of the May 2016 GAO report – a massive collection of databases that a House oversight committee seethed over in March 2017, calling for stricter regulation of the technology at a time when it’s exploding, both in the hands of law enforcement and in business.

Seethe away, Congress. Make as many recommendations as you like, GAO. Three years later, the FBI has addressed only one of the GAO’s recommendations, it said on Tuesday.

The GAO noted one example of its FR advice being ignored, concerning the accuracy of face database searches:

While the FBI has conducted audits to oversee the use of its face recognition capabilities, it still hasn’t taken steps to determine whether state database searches are accurate enough to support law enforcement investigations.

What the FBI’s done and not done about…

Privacy

In 2016, the GAO recommended that the Justice Department (DOJ) develop its privacy documentation, including privacy impact assessments (PIA), which analyze how personal information is collected, stored, shared, and managed in federal systems, and system of records notices, which inform the public about, among other things, the existence of the systems and the types of data collected. The DOJ has taken some action, but there’s still work to be done.

Also, in 2016, the GAO recommended that the FBI conduct audits to make sure that its FR users are conducting face image searches in accordance with those DOJ policies. That’s the one thing that the FBI has accomplished. The GAO thinks its recommendations are still valid and, if implemented, would lead to more transparency about how people’s personal information is being collected, used and protected.

Accuracy (and lack thereof)

This is a big one, and it’s one where the FBI hasn’t done much, the GAO found.

The use of FR by surveillance-happy governmental and law enforcement agencies has been of increasing concern in large part due to its inaccuracy. Last month, the technology-forward but still civil-rights-sensitive city of San Francisco banned the use of facial recognition by police and city agencies, citing inaccuracy as one of multiple reasons.

There’s been plenty of evidence that FR is prone to misidentification. For example, when the American Civil Liberties Union (ACLU) last year tested its use by police in Orlando, Florida, it found that FR falsely matched 28 members of Congress with mugshots.

Another example: After two years of pathetic failure rates when they used it at Notting Hill Carnival, London’s Metropolitan Police finally threw in the towel on FR last year. In 2017, the “top-of-the-line” automated facial recognition (AFR) system they’d been trialling for two years couldn’t even tell the difference between a young woman and a balding man.

Then there’s the oft-cited study from Georgetown University’s Center for Privacy and Technology which found that AFR is an inherently racist technology.

In another study, published earlier this year by the MIT Media Lab, researchers confirmed that the popular FR technology they tested has gender and racial biases.

The FBI is still doing little to ensure the accuracy of its FR, the GAO found. For example, it hasn’t assessed the accuracy of systems operated by external partners, such as state or federal agencies. Nor has it been conducting annual reviews to determine if the accuracy of FR searches is meeting user needs.

Why the GAO did this study (again)

It’s not that FR accuracy hasn’t gotten better over the past few decades, helping law enforcement to identify criminals. One notable example is the case of Charles Hollin, a child molester caught in January 2017 after spending 18 years as a fugitive, thanks to FR and the State Department’s database of passport photos.

But while it’s gotten better, questions remain about the technology’s accuracy and the protection of privacy and civil liberties when it’s used to identify people for investigations, the GAO said.

Chairman Elijah Cummings – one of the lawmakers who scathingly took the FBI to task over this issue in 2017 – said in his opening statement at the House Committee on Oversight and Reform on Tuesday:

This technology is evolving extremely rapidly without any real safeguards. There are real concerns about the risks this technology poses to our civil rights and liberties and our right to privacy.

Gretta Goodwin, director of homeland security and justice for the GAO, said that unless the FBI can determine how accurate the data is, there’s no way to know how much use the technology is:

Until FBI officials can assure themselves that the data they receive from external partners are reasonably accurate and reliable, it is unclear whether such agreements are beneficial to the FBI.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8pK9V6rFv-M/

Someone slipped a vuln into crypto-wallets via an NPM package. Then someone else siphoned off $13m in coins to protect them from thieves

Blockchain biz Komodo this week said it had used a vulnerability discovered by JavaScript package biz NPM to take control of some older Agama cryptocurrency wallets to prevent hackers from doing the same.

The digital currency startup said it had socked away 8 million KMD (Komodo) and 96 BTC (Bitcoin) tokens – worth almost $13m – from the wallets, and stashed them in two digital wallets under its control, where the assets await reclamation by their owners.

Komodo has outlined which Agama wallets are affected on its support page, and said it intends to provide details about the vulnerability and a postmortem once it has done what’s necessary to secure customer funds.

It received word of the exploitable security weakness from NPM, which detected the vulnerability through its source code scanning system. Over the past few years, the JavaScript custodian has been beefing up its security vulnerability detection capabilities following several security incidents.

In a blog post about how the vulnerability ended up in the Agama source code, NPM said the situation fit a pattern that has become common: publishing a useful package – in this case, electron-native-notify – and waiting until it gets integrated into a target application and then updating it with malicious code to steal information or worse.

“This attack focused on getting a malicious package into the build chain for Agama and stealing the wallet seeds and other login passphrases used within the application,” explained Adam Baldwin, VP of security at NPM.

Baldwin said the vandalism originated with a commit by GitHub user sawlysawly on March 8 that added electron-native-notify ^1.1.5 as a dependency in EasyDEX-GUI, which is used in Agama. On March 23, electron-native-notify was updated to version 1.1.6 with malicious code.

Agama v0.35, with the compromised code, was released on April 13 and three days later, electron-native-notify was updated to 1.2.0 and sawlysawly thereafter revised Agama’s dependencies to require that version of the library.
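Besides running npm audit (which NPM’s Baldwin recommends further down), one practical defence once an incident like this is disclosed is to check lockfiles for the specific versions named as bad. The sketch below is a hypothetical Python helper, not an NPM or Komodo tool, and the deny list is illustrative.

    # Hypothetical helper (not an NPM or Komodo tool): scan a package-lock.json for
    # dependency versions known to be bad, such as the trojanised
    # electron-native-notify release described above. Walks the classic
    # "dependencies" tree used by lockfileVersion 1 and 2.
    import json

    DENY_LIST = {
        "electron-native-notify": {"1.1.6"},  # release reported as carrying malicious code
    }

    def find_bad_pins(lockfile_path):
        with open(lockfile_path) as fh:
            lock = json.load(fh)
        hits = []

        def walk(deps, path=""):
            for name, info in (deps or {}).items():
                version = info.get("version")
                if version in DENY_LIST.get(name, set()):
                    hits.append((path + name, version))
                walk(info.get("dependencies"), path + name + " > ")

        walk(lock.get("dependencies"))
        return hits

    for name, version in find_bad_pins("package-lock.json"):
        print("Known-bad dependency pinned: %s@%s" % (name, version))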


The incident recalls a similar attack last year on the event-stream module, which saw one of its dependencies altered to steal Bitcoin.

A research paper [PDF] published in February, “Small World with High Risks: A Study of Security Threats in the npm Ecosystem,” found that “installing an average npm package introduces an implicit trust on 79 third-party packages and 39 maintainers” and that up to 40 per cent of the registry’s 800,000 packages include at least one publicly known vulnerability.

Compromising just one popular npm package, the paper says, can affect as many as 100,000 more packages.

Baldwin reassured users of npm that the npm audit command will identify known malicious packages in code projects.

NPM’s flaw-finding service will also notify users of packages with vulnerabilities. The subcommand npm audit fix may replace a vulnerable module with a patched version, if available. But manual review may be necessary, and there may not be a fix available for insecure modules. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/07/komodo_npm_wallets/

There’s a reason why my cat doesn’t need two-factor authentication

Something for the Weekend, Sir?

Access denied. Enter Access Code.

That’s a good start. Just a few moments ago I was handed a card on which is written, in blue ballpoint, a newly compiled string of alphanumerics that is supposed to identify me as a unique user. Oh well, maybe I fumbled the buttons. Let’s try again.

Access denied. Enter Access Code.

I am standing in the driving rain – this is London in the summer – in front of a large electronically operated vehicle barrier that keeps the riff-raff from getting anywhere near the car park and loading bay behind the building where I am to be working this week.

The vertical stainless steel keypad into which I am pushing my access code is weather-resistant. I am not. You’d think they could have installed the keypad at car-window level but no, it’s at lorry level. And it’s not on the driver’s side anyway, so anyone not rolling up in an unmodified US or continental import vehicle is forced to exit and walk over to the access terminal.

Access denied. Enter Access Code.

As far as it is concerned, I am riff-raff. I look behind me to see a steel-grey car has pulled up behind mine. Steel-grey = bland, unimaginative, company car, must be management. As I trudge back towards the street entrance around the corner to ask the security desk for an alternative access code, remembering this time to express an explicit preference for one that actually provides access, I notice the driver in the grey car has started to harrumph.

Security systems like this exist to protect me and my possessions, whether physical or electronic. They keep out the nasties and foil the mischievous. They allow access to the honest and prevent it to the unauthorised.

They are a pain in the arse.

Security is essential, of course, but only for other people. Not me. I’m the nice guy here and this sodding keypad is stopping me from getting in.

But then security authentication is one of those functions whose philosophical concept is hampered by self-contradictory details of its own design. To pick a topical example, it is the right of European Union citizens to enjoy free movement between EU countries without being stopped by border controls. However, how can the border controls know whether you are an EU citizen or not unless they stop you to ask for your EU identification? So it’s only by presenting your passport or ID card that you can exercise your right not to have to present your passport or ID card.

The forces of law and order, from police to night club bouncers, face the same recursive logic. Why do they insist on frisking me? Why can’t they concentrate their stop and search efforts only on those who are carrying concealed weapons?

As they say, there is a fine balancing act between adequate security and easy user experience. My cat has it easy: he was chipped at the rescue centre when we acquired him, and now he just wanders in and out of the house via a cat-flap that unlocks only when it detects his unique code.

The system also allows my cat to entertain himself by sitting indoors, looking through the clear plastic flap and waiting for other cats to come near. When they do, he leans forward so that the electronic detector unclicks the flap, daring the other cat to enter, then chuckles to himself as the potential intruder bashes its head on the door just as it locks itself again automatically.

Mind you, any electronic system has its failings. In the case of the cat-flap, it’s the need to change the batteries. They always seem to run out at 3am on the morning that we’re setting off on holiday and I end up having to race around the neighbourhood hunting for all-night petrol stations that can sell me eight AAs.

Batteries aside, what makes it so consistently reliable for my cat, and only my cat, to come and go without interference is partly the system’s ease of use: his ID is surgically inserted in the scruff of his neck. This kind of tech isn’t exclusive to feline operatives. Employees working in security-critical environments have been known to get chipped in the fleshy bit between thumb and forefinger, allowing them to open electronically locked doors by gesturing an Air Wank.

I did say “partly”. The challenge with digital security systems is that they are fluid and programmable, therefore re-programmable or liable to interference by unwanted external forces. The only reason it works brilliantly for my cat is that the other cats in my neighbourhood don’t have any programming skills. This isn’t the case for humans. For us, whatever security system you roll out has to be protected by additional levels of alternative security, and so the ease-of-use aspect quickly evaporates.

One method that is slowly gaining momentum is ground-level invisibility. If you don’t want social media giants to slurp and misuse your personal data, don’t give them any to start with. For many of us, it’s a bit late to wipe clean our muddy online footprints without expert help but, to mix a clothing metaphor, the sooner you zip up the better.

To my mind, like the first rule of Fight Club, anyone who blogs about IT security is stumbling at the first hurdle. It’s another of those contradictions in data security culture that talking about security in public is likely to make yourself a target and therefore less secure, and you can’t blame the rest of us for questioning your expertise and motives. It’s a bit like horoscope writers who consistently fail to win the Lottery or get-rich-quick life coaches who still aren’t rich enough to stop being a get-rich-quick life coach.

Returning to my car with the time-honoured advice “Try it again now” still ringing in my ears as rainwater dribbles down my neck, I see several more cars are queueing behind the grey one, waiting for mine to make way at the front. It is a harrumphing convention but nobody risks stepping out into the rain to assist. Righty, let’s give it a go.

Access denied. Alarm On.

Ooh, that’s a new one. Perhaps I’m getting somewhere. One more try?

Access denied. Commencing Lockdown.

A pair of amber lights illuminate and begin swirling dramatically through the driving rain. A rolling steel shutter shuts off the entrance with a metallic scream. It’s like I’m inside a Ridley Scott movie.

Enter 2FA Code. Press ? For Help.

I oblige and spend the next 10 minutes reading instructions on a 13-character LCD strip above the keypad on how to register myself online as a new user at a website that requires me to override a security warning just to see it, only to discover that I must update Google Authenticator before being asked to point my phone’s camera at the QR code that is now showing on my phone’s display.

The rainstorm intensifies but, hey, look on the bright side: I can no longer hear the harrumphing. It is being drowned out by the honking of car horns.

Oh to be a cat.


Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He would like to apologise to readers who may recently have lost a loved one in a freak car park barrier accident. He also apologises for failing to warn readers that this week’s column features some strong language and flashing images. @alidabbs

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/07/theres_a_reason_why_my_cat_doesnt_need_twofactor_authentication/

You. Quest and LabCorp. Explain these medical database super-hacks, say US senators as 425,000 more people hit

As healthcare companies come forward to confirm hackers would have been able to access millions of patients’ personal information from a compromised American Medical Collections Agency (AMCA) database, US senators are demanding answers.

Quest Diagnostics was yesterday on the receiving end of an open letter (PDF) issued by Senators Robert Menendez and Cory Booker (both Democrats from New Jersey) seeking some basic information on the blood-testing outfit’s security practices and how it plans to handle the massive security fail by its business partner AMCA. Records of nearly 12 million of Quest’s customers, stored in an AMCA-hosted data silo, were accessible to hackers for nearly eight months, it emerged this week.

“As the nation’s largest blood-testing provider, this data breach places the information of millions of patients at risk,” the senators’ letter reads. “The months-long leak leaves the sensitive personal information vulnerable in the hands of criminal enterprises.”

This comes after Quest told the SEC it was informed by AMCA, the debt collection company hired to extract payments from Quest customers, that its databases of patients had been broken into by hackers. The AMCA-hosted Quest database, which was under the control of one or more intruders from August 1, 2018 to March 30, 2019, contained approximately 11.9 million customer records from Quest.


While the New Jersey congressional duo note that it was AMCA that was hacked, they still want Quest to explain when and how it learned of the incident and what it plans to do about notifying customers and protecting their data from further misuse.

Additionally, the senators are curious as to how the hack went unnoticed by both Quest and AMCA for eight months, and whether Quest performed any tests or audits on the security of both its internal records and the data it entrusted to outside partners.

The letter gives Quest execs until June 14 to respond. The company did not return a request for comment on the matter.

Similarly, the senators also wrote [PDF] to LabCorp this week, demanding answers. LabCorp had 7.7 million patient records stored in a hacked AMCA database, and is almost certain that 200,000 of those entries contained credit card or bank account info that was siphoned off by the intruders.

Meanwhile, add one more medical testing company to the ranks of those hung out to dry by AMCA. OPKO Health, a test and diagnostics firm headquartered in Florida, told the SEC that it too had data stored on compromised AMCA systems. Specifically, records of 422,600 people that included patients’ names, dates of birth, addresses, phone numbers, dates of service, and balance information.

Of those 422,600 patients exposed to the hackers, 6,600 had credit card or bank account information included in their file, and will be offered two years of credit and identity theft monitoring service free of charge. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/06/congress_amca_leak_quest_labcorp/