Brave 1.0 launches, extends ad-watching payouts to iOS

Nearly four years after the Brave browser inserted its we-will-pay-for-your-attention pitch into the adblockers vs. publishers war, it’s finally showtime.

On Wednesday, Brave announced that the browser that, to quote,

Ends Surveillance Capitalism

…is now out of beta and ready for general consumption. The beta version has already drawn 8.7 million monthly users, but now, the full, stable release is available for Windows, macOS, Linux, Android, and iOS.

The browser, based on Chromium – the open-source version of Google’s Chrome browser – promises these kinds of protections, some of which have motivated cautious users to resort to adblockers:

  • Privacy from ads that track us across the web.
  • Speed: what Brave says is a 3-6x faster browsing experience.
  • Security from malvertising, when ad networks deliver malware.
  • Cash: in exchange for agreeing to be shown ads vetted to weed out the annoying and the malware-risky, users will be paid in Brave’s own virtual ad currency, called the Basic Attention Token (BAT)… unless they opt to contribute the little dabs of cryptocurrency back to the publishers, or to their favorite content creators.

Brave started displaying vetted ads in April 2019. The so-called Brave Ads digital advertising model offered a new way to work out the economics of the current web, which pits publishers, who need revenue and get it through ads, against web visitors, who need their privacy, security and sanity.

The Brave new model: users agree to see the vetted ads, and they get BAT.

Specifically, Brave users who agree to see ads get 70% of the gross ad revenue, paid out with BAT, while Brave keeps the rest. At the end of the Brave Rewards monthly cycle, users can claim the accumulated tokens and either donate them to their most visited sites or cash out.

You have to turn on Brave Rewards if you want to pocket the BAT. The default setting puts the money into the pockets of publishers.
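For a rough sense of the arithmetic, here is a back-of-the-envelope sketch in Python, using the 70/30 split described above and the roughly $5-a-month user estimate quoted later in this article; the numbers are illustrative, not Brave’s accounting:

```python
# Back-of-the-envelope Brave Rewards arithmetic (illustrative only).
USER_SHARE = 0.70  # the user's cut of gross ad revenue, per Brave

def monthly_split(user_payout_usd: float) -> dict:
    """Work back from a user's monthly payout (in USD terms) to the
    implied gross ad revenue and Brave's cut."""
    gross = user_payout_usd / USER_SHARE
    return {
        "gross_ad_revenue": round(gross, 2),
        "user_share": round(user_payout_usd, 2),
        "brave_share": round(gross - user_payout_usd, 2),
    }

# A ~$5/month payout implies roughly $7.14 gross, of which Brave
# keeps about $2.14.
print(monthly_split(5.00))
```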

From ‘don’t touch our ads!’ to ‘sign me up’

The publishing world’s initial response to Brave’s plan: Hell, no. Lawyers from the likes of The New York Times, Washington Post, The Wall Street Journal and other members of the Newspaper Association of America published a letter in April 2016, threatening legal action if Brave messed with their ads.

Over the past 3.5 years or so, Brave has managed to convince some big players: it’s signed up over 300,000 verified websites, including the initially “we will sue you” Washington Post, as well as The Guardian and Wikipedia, plus creators on YouTube, Twitch, Twitter, GitHub and more.

Most users will presumably choose to recompense those publishers, considering how little a user can make if they decide to pocket their BAT. Brave chief product officer David Temkin, talking to Wired, estimated that an average user could make about $5 worth of BAT each month.

Still, if you have a deep need for mini spatulas, you’ll be glad to hear that there’s now a way to actually monetize BAT. You can cash the tokens in yourself via Brave’s partner, the digital money platform Uphold, or, eventually, exchange them for gift cards and restaurant vouchers, according to The Verge.

Brave promises privacy…

Brave’s platform, built on the Ethereum blockchain, blocks trackers, invasive ads, and device fingerprinting by default. That means users can browse the web without their interests and browsing histories being snorted by trackers: advertisers can see users’ activity, but they can’t connect that activity to individuals.

The blockchain also ensures that BAT payments will be transparent and secure.

…But don’t they all?

It all sounds great, but Brave is far from the only browser that’s promising better privacy these days.

Firefox, for one, has rolled out a slew of tracker-flummoxing changes over the past year: blocking some trackers by default, new controls to make it easier for users to dodge ads, a project called Track THIS that makes out-of-sight ad-snooping visible to all, and a drop-down menu that shows the trackers detected on each site, among other things.

Safari added a similar tracker-blocking feature a few years ago and has since taken it further: by default, it blocks nearly all third-party trackers on sites you don’t visit frequently, rather than just known trackers collected on a blacklist.

Microsoft is still experimenting with preventing tracking in its Edge browser – a feature expected by 15 January 2020.

In May 2019, Google announced new privacy tools to limit how much advertisers can track us online. Those tools aren’t out yet, but Google has said that it’s working on changing how certain classifications of cookies get blocked by default in Chrome – a change that it expects to deliver by February 2020.

The difference between all these privacy protections and Brave? Brave promises to block third-party ads, trackers and autoplay videos automatically, by default, no tinkering with settings required (though it’s an option).

Now that it’s out of beta and into 1.0, we’ll be able to see how much that matters to users. If you’re using the browser now, please do tell us how you like it and whether you think it measures up to its promise.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oVgTjmFNurY/

Data thieves blew cover after maxing out victim’s hard drive

An anonymous cybercriminal (or perhaps a gang) over-pilfered from a victim’s filesystem until a “disk full” alert blew the whistle on their massive data-stealing operation.

The Federal Trade Commission (FTC) has reached a settlement with InfoTrax, a Utah-based company that provides business operations software for multi-level marketers, after thieves stole a million sensitive customer records from its servers in 2016. The only reason it spotted the theft was that the crook filled up the hard drive of one of its servers while collecting the information, said the FTC in its complaint.

InfoTrax held data on almost 12 million consumers in September 2016, according to an FTC complaint which detailed what it called “unreasonable data security practices”.

The company didn’t delete consumer information held in its databases when it was no longer necessary, and didn’t audit the security of its software or network, the Commission said. Neither did it segment its network to stop attackers moving laterally through it. Perhaps the most damning allegation was that the company stored social security numbers (SSNs), full payment card information, bank account data and login credentials unencrypted.

These loopholes enabled an attacker to break into the company’s network back in May 2014 and insert a malware back door. Over the next two years, this hole let them view, download, and delete files on the company’s servers, and upload more software at will; the attacker accessed the network 17 times before harvesting the lion’s share of the company’s sensitive data.

On 2 March 2016, they stole a million people’s private data, including names, addresses, email addresses, telephone numbers, and SSNs. The FTC added that one of the compromised databases was a legacy system containing data that the company didn’t even know about.

The thing that finally alerted InfoTrax to the two-year problem was that the hacker’s haul outgrew the disk it was being staged on, explained the FTC complaint:

The only reason Respondents received any alerts is because an intruder had created a data archive file that had grown so large that the disk ran out of space. 

The incident response was, shall we say, leisurely. As the complaint puts it, when InfoTrax finally discovered the presence of the intruder(s):

Only then did Respondents begin to take steps to remove the intruder from InfoTrax’s network. 

While those steps were being taken, more data was being pilfered: On 14 March 2016, the attacker hit the company through a website portal for its distributors. On 29 March they uploaded more malicious code via an InfoTrax client’s web portal, collecting fresh data that included “newly submitted full names, payment card numbers, expiration dates, and CVVs.” 

InfoTrax agreed to settle the case with the FTC. The settlement, with the company and its founder Mark Rawlins, forces InfoTrax to create an information security program with cybersecurity safeguards including network segmentation, detection of unknown file uploads, an intrusion prevention system, and data encryption. The company must also undergo penetration testing and software code review, the settlement added, and must get regular security audits from third-party providers.
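The “detection of unknown file uploads” requirement is telling, given that a full disk was the only alarm that ever fired. Even a crude watchdog along the lines of the following sketch – a hypothetical Python example, not anything the FTC mandated verbatim – would have flagged the intruder’s growing archive long before the drive filled:

```python
# Minimal disk-usage watchdog: a hypothetical sketch of the sort of
# baseline monitoring InfoTrax lacked. Run it periodically, e.g. from cron.
import shutil
import smtplib
from email.message import EmailMessage

THRESHOLD = 0.80            # alert once a volume passes 80% full
PATHS = ["/", "/srv/data"]  # example volumes to watch

def fraction_used(path: str) -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def alert(path: str, frac: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Disk usage warning: {path} at {frac:.0%}"
    msg["From"] = "watchdog@example.com"  # placeholder addresses
    msg["To"] = "secops@example.com"
    msg.set_content(f"{path} is {frac:.0%} full; check for large new files.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

for path in PATHS:
    frac = fraction_used(path)
    if frac > THRESHOLD:
        alert(path, frac)
```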

InfoTrax posted a public statement saying that it had already put in place many of the FTC’s mandated steps, adding:

We deeply regret that this security incident happened. Information security is critical and integral to our operations, and our clients’ and customers’ security and privacy is our top priority.

The settlement agreement doesn’t impose any monetary penalties on the company. Commissioners passed it unanimously.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/W6A0rfdd15M/

How the Linux kernel balances the risks of public bug disclosure

Last month a serious Linux Wi-Fi flaw (CVE-2019-17666) was uncovered that could have enabled an attacker to take over a Linux device using its Wi-Fi interface. At the time it was disclosed, Naked Security decided to wait until a patch was available before writing about it.

Well, it’s been patched, but the journey from discovery to patch provides some insights into how the Linux open-source project (the world’s largest collaborative software development effort) manages bug fixes and the risks of disclosure.

The Linux community worked hard last month to patch a bug in one of the operating system’s wireless drivers. The bug lay in RTLWIFI, a driver used to run Wi-Fi chips produced by processor manufacturer Realtek.

To be vulnerable to the bug, a device would have to include a Realtek Wi-Fi chip. These processors can be found in everything from Wi-Fi access points and routers through to some laptop devices, explained the person who found it, GitHub’s principal security researcher Nicolas Waisman.

If a device does contain this chip, the consequences could be serious, he told Naked Security at the time:

You could potentially obtain remote code execution as an attacker.

An attacker in radio range of the device could send a packet using a Wi-Fi feature called Wi-Fi Direct, which enables devices to talk to each other directly via radio without using a central access point. The attacker could add information to the packet that would trigger a buffer overflow in the Linux kernel.
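At root this is a classic missing bounds check: a length field under the attacker’s control governs a copy into a fixed-size kernel buffer. Here is a hedged illustration of the pattern in Python – the real flaw lives in C inside the RTLWIFI driver, and the function names and buffer size below are invented:

```python
# Illustration of the bug class behind CVE-2019-17666, NOT the actual
# RTLWIFI code (which is C). Names and sizes are invented.
BUF_SIZE = 128  # fixed-size destination buffer

def parse_ie_unsafe(packet: bytes) -> bytearray:
    buf = bytearray(BUF_SIZE)
    length = packet[1]  # attacker-controlled length byte
    # BUG: length is never checked against BUF_SIZE. In C, the
    # equivalent memcpy() writes past the end of the buffer into
    # adjacent kernel memory. (Python's bytearray just grows, which
    # is why this can only illustrate the logic, not the corruption.)
    buf[0:length] = packet[2:2 + length]
    return buf

def parse_ie_safe(packet: bytes) -> bytearray:
    buf = bytearray(BUF_SIZE)
    # FIX: clamp the copy to the destination's capacity, broadly what
    # the kernel patch does.
    length = min(packet[1], BUF_SIZE)
    buf[0:length] = packet[2:2 + length]
    return buf
```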

Given that Realtek chips turn up in all kinds of equipment including routers and laptops, the bug seemed like a pretty big deal. It’s also an old one – it’s been in the Linux codebase since 2013.

Waisman – a security researcher of note with a good reputation and a responsible outlook – revealed the bug before the Linux team had fixed it, which had us scratching our heads and wondering why he’d do that.

On Tuesday 15 October 2019 he notified the group via security@kernel.org, a private email list, and released the bug to the public via Twitter two days later, on 17 October. It would be almost three weeks before the security team’s patch appeared in v5.3.9 of the Linux kernel on 6 November.

The answer: after he submitted the bug report to the private security@kernel.org mailing list, the Linux security team, as per standard practice, sent the code for a proposed patch to the publicly viewable Linux Kernel and Linux Wireless mailing lists. Another public patch proposal followed two days later.

Waisman took the exposure of this code as a form of public disclosure. He told us:

As soon as it hits that mailing list, it means that everyone that is monitoring that mailing list knows about the vulnerability.

Waisman hadn’t produced exploit code for this bug. However, someone else might, he said. He worried that someone could reverse-engineer a zero-day exploit by watching patch proposals like the ones flowing over the public mailing list.

By announcing it on Twitter, he was following what he believed were appropriate responsible disclosure guidelines, enabling a greater number of people to take avoidance measures (basically turning off their Wi-Fi functionality) until a patch became available. A report by Mitre (which maintains the CVE database) appeared on the same day with a link to the public mailing list, and the company assigned it a CVE (CVE-2019-17666).

Was he right to worry? Does the community see the publication of patch proposals before their inclusion in the mainstream kernel branch as a security risk? There is a level of risk involved, admit senior organizers on the Linux kernel team.

Greg Kroah-Hartman, a Fellow at the Linux Foundation responsible for the Linux kernel stable releases, told us that the community does have procedures in place to keep bug discussions under wraps until a patch is ready:

For some issues, yes, it is good to do the work originally on the issue on the security@kernel.org list and then when it is ready, publish it and merge it like normal. That happens all the time.

So, if a bug is big and ugly enough, the team will keep the discussion on the private list.

There are also extra measures that the Linux kernel team can take to shield discussions of very serious bugs. There’s a private mailing list (described here) for communicating with individual Linux distribution vendors, giving them time to prepare kernel patches in advance of public disclosure.

Eventually, though, the code for a patch will have to make it onto the public repositories that house the source code for the Linux kernel. Making its way through that forest of different patches-in-waiting is a complex process. Kroah-Hartman:

There is no way to “hide” our work or patches as everything we do is released publicly in our trees. So yes, if you really wanted to see what is ‘broken’ in Linux, you ‘just’ watch everything that is merged into the kernel tree as it has hundreds of fixes for problems every week.

Linux has a mainline repository, maintained by the operating system’s creator Linus Torvalds. Before a patch makes it into that repo, though, it typically has to go through one of many subtrees, maintained by ‘lieutenants’, addressing different subsystems like networking and memory management.

The maintainers do their own quality control, accepting or rejecting patches for their own trees. Then they wait for an appropriate ‘merge window’, collect a handful of the patches that have made it into their tree, and offer them to Torvalds for possible inclusion into the mainline. Acceptance is at his discretion.

The mainline spits out a new ‘minor’ version of the Linux kernel every eight weeks or so, and at the time of writing, the version that contains a fix for CVE-2019-17666 – version 5.4 – is still a week or two from being released.

However, when the security fix was accepted into the mainline on 23 October 2019 it received one of the green lights it needed to be back-ported to older versions of the kernel. An update to version 5.3 of the kernel, the current stable version, appeared on 6 November 2019 in the form of 5.3.9.

As Kroah-Hartman implies, watching all the trees to find patches for juicy-looking bugs would be a difficult job, but it doesn’t sound impossible for a well-resourced adversary, especially thanks to the use of something called ‘-next’ trees, which collect likely patches for the next mainline merge window.

Turning those bug fixes into workable, usable exploits would add considerably to the adversary’s workload too, but it leaves them with a difficult job rather than an impossible one.

So, this bug became public knowledge before a patch was available because the person who found it disagreed with exposure decisions made by the folks who maintain the kernel. Everyone had the best interests of the community at heart, and now there’s a patch, which will percolate to most users via the numerous kernel distributions.

Of course, it’s one thing for a patch to be available and quite another for users to actually apply it. So, for fans of the wlan0 interface the usual advice applies: patch early, patch often.
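If you want a quick look at whether your machine is already running a kernel that carries the fix, a rough Python check like the one below works, with a big caveat: distributions routinely backport fixes without bumping the upstream version string, so treat a “vulnerable” answer as a prompt to check your distro’s changelog, not a verdict:

```python
# Rough check: is the running kernel at least 5.3.9, the first 5.3
# stable release carrying the CVE-2019-17666 fix? Caveat: distro
# kernels often backport fixes without changing this version string.
import platform
import re

FIXED = (5, 3, 9)

def kernel_version() -> tuple:
    # platform.release() returns e.g. "5.3.7-arch1-1-ARCH"
    m = re.match(r"(\d+)\.(\d+)\.?(\d*)", platform.release())
    if not m:
        raise ValueError(f"unparseable kernel release: {platform.release()}")
    major, minor, patch = m.groups()
    return (int(major), int(minor), int(patch or 0))

if kernel_version() >= FIXED:
    print(f"kernel {platform.release()}: fix should be present")
else:
    print(f"kernel {platform.release()}: check your distro for a backport")
```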

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/k8HeNuageMA/

How ransomware attacks

More than a decade after it first emerged, is the world any closer to stopping ransomware?

Judging from the growing toll of large organisations caught out by what has become the weapon of choice for so many criminals, it’s tempting to conclude not.

The problem for defenders, as documented in SophosLabs’ new report How Ransomware Attacks, is that although almost all ransomware uses the same trick – encrypting files or entire disks and extorting a ransom for their safe return – how it evades defences to reach data keeps evolving.

This means that a static analysis technique that stopped a strain of ransomware today may not stop an evolved counterpart in just a few weeks’ time. This creates a major challenge for organisations and security companies alike.

As the growing number of high-profile ransomware attacks reminds us, sugar-coating the issue would be deluded – ransomware has grown as an industry because it works for the people who use it, which means it beats victims’ defences often enough to deliver a significant revenue stream.

The report covers the operation of the most prominent ransomware examples in recent times in detail, including Ryuk, BitPaymer, MegaCortex, Dharma, SamSam, GandCrab, Matrix, WannaCry, LockerGoga, RobbinHood, and Sodinokibi.

Knowledge is defence

Defenders can, however, arm themselves with knowledge. In its report, SophosLabs teases apart and demystifies the common techniques used by ransomware, starting with its distribution mechanisms.

The first type is the cryptoworm, which sets out to replicate itself to as many machines as possible, as fast as possible, using known and sometimes unknown vulnerabilities to boost its effectiveness.

Cryptoworms are relatively rare – wormlike replication draws a lot of attention to itself – but when they work they are inclined to be spectacular, as with the global WannaCry attack of 2017.

A more targeted approach is the ‘automated active adversary’, a manual technique in which cybercriminals actively search for vulnerable organisations by scanning for network configuration weaknesses, such as poorly secured Remote Desktop Protocol (RDP), or for software vulnerabilities. Once behind the firewall, the ransomware is planted on as many servers as possible, locking defenders out of their own systems.

Most common of all is ransomware-as-a-service (RaaS), which essentially allows novice cybercriminals to build automated campaigns using third-party kits sold on the dark web. A good example of this is Sodinokibi (aka Sodin or REvil), a GandCrab derivative blamed for numerous attacks during 2019.

Once they have a foothold, attackers use a similar palette of tools to bypass what defences remain, including deploying stolen legitimate digital certificates to make their malware appear trustworthy.

Naturally, lateral movement is used to reach important servers and shares – as is privilege escalation, to gain the admin status necessary to do more damage.

Careful timing is also a common theme of successful attacks, says SophosLabs’ director of threat mitigation Mark Loman, who authored the report:

In some cases, the main body of the attack takes place at night when the IT team is at home asleep.

It sounds blindingly obvious, but it happens again and again. Attacking at night is one of the simplest ways to buy more time for ransomware (which takes time to perform all of its encryption) for no extra effort.
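That timing is itself a signal defenders can watch for. As a hedged sketch – the path and thresholds below are illustrative, not product advice – a simple watcher that counts recent file modifications and flags after-hours bursts would catch exactly this pattern:

```python
# Sketch: flag bursts of file modifications outside business hours.
# Path and thresholds are illustrative; real EDR tooling is far richer.
import os
import time
from datetime import datetime

WATCH_DIR = "/srv/shares"      # example directory tree to monitor
BURST_THRESHOLD = 200          # this many changes per sweep is suspicious
BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 local time

def changed_recently(root: str, window_s: int = 60) -> int:
    """Count files modified within the last window_s seconds."""
    cutoff = time.time() - window_s
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                    count += 1
            except OSError:
                continue  # file vanished mid-walk
    return count

while True:
    changes = changed_recently(WATCH_DIR)
    after_hours = datetime.now().hour not in BUSINESS_HOURS
    if changes >= BURST_THRESHOLD and after_hours:
        print(f"ALERT: {changes} files modified in the last minute, after hours")
    time.sleep(60)
```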

Stopping ransomware

Loman’s advice is a version of careful vigilance, starting with ensuring machines are patched against major vulnerabilities such as EternalBlue, which is still making a nuisance of itself two and a half years after it powered WannaCry.

It sounds simple enough – just apply the patches. But this must be done on all vulnerable systems because ransomware only needs one weak machine to gain a foothold.

That demands that defenders audit the software state of all systems too, something that not all admins bother with to the degree necessary to spot weak points in advance of attacks.

Second, enable multi-factor authentication everywhere possible. This is a reliable extra layer of security that attackers should find hard to get past if it has been properly configured.

At a bare minimum, not only keep backups but think about how they will be reinstated. Often, victims have backups but not the human resources, time or money to spend days or weeks putting things back as they were.

There are also integrated controls in operating systems such as Windows 10 – for example, Controlled Folder Access (CFA), introduced in 2017 to limit which applications can access certain data folders. Researchers have poked holes in it since then, so it’s not infallible, but it’s still worth deploying on endpoints.
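For admins who want to script CFA across a fleet, it can be toggled through Windows Defender’s PowerShell cmdlets. Below is a hedged sketch driving them from Python; Set-MpPreference and Add-MpPreference are standard Defender cmdlets, but the wrapper and the whitelisted application path are illustrative:

```python
# Sketch: enabling Windows 10 Controlled Folder Access via Defender's
# PowerShell cmdlets. Requires an elevated prompt; the application
# path below is a placeholder.
import subprocess

def powershell(cmd: str) -> None:
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)

# Turn CFA on ("AuditMode" instead of "Enabled" logs what it would block).
powershell("Set-MpPreference -EnableControlledFolderAccess Enabled")

# Whitelist a trusted app that legitimately edits protected folders.
powershell(
    'Add-MpPreference -ControlledFolderAccessAllowedApplications '
    '"C:\\Program Files\\ExampleApp\\example.exe"'
)
```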

For more advice read How Ransomware Attacks and check out Sophos’s End of Ransomware page.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PHgUHbe0Xgc/

DevSecOps: The Answer to the Cloud Security Skills Gap

There’s a skills and resources gap industrywide, but a DevSecOps approach can go a long way toward closing that gap.

Digital transformation is driving change in every corner of the security industry. Organizations are evolving and expanding into the cloud rapidly, but their security teams and legacy data centers are holding them back. A skills and resources gap exists industrywide, but implementing a DevSecOps approach is key to bridging that gap.

DevSecOps, an inherently agile and nimble approach to security, is well suited for a more cloud-enabled future. Bringing together formerly isolated teams under one function allows security to be built in throughout the development process – also known as security by design – so that security is more than an afterthought tacked on at the end.

But DevSecOps only works if organizations are willing to give their teams the resources and tools to successfully bridge those skill and resource gaps. Before you change your direction, you first need to address the issues with your current posture, understand the capacity of your current security teams, and find a guiding principle to drive your development. Here’s where to start.

New Solution, Same Old Mistakes
Faced with the evolving threats brought on by digital transformation, organizations need to be aware of their existing postures and shortcomings in protecting their data.

This means asking questions about how you can improve your security posture in the cloud and rethinking your best practices, finding the elements that are most germane to your organization. Instead of relying on the monolithic code that lived inside your data center, you have to architect your infrastructure around these principles from the beginning, with continuous monitoring built in.

Because of digital transformation, your security posture will change and your best practices will need to be tweaked. Having a DevOps team that questions the past and improves your security posture throughout this iterative transformation will pay off as your organization continues to scale.

Cloud-enabled security is meant to be iterative. It’s a foregone conclusion that you’re going to have incidents, but a DevSecOps team that is constantly iterating can catch them faster and fix issues as they come up. The security exists throughout the process, not just at the endpoints. This is the goal.

After solving this problem, you then need to make sure you have the right team for the job.

Mind the Skills Gap
I’ve talked with many organizations that tell me they’ve been doing DevOps for a long time, but that’s often not entirely true. Sure, they have people monitoring their data centers, making sure the lights on the boxes are still blinking, but they aren’t actually digitally transforming that team. As security moves into the cloud, that team is going to be responsible for rebuilding that infrastructure in the cloud, and if security isn’t a part of the conversations around this infrastructure, organizations are missing a huge opportunity.

When organizations decide they want to do DevSecOps, they turn to a team – be it development, operations, or security – and tell them they need to get on board with transforming, often without the proper skills, resources, or guidelines. You need to know your DevOps team’s comfort level with security, and with digital transformation generally. If, for example, they don’t know about serverless infrastructure beyond the obvious, then you’re in for trouble.

Expecting a team to learn exclusively on the fly is basing a strategy on hope, which is always doomed to fail. Instead, take your spare moments and offer your DevSecOps team opportunities to fill in their blind spots, whether with additional certifications or shadowing. It doesn’t have to be perfect, but every bit helps. This way, they are constantly getting better and modernizing, improving themselves as they improve your security infrastructure.

Find the North Star 
DevSecOps can only help solve issues if you have a guiding principle, or north star, for what you’re trying to accomplish. That means making sure you know how you’re implementing and improving your security posture in the cloud.

It’s important to factor in security as you go, and to embody visibility and contextual application of controls. For a long time, security teams pushed for access controls within the enterprise perimeter, but the transition to cloud computing is making that perimeter obsolete, clouding both their visibility and the effectiveness of controls they assumed would carry over to the cloud.

DevSecOps is all about consolidating teams to pool resources, better leverage skills, and realize unified goals and perspectives, which are key. And as I noted earlier, this doesn’t have to be perfect. Success with DevSecOps comes from being cloud-curious and learning what matters most to your organization. This is an opportunity to take a modernized security stack to the cloud alongside business innovation: a fresh start for security, promising increased visibility and contextual controls applied at scale, all while supporting the agility of the business – which makes for a well-defined transformation. The lessons learned are:

  • If you’re not transforming your security teams and capabilities, you’re falling behind.
  • If you’re resting on your existing security capabilities, you’re going blind.
  • Context is key, resistance to change is not innovative, and most of your traffic is traversing the Web.

You’re going to have bumps along the way, and you’ll have to iterate, but each iteration will leave your DevSecOps team better informed and better prepared for whatever the next threat presents.

Lamont Orange has more than 20 years of experience in the information security industry, having previously served as vice president of enterprise security for Charter Communications (now Spectrum) and as senior manager for the security and technology services practice at …

Article source: https://www.darkreading.com/cloud/devsecops-the-answer-to-the-cloud-security-skills-gap/a/d-id/1336311?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Attackers’ Costs Increasing as Businesses Focus on Security

Based on penetration tests and vulnerability assessments, attackers’ costs to compromise a company’s network increase significantly when security is continuously tested, a report finds.

Companies that focus on continuously testing their security through automated means and regular penetration testing roughly double the cost to attackers of finding exploitable vulnerabilities in their systems, according to data from security assessments and red-team engagements collected by crowdsourced security firm Synack.

The company found that the average number of times a red-team member had to probe an asset to find a vulnerability more than doubled (increasing by 112%) over the past two years. In addition, the average severity of the vulnerabilities found by red-team members has decreased to a Common Vulnerability Scoring System (CVSS) score of 5.95 in 2018, down from a CVSS score of 6.41 in 2016.

The findings suggest that companies that incorporate security into their development and operations are succeeding in hardening their systems, says Anne-Marie Chun Witt, a director of product marketing at Synack.

“You are seeing fewer vulnerabilities and/or taking longer to find them,” she says. “It is taking more effort to find them and they are having to find more complex stuff. So they [companies focused on security] can say they are increasing the costs for attackers.”

The data underscores that security efforts do result in measurable improvement in the security posture of companies that undertake them. Overall, companies that automated security testing — conducting it on essentially a continuous basis — had a 43% higher measure of security using Synack’s proprietary metric. 

Most companies — 63% — remediated vulnerabilities in less than three months. Among the laggards were e-commerce companies, retailers, and state and local government and education.

“Some industries deserve honorable mentions for their proactive approach to security through testing for vulnerabilities, remediating them, and making the adjustments necessary to instill long-term, cultural changes to improve security posture,” Synack stated in its report. “The results reflect that.”

The crowdsourced security firm is not the only one to note the impact security can have on hardening against compromises and breaches. Earlier this month, bug-bounty management provider HackerOne calculated – albeit self-servingly – that four large breaches, where a vulnerability was the known attack vector, could have been prevented by bounty payouts in the tens of thousands of dollars.

Pointing to the British Airways breach that cost the company $230 million in fines, the company noted that a JavaScript vulnerability led to the compromise.

“Attackers are believed to have gained access via a third-party JavaScript vulnerability, which, on the bug bounty market, carries a value between $5,000–$10,000,” the company stated.

Other research has shown the impact that security investment can have on the cost of cyberattacks. The annual “Cost of Cybercrime Study,” conducted by the Ponemon Institute and most recently sponsored by Accenture, found that four main technologies can help reduce the costs associated with breaches: security intelligence and threat sharing; automation, artificial intelligence and machine learning; advanced identity and access management; and cyber and user behavior analytics.

“The main driver for the rise in containment costs is the increasing complexity and sophistication of cyberattacks,” the report stated. “Another factor is the expansion of compliance and regulatory requirements.”

A significant portion of the Synack report promotes the company’s proprietary security metric – a single number that attempts to combine data on the theoretical cost to the attacker, the severity of vulnerabilities found by Synack’s penetration testing teams, and how efficiently the company remediates vulnerabilities.

The manufacturing and critical-infrastructure industry has the highest median attacker resistance score — 69 on a scale of 100 — but bucks the trend of continuous testing leading to higher scores. While seven of the nine industries highlighted in the report had higher scores from continuous testing, both the manufacturing and healthcare industries only conducted discrete, point-in-time testing.

The higher security posture of manufacturing and critical infrastructure is more likely due to the serious adversaries the industry faces, Synack stated. 

“The sector has had to adopt a more proactive approach to securing their infrastructure because the industry is a top target for attacks by governments and large entities or ‘state actors,'” according to the report. “In turn, they are more mature in their testing than other industries.”

While the technology industry is in the middle of the pack, the segment did have a much higher threshold of application security, resulting in a much higher average time to find a vulnerability, according to Synack. 

“The longer the time to find a vulnerability, the higher the cost to the attacker and the less attractive the target,” the company said in the report. “This is in line with other trends we’ve seen within the technology industry [and its] proactive approach to security.”

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/risk/attackers-costs-increasing-as-businesses-focus-on-security/d/d-id/1336376?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple fires employee after he texts customer’s pic to his own phone

She had this funny feeling.

So before she took her iPhone, with its cracked screen, to the Apple store for repair, she backed it up, and she went on a wiping spree. Apps with financial information or that linked to her bank account? Deleted. Social media apps? Gone.

(Wise!)

I didn’t want them going through them.

But the phone’s voluminous photos? Argh! No time. You can undoubtedly see where this is going, so let’s go there…

Her appointment had been moved up. The Valley Plaza Apple store in Bakersfield, California, was texting her, so she rushed over without deleting those photos, Gloria Elisa Fuentes said in a Facebook post earlier this month.

Fuentes placed her phone in the hand of one of the Apple store employees, and then, like one does, she waited. The employee “messed around” with it for quite a while, she said, but hey, that’s what phone store employees do:

I didn’t really pay any mind to it because I just figured he’s doing his job, looking into my insurance info or whatever.

Fuentes might have grown suspicious when the employee asked for her password. Twice. But she didn’t think anything of it. In the end, he told Fuentes she would have to take the phone to her phone company for a screen fix.

Deep scrolling until he got to that year-old photo

So Fuentes left the store. It was when she got home that she realized that the Apple employee had been doing quite the excavation into her photo roll. The tipoff: somebody had used her phone to send a message to an unsaved number.

In fact, the Apple store employee had apparently scrolled through her gallery and sent himself one of Fuentes’s “EXTREMELY PERSONAL” pictures: one that she took for her boyfriend and which had geolocation data, meaning that he also found out where she lives.

It would have been pretty hard for this to be a slip-up, given how old the photo was, Fuentes said:

THIS PICTURE WAS FROM ALMOST A YEAR AGO SO HE HAD TO HAVE SCROLLED UP FOR A WHILE TO GET TO THAT PICTURE being that I have over 5,000 pics in my phone!!!!

Fuentes went back to the store and confronted the guy, whose response was, Goodness gracious, he has no idea how that happened!

I could not express how disgusted I felt and how long I cried after I saw this!! I went back to the store and confronted him and he admits to me that this was his number but that “he doesn’t know how that pic got sent 🤬!!

Fuentes wondered: How many women – or underage girls, for that matter – has that now ex-employee done this to? If he’s done it to girls, he should be very worried: possession of explicit photos of minors is a felony in some states.

Apple has responded to media inquiries with a statement saying that the clerk’s actions were way out of bounds. Hence, he’s way out of a job:

We are grateful to the customer for bringing this deeply concerning situation to our attention. Apple immediately launched an internal investigation and determined that the employee acted far outside the strict privacy guidelines to which we hold all Apple employees. He is no longer associated with our company.

Case closed? Not exactly. According to a local publication, BakersfieldNow, the Bakersfield Police Department said that as of Friday, there was an open, active investigation, meaning that charges might be coming.

How can you protect your photos?

There are ways to hide your photos, and then, there are problems with those ways. On an iPhone, you can launch the Photos app, select the images to hide, tap the Share icon, then choose Hide.

The problem is that those photos aren’t going anywhere. They stay right on your phone, moved off your main album and into an album titled…(heads-up for the world’s most non-cunning game of hide-and-seek)… “Hidden.” Which is available by scrolling down in the photo app’s main screen until you hit “Other Albums.”

Not! Subtle! Anybody who knows where to look can find them, and one imagines that includes people who sell and fix phones.

Business Insider recently mentioned a bunch of third-party apps that can supposedly help, letting you move photos into a locked album that you access with a PIN.

I wish I could tell you whether you can trust those apps. I can’t. What I can tell you is that way back in 2012, we took a look at a bunch of these apps. Many of them stuck photos in a poorly hidden directory from which the photos could be viewed and shared, while others glued some extra characters after the file extension in a flimsy attempt to disguise the images: all very easy to overcome with just a file browser.

Some actually used encryption, but that came at a price, whether in terms of “you’ve used your freebies – now pay up!”, in terms of forcing you to look at ads (which transmitted phone location and identification data, unencrypted), or in terms of having to trust some third-party app enough to hand over a lot of phone permissions that didn’t strike us as all that necessary for hiding photos, such as the ability to dial numbers and view/edit your browser history.

Granted, that was a while ago. Maybe there are marvelous apps that hide your photos nowadays by using a bit more sophistication than the ones we looked at previously. If you’ve used one that you trust, please do tell us which one.

While you’re at it, please also tell us if it passes this litmus test: do you trust it enough to hand over your phone to a phone store employee without first deleting your nudes? If so, WHY?
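If the answer is no, there is always the DIY route, which avoids trusting a hide-my-photos app at all: pull the photos onto a computer and encrypt them yourself before the repair visit. Here is a minimal sketch using the widely used cryptography package; the paths are placeholders, and the key file should live anywhere but the phone:

```python
# Sketch: encrypt photos with a symmetric key before handing a device
# over for repair. Uses the third-party "cryptography" package
# (pip install cryptography). Paths are placeholders.
from pathlib import Path
from cryptography.fernet import Fernet

PHOTO_DIR = Path("photos_to_protect")
KEY_FILE = Path("photos.key")  # store this somewhere else entirely

def encrypt_all() -> None:
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    f = Fernet(key)
    for photo in PHOTO_DIR.glob("*.jpg"):
        photo.with_name(photo.name + ".enc").write_bytes(
            f.encrypt(photo.read_bytes())
        )
        photo.unlink()  # remove the plaintext original

def decrypt_all() -> None:
    f = Fernet(KEY_FILE.read_bytes())
    for blob in PHOTO_DIR.glob("*.jpg.enc"):
        blob.with_suffix("").write_bytes(f.decrypt(blob.read_bytes()))
        blob.unlink()

if __name__ == "__main__":
    encrypt_all()
```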

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pJ0UYPjeVG8/

Try as they might, ransomware crooks can’t hide their tells when playing hands

Common behaviors shared across all families of ransomware are helping security vendors better spot and isolate attacks.

This according to a report from British security shop Sophos, whose breakdown (PDF) of 11 different malware infections, including WannaCry, Ryuk, and GandCrab, found that because ransomware attacks all share the same purpose – encrypting user files until a payment is made – they generally have to perform many of the same tasks.

“There are behavioral traits that ransomware routinely exhibits that security software can use to decide whether the program is malicious,” explained Sophos director of engineering Mark Loman.

“Some traits – such as the successive encryption of documents – are hard for attackers to change, but others may be more malleable. Mixing it up, behaviorally speaking, can help ransomware to confuse some anti-ransomware protection.”

Some of that behavior, says Loman, includes things like signing code with stolen or purchased certificates, to allow the ransomware to slip past some security checks. In other cases, ransomware installers will use elevation of privilege exploits (which often get overlooked for patching due to their low risk scores) or optimize code for multi-threaded CPUs in order to encrypt as many files as possible before getting spotted.

“Ransomware creators are acutely aware that network or endpoint security controls pose a fatal threat to any operation, so they’ve developed a fixation on detection logic,” Loman explained.

“Modern ransomware spends an inordinate amount of time attempting to thwart security controls, tilling the field for a future harvest.”

Even with these countermeasures, however, Loman notes that Sophos and other anti-malware vendors have an advantage as they know that, sooner or later, the malware has to access the file system and begin to encrypt the data. This is the point where the attacks have to expose themselves and the spot where security tools can stop them.
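One concrete version of that tell: well-encrypted data looks statistically like random noise, so the byte entropy of freshly written files is a crude but useful signal. A hedged sketch follows – the 7.5-bits-per-byte threshold is illustrative, and high-entropy formats such as JPEG or ZIP will trigger false positives, which is why real products combine many behavioural signals:

```python
# Sketch: entropy check on newly written files. Encrypted output sits
# near 8 bits/byte; most documents score far lower. The threshold is
# illustrative and this is one signal among many, not a detector.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum(
        (n / total) * math.log2(n / total) for n in Counter(data).values()
    )

def looks_encrypted(path: str, threshold: float = 7.5) -> bool:
    with open(path, "rb") as fh:
        sample = fh.read(64 * 1024)  # a 64 KiB sample is plenty
    return byte_entropy(sample) > threshold

# Plaintext scores low; AES-style ciphertext scores close to 8.
print(byte_entropy(b"hello hello hello hello"))  # well under 3 bits
```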

“It’s important to recognize there’s hope in this fight, and a number of ways admins can resist: Windows 10 Controlled Folder Access (CFA) whitelisting is one such way, allowing only trusted applications to edit documents and files in a specified location,” says Loman.

“But whitelisting isn’t perfect – it requires active maintenance, and gaps or errors in coverage can result in failure when it’s most needed.”

The report is the latest indication that the good guys are making some headway in the battle against ransomware infections. The Sophos analysis comes as other vendors have noted that many state and local governments that had previously been prime targets for ransomware are better protecting themselves, forcing criminals to look to more remote areas in search of low-hanging fruit. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/15/sophos_ransomware_analysis/

What a pair of Massholes! New England duo cuffed over SIM-swapping cryptocoin charges

Two men from Massachusetts have been arrested and charged with 11 criminal counts stemming from a string of account takeovers and cryptocurrency thefts.

21-year-old Eric Meiggs and 20-year-old Declan Harrington each face charges of wire fraud, conspiracy, computer fraud and abuse, and aggravated identity theft for their alleged roles in a crime spree stretching from November 2017 to May 2018, which resulted in the theft of $550,000 worth of cryptocoins.

Prosecutors say that Meiggs, of Brockton, and Harrington, of Rockport, specifically targeted executives of cryptocurrency firms and other known high-rollers for account takeovers, with the aim of draining the targets’ cryptocurrency wallets.

Additionally, the pair sought to take ownership of highly valuable “OG” social media accounts created in the early days of their respective networks, when common names were still available.

To do this, it is alleged that Meiggs and Harrington systematically took control of their marks’ smartphone and email accounts via SIM-swapping. One of the two men would call the target’s phone provider and, pretending to be the person, have the number transferred to a new SIM card.

That hijacked SIM would then be used to contact the email provider and receive account reset and two-factor login codes for the target’s address. Police say this allowed them to crawl the target’s messages for login details on other services, usually social networks and cryptocurrency exchanges. In other cases, they are accused of requesting password resets be sent to the email accounts.
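Everything in that chain hinges on the phone number itself being the second factor. App-based one-time codes (TOTP, per RFC 6238) are computed from a secret stored on the device, with nothing delivered over SMS, so porting a number to a new SIM gains an attacker nothing. A sketch using the third-party pyotp library – illustrative, since any RFC 6238 implementation behaves the same way:

```python
# Sketch: TOTP codes are derived from a shared secret plus the current
# time; no SMS is involved, so a SIM swap does not capture them.
# Uses the third-party pyotp package (pip install pyotp).
import pyotp

# Enrollment: the server generates a secret and shares it once with
# the user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("current code:", totp.now())  # rotates every 30 seconds

# Login: the server recomputes the code from its copy of the secret
# and compares, allowing one time-step of clock skew.
assert totp.verify(totp.now(), valid_window=1)
```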

According to the 11-count indictment (PDF), the scheme produced mixed results for the alleged crooks. Prosecutors say the first two attempts failed when, after swapping the SIM and taking over email accounts, the pair were unable to get access to the victim’s cryptocoin wallet.

In four other cases, however, police say the duo were able to either take over the victim’s cryptocurrency wallet or exchange accounts and extract money. In one of those cases, the stolen account was used to socially engineer a contact of the victim into sending over $100,000 worth of digital currency.

Aside from the 2017-2018 coin thefts, prosecutors allege that from 2015 to 2017 Meiggs also dabbled on his own in the takeover of valuable “OG” social media accounts via SIM-swapping. In those cases, it is charged that Meiggs took over the victim’s phone number and then held it for ransom in exchange for access to the social media account.

In another case, it is charged that rather than bother swapping the SIM, Meiggs simply threatened to kill the victim’s wife if they did not hand over the account.

In total, Meiggs faces one count of conspiracy to commit computer fraud and abuse and wire fraud, four counts of wire fraud, one count of identity theft, and one count of violating the computer fraud and abuse act.

Harrington is charged with one count of conspiracy to commit computer fraud and abuse and wire fraud, five counts of wire fraud, one count of violating the computer fraud and abuse act, and one count of aggravated identity theft. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/14/massachusetts_pair_sim_swapping/

US-CERT Warns of Remotely Exploitable Bugs in Medical Devices

Vulnerabilities in key surgical equipment could be remotely exploited by a low-skill attacker.

US-CERT has issued an advisory for vulnerabilities in Medtronic’s Valleylab FT10 and Valleylab FX8 Energy Platforms, both key pieces of surgical equipment whose flaws could be remotely exploited by a low-skill attacker. The vulnerabilities also affect the Valleylab Exchange Client, officials report.

The advisory details three vulnerabilities. One is the use of hard-coded credentials (CVE-2019-13543). Affected devices use multiple sets of hard-coded credentials; if discovered, they could be used to read files on the equipment. The flaw has been assigned a CVSS base score of 5.8.

These products also use a reversible one-way hash for OS password hashing. While interactive, network-based logons are disabled, an attacker could use the other disclosed vulnerabilities to gain local shell access and obtain these hashes. This flaw (CVE-2019-13539) has a CVSS score of 7.0.
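The remedy for that class of flaw is standard practice: a salted, deliberately expensive, genuinely one-way hash, so that even an attacker who obtains the hash database cannot walk back to the passwords. A minimal sketch using Python’s built-in hashlib.scrypt – the parameters follow common guidance, and this illustrates the approach rather than Medtronic’s actual remediation:

```python
# Sketch of non-reversible password storage: salted scrypt (memory-hard
# and slow by design) from the standard library. Contrast with a
# reversible encoding, from which every password can simply be decoded.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("guess", salt, digest)
```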

Improper input validation (CVE-2019-3464 and CVE-2019-3463) marks the third type of vulnerability. The affected devices use a vulnerable version of the rssh utility to enable file uploads, which could give an attacker administrative access to files or the ability to execute arbitrary code. This vulnerability has been given a CVSS score of 9.8.

The affected medical devices’ network connections are disabled by default, officials report, and the Ethernet port is disabled upon reboot. In practice, however, network connectivity is often enabled.

Until updates can be applied, Medtronic advises users to disconnect affected products from IP networks or segregate the networks so devices aren’t accessible from the Internet. Software updates are now available for the FT10 platform and will be available for the FX8 in early 2020.

Read the full advisory here.

Article source: https://www.darkreading.com/threat-intelligence/us-cert-warns-of-remotely-exploitable-bugs-in-medical-devices/d/d-id/1336362?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple