
The Cybersecurity Automation Paradox

Recent studies show that before automation can reduce the burden on understaffed cybersecurity teams, those teams first need enough automation skills in-house to run the tools.

Cybersecurity organizations face a chicken-and-egg conundrum when it comes to automation and the security skills gap. Automated systems stand to reduce many of the burdens weighing on understaffed security teams that struggle to recruit enough skilled workers. But at the same time, security teams find that a lack of automation expertise keeps them from getting the most out of cybersecurity automation. 

A new study out this week from Ponemon Institute on behalf of DomainTools shows that most organizations today are placing bets on security automation. Approximately 79% of respondents either use automation currently or plan to do so in the near future.

For many, automation investments are justified to management as a way to beat back the effects of the cybersecurity skills gap, which some industry pundits say has created a 3 million-person shortfall in the industry. Close to half of the respondents to Ponemon’s study report that the inability to properly staff skilled security personnel has increased their organizations’ investments in cybersecurity automation.

Nevertheless, the fact remains that automation isn’t magical. It takes boots on the ground to roll out cybersecurity automation and true expertise at the helm of these tools to reap significant security benefits from them over the long haul. Ponemon’s study shows that 56% of organizations report that a lack of in-house expertise is one of the biggest challenges impeding adoption of security automation. In fact, it was the No. 1 obstacle, named more frequently than legacy IT challenges, lack of budget, and interoperability issues.

Sentiments are relatively evenly split between those who think automation will cause a net increase, net decrease, or have no effect on headcount over time. However, those who think it’ll mean hiring more staff still have the plurality on that count — 40% of respondents say they’ll need to hire more people to support security automation.

In another report released by SANS Institute on security automation, SANS analyst Barbara Filkins warns that organizations must fight the misconception that automation is easy or quick to implement.

“Automation takes a tremendous amount of effort to arrive at the point where it makes things look easy,” Filkins writes. “Don’t underestimate the resources needed to define the processes — in the light of more effective tools — and close the semantic gaps in the data gathered.”

That study shows that while automation is on the uptick at most organizations, a scant 5.1% are at a high level of maturity, with extensive automation of key security processes.

Part of the difficulty in assessing or measuring the level of automation maturity and its effect on the security industry is that experiences vary wildly. A huge chasm between the haves and have-nots of cybersecurity automation currently exists in the industry, explains Gartner’s Anton Chuvakin. On one end, he says, there are plenty of organizations that don’t even have the resources to run security automation, let alone effectively operationalize it.

“They do not have the people to install a tool and to keep it running. I’ve met people who say they don’t have time to install and configure a basic log management tool,” Chuvakin writes. “On the other edge of the chasm, we have organizations with resources to WRITE tools superior to many/most commercial tools.” 

This chasm may impact the staffing equation to some degree, as more than likely it will precipitate the creation of more quality service providers to fill the gap in expertise for those organizations that simply do not have the staff to add more layers of complicated automation tools. 


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/threat-intelligence/the-cybersecurity-automation-paradox/d/d-id/1334470?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cloud Security Spend Set to Reach $12.6B by 2023

Growth corresponds with a greater reliance on public cloud services.

Businesses are expected to spend $12.6 billion on cloud security tools by 2023, up from $5.6 billion in 2018 and with an especially close focus on public cloud native platform security, Forrester reports.

The data comes from its “Cloud Security Solutions Forecast, 2018 to 2023,” which investigates the ways companies will invest in cloud security tools over the next five years. This global increase in cloud security spend aligns with a broader trend toward cloud spend overall: Researchers report public cloud services spend will reach $236 billion by 2020, up from $178 billion spent in 2018.

More than half (54%) of infrastructure decision makers have implemented or are expanding use of the public cloud, up from 25% in 2015. While respondents have become used to public cloud services, they’re still wary of risk. Businesses typically use a mix of public, private, and hybrid clouds and work with multiple service providers. Different implementations serve different needs, but complexity creates challenges in monitoring data and detecting threats.

“There’s more and more of a sensitivity that if data is personal, it needs to be kept private,” says Jennifer Adams, senior forecast analyst at Forrester, citing penalties from regulations such as GDPR. “As more data is stored in the cloud, we need to use tools to make sure data in the cloud is safe.”

Public cloud native platform security dominates the spending forecast and, Adams says, stood out for its rapid growth. The category wasn’t even specified in Forrester’s earlier cloud security surveys, she says, and the sector is now expected to grow 20% per year. Companies spent $4 billion on it in 2018, when it made up over 70% of total spending on all cloud security tools. Researchers anticipate it’ll be the fastest-growing sector for cloud security, reaching $9.7 billion in spend by 2023.

“This is a huge change from what we’ve seen previously,” Adams noted. These tools, provided by cloud platforms, typically include data categorization, data segmentation, server access control, resource-based access control and access control lists, user IAM, data-at-rest encryption, data-in-transit encryption, encryption key management, logging and anomaly detection, and role-based access control. As more companies partner with cloud platforms like Amazon Web Services and Microsoft Azure, they’re adopting their security tools as well.

Because companies often use multiple cloud providers, cloud workload security (CWS) tools are also seeing growth. CWS is designed to centralize and automate security controls across platforms; with businesses relying on multiple clouds, it’s expected to grow 17.3% each year.

Some Businesses Still Wary 
Researchers asked respondents how worried they are about risks that software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS) bring to their environments, using a scale of one (not concerned) to five (very concerned). Nearly 60% responded with a four or five for SaaS, 57% said the same for PaaS, and 57% echoed great concern for IaaS.

“I think we’re finding that when we step back … the cloud isn’t the biggest area of risk,” Adams notes. There is a perception the cloud brings risk, but it isn’t where threats typically appear.

Existing cloud security measures seem to be working, experts found, and organizations are coming around to the idea that the cloud may be better for their data protection. Research shows only 12% of breaches targeted public cloud environments; further, 37% of decision makers cited better security as an important reason to make the transition to public cloud.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/cloud/cloud-security-spend-set-to-reach-$126b-by-2023/d/d-id/1334473?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Takeaways from Ransomware Attacks in Q1

Customized, targeted ransomware attacks were all the rage.

Ransomware attacks may be declining in number, but almost every other metric related to the threat is trending upward: higher ransom payments, greater downtime losses, and longer recovery times.

Those are some of the findings from a new report from Coveware that studied data from ransomware attacks in the first quarter of 2019. Overall, victims paid more ransom money, experienced greater downtime, and took longer to recover from an attack than ever before.

Many of these trends were driven by an increase in ransomware types such as Ryuk, Bitpayment, and Iencrypted, which were used in customized, targeted attacks on large enterprises, Coveware said.

“Ransomware is no longer a ‘one employee clicked an email, and their workstation is encrypted,’ type of incident,” says Bill Siegel, CEO and co-founder of Coveware.


The majority of ransomware attacks are targeted and require multiple layers of security, access restrictions, and backups to properly defend against. “Also, there is no such thing as being too small to be on the radar for an attack. If you are lax in your security, and don’t continually invest in IT security, it is just a matter of time before you are attacked,” he says.

Here are six trends from ransomware attacks so far this year:

1. Ransom demands are getting higher.

The victims of targeted, custom attacks are being asked to pay substantially higher ransoms to get their data back compared with victims of opportunistic attacks. As a result, the average ransom paid by victims in cases handled and resolved by Coveware’s incident response team jumped 89%, from $6,733 in Q4 2018 to $12,762 in Q1 2019.

2. Attackers are getting more hands-on.

Instead of automated attacks, threat actors are increasingly staging manual attacks against targeted organizations using compromised credentials, says Oleg Kolesnikov, director of threat research at Securonix. They are specifically targeting high-value systems such as e-mail servers, database servers, document management servers, and public-facing servers.

“In some cases, the ransomware attacks are performed in a semi-automated, operator-assisted fashion, which is not commonly seen with the traditional ransomware attacks,” he says. “[This] often makes the attacks much more damaging for businesses.”

Researchers believe that the threat actor behind the recent, devastating attack on Norsk Hydro manually copied their LockerGoga ransomware from computer to computer on the aluminum manufacturer’s network.

3. Downtime is increasing.

Companies on average spent more time last quarter recovering from an attack than they did in any previous quarter.

The average downtime following a ransomware attack increased sharply, from 6.2 days in Q4 last year to 7.3 days in Q1 2019. Much of that had to do with increased activity tied to Ryuk, Hermes, and other similarly hard-to-decrypt malware types, Coveware found. Some ransomware, like Hermes, also caused higher data-loss rates than other types of ransomware.

Another factor for longer recovery time: an increase in attacks where data backups were wiped or encrypted, according to Coveware.

4. Ransom-related downtime costs are becoming substantial.

The vast majority of ransomware victims fortunately don’t end up incurring anywhere near the $40 million in costs that Norwegian aluminum manufacturer Norsk Hydro racked up in just the first week following its attack.

But average downtime cost, per attack, per company, was substantial all the same, at $65,645. Costs varied significantly by industry and geography. Companies without cyber- or business-interruption insurance felt the pain the most, Coveware said.

“Downtime is often the most costly aspect of an attack and companies that are part of high velocity supply chains, or that extend high-availability service-level agreements are particularly exposed,” Siegel says. Hosting companies are also at risk of their client base walking away if they violate their uptime and availability guarantees, he notes.

5. Manufacturing companies are now heavily targeted.

No organization is completely safe from ransomware attacks. But entities in the manufacturing sector appear to be getting hit harder than companies in other verticals, says Adam Kujawa, director at Malwarebytes Labs.

“It’s hard to tell if this is intentional or just a result of the kind of security these organizations have,” he says. Regardless, for attackers, manufacturing companies present an attractive target, he says. Manufacturers whose operations have been degraded or disrupted by ransomware are more likely to pay a ransom to get things moving again, Kujawa says.

6. Victims that pay up recover their data (mostly).

Security and law enforcement officials strongly recommend that ransomware victims do not pay a ransom to get their data back. Many believe that acceding to a ransom request only encourages more attacks.

Even so, Coveware’s data shows that when companies paid up last quarter, they got a key for decrypting their data 96% of the time. That’s a 3% increase over the fourth quarter of 2018. On average, victims that paid their attacker were able to recover 93% of their data with the decryption key.

Data recovery rates tended to vary substantially by ransomware type, however. Victims of Ryuk ransomware, for instance, were generally able to recover only about 80% of their data with the decryption key, while those hit with GandCrab got back almost 100%. The variance had to do with the encryption processes used by different ransomware, faulty decryption tools, and sometimes modifications made to the encrypted files, Coveware said in its report.

Not all who paid received the promised decryption key, either. Some ransomware purveyors, like the group behind the Dharma ransomware family, tended to default often. “Other types of ransomware like Ryuk almost always deliver a decryption tool, but the efficacy of the tool is relatively low,” Siegel says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/6-takeaways-from-ransomware-attacks-in-q1/d/d-id/1334472?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Creator of Hub for Stolen Credit Cards Sentenced to 90 Months

The steep sentence, handed down eight years after the cybercriminal operator launched the site, is based on a tab of $30 million in damages calculated by Mastercard and other credit card companies.

A federal judge sentenced a Macedonian man responsible for creating and operating a now-defunct hub for the collection and sale of stolen information on credit card accounts — called Codeshop — to 90 months in prison, federal prosecutors said on April 17. 

The sentence for Djevair Ametovski, also known as “xhevo” and “sindrom,” capped an eight-year investigation and prosecution by the US Secret Service and the US Attorney’s Office for the Eastern District of New York.

Codeshop launched in 2011 and operated for more than three years. In 2014, Slovenian authorities arrested Ametovski, and two years later, prosecutors successfully extradited him to the United States. While Ametovski initially maintained his innocence, he pleaded guilty to two of three charges in August 2017.

“Ametovski and his co-conspirators were merchants of crime, stealing victims’ information and selling that information to other criminals,” Richard P. Donoghue, US Attorney for the Eastern District of New York, said in a statement on April 17. “This Office and our law enforcement partners will tirelessly pursue cybercriminals who seek to profit at others’ expense.”

The US Secret Service investigated the Codeshop.su website, including seizing servers in the Netherlands and the Czech Republic. The computers hosted both the website and a database of more than 400,000 stolen credit card accounts. A forensics analysis, however, revealed that more than 1.3 million stolen credit card numbers had been part of the database at one time or another. The credit card account information included the cardholder’s name and address, the credit card number, the expiration date, and the security code printed on the card.

The investigation revealed the site attracted more than 28,000 criminal users in its three years of operation. Codeshop allowed potential buyers to easily search for cards based on the account holder’s location, the financial institution issuing the card, and the credit card brand.

“To supply the Codeshop website with stolen credit card and account data, the defendant enlisted the services of criminal hackers and fraudsters, [including enlisting] his co-conspirators to hack into the computer databases of financial institutions and other businesses, including businesses in the United States,” prosecutors stated in an October 2018 statement in support of their sentencing request.

When he created Codeshop, Ametovski had little experience in running such an operation. In March 2011, he sent an email to the administrator of another carding operation asking about “the webshop script to buy,” according to the October 2018 sentencing statement issued by the US Attorney’s Office. A month later, he advertised his new shop offering “canadian cvvs,” “USA Fulls,” and “usa cvvs,” prosecutors stated.

The cybercriminal operation lasted only three years before Ametovski was arrested in Ljubljana, Slovenia, on January 23, 2014. He fought extradition for more than two years before being extradited to the United States in May 2016.

At the time, the US Attorney’s Office called the extradition a warning to other cybercriminal operators.

“Cybercriminals who create and operate online criminal marketplaces in which innocent victims’ financial and personal information are bought and sold erode consumer trust in modern-day payment systems and cause millions of dollars in losses to financial institutions and unsuspecting individuals,” Robert L. Capers, US Attorney for the Eastern District of New York, said in a May 2016 statement. 

In addition to Ametovski, investigators identified three other people who allegedly had permission to upload stolen information to the servers.

While the 90-month sentence is significant, the penalty ended up being less than half of the 17 years requested by prosecutors, who based their request on damages calculated to be in excess of $30 million, primarily due to a loss of nearly $30 million alleged by Mastercard.

“Even assuming arguendo [for the sake of argument] that the defendant served as no more than a traditional ‘fence’ … his crimes are still extremely serious,” Donoghue argued in a February 2019 sentencing document. “Furthermore, even assuming arguendo that the Codeshop website was neither unique nor sophisticated … the need for general deterrence of those who would seek to operate such purportedly easy-to-create websites is significant.”

Ametovski’s public defender could not immediately be reached for comment.


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/attacks-breaches/creator-of-hub-for-stolen-credit-cards-sentenced-to-90-months/d/d-id/1334471?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cisco Issues 31 Mid-April Security Alerts

Among them, two are critical and six are of high importance.

A busy month for Cisco router owners got busier yesterday when the networking giant introduced 31 new advisories and alerts. These announcements came on top of 11 high- and medium-impact vulnerabilities announced earlier in the month.

Of the 31 alerts, 23 are of medium impact, six are of high impact, and two are of critical impact to the organization and its security team.

Most of the medium-impact alerts are for cross-site scripting vulnerabilities, denial-of-service vulnerabilities, or vulnerabilities that could allow unauthorized access. These were found in products ranging from LAN controllers to wireless network access points to Cisco’s new Umbrella security framework.

The two critical alerts are for two very different vulnerabilities. In one, a vulnerability in Cisco IOS and IOS XE software could allow an unauthenticated, remote attacker to force an affected device to reload or to remotely execute code with elevated privileges.

This vulnerability is found in the Cisco Cluster Management Protocol (CMP) and was discovered when the documents in the infamous Vault 7 disclosure were analyzed. That’s bad news because those documents have been available to hackers around the world for more than two years. And the news gets worse: Researchers at Cisco Talos have published a blog post showing this vulnerability has been exploited in the wild as part of a DNS hijacking campaign dubbed “Sea Turtle.”

Cisco has already released a software patch for this critical vulnerability, for which there is no operational workaround.

The second critical vulnerability could allow a remote attacker to gain access to applications running on a sysadmin virtual machine (VM) that is operating on Cisco ASR 9000 series Aggregation Services Routers. This vulnerability, Cisco says, was found during internal testing and has not yet been used in the wild. The source of the vulnerability – insufficient isolation of the management interface from internal applications – has been fixed in a pair of Cisco IOS XR software releases and does not, therefore, warrant a separate update, Cisco says.

Between the medium and critical vulnerabilities are six high-importance vulnerabilities that affect systems including telepresence video servers, wireless LAN controllers (three separate vulnerabilities), Aironet wireless access points, and the SNMP service.

Cisco ranks the severity of vulnerabilities using the Common Vulnerability Scoring System (CVSS) Version 3. Vulnerabilities with a CVSS score of 9.0 to 10.0 are critical, those in the range of 7.0 to 8.9 are high, and a score of 4.0 to 6.9 warrants a medium label. Anything ranking below medium is given an informational alert only.
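To make those score bands concrete, here is a minimal sketch in Python (our own illustration, not Cisco code; the thresholds are simply the ranges quoted above) that maps a CVSS v3 base score to the corresponding alert label:

  # Rough illustration only: maps a CVSS v3 base score to the severity
  # label described above (the function name and structure are ours).
  def cisco_severity(cvss_v3_score: float) -> str:
      if not 0.0 <= cvss_v3_score <= 10.0:
          raise ValueError("CVSS v3 base scores run from 0.0 to 10.0")
      if cvss_v3_score >= 9.0:
          return "critical"
      if cvss_v3_score >= 7.0:
          return "high"
      if cvss_v3_score >= 4.0:
          return "medium"
      return "informational"

  print(cisco_severity(9.8))  # critical
  print(cisco_severity(6.5))  # medium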


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/vulnerabilities---threats/cisco-issues-31-mid-april-security-alerts/d/d-id/1334476?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Oracle issues nearly 300 patches in quarterly update

Oracle is keeping people busy before the Easter weekend. The company has issued a raft of quarterly security updates for 297 vulnerabilities, along with an urgent warning to patch now.

The latest Critical Patch Update contains fixes for vulnerabilities spanning dozens of products, including its Fusion Middleware product set, which received 53 new security fixes overall – 42 of them for vulnerabilities that could in theory be exploited remotely over a network with no user credentials.

The Oracle E-Business Suite accounted for 35 new security fixes in the critical patch update – 33 of them for remotely exploitable bugs. The Suite encompasses business applications including enterprise resource planning, customer relationship management, and supply chain management.

Also high on the list of affected product groups was Oracle Communications Applications, which received 26 security fixes for vulnerabilities, 19 of which were remotely exploitable.

The software giant’s suite of retail applications got 24 security fixes between them; Oracle Database Server had six; Java SE, which Oracle acquired along with Sun Microsystems in 2010, had five holes patched.

Oracle is eager for customers to patch as quickly as possible and avoid any temporary workarounds, it said:

Oracle continues to periodically receive reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released fixes. In some instances, it has been reported that attackers have been successful because targeted customers had failed to apply available Oracle patches. Oracle therefore strongly recommends that customers remain on actively-supported versions and apply Critical Patch Update fixes without delay.

The vulnerability count seems high, but it’s on par for a company with such a vast range of products. The January 2019 critical patch update fixed 284 bugs, while the one before it in October 2018 saw 301.

Oracle could help alleviate security patching concerns for some users as it moves them to the cloud. Services that it patches automatically on its own infrastructure will hopefully be safer for users than those rushing to test and deploy patches on their own servers.

Last year, it announced a new cloud-based online transaction processing database service that automatically repairs itself and automates updates and security patches for customers. It said:

Security patches are automatically applied every quarter. This is much sooner than most manually operated databases, narrowing an unnecessary window of vulnerability.

The company is doing its best to bolster its cloud services business, which executives have said is a higher-margin operating line than on-premise software.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iNS-4FahOy4/

Chrome flaw on iOS leads to 500 million unwanted pop-up ads

If you own an iOS device and use the Chrome browser, there is a chance that during the last week you’ve encountered some strange-looking advertising pop-ups.

According to security company Confiant, which took a closer look at these campaigns, a typical message looks like a notification telling you that you’ve won some kind of reward.

There are no rewards, of course, because these pop-up ads are run by a cybercrime group and exist to generate revenue for the crooks – you don’t get to share the spoils.

But the bigger question that bugged Confiant’s researchers when they analysed the pop-ups was how they were bypassing Chrome’s iOS pop-up blocking protection.

The volume of campaigns was massive – 500 million pop-ups since 6 April 2019, apparently – featuring 30 adverts connected to a cybercrime group called eGobbler.

Aiming such a large volume of ads at the users of one platform and browser, iOS Chrome, also looked a little unusual.

Sure enough, Confiant discovered the campaigns had found a way to beat Chrome’s pop-up blocker by exploiting a previously unknown and unpatched security vulnerability.

Google was told of the issue last week, which Confiant hasn’t yet explained in detail because it remains unpatched:

We will be offering an analysis of the payload and POC [proof-of-concept] exploit for this bug in a future post given that this campaign is still active and the security bug is still unpatched in Chrome as of this blog post.

Dodging the bullet

The ads are not easy to avoid because they trigger on legitimate US and European websites, giving them an apparent legitimacy.

Each campaign lasts for 24 to 48 hours:

In an attempt to fly under the radar, eGobbler attempts to smuggle their payloads in popular client-side JavaScript libraries such as GreenSock.

Publishers aren’t choosing to serve these ads – they’re bogus, unwanted and unexpected adverts that winkle their way into the patchwork of ad systems upon which the industry is based.

By the time publishers work out what’s going on – that could take hours, days or even longer – the crooks have moved on to a new campaign with different ads linking to new domains.

What to do?

One giveaway of eGobbler’s pop-up ads is that they often use a .world Top-Level Domain (TLD) as a landing page, so be cautious of those domains if you don’t usually visit .world sites.

Without more detail on the vulnerability, it’s hard to assess what sort of wider threat it might pose or whether it’s mostly about nuisance and inconvenience.

However, until this pop-up bypass is patched in Chrome, you could just stick to Safari, Apple’s own built-in iOS browser.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/l28r0KbykG8/

Google plays Whack-A-Mole with naughty Android developers

Following updates to Android application programming interfaces (APIs) and Google Play policies, some developers have been surprised to find they’ve been blocked from distributing apps through Google Play.

Sorry, Google said on Monday: we’re playing Whack-A-Mole with “bad-faith” developers.

Google said that the “vast majority” of Android developers are good at heart, but some accounts are rotten to the core.

At least, some accounts are suspended after “serious, repeated” violations of policies meant to protect Android users, according to Sameer Samat, VP of Product Management, Android Google Play.

Samat said that such developers often try to slip past Google’s checks by opening up new accounts or hijacking other developers’ accounts in order to publish their unsafe apps.

In order to fend off those repeat offenders, developers without an established track record can henceforth expect to be put through a more thorough vetting process, Samat said.

Sorry for the 1% of blunders

As with any move made to boost Android security, this one’s bound to misfire at times, he said – although he claimed that 99% of Google’s suspension decisions are correct.

The company isn’t always able to share the reasoning behind deducing that a given account is related to another, he said, but developers can immediately appeal any enforcement.

Appeals are reviewed by humans, he noted, in spite of what may feel like responses coming from automated reject-o-bots.

If a human on the team finds that an account was mistakenly suspended, Google will reinstate the account.

Taking more time to review apps that come from developers without track records should help Google make fewer mistaken decisions on developer accounts, he said, though Samat didn’t give details about the additional checks.

Samat said that the reason for the change is that people want their data protected when apps get control of it, and they expect that Android should be calling the shots to make sure it is:

Users want more control and transparency over how their personal information is being used by applications, and expect Android, as the platform, to do more to provide that control and transparency.

Enforcing the wall around the app garden

Putting new developers through a tighter wringer follows other recent policy changes, including taking a closer look at app permissions by requiring app makers to disclose what data they intend to collect and restricting access to some features on phones.

Late last year, Google started by changing SMS and Call Log permissions to protect sensitive user data better.

For example, SMS permissions are now restricted to specific uses, such as when you choose an app to be your default text message handler.

As a result of that change, Google said that the number of apps with access to this sensitive information has decreased by more than 98%.

But it sounds like that reduction came with some developer pain, given the not-so-rosy feedback.

Google got complaints about unclear documentation; slow answers to questions on policy requirements; a cumbersome appeals process; and difficulty getting to speak to an actual person.

Google’s response: we’re going to use clearer language in emails about policy rejections and appeals, and we’re going to use more humans to speed up appeals responses and make them more personalized.

Starting in August, Google’s also going to require apps to work with the latest, most secure versions of Android APIs.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3bcdYVGu0l4/

Facebook user data used as bargaining chip, according to leaked docs

User privacy is super-duper important, Facebook has said publicly for years out of one side of its mouth, while on the other side it’s been whispering to third-party app developers to come on in and feast – this user data is tasty.

Well, that’s confusing, its own employees have said, according to yet more newly revealed internal discussions.

NBC News, one of a handful of media outlets that got its hands on the documents, said that the cache contains about 4000 pages of leaked company documents that largely span Facebook communications from 2011 to 2015.

(Computer Weekly reported on Monday that it was 7000. At any rate, it was a lot of documents.)

Photos visible to “Only me?” Says YOU

As NBC reports, the documents show that in April 2015, Facebook product designer Connie Yang told colleagues that she’d discovered apps collecting profile data she’d marked as visible only to herself. Yang wrote that apps were displaying her “only [visible to] me” data as being visible to…

…both you and *other people* using that app.

Regardless of users locking down their accounts so that their photos and other data were visible to “only me,” that data could still be transferred to third parties, according to the documents.

That’s only one of an ocean’s worth of revelations in the cache of internal documents, which include emails, chats, presentations, spreadsheets, and meeting summaries that show that top Facebook execs – including CEO Mark Zuckerberg and chief operating officer Sheryl Sandberg – mulled the idea of selling access to user data for years.

The internal documents were reportedly leaked anonymously to the British investigative journalist Duncan Campbell. Besides NBC, Campbell – a computer forensics expert – shared them with Computer Weekly and Süddeutsche Zeitung.

NBC reports that Facebook isn’t contesting the documents’ authenticity. It is, however, taking what Computer Weekly dubbed “extraordinary” legal steps to contain the leak, including lodging urgent legal applications on 11 April 2019, asking that two suspected leakers from Six4Three be questioned in court.

The leak that turned into a flood

If this all sounds familiar, it’s because we’ve already heard about a subset of this new cache.

A few months ago, Facebook staff’s private emails – NBC says it was about 400 documents – were published in connection with the British Parliament’s inquiry into fake news, after the CEO of former “wanna see your gal pals in bikinis?” app developer Six4Three handed the goodies to MP Damian Collins.

Six4Three has been battling Facebook in court for years over the shutdown of access to user data, which in effect killed its “Pikinis” app.

The new documents reportedly show that other apps that starved to death after the 2015 cut-off of broad access to user data include Lulu, an app that let women rate the men they dated; an identity fraud-detecting app called Beehive ID; and Swedish breast cancer awareness app Rosa Bandet (Pink Ribbon).

Six4Three has alleged that the internal emails show that in spite of what Facebook claimed after the Cambridge Analytica situation exploded, the company was not only aware of the implications of its privacy policy but also exploited them actively.

“Sort of unethical”

The newly leaked documents show that internally, employees compared Facebook’s uneven playing field for app developers to villains from Game of Thrones. One employee, senior engineer David Poll, called the treatment of outside app developers “sort of unethical,” the documents reportedly show.

Yes, Facebook has said, it did explore ways to build a sustainable business by selling user data access to developers. What company doesn’t explore ways to make money, after all? But ultimately, as the company told NBC, it decided against pursuing the plans.

NBC highlighted one email from Zuckerberg in which he shrugs off the risks of having any user data leak were the company to share it with developers. The outlet quoted from an email Zuck sent to a close friend, the entrepreneur Sam Lessin:

I’m generally skeptical that there is as much data leak strategic risk as you think. I think we leak info to developers but I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.

That “real issue” became really real within a year, when Facebook had what director of engineering Michael Vernal called a “near-fatal” brush with a data privacy breach when a third-party app came close to disclosing Facebook’s financial results ahead of schedule.

The response from Avichal Garg, then director of product management:

Holy crap.

Vernal:

DO NOT REPEAT THIS STORY OFF OF THIS THREAD. I can’t tell you how terrible this would have been for all of us had this not been caught quickly.

What do you think – are we looking at cherry-picked communications, as Facebook understandably suggests, taken out of context and designed to bolster Six4Three’s argument in its acrimonious and long-running lawsuit, or at a genuine smoking gun?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KsPfjF5NaNg/

Serious Security: Ransomware you’ll never find – and how to stop it

Imagine that you’ve been hit by ransomware.

All your data files are scrambled, you’re staring at a ransom note demanding $1000, and you’re thinking, “I wish I hadn’t put off updating that cybersecurity software.”

When the dust has settled – hopefully after you’ve restored from your latest backup rather than by paying the blackmail charge – and you’ve got your anti-virus situation sorted out, your burning question will be…

…where did the malware come from?

But what if, no matter how carefully and deeply you scan, you can’t find any trace that there ever was any malware on your computer at all?

Unfortunately, as our friends over at Bleeping Computer recently reported, that can happen, and it’s one case where not being infected yourself is actually a bad sign, rather than a good one.

The Bleeper crew have had several reports of users whose files were scrambled from a distance across the internet, by ransomware running on someone else’s computer.

It’s a bit like suffering from a malware attack while you’ve got a USB disk plugged in – if your computer can access files on the plug-in device over the USB cable, you’ll end up with files scrambled on both your laptop and the USB disk, but the malware program itself will only ever show up on your laptop.

The USB drive will be affected but not infected

The same sort of thing often happens across the local network in ransomware attacks inside a company, where a single infected computer on the network ends up scrambling files on all your servers, because the user happened to be logged in with an account that had widespread network access.

In the end, hundreds of users and hundreds of thousands of files may get affected, even though only one user and one computer were ever infected.

Over the internet?

Bleeping Computer has dubbed this latest strain of remote-control ransomware NamPoHyu – that’s the moniker that pops up when you visit the malware’s web page – but the name doesn’t help much, because there isn’t any malware file that you can go looking for if the attack started from afar.

It could have been almost any ransomware that did the damage, and that’s the problem.

Of course, this raises the questions, “How on earth can file-scrambling malware work over the internet, and how can crooks purposely aim it at me?”

Sure, lots of companies, and many home users, run web servers, gaming servers, remote access servers, and so on, but who runs plain old file servers over the internet?

Who would leave their computer sitting online so that crooks anywhere in the world could type in a Windows network mapping command such as the one below?

  C:\> net use J: \\203.0.113.42\C$

If your computer is online at the IP number 203.0.113.42 and accepting Windows networking connections, the above command will leave the crooks with a J: drive that lets them wander around your files at will, as easily as if those files were on their C: drive.

Few people, if any, would let crooks share their local drives on purpose, but surprisingly many leave their local disks open by accident.

Microsoft’s file sharing protocol – the protocol that lets you open up your disks with the command net share and connect to other people’s disks with net use – is now officially known as CIFS, short for Common Internet File System, but it started life with the jargon name of Server Message Block, or SMB.

Back in the early 1990s, when prolific Aussie coder Dr Andrew Tridgell started his open source implementation of SMB so that Linux and Windows computers could work together more easily, the acronym SMB was turned into the pronounceable name “Samba”, and that’s the name you’ll probably hear used most frequently these days, by Windows and Linux users alike.

Samba is what does the sharing, and shares are what you connect to on servers that you’re supposed to access.

You can create your own shares (use the command net share to list them all) with handy names, such as DOCUMENTS or SOURCECODE, and Windows will automatically add some special ones of its own, notably two default (and hard-to-remove) shares called C$ and ADMIN$ that give remote access directly to your C: drive and your Windows directory respectively.

Annoyingly, shares with names ending in $ are hidden, so it’s easy to forget they’re there – something that many people, sadly, do.
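If you want to check your own Windows machine, a short Python sketch (ours, not from the article) can wrap the net share command mentioned above and flag any shares whose names end in $. It is Windows-only, and the parsing is deliberately rough:

  import subprocess

  def hidden_shares():
      # Run 'net share' and return share names ending in $ (the hidden ones).
      # Windows-only; parsing just looks at the first word on each line.
      result = subprocess.run(["net", "share"], capture_output=True, text=True, check=True)
      names = []
      for line in result.stdout.splitlines():
          parts = line.split()
          if parts and parts[0].endswith("$"):
              names.append(parts[0])
      return names

  if __name__ == "__main__":
      print("Hidden shares on this machine:", hidden_shares())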

Not just anyone can hack into C$ and ADMIN$, of course – you need network access directly to the target computer, which you wouldn’t normally get through a firewall or home router, and you need an Administrator’s password.

So far, so good…

…except that, as we write about rather too often, many users have sloppy habits when it comes to choosing passwords, making them easy to guess, and many devices that were never supposed to be accessible to the outside world show up by mistake in internet search engines.

WARNING. It’s tempting, and dangerously easy, when you’re sitting at home having trouble playing the latest game, to get round your setup hassles by simply lowering your firewall security shields. Maybe you went into your router and temporarily told it that your laptop was your “gaming server”, for example? If you allowed in all traffic for troubleshooting, how many crooks took a peek while your security was off? If everything started working while you were testing, did you remember to put your shields back up afterwards, or did your temporary fix become your permanent one?

Remote ransomware attacks

Simply put, if crooks can see your Samba shares from out there on the internet, and can guess your password, they can theoretically wander in and do what they like to your files.

They can therefore attack your computer – manually or automatically – simply by pointing one of their computers, or someone else’s hacked computer, at yours and deliberately “infecting” themselves with any network-enabled ransomware they like.

Many, if not most, modern ransomware samples include a feature to find and attack any drives visible at the time of infection, in order to maximise damage and boost the chance that you’ll end up having to pay – that includes secondary hard disks, USB devices plugged in at the time, and any open file shares.

In other words, if you’re at risk of a remote ransomware scrambling attack, the real situation is actually much worse than that.

It may sound like cold comfort, but a ransomware attack is one of your “least worst” outcomes, because your files get overwritten but not stolen.

Instead of ruining your files, the crooks could choose simply to copy them off your network to use later, and that sort of attack [a] would be much less noticeable, [b] would be impossible to reverse, and [c] would affect and expose anyone else whose data was stored in those files.

What to do?

  • Pick strong passwords. And don’t re-use passwords, ever. You can assume that crooks who find your password in a data dump from a hacked website will immediately try the same password on any other accounts or online services you have. Don’t let the password for your online newspaper subscription give the crooks a free ride into your webmail, your social media and any computers and file shares you have.
  • Keep your shields up. If you’re having connection troubles, resist the temptation to “turn off the firewall” or “bypass the router” to see if that solves the problem. That’s a bit like disconnecting your car’s brakes and then going for a ride to see if performance improves.
  • Run anti-malware software. Even on servers. Especially on servers. Your laptop isn’t supposed to be open to the internet, and generally won’t be. But many of your servers are online and accessible to the world on purpose, so although they can be protected by a firewall, they can’t be fully shielded by it, and that’s by design.
  • Consider using a ransomware blocker. Tools like Sophos’s own Cryptoguard can detect and block the disk-scrambling part of a ransomware attack. This offers you protection even if the malware file itself, and its running process, is out there on someone else’s computer that you can’t control.
  • Make regular backups. And keep at least one recent copy offline, so you can access your precious data even if you’re locked out of your own computer, your own network or your own accounts. By the way, encrypt your backups so that you don’t spend the rest of your life wondering what might show up if any of your backup devices go missing.
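As a quick self-check to go with that list, the short Python sketch below (our own illustration, not from the article) uses only the standard library to test whether TCP port 445, the port Windows file sharing normally listens on, is reachable on a given address. It checks reachability only; it doesn’t authenticate or list shares. Run it from outside your own network, such as from a cloud VM, against your own public IP address (the address below is just the documentation address from the net use example earlier):

  import socket

  def smb_port_open(host, port=445, timeout=3.0):
      # Return True if a TCP connection to the SMB port succeeds, which
      # suggests your file shares may be reachable from that network.
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  target = "203.0.113.42"  # replace with your own public IP address
  if smb_port_open(target):
      print("Warning: TCP/445 is reachable on", target, "- SMB may be exposed")
  else:
      print("TCP/445 does not appear to be reachable on", target)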



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/y7l2Y0swbIU/