
Russia Chooses Resiliency Over Efficiency in Cyber Ops

New analysis of the software used by espionage groups linked to Russia finds little overlap in their development, suggesting that the groups are siloed.

Russian cyber espionage groups surprisingly do not share much code in their development, suggesting that the nation’s various attack groups are isolated from one another, according to new analysis by security firm Check Point Software Technologies and machine-learning startup Intezer.

The companies analyzed more than 2,000 code samples, reverse engineering them to remove common open-source code, and then comparing the non-public code samples — the “genes,” in Intezer’s parlance — to determine shared roots of the software. A map created from the data shows shared code within groups, but only a few connections between software thought to be used by different groups.
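Neither company has published the internals of that gene matching, but the general technique (fingerprint small units of non-library code, then measure overlap between samples) is easy to sketch. The following Python is purely illustrative; the function names, toy "disassembly," and use of a Jaccard ratio are assumptions, not Intezer's actual pipeline:

```python
import hashlib

def code_genes(functions):
    """Fingerprint each extracted function ("gene") with a stable hash."""
    return {hashlib.sha256(f.encode()).hexdigest() for f in functions}

def shared_code_ratio(sample_a, sample_b):
    """Jaccard similarity between two samples' gene sets."""
    genes_a, genes_b = code_genes(sample_a), code_genes(sample_b)
    if not (genes_a and genes_b):
        return 0.0
    return len(genes_a & genes_b) / len(genes_a | genes_b)

# Toy example: two samples sharing exactly one function body.
sample_a = ["mov eax, 1; ret", "xor eax, eax; ret"]
sample_b = ["mov eax, 1; ret", "push ebp; pop ebp; ret"]
print(shared_code_ratio(sample_a, sample_b))  # 0.333... -> one edge on the map
```

Every nonzero ratio between two families becomes an edge on such a map, so it is the sparseness of edges between groups that supports the siloing conclusion.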

“We were surprised to see these notable disconnections between different actors,” says Itay Cohen, a researcher and reverse engineer with Check Point. “This shows that Russia is willing to invest a lot of money in these operations to make sure that … if one group’s malware is detected, and a defense created, it won’t cause problems for other groups.”

The report is perhaps the first broad analysis of potential code similarities between the various tools used by groups thought to be connected to the Russian government. Check Point and Intezer focused on a dozen different groups, including the major Turla, Sofacy, and Black Energy espionage groups, finding that only in a few cases did the groups appear to share code.

The analysis discovered 22,000 connections between the samples, based on almost 4 million shared pieces of code. The samples were grouped into 200 different modules and 60 different families, the report stated.

The conclusion: The coders behind the Russian advanced persistent threat (APT) infrastructure are largely distributed and unconnected to each other.

“Every actor or organization under the Russian APT umbrella has its own dedicated malware development teams, working for years in parallel on similar malware toolkits and frameworks,” the researchers stated. “Knowing that a lot of these toolkits serve the same purpose, it is possible to spot redundancy in this parallel activity.”

The interactive map created by the companies illustrates the commonality between the different groups. Black Energy has almost a dozen components that share a great deal of code, creating a tight cluster on the visualization of the data.

“Each edge represents similar code between two families: it could be a lot of code, or just one function,” Cohen says. “We released this information open source, so other researchers can investigate the connections themselves.”

The companies originally expected the groups to share more code, since doing so would be more efficient and less costly. Instead, each of the 12 groups seems to be independent of the others, which means the nation is likely paying significant development costs, says Cohen.

“Different people worked on the same functionality for different development efforts,” he says. “So it obviously cost a lot of money, because there is redundant code being used.”

Along with MITRE’s ATT&CK framework, the effort is one of the few to try to make sense of the overall landscape of APTs, rather than mostly analyzing specific threats. To date, security firms have typically focused on reverse engineering the tools and techniques used in major campaigns, such as whether Fancy Bear’s tools have become more or less complex, or how much profit North Korea has made from its cyber operations.

Too Many Names

In the report, Check Point and Intezer’s researchers criticized the security industry for the “frustrating” failure to settle on a common naming standard for advanced persistent threats. The group known as Fancy Bear by CrowdStrike, for example, is called APT28 by FireEye, Sofacy by Kaspersky Lab, Pawn Storm by Trend Micro, and TG-4127 by Secureworks. Without a common lexicon for such threats, any analysis has to connect all the disparate names for the same threats, the researchers stressed.

“Every Russian APT actor and every malware family have more than a few names given to them by different vendors, researchers, and intelligence institutions,” the report stated. “Some names will be used by different vendors to describe different families; some malware families would be described with different names by the same vendor; other malware families simply do not have a clear name.”

The report relies heavily on other security firms’ and threat researchers’ attribution of code and modules to specific groups. While Check Point and Intezer connected code based on their similarities, the attribution of that code came from other groups. The older BlackEnergy and more recent Energetic Bear, for example, both had a matching sample of code that hides the attackers’ tracks by deleting the tool, but that code likely came from a public source, the report stated.

“Despite the fact that self-delete functions are pretty common in malware, it is rare to see an exact 1:1 match in the binary level, which matches only for these two malware families out of all the malware families indexed,” the report stated.

As part of the research, the companies released a tool – dubbed the Russian APT Detector – that scans files against signatures derived from the shared code to flag malware attributed to Russian espionage operations.


Article source: https://www.darkreading.com/threat-intelligence/russia-chooses-resiliency-over-efficiency-in-cyber-ops/d/d-id/1335896?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cloudflare Introduces ‘Bot Fight Mode’ Option for Site Operators

Goal is to help websites detect and block bad bot traffic, vendor says.

Content delivery network Cloudflare has launched a new feature that it says will help users of its services prevent malicious bots from scraping their websites, stealing credentials, misusing APIs, or launching other attacks.

Starting this week, site operators have the option to turn on a “bot fight mode” in the firewall settings of their Cloudflare dashboards. When enabled, Cloudflare will begin “tarpitting” any automated bots on their sites that it detects as malicious. It will also attempt to have the IP address from which the bot originated kicked offline.

Tarpitting is a technique that some cloud service providers use to increase the cost of a bot attack to bot operators. Some tarpits work by significantly delaying responses to a bad bot request or by sending bots down blind alleys in the same way honeypots for malware work.

In Cloudflare’s case, when its security mechanisms detect traffic coming from a malicious bot, it deploys CPU-intensive code that slows down the bot and forces the bot writer to expend more CPU cycles, increasing costs for them in the process.

To identify whether a bot is bad, Cloudflare analyzes data from a variety of sources, including its Gatebot DDoS mitigation system and from the over 20 million sites that use its service. The company looks at data such as abnormally high page views or bounce rates, unusually high or low session durations, and spikes in traffic from unexpected locations to automatically detect bad bots. According to Cloudflare, its bot detection mechanisms challenge some 3 billion bot requests per day.
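Cloudflare has not published its scoring model, but the signals it lists map naturally onto a simple scoring rule. A minimal sketch, with field names and thresholds invented purely for illustration:

```python
def looks_like_bad_bot(session):
    """Score a visitor session against the traffic signals Cloudflare describes.
    Every threshold below is made up for illustration only."""
    score = 0
    if session["pages_per_minute"] > 60:    # abnormally high page views
        score += 1
    if session["bounce_rate"] > 0.95:       # nearly every visit bounces
        score += 1
    if session["avg_session_seconds"] < 2:  # unusually short sessions
        score += 1
    if session["geo_spike"]:                # traffic surge from an odd location
        score += 1
    return score >= 2

print(looks_like_bad_bot({
    "pages_per_minute": 200, "bounce_rate": 0.99,
    "avg_session_seconds": 1, "geo_spike": False,
}))  # True
```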

“Tarpitting is taking measures to slow down the attack first rather than block it outright,” a Cloudflare spokeswoman says. Blocking outright allows a bot to move onto another target quickly, she says. “Tarpitting allows us to impact the bot by wasting some of its time and resources,” she adds. An example of this would be requiring the bot to solve a very computationally heavy math challenge, the spokeswoman notes.
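In other words, the tarpit shifts cost onto the attacker. A minimal sketch of one such computationally heavy challenge, a hash-based proof of work; this illustrates the concept and is not Cloudflare's actual mechanism:

```python
import hashlib
import itertools
import os

def issue_challenge(difficulty_bits=22):
    """Server side: hand the suspected bot a random puzzle prefix."""
    return os.urandom(8).hex(), difficulty_bits

def solve_challenge(prefix, difficulty_bits):
    """Bot side: grind SHA-256 until the digest has enough leading zero bits.
    Expected work doubles with every extra bit of difficulty."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{prefix}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # roughly 2**difficulty_bits hashes on average

prefix, bits = issue_challenge(difficulty_bits=18)  # kept small for a quick demo
print("solved with nonce", solve_challenge(prefix, bits))
```

Because the server can verify a solution with a single hash while the client must try many, the asymmetry is what makes the bot operator's CPU bill grow.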

The Bad Bot Problem
Such measures have become crucial because of the high and growing proportion of Internet traffic made up of automated bots. Not all of them are malicious. Many bots, such as those used by search engines to crawl the Web, monitor website metrics, or check for copyright violations, serve useful and often critical functions.

However, many more are used for malicious and other potentially unwanted purposes, such as for credential stuffing attacks, submitting junk data via online forms, scraping content, or breaking into user accounts. Sometimes even bots that are considered legitimate to use — such as inventory hoarding bots that lock up a retailer or ticketing website’s inventory — can be a major problem.

A Distil Networks report earlier this year found that nearly 38% of all Internet traffic in 2018 came from automated bots, both bad and good. Bad bots alone accounted for a startling 20.4% of all traffic on the Internet last year.

“Depending on the business of the organization, the problem can range from problematic to some parts of the business, such as stuffing sales leads on a website, to absolutely crippling, [such as] inventory hoarding and outright theft,” the Cloudflare spokeswoman says.

Current blocking approaches are effective at preventing one bot from attacking one website, but they do little to stop the bot from simply moving on to a softer target. “The intention of bot fight mode is to make bots spend more time and resources before being able to move on,” the spokeswoman noted.

In addition to tarpitting, Cloudflare will also work to have any IP that is sending out bad bots shut down. If the provider hosting the bot happens to be a partner, Cloudflare will hand over the IP to the partner. If the provider is not a partner, Cloudflare will still notify them of the bad IP while continuing to tarpit any traffic that originates from it.

Franklyn Jones, chief marketing officer at Cequence Security, says one reason for the high proportion of bad bots is the ease with which they can be deployed. “Launching an automated bot attack is a surprisingly simple process,” Jones says. “It requires only previously stolen credentials, software to plan and orchestrate the launch, and a proxy infrastructure to scale and obfuscate the attack.”

Because the total price tag could be just a few hundred dollars, bad actors see this strategy as a path of least resistance, he says. A survey that Osterman Research conducted on behalf of Cequence last year found that average enterprise organizations experience some 530 botnet attacks daily.

“These automated attacks have many goals, including account takeover, fake account creation, gift card fraud, content scraping, and other application business logic abuse,” Jones says.


Article source: https://www.darkreading.com/vulnerabilities---threats/cloudflare-introduces-bot-fight-mode-option-for-site-operators-/d/d-id/1335900?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Define & Prioritize Risk Management Goals

As risk management programs differ from business to business, these factors remain constant.

When evaluating the goals for a risk management program, many organizations focus on compliance or filling perceived gaps in their capabilities. The problem is, these priorities fall short of considering the full breadth of risks a business could face, security experts say.

“Those may or may not be the things they should be focusing on or making the center point of their program,” says Jack Jones, chairman of the FAIR Institute and executive vice president of R&D at RiskLens. “But they can’t know that until they do risk analysis.”

At the 2019 FAIR Conference, held this week in Washington, D.C., Jones moderated a panel focused on defining goals of an effective risk management program. Some of these goals are constant across organizations, he says in an interview with Dark Reading ahead of the event.

“From my perspective, one of the objectives of any risk management program is to be cost-effective,” he notes, as an example. A risk management program with a $5 million price tag costs far more once opportunity cost is considered, and the true figure can double when the business accounts for everything it would have to do to implement such a program, he explains. Firms that focus exclusively on best practices and compliance are not going to be cost-effective, he argues.
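The FAIR approach Jones champions makes that judgment quantitative: estimate annualized loss exposure, then compare it with the fully loaded cost of the program. A toy sketch in which every figure is invented for illustration:

```python
# Toy FAIR-style cost-effectiveness check; all numbers are invented.
program_cost = 5_000_000        # direct annual spend on the program
opportunity_multiplier = 2.0    # Jones: true cost can double once opportunity
                                # and implementation costs are counted
true_cost = program_cost * opportunity_multiplier

# Annualized loss expectancy = loss event frequency x loss magnitude.
loss_events_per_year = 0.8
loss_per_event = 12_000_000
ale = loss_events_per_year * loss_per_event

risk_reduction = 0.7            # fraction of the exposure the program mitigates
benefit = ale * risk_reduction

print(f"true cost ${true_cost:,.0f} vs mitigated loss ${benefit:,.0f}")
# Cost-effective only if the mitigated loss exceeds the fully loaded cost.
```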

While there should be “philosophical alignment” in the nature of risk management programs, their paths and considerations will likely differ depending on industry, culture, and resources.

Another core objective should be to bring all risk management groups together: audit, compliance, legal, and enterprise risk management teams “need to have very open and honest conversations,” Jones says. In previous roles as a CISO, he adds, clear communication “served me incredibly well,” especially when working with external auditors and regulators.

Joey Johnson, CISO of Premise Health and member of the aforementioned panel, takes this a step further and says security leaders should prioritize looking at risk holistically and treat it as a central business function. Risk should be considered outside the spectrum of IT, he explains.

“It feels so often the message security is trying to convey is lost,” says Johnson. The business will quickly “gloss over” security risk but is attuned to the conversation around overall risk. It’s up to security to deliver tangible metrics and a narrative to corporate stakeholders. “Whatever metrics you deliver have to provide a narrative that the audience can respond to,” he adds.

Now Hiring: Risk-Focused Leadership
The first step to implementing a risk management program is finding a leader who can translate risk into terms that inform a dialogue within the business. This leader should understand the company, where it’s headed, and its overall risk tolerance. Without one, any organization will have a hard time deploying risk resources in an appropriate way, Johnson explains.

Of course, finding this person is a challenge. There are typically two schools of leadership from a security perspective: those who come from a business background and have no security experience, and those who come from security but don’t know how it fits into the business.

“Getting a leader in place, setting up a function to see all different kinds of risk, is paramount,” says Johnson. Many security experts are predisposed to view everything through a security lens but often forget that security is only one category of risk; there are financial risk and market perception, among others. They need to be able to identify what is critical. If a company starts the process of acquiring a software company, for example, it should recognize it may not have the resources to handle software security.

Alignment between the business and security risk management is both “critical and overlooked,” Johnson emphasizes. Historically, security programs have involved a lot of “blocking and tackling” to keep people out of trouble, Johnson adds. But with the right strategy, security can be used to deliver valuable outcomes through a risk management program.

As an example, he points to his company’s third-party risk management program. In starting a vendor risk assessment, the security team found they were dealing with hundreds of thousands of vendors, many of them duplicative. They could have done a vendor risk assessment for each one — a time-consuming and expensive process — or eliminated some from the start, coached the ones they kept, and shaped their product road map around fewer, trusted vendors, he explains.

The company went with the latter. “All of that massively reduces our threat landscape, and it’s a zero-dollar initiative that reduced risk while fostering the business,” Johnson says. As a result, the business wanted to engage with security from the beginning and understood security’s position within the organization gave it a perspective that nobody else in the company had.


Article source: https://www.darkreading.com/risk/how-to-define-and-prioritize-risk-management-goals-/d/d-id/1335903?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Instagram phish poses as copyright infringement warning – don’t click!

Last month, we wrote about an Instagram scam that presented you with what looked like a two-factor authentication (2FA) code.

This time, the crooks are tapping into a concern that many of us have – falling foul of copyright law.

Lots of us innocently post and repost photos, GIFs, video clips and screenshots that we find amusing, informative, scary, and so forth…

…but even if we’re only ever posting photos that we took ourselves, we may occasionally find ourselves asked either to demonstrate our entitlement to use them, or to risk getting shut out of our account:

No one wants to get locked out of their social media account, even temporarily, over an unresolved argument about an image.

As a result, the temptation to click the link on the email is high – especially if you know that the ‘dispute’ is bogus or easily resolved, perhaps because you think you can quickly prove that you took the photos yourself.

Of course, in this case, clicking through immediately puts you in harm’s way:

As in the previous case of Instagram phishing, the crooks are using a free .CF domain name, “left stuffed” with subdomain text that disguises its bogus origins.

Remember that once you have the right to use a domain such as example.com, you also acquire the right to create subdomains such as www.example.com, anytext.youlike.example.com, or even (as in this case) instagram.copyrightinfringementappeal.example.com.

If there isn’t room in your browser’s address bar for the full domain name – and on a mobile device, there almost certainly won’t be – then the browser will show you the believable left-hand end of the domain and hide the important part at the right-hand end.
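One defensive habit is to extract the registrable right-hand end of the hostname before trusting anything you read in the address bar. A rough sketch using only the standard library; a production version should consult the Public Suffix List (for example via the third-party tldextract package), since the two-label shortcut below is wrong for suffixes like co.uk:

```python
from urllib.parse import urlsplit

def registrable_domain(url):
    """Crude right-hand-end check: keep the last two labels of the hostname."""
    host = urlsplit(url).hostname or ""
    return ".".join(host.split(".")[-2:])

# Hypothetical URL in the style of this campaign: the believable text is all
# on the left, and only the right-hand end identifies the real domain.
phish = "https://instagram.copyrightinfringementappeal.example.cf/login"
print(registrable_domain(phish))  # example.cf -- not instagram.com
```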

As you can see above, the crooks have acquired an HTTPS certificate for their imposter website, so you will see the necessary and expected padlock in your browser.

In Firefox, you can simply click on the padlock to view the certificate, which quickly reveals the deceit.
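For readers without desktop Firefox to hand, the same check can be done programmatically. A minimal sketch with Python's standard library: it returns the subject and issuer the server actually presented, the same information the padlock shows.

```python
import socket
import ssl

def peer_certificate(hostname, port=443):
    """Fetch the subject/issuer of the certificate a server presents.
    A mismatched or untrusted certificate raises SSLCertVerificationError
    here, which is itself the answer you were looking for."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return cert["subject"], cert["issuer"]

subject, issuer = peer_certificate("example.com")
print(subject)
print(issuer)
```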

If you do click through, however – and it’s hard (or effectively impossible) to drill down into the details of a web certificate in a mobile browser – then you hit the phishing attempt proper:

Notice how the crooks have added an age check to the page, apparently in a two-faced effort not only to make it look more realistic (a lot of American web services insist on age confirmations for legal reasons) but also to go after an additional item of personal data, namely your birthday.

(Ironically, in our tests this phishing page only actually uploaded the username and password fields when we clicked [Submit] – the date of birth we put in was ignored.)

If you enter a password, it gets uploaded via a web POST request back to the same .CF site used to serve up the original bogus notification page.

After that, a bogus Loading... page adds a drop of realism…

…and then the crooks present you with a decoy page that makes it look as though something positive has happened:

After all that, you’re calmly and automatically redirected to Instagram’s real login page for a final touch of verisimilitude:

Why Instagram?

You might be surprised to find that crooks are interested in accessing your Instagram account at all, rather than, say, your bank account, your RDP password or your cryptocoin wallet.

But as we pointed out in our previous Instagram phishing article:

Social media passwords are […] valuable to crooks, because the innards of your social media accounts typically give away much more about you than the crooks could find out with regular searches.

Worse still, a crook who’s inside your social media account can use it to trick your friends and family, too, so you’re not just putting yourself at risk by losing control of the account.

If you receive an outlandish business proposal or a bogus-sounding news report from someone you’ve never heard of, you’re unlikely to give it a second glance.

But a friend who cheerfully recommends a weird and wacky website is much more likely to persuade you to take a look, because… hey, that’s what friends do.

Thus, Instagram phishing.

What to do?

Instagram copyright infringement reports are a real thing, but they don’t unfold in the way the crooks are pretending in this attack.

We recommend that you read Instagram’s official explanation from the company’s own help pages – if you know what the real deal is supposed to look like, then you’ll never fall for a fake warning like this one.

Notably, Instagram says that if it removes content without contacting you first,

you’ll receive a notification from Instagram that includes the name and email address of the rights owner who made the report and/or the details of the report. If you believe the content shouldn’t have been removed, you can follow up with them directly to try to resolve the issue.

Here are five more tips for staying out of trouble:

  • Look out for obvious errors. In this attack, the crooks were careless with the email they sent. It contains numerous grammatical and typographic errors, which are a big giveaway. Closer inspection would reveal that the email came from a Turkish hosting company, and that the clickable button in the email leads to a bogus .CF domain, not anywhere you would expect a genuine Instagram page to be hosted.
  • Check your address bar. If a web address is too long to fit cleanly into the address bar of your browser, take the trouble to scroll rightwards in the address text to find the right-hand end. Closer inspection would quickly reveal the bogus domain name here.
  • Consider using a password manager. Good password managers associate usernames and passwords with already-known login pages, so your password manager wouldn’t offer to fill in an unexpected password field on an unknown web domain – it simply wouldn’t know what account to use.
  • Never log in via email links. If you need to log in to a site such as Instagram for some official purpose, find your own way there, for example via a bookmark you created earlier, or by using the official mobile app. That way, you’ll avoid putting your real password into the wrong site.
  • Learn how your online services really handle disputes or security issues. Don’t get taken in by warnings you receive by email. Find your own way to the real site and use the service’s own help pages to find out how things really work. That way, you’ll be much harder to con.

And a bonus sixth tip if you’re looking after other users…

  • Make sure your users are clued up. Phishing emails like the one shown here are easy to fall for because of their elegant simplicity – by copying distinctive pages from well-known brands, the crooks keep your suspicions low. Sophos Phish Threat lets you train and test your users using realistic but safe phishing simulations.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Gsj6ZQF0XFA/

Jira development and ticketing software hit by critical flaws

Admins looking after Atlassian’s Jira development and ticketing tools have a spot of patching work on their hands after the company released updates addressing two critical flaws.

Two product families are affected by the advisory:

  • Jira Service Desk Server and Jira Service Desk Data Center (CVE-2019-14994), and
  • Jira Server and Jira Data Center (CVE-2019-15001).

According to Atlassian’s alert, customers and employees should only be able to use Jira Service Desk to “raise requests and view issues,” such as IT tickets.

However, by exploiting the critical URL path traversal flaw in CVE-2019-14994, an attacker with access to the portal could bypass these restrictions, viewing issues and making requests relating to Jira Service Desk projects, Jira Core projects, and Jira Software projects.

Although Atlassian has seen no evidence of exploitation, independent research by security company Tenable has found 25,000 portals that are vulnerable to this issue:

belonging to organizations in healthcare, government, education and manufacturing in the United States, Canada, Europe and Australia.

The researcher who discovered the flaw, Sam Curry, tweeted on 18 September that he plans to reveal more details of the vulnerability, along with a proof-of-concept exploit.

The other critical flaw, CVE-2019-15001, is described as an “authenticated template injection vulnerability in the Jira Importers Plugin (JIM)” through which an attacker could remotely execute code on vulnerable servers running a vulnerable version of Jira Server or Jira Data Center.

The limitation is that an attacker would need Jira Admin access, said the advisory.

The vulnerability is credited to researcher Daniil Dmitriev, who also discovered a similar server-side injection flaw in July, CVE-2019-11581.

Affected versions

CVE-2019-14994

All versions of Jira Service Desk Server and Jira Service Desk Data Center before 3.9.16, from 3.10.0 before 3.16.8, from 4.0.0 before 4.1.3, from 4.2.0 before 4.2.5, from 4.3.0 before 4.3.4, and from 4.4.0 before 4.4.1 are on the fix list.

Jira Service Desk Cloud is not affected, nor are Jira Core or Jira Software on servers where Jira Service Desk is not installed.

The patched versions are v3.9.16, v3.16.8, v4.1.3, v4.2.5, v4.3.4, and v4.4.1.

CVE-2019-15001

For Jira Server and Jira Data Center, it’s best to study the complete list published with the advisory but the issue appears to go back to version 7.0.10, released only months after the product’s launch in 2015.

As with the first flaw, Jira Service Desk Cloud is unaffected, as are Jira Core and Jira Software on servers not running Jira Service Desk.

The patched versions are v7.6.16, v7.13.8, v8.1.3, v8.2.5, v8.3.4, and v8.4.

If admins are unable to upgrade immediately, Atlassian suggests the temporary workaround of blocking PUT requests to the endpoint /rest/jira-importers-plugin/1.0/demo/create (unblocking it once the update is applied) rather than disabling the Jira Importers plugin.
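After applying the block at a proxy or load balancer, it's worth confirming from the outside that the endpoint really is unreachable. A hypothetical spot-check using the third-party requests library; the base URL below is a placeholder:

```python
import requests  # third-party: pip install requests

JIRA_BASE = "https://jira.example.com"  # placeholder for your Jira instance
ENDPOINT = "/rest/jira-importers-plugin/1.0/demo/create"

# With the workaround in place, the PUT should be rejected at the proxy
# (e.g. 403/404/405) instead of reaching the vulnerable plugin.
resp = requests.put(JIRA_BASE + ENDPOINT, timeout=10)
print(resp.status_code)
```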

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VgbcYoVOVyk/

Apple restricts old adblocking tech

With the 19 September release of iOS 13, Apple quietly turned off the ability for adblocking companies to use their own blocking mechanisms in the Safari browser. Developers of iOS 13 and macOS Catalina applications will have to use an Apple-supplied alternative, which adblocking companies have said is too limited.

Browsers use an application programming interface (API) to talk to extensions. Safari had its own, based on JavaScript, and Apple even published a list of the extensions that used it in its Extensions Gallery.

Then, in 2014, Apple showed developers what was to come by introducing App Extensions – a new way for applications to talk to the operating system and each other on iOS and Safari. With App Extensions, developers can let users share content and functionality between apps.

This was part of a two-pronged strategy for content blocking on Apple’s part. Apple readied app developers for the new approach to blocking in 2015 when it quietly announced a new feature in iOS 9 called Content Blocking. This enabled developers to put content blocking rules directly in their apps.

This gave adblockers a choice: They could either switch to Content Blocking or keep using their extensions the old way. AdBlock eventually did both, building Content Blocking support into its extension.

In a blog post explaining its support for Content Blocking, AdBlock admitted that Safari runs faster using this feature than it does using the traditional AdBlocker legacy extension. However, there are some downsides, it warned.

Content blocking doesn’t let you whitelist websites, which makes it difficult to selectively support sites by viewing their ads, said the AdBlock team.

However, the biggest problem according to AdBlock is the Content Blocker’s 50,000 rule limit. This constrains adblockers’ capabilities because they routinely use far more rules than that. If you want to write lots of custom whitelist filters of your own, then you might also find that some of them don’t work, it warned.
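For context, Safari content blockers consume a JSON list of trigger/action rules, which is where a 50,000-rule cap bites hard: popular filter lists alone run to tens of thousands of entries. A sketch of generating such a list in Python; the rule shape here is reproduced from memory, so treat the exact schema as an assumption and check WebKit's documentation:

```python
import json

MAX_RULES = 50_000  # the per-blocker cap AdBlock describes

def to_content_blocker_rules(domains):
    """Build content-blocker rules in the trigger/action JSON shape."""
    rules = []
    for domain in domains:
        escaped = domain.replace(".", "\\.")  # regex-escape the dots
        rules.append({
            "trigger": {"url-filter": "^https?://.*" + escaped},
            "action": {"type": "block"},
        })
    if len(rules) > MAX_RULES:
        raise ValueError(f"{len(rules):,} rules exceeds the {MAX_RULES:,} limit")
    return json.dumps(rules, indent=2)

print(to_content_blocker_rules(["ads.example.com", "tracker.example.net"]))
```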

As ZDNet pointed out, Apple has gradually been putting the thumbscrews on Safari extension developers by discouraging the old JavaScript in favour of App Extensions, and last year it announced that it would turn them off altogether.

That means there’s no longer a place for adblocker extensions that use the old, more liberal rule base for Safari. This has caused some blockers, like uBlock Origin, to abandon Safari altogether. One developer explained why in this GitHub post:

Apple has begun phasing out Safari extensions as extensions, and has instead been implenting a new extensions framework which is extremley limited in adblocking functions, only allowing “content blockers”, which are just links bundled as an app which Safari enforces [sic].

Others welcomed the move. They argue that relying on Apple to process Content Blocking in the browser rather than having a third-party adblocker intercept web requests from pages offers more security.

Google drew lots of flak from the community when it did something similar to Apple with version 3 of Manifest, the API that lets extensions talk to Chromium-based browsers. People complained at the time that Google’s interests were conflicted because it makes such a lot of money from advertising. Apple prides itself on the opposite, though, making its money through hardware, software, and cloud services.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/O7GTlWpHw1M/

Facebook has booted tens of thousands of data-grabbing apps

Facebook announced in May 2018 that to date, it had suspended 200 apps in the app investigation and audit that it promised after the Cambridge Analytica scandal.

A few months later, in August 2018, it announced that the number of yanked apps had doubled to 400.

On Friday, Facebook updated its ongoing App Developer Investigation, which it launched in March 2018.

It said that the current roster of castigated apps is in the “tens of thousands”… or long enough to make a swing out of if you tie them together to fly into Congress, next time you get hauled in to chat with lawmakers about data-scraping, election security, Facebook’s role in society, censorship of conservative voices, regulation, an impenetrable privacy policy, racial discrimination in housing ads or what have you.

Facebook didn’t give specifics on the exact number of suspended apps, but it did say that they’re associated with 400 developers. Ime Archibong, VP of Product Partnerships, said in the post that those tens of thousands of apps aren’t necessarily all creepster apps. Many, he said, weren’t actually live and were instead being tested when Facebook suspended them. Others simply never bothered to respond to Facebook’s audit, so out they went.

Archibong:

It is not unusual for developers to have multiple test apps that never get rolled out. And in many cases, the developers did not respond to our request for information so we suspended them, honoring our commitment to take action.

The “commitment to take action” was declared in March 2018 by CEO Mark Zuckerberg. In a Facebook post, he announced a crackdown on abuse of Facebook’s platform, strengthened policies, and pledged an easier way for people to revoke apps’ ability to use their data.

There were a number of other changes made at that time, including disabling the ability to look people up by their phone numbers or email addresses, plus cracking down on third-party data access by yanking apps’ ability to see personal information about users.

In Friday’s post, Archibong said that some of the tens of thousands of suspended apps have been banned completely, for a range of reasons, including that they slurped up Facebook user data, then made it publicly available without protecting people’s identity.

Which brings us to Archibong’s reference to Facebook taking legal action “when necessary.”

It found it necessary in May 2019, when it filed a suit against Rankwave, a South Korean social media analytics firm, alleging that the company abused Facebook’s developer platform’s data, that Rankwave refused to cooperate with the platform’s mandatory compliance audit, and that it likewise spurned Facebook’s request to delete data.

In Archibong’s post on Friday, he also pointed to other contexts in which the platform has gone after app developers. Last month it went after two app developers – LionMobi and JediMobi – for putting apps onto Google Play that allegedly installed malware on users’ phones. The malware then created fake user clicks on Facebook ads, making it look like the phones’ owners had clicked on ads that they hadn’t actually touched.

Facebook says that it refunded advertisers for the phony clicks. In a separate case, in March 2019, it sued two Ukrainians – Gleb Sluchevsky and Andrey Gorbachov – for allegedly scraping private user data through malicious browser extensions that masqueraded as quizzes.

There’s more to our app-dev eyeballing

Stay tuned, there’s more to come, Archibong said:

We are far from finished. As each month goes by, we have incorporated what we learned and re-examined the ways that developers can build using our platforms. We’ve also improved the ways we investigate and enforce against potential policy violations that we find.

For example, beyond the investigation, Facebook has improved how it evaluates and sets policies for all developers that build on its platforms. It has removed a number of application programming interfaces (APIs), the channels that developers use to access various types of data, and it has fattened up the teams dedicated to investigating bad actors and to slapping/litigating them into shape.

That sets the stage for Facebook to annually review every active app with access to more than basic user information, Archibong says, and to pick from a range of enforcement actions when it finds violators.

For one, Facebook’s cooked up new rules to more strictly control a developer’s access to user data:

Apps that provide minimal utility for users, like personality quizzes, may not be allowed on Facebook. Apps may not request a person’s data unless the developer uses it to meaningfully improve the quality of a person’s experience. They must also clearly demonstrate to people how their data would be used to provide them that experience.

Archibong said that Facebook has also clarified that the platform can suspend or revoke a developer’s access to any API that it hasn’t used in the past 90 days. Beyond that, it’s barring apps that request what it deems a “disproportionate amount of information from users relative to the value they provide.”

We can expect yet more app developer requirements to come out of the company’s recent agreement with the Federal Trade Commission (FTC): the one from July 2019, where Facebook got fined $5 billion for losing control of users’ data.

The FTC said at the time that the new, 20-year settlement order will overhaul how Facebook makes privacy decisions and boosts accountability at the board level. It called for establishment of an independent privacy committee of Facebook’s board of directors, thereby removing “unfettered control” by Zuckerberg over decisions affecting user privacy. The new agreement also requires developers to annually certify compliance with Facebook policies. Archibong said in Friday’s post that any developer that doesn’t comply “will be held accountable.”

A bit of perspective

Facebook didn’t bumble into this mess by accident, critics have stressed. As Senator Ron Wyden told the New York Times on Friday, it was asking for it:

Facebook put up a neon sign that said ‘Free Private Data,’ and let app developers have their fill of Americans’ personal info.

App developers aren’t simply a plague of privacy locusts sucking Facebook dry without its permission or its knowledge. Rather, Facebook has apparently used access to user data sometimes as a carrot, and sometimes as a stick, depending on whether a developer or company was seen as a friend or a rival.

This was illustrated in December 2018, when Facebook staff’s private emails were published by a fake news inquiry in the UK. One example: after it limited the data on users’ friends that developers could see in 2014/2015, it kept a whitelist of certain companies that it allowed to maintain full access to friend data.

Facebook said its investigation into app developers, their use of user data and their adherence to Facebook policies, will continue.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SBAAxQd8xAE/

Why do cloud leaks keep happening? Because no one has a clue how their instances are configured

The ongoing rash of data leaks caused by misconfigured clouds is the result of companies having virtually no visibility into how their cloud instances are configured, and very little ability to audit and manage them.

This less-than-sunny news comes courtesy of the team at McAfee, which said in its latest Infrastructure as a Service (IaaS) risk report that 99 per cent of exposed instances go unnoticed by the enterprises running them.

Such unsecured instances (usually storage buckets or databases left accessible to the general public) have been responsible for many of the largest data leaks in recent years after researchers or, in some cases, hackers, stumbled upon the exposed servers and made off with their contents.

McAfee’s study, based on a sample of 1,000 enterprises in 11 countries as well as anonymized customer data, suggests that most businesses are woefully unaware of what data they have facing the internet.

Customers told the security house that, on average, around 37 misconfiguration incidents involving their systems and folders arise per month. In reality, McAfee places the number closer to 3,500 incidents per month, as databases, storage buckets, and cloud servers are inadvertently left open or exposed by vulnerable web applications.

The problem, said McAfee, is most enterprises have little way to actually see what is exposed and where. The study reckons just 26 per cent of the firms it polled have the ability to audit their cloud configurations.

Additionally, companies usually end up running a greater variety of services than execs and IT admins realise. Of those surveyed, 76 per cent thought they used multiple cloud vendors, when McAfee’s study found the actual number was more like 92 per cent.

“It’s possible the speed of cloud adoption is putting some practitioners behind,” McAfee said in the paper.

“Infrastructure changes rapidly in the cloud, opening the door for mistakes as code is released in continuous integration/continuous delivery (CI/CD) practices.”

While such findings are not particularly new (we have known for a while that most enterprises keep poor track of where their clouds are running and what data is being shared), the sheer number of companies vulnerable has to be more than a little alarming, especially after years of major incidents that collectively should have served as a wake-up call.

“We hypothesize that there is a practitioner-leadership disconnect at work here,” McAfee added.

“Ninety per cent of companies told us they’d experienced some security issue in IaaS, misconfiguration or otherwise. But twice as many manager-level IT personnel, those closest to the IaaS environment, thought they’d never experienced an issue compared to their CISO, CTO, and CIO leadership.”

As for what can be done, McAfee noted a number of strategies, including the regular use of auditing tools and security frameworks to make sure your cloud platforms aren’t spitting out VMs with the wrong settings. ®
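What such auditing looks like depends on the platform. For AWS S3, for instance, a first pass can be as simple as walking bucket ACLs with the boto3 SDK; a minimal read-only sketch, assuming credentials that can list buckets and read their ACLs:

```python
import boto3  # third-party AWS SDK: pip install boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def publicly_readable_buckets():
    """Flag S3 buckets whose ACLs grant access to everyone - the classic
    misconfiguration behind many of the leaks described above."""
    s3 = boto3.client("s3")
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        if any(g["Grantee"].get("URI") in PUBLIC_GROUPS for g in acl["Grants"]):
            exposed.append(bucket["Name"])
    return exposed

print(publicly_readable_buckets())
```

ACLs are only one exposure path (bucket policies and account-level public-access-block settings matter too), which is why purpose-built auditing tools cover far more ground than this sketch.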


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/24/mcafee_cloud_leak_study/

6 Best Practices for Performing Physical Penetration Tests

A cautionary tale from a pen test gone wrong in an Iowa county courthouse.

Physical penetration tests are an excellent and often overlooked way to test an organization’s security posture. However, they can come with serious consequences for testers if they aren’t properly prepared. Look no further than the recent arrest of two pen testers probing Iowa’s Dallas County courthouse security.

According to a September 13 report in the Des Moines Register, the men were employed with Coalfire, a cybersecurity adviser with headquarters in Colorado, and outfitted with “numerous burglary tools.” They told authorities they were “hired to test out the courthouse alarm system’s viability and to gauge law enforcement’s response time, an alleged contract that Dallas County officials said they had no knowledge of….”

The Register reported:

Authorities later found out the state court administration did, in fact, hire the men to attempt “unauthorized access” to court records “through various means” in order to check for potential security vulnerabilities of Iowa’s electronic court records, according to Iowa Judicial Branch officials.

But the state court administration “did not intend, or anticipate, those efforts to include the forced entry into a building,” a Wednesday news release from the Iowa Judicial Branch read.

Coalfire, in a September 18 press release, said that the company and the Iowa State Court Administration “believed they were in agreement regarding the physical security assessments for the locations included in the scope of work. Yet, recent events have shown that Coalfire and State Court Administration had different interpretations of the scope of the agreement.” The statement further noted that both parties plan to conduct independent reviews and release the contractual documents executed between both parties.

Obviously, there are many sides to this story, and more will come out. In the meantime, here are six lessons learned from my own experience conducting physical penetration testing:

1. Get It in Writing
Out of the gate, define the rules of engagement (ROE) in as much detail as possible. You don’t want to find yourself in a holding cell wondering why “various means” didn’t give you carte blanche to break in. The statement of work (SOW) also needs to specifically define what you’re going to be testing, what you are trying to access or accomplish, and how. Remove as much ambiguity from the ROE and SOW as possible; otherwise, the engagement and its results will be left to interpretation rather than hard facts.

Furthermore, there needs to be indemnification language in the SOW that covers when you’ve been given bad scope information from the client. Assume that if it can go wrong, it will, and then cover it here.

During physical assessments, we suggest having the client first conduct a “dry run” of your approach to see what problems occur. We also have their signatory authority review the operation order (OPORD) and approve it, paying special attention to the avenues of approach or the concept of operations (CONOPS) in how the engagement will be executed. That way, things like whether “forced physical entry has been authorized” will never be an issue, which is in the mutual best interest of all.

2. Take Your Paperwork with You
When doing physical penetration testing, our consultants carry the basic equivalent of a “get out of jail free” card. In most cases, that means a signed letter of permission with legally binding language allowing them to be doing exactly what they are doing. Carrying copies of the OPORD and ROE isn’t a bad idea either.

3. Have a Dedicated Point of Contact During Testing
It’s imperative that somebody is available to keep a misunderstanding from escalating into something worse. The key is to have multiple ways to resolve problems if things go bad. If nothing else, your own contacts need to know what’s going on.

It’s also important to ensure that you have properly planned your engagement. We use the PACE approach — having Primary, Alternate, Contingency, and Emergency plans of action throughout our engagements. Chances are, a properly planned engagement may still have some drift, but knowing whom to contact and when can save you from a lot of grief — getting arrested or worse.

4. Go Further if You Can
If possible, inform local law enforcement, in advance of the testing, about what you will be doing, and provide them with signed permission from your client. If not out of scope, make sure the facilities and/or physical security heads know about your testing plans, too. It matters. 

5. Understand What You’re Walking Into
You need to understand your client’s escalation and response procedures. In certain situations during physical pen testing against critical infrastructure or other secure facilities, a break-in, even one conducted during an authorized test, will trigger response actions and may still be considered a compliance violation under, for example, the critical infrastructure protection standards of the North American Electric Reliability Corporation, a nonprofit international regulatory authority covering Canada, the United States, and Mexico. Due to these special considerations, we make arrangements to be greeted on the other side by an observer who is “escorting” us from a distance, so we aren’t in violation of any federal laws or regulatory requirements.

Each situation is unique, and all factors need to be considered. For instance, commercial facilities that frequently receive threats of violence and/or face an increased risk of terrorist attack may have armed physical security personnel. Understand the dangers before you go in and the regulations you might violate, and get in sync with your client on handling complexities when they arise.

6. Know Your Exposure
Ultimately, it’s not on the client to protect you. It’s on you to protect both yourself and the client. From a legal perspective, you want to know whether your errors and omissions/professional liability insurance covers these kinds of situations. You want to make sure the liability and indemnification language calls out how general situations of scope breach are handled and defines some baseline rules. And you want to understand the difference between negligence and gross negligence.

It’s important to go to great lengths to define your engagements with clients to include scope and authorized versus unauthorized actions. All engagements are partnerships, and by providing clear and proactive communication, you can ensure the best outcome for all parties involved.

 


Article source: https://www.darkreading.com/risk/6-best-practices-for-performing-physical-penetration-tests/a/d-id/1335871?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Nine words to ruin your Monday: Emergency Internet Explorer patch amid in-the-wild attacks

Microsoft today issued a rare emergency security update for Internet Explorer to address a critical flaw in the browser that’s being exploited right now in the wild.

Redmond says the vulnerability, a scripting-engine memory-corruption bug designated CVE-2019-1367, can be abused by a malicious webpage or email to achieve remote code execution: that means Windows PCs can be hijacked by viewing a suitably booby-trapped website, or message, when using Internet Explorer. Malware, spyware, and other software nasties can be injected to run on the computer in that case.

Discovery of the flaw, and its exploitation in the wild by miscreants to commandeer systems, was attributed to Clément Lecigne of the Google Threat Analysis Group. The programming blunder is present in at least IE 9 to 11.

Such flaws are not uncommon, and Microsoft typically patches anywhere from 10 to 20 browser and scripting-engine remote code execution bugs each month in its Patch Tuesday bundle. Because they allow remote code execution with little or no user warning or interaction, Redmond considers such bugs to be critical security risks.

In this case, the severity of the flaw, combined with the fact that the vulnerability is being actively targeted, has prompted Microsoft to break its normal patch cycle and release the update today, rather than wait until October 8, when the next Patch Tuesday drop is due to arrive.

Granted, Internet Explorer is not the ubiquitous web browser it once was. According to figures from Netmarketshare, IE lags behind Chrome and Firefox, accounting for just 8.3 per cent of the desktop world. Microsoft is pushing users to move to its newer Edge browser and its improved security protections.

Even those who don’t use IE as their primary browser are likely to still have it installed on their PCs, however, so it’s worth downloading and installing this patch (via Windows Update for most) even if you don’t use IE often.

While you’re updating, grab this Windows Defender fix

Microsoft also dropped a fix for a less-severe denial of service vulnerability in the Windows Defender security tool.

CVE-2019-1255 describes a file-handling error in Defender that causes the security tool to generate a false positive when scanning an application. An attacker who already has access to the system could abuse the flaw to make the tool block some applications.

“An attacker could exploit the vulnerability to prevent legitimate accounts from executing legitimate system binaries,” Microsoft said.

Because the flaw would only prevent access, and because exploiting it requires the attacker to already have code running on the target machine, it should be considered a far lower priority than the IE bug. It has not been previously disclosed nor targeted in the wild.

In most cases, users will not even notice the update being installed, as the fix is automatically pushed out with the Malware Protection Engine update. Credit for the discovery was given to Charalampos Billinis of F-Secure Countercept and Wenxu Wu of Tencent Security Xuanwu Lab. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/23/microsoft_internet_explorer_cve_2019_1367/