STE WILLIAMS

Apple urged to legalize code injection: Let apps do JavaScript hot-fixes

Faced with an existential threat to its hot patching service, Rollout.io is appealing to Apple to extend its app oversight into post-publication injections of JavaScript code.

CTO and cofounder Eyal Keren has penned an open letter to Apple asking the i-obsessed device maker to develop and deploy a “Live Update Service Certificate” as a way to sign JavaScript code so it can be pushed safely to iOS apps for instant content updates.

Apple already reviews iOS apps destined for its App Store, to make sure they conform with its shifting and sometimes vague rules. In so doing, it manages mostly to limit the presence of malicious apps while also enforcing modest minimum standards for quality.

The review process, however, can take anywhere from a few days to a week or more, which turns out to be inconvenient when app developers want to make immediate changes to their code.

Code pushing (or hot patching) frameworks like Rollout and JSPatch emerged to give developers the ability to deploy code without Apple’s involvement, an arrangement Apple until recently has tolerated.

But over the past week, developers using Rollout and JSPatch in their apps have reported receiving warning notices from Apple.

While app modifications of this sort can be harmless – replacing interface elements with different designs, for example – they also have the potential to alter previously approved behavior through a technique known as method swizzling, which involves swapping one function for another.
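Method swizzling is an Objective-C runtime technique, but the underlying idea – replacing one method implementation with another at run time – can be sketched in Python as monkey-patching. The class and method names below are purely illustrative:

```python
class PaymentView:
    """Stand-in for an approved, shipped class."""
    def on_submit(self):
        return "charge card via approved flow"

# A hot patch delivered after app review could swap the implementation:
_original_on_submit = PaymentView.on_submit

def _swizzled_on_submit(self):
    # The injected behaviour runs first, then falls through to the original.
    return "patched behaviour injected -> " + _original_on_submit(self)

PaymentView.on_submit = _swizzled_on_submit
```

After the swap, every existing call site silently runs the new code – which is exactly why post-review behaviour changes worry Apple.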

Apple’s concern appears to be specific to Rollout and JSPatch because the two frameworks can “pass arbitrary parameters to dynamic methods.”

In theory, a developer could exploit this capability to call private APIs or to activate and deactivate a malicious function without detection. Other hot patching frameworks that haven’t elicited a response from Apple, like Expo and CodePush, have more limited capabilities because they don’t have the same access to native Objective-C or Swift methods.

Keren, however, in a phone interview with The Register, said, “Rollout blocked private framework API calls a long time ago.” He said he believes Apple’s main concern has to do with the possibility of a man-in-the-middle attack against the patching system, by which private information could be stolen.

Keren added he wasn’t aware of whether any such attack has occurred, but he acknowledges that Apple has a legitimate concern.

Apple’s desire to limit JavaScript injection may be reasonable, but the barrier it’s putting into place is more of an obstacle to misuse than a guarantee of safety. Any app that communicates with a remote server can be coded to pass private information or perform other actions without Apple’s knowledge or approval.

Still, there are benefits to being able to update apps without seeking permission and waiting for a green light from Apple. Keren suggests Apple offer a service to legalize what has until now been a gray market activity. He would like to see Apple develop a means to issue Live Update Service Certificates, which would be similar to other Apple signing certificates.

“Just as Apple signs .ipa files, which are pushed to the App Store and then downloaded to end user devices, we propose Apple begin to sign Javascript code, which is returned to the developer, who can then push it directly to live devices,” Keren suggested in his post. “The Apple SDK would verify the signature authenticity and only execute verified code.”
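A sketch of the verify-before-execute flow Keren describes – using an HMAC here as a simplified stand-in for Apple's (asymmetric) certificate-based signing, purely to illustrate the gatekeeping logic:

```python
import hashlib
import hmac

def sign(js_code: bytes, key: bytes) -> str:
    # Stand-in for the proposed "Live Update Service Certificate" signing step.
    return hmac.new(key, js_code, hashlib.sha256).hexdigest()

def verify_and_accept(js_code: bytes, signature: str, key: bytes) -> bool:
    # The SDK on the device would verify the signature and
    # only hand verified code to the JavaScript engine.
    expected = hmac.new(key, js_code, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A real scheme would use public-key signatures, so apps ship only a verification key; but the control flow – refuse to run any update whose signature fails – is the same.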

The Register asked Apple to comment, but the company continued its habit of silence.

Keren doesn’t expect an immediate response from Apple. But pointing to the 50 million devices using Rollout and to the popularity of other frameworks that support pushing code changes through JavaScript, he said, “I think there’s a need. Developers are struggling with the ability to continuously improve their apps.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/14/apple_urged_to_legalize_code_injection/

Hyper-V guest escape, drive-by PDF pwnage, Office holes, SMB flaws – and more now patched

Patch Tuesday After taking a month off, Microsoft’s Patch Tuesday is back – and it’s a blockbuster edition. There are 18 bundles of patches covering 140 separate security vulnerabilities.

These flaws range from a hypervisor escape in Hyper-V, remote-code execution via PDF and Office files and malicious SMB traffic, to the usual barrage of information leaks and privilege escalations.

This follows Microsoft postponing its February Patch Tuesday due to problems within its build system: Microsoft is consolidating more and more of its Windows code – from Server and client to mobile – into one source base, dubbed OneCore. Issuing security patches last month proved problematic enough to delay their distribution, El Reg understands.

An SMB link-of-death bug disclosed before February’s Patch Tuesday was patched by a third-party security vendor – and now Redmond has its official patch out, so sysadmins can get their fix from the horse’s mouth.

We’ve got a full rundown of this month’s security fixes – make sure you install them ASAP before miscreants start exploiting them in the wild:

  • MS17-006 This fixes 12 CVE-listed flaws in Internet Explorer. The bulk deal with memory corruption issues, but the worst would allow a remote code execution attack when an IE user visited a malicious website. “An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights,” said Microsoft, which is super bad if the user is an administrator.
  • MS17-007 Microsoft’s other browser, Edge, was supposed to be lighter weight and more secure, but this bundle resolves a whopping 32 vulnerabilities. “The most severe of the vulnerabilities could allow remote code execution if a user views a specially crafted webpage using Microsoft Edge,” said Redmond. “An attacker who successfully exploited these vulnerabilities could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”
  • MS17-008 Hyper-V gets an 11-fix bundle this month, the worst being a hypervisor escape from guest to host. Gulp. Microsoft warns “an authenticated attacker on a guest operating system runs a specially crafted application that causes the Hyper-V host operating system to execute arbitrary code.” However, if you’re not running the Hyper-V hypervisor then you’re safe from this kind of attack.
  • MS17-009 This patch contains a single critical fix for the Windows PDF library. The internet is full of dodgy PDFs, and reading one using Windows 8 or above, or any version of Windows Server from 2012 on, could allow remote code execution. Windows 7 systems aren’t affected by this issue. Opening a PDF booby-trapped with malicious code on a vulnerable machine will cause that code to run; on Windows 10, merely viewing a page in Edge with a bad PDF embedded could get you owned immediately. Here’s the skinny from Microsoft:

    A remote code execution vulnerability exists when Microsoft Windows PDF Library improperly handles objects in memory. The vulnerability could corrupt memory in a way that enables an attacker to execute arbitrary code in the context of the current user. An attacker who successfully exploited the vulnerability could gain the same user rights as the current user. If the current user is logged on with administrative user rights, an attacker could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

    To exploit the vulnerability on Windows 10 systems with Microsoft Edge set as the default browser, an attacker could host a specially crafted website that contains malicious PDF content and then convince users to view the website. The attacker could also take advantage of compromised websites, or websites that accept or host user-provided content or advertisements, by adding specially crafted PDF content to such sites.

  • MS17-010 Windows SMB Server gets six vulnerabilities patched. In the worst case, a specially built Server Message Block (SMB) 1.0 packet can inject malicious code into a server on the network, and run that code. Microsoft admitted: “Remote code execution vulnerabilities exist in the way that the Microsoft Server Message Block 1.0 (SMBv1) server handles certain requests. An attacker who successfully exploited the vulnerabilities could gain the ability to execute code on the target server. To exploit the vulnerability, in most situations, an unauthenticated attacker could send a specially crafted packet to a targeted SMBv1 server.”
  • MS17-011 Based on this patch update, Uniscribe is a mess. Redmond included 29 fixes in this bundle going all the way back to Windows Vista. The bulk cover information disclosure problems, but there are a handful of flaws that allow remote code execution, though not privilege escalation.
  • MS17-012 Windows itself only gets six bugfixes, but they are critical ones, including for the SMB link-of-death vulnerability as well as code injection and execution. There are patches here for all versions of Windows going back to Vista, and the worst of them could “allow remote code execution if an attacker runs a specially crafted application that connects to an iSNS Server and then issues malicious requests to the server.”
  • MS17-013 The last of the so-called critical patches contains a dozen fixes for the Microsoft Graphics Component used in Office, Skype, Lync, and Silverlight. Visiting the wrong website or opening a malware-ridden document could completely pwn your system without this patch, and everyone back to Windows Vista is vulnerable.
  • MS17-014 The first of the important (arguably still critical) patches covers 12 flaws in Office. Some of these go back to Office 2007 (including Mac versions), and can be exploited by a dodgy Word or Excel file to run code on a system when opened.
  • MS17-015 Microsoft Exchange Server 2013 and 2016 get a single bugfix, but it’s a doozy. A link within an email could exploit a vulnerability in Outlook Web Access to run code on a vulnerable system.
  • MS17-016 Microsoft Internet Information Services gets a long-standing issue, going back to Windows Vista, resolved. This would allow an attacker to monitor a user’s web sessions.
  • MS17-017 Redmond’s kernel gets four fixes and all versions of the operating system are affected. While none of them would allow a crook to break into a system, they would all allow someone already in to elevate their status to admin level by fooling the Transaction Manager or instigating buffer overruns.
  • MS17-018 Kernel-mode drivers also need an upgrade, with eight flaws resolved, going back to Vista. Again, memory problems are to blame and can give a logged-in attacker admin privileges to ransack an infected PC.
  • MS17-019 This fixes a single hole in Active Directory Federation Services, affecting server operating systems only. Unpatched versions of Server 2008 and above would allow a hacker to read information on the system.
  • MS17-020 Fans of retro storage media who are using Windows DVD Maker will need to patch this single vulnerability for the Vista and Windows 7 builds. It’s not a critical flaw, but would allow an attacker to scan a vulnerable system for information-gathering purposes.
  • MS17-021 Windows DirectShow gets a fix for a flaw affecting all client and server operating systems since Vista. Again, it’s an information gathering bug that can be exploited by code hidden in a website’s media display engine.
  • MS17-022 Microsoft XML Core Services also gets a single fix for a problem spread by social engineering. Clicking on the wrong link could cause information leakage to a cunning criminal using this vulnerability.
  • MS17-023 It wouldn’t be a Patch Tuesday without a critical hole in Adobe’s Flash Player, and this month is no exception. Windows 8.1 machines and those with more recent operating systems will need this update for their bundled Adobe Flash libraries; without it, the Flash player embedded in IE 10 and 11 and in Edge is unsafe.

Adobe has also released its own patches, one Windows-only and the other hitting users of Macintosh, Linux and Chrome OS as well.

  • APSB17-07 This is the big one for all Flash users, pretty much whatever your operating system if you’re running version 24.0.0.221 and earlier. The patch fixes memory and buffer overflow issues that would allow remote code execution and others that cause information leakage.
  • APSB17-08 This fix is for Windows users only using version 12.2.7.197 and earlier of Shockwave. It’s not a critical flaw, but would allow escalation of privileges.

As ever with all of these, get your patching done early – the bad hombres won’t wait. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/15/microsoft_massive_patch_tuesday_bundle/

News in brief: site helps translation from geek to English; sex toy maker settles suit; social media under fire

Your daily round-up of some of the other stories in the news

Online dictionary explains tech in human-friendly terms

Ever wondered how to explain something complex and technical to a layperson? Maybe you’re doing family computer and security support and you’ve been looking for the best way to explain what, say, a firewall does, or how Tor works?

Look no further: a project nurtured by Jigsaw, Google’s tech incubator, has launched a very nicely done website that offers human-friendly definitions of technical terms.

Contributors to the easy-to-use website, the Sideways Dictionary, include Alphabet executive chairman Eric Schmidt, who contributed definitions for machine learning and cloud computing, and Vint Cerf, the “father of the internet”. Cerf’s contributions include human-friendly definitions of domain name servers and IP addresses.

We’re fans of explaining geeky things clearly on Naked Security, so we like this a lot.

Sex toy manufacturer settles privacy lawsuit

Standard Innovation, a Canadian maker of sex toys, has agreed to pay C$4m to settle a privacy lawsuit after its “smart vibrator”, the We-Vibe, was found to be tracking users’ data without their consent.

The device, which connected users and their partners via a smartphone app, was found to be seriously lacking in security: the Bluetooth connection was so insecure that anyone within range could have seized control of the device, and the device was sharing user data with the manufacturer.

The C$4m will be distributed among US users – the lawsuit was brought in Illinois – who will be able to claim up to $10,000 in compensation.

Social media providers under fire

Facebook, Twitter and Google came under fire from British MPs for not taking tough enough action to tackle hate crime. Executives being grilled by the House of Commons Home Affairs committee were told by committee chair Yvette Cooper that they had a “terrible reputation” for failing to act over posts that fell foul of laws on hate speech and other offensive material.

One Labour member of the committee told the companies that they were doing little more than “commercial prostitution” after one executive tried to explain why an example of Holocaust denial didn’t breach its guidelines.

Meanwhile, a German lawmaker has proposed fining social networks that fail to tackle hate speech or defamatory “fake news” up to €50m.

The proposal from Heiko Maas, Germany’s minister of justice and consumer protection, would require social media platforms to make it easy for users to report hate speech and to respond promptly, with posts that are “obviously criminal content” being removed within 24 hours.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/n66-OI6aMug/

How a serious Apache vulnerability struts its stuff

Last week’s vulnerability news was full of web technologies with intriguing names, including Apache, Struts, Java, Jakarta and OGNL.

(In case you’re wondering, OGNL is short for Object-Graph Navigation Language, which is hard to explain briefly even to people who are familiar with it, but we’ll get back to that in a moment.)

We’ll start with a quick glossary:

  • Apache HTTPD is a popular, open source web server that is very widely used. Quite how widely depends on whom you ask and how you measure, but it’s a fair guess that somewhere between a third and a half of the world’s websites use it.
  • Java, not to be confused with JavaScript, is a popular programming language, best known for writing business applications, browser applets (although these are rarely used these days) and servlets, which are stripped-down programs that generate customised web pages on the fly as users browse your site.
  • Struts is an Apache-maintained web framework that lets you use Java servlets to manage and deliver the content of your site. Note that Struts is a server-side technology: it isn’t about what runs in your users’ browsers, but about what runs on your server to generate the web pages that Apache serves up.
  • Jakarta is the umbrella name for a number of Java-based programming projects that were run for the community by Apache.
  • OGNL is a curious beast that is intended to let you specify how the contents of your web pages are derived from various databases stored behind the scenes. But OGNL is as good as a programming language in its own right, because it can be used to specify Java and other commands that are used in the process.

Last week’s media confabulation of all these names was a security vulnerability that goes by the unassuming name of CVE-2017-5638, or “Possible Remote Code Execution when performing file upload based on Jakarta Multipart parser”.

Simply put, web servers using the Struts 2 software include a component to deal with web uploads, which is typically what happens when a user fills in a web form on your site and clicks [Submit]:

Web form uploads are usually delivered using an HTTP request called a POST.

(As the name suggests, a POST is essentially the opposite of a GET request, which is how you download webpages, files and other content.)

For example, a web page containing a form will produce a multipart/form-data reply in which each input field in the form is packaged and uploaded in a section of its own, in much the same way that multipart emails (e.g. messages with attachments) are formatted.
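A minimal Python sketch of how such a multipart/form-data body is assembled (the field names and boundary below are illustrative, not taken from any real site):

```python
import uuid

def encode_multipart(fields: dict) -> tuple[bytes, str]:
    """Package each form field in a MIME-style section of its own."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="{name}"\r\n'
            f'\r\n'
            f'{value}\r\n'
        )
    parts.append(f'--{boundary}--\r\n')   # closing boundary
    content_type = f'multipart/form-data; boundary={boundary}'
    return ''.join(parts).encode(), content_type
```

The Content-Type header announces the boundary string, and the server-side parser uses it to tease the body back apart into fields.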

The theory is that encoding the submitted data into its separate fields before uploading it is less error-prone than packaging it into some sort of all-in-one binary blob and uploading that.

The part of Struts 2 server-side software that deals with web requests of this sort is referred to by Apache as the Jakarta based file upload Multipart parser, and its job, greatly simplified, is to check for a Content-Type header of multipart/form-data, and then to tease the POSTed data into its multiple parts.

If the Content-Type is incorrect, the Jakarta parser is supposed to respond with an error message that explains what was wrong, which means taking the invalid Content-Type data and processing it in some way to extract information that would be helpful in fixing the problem.
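Greatly simplified, and with entirely hypothetical names (this is not actual Struts code), the parser's decision looks something like this sketch:

```python
def handle_request(headers: dict) -> str:
    ctype = headers.get("Content-Type", "")
    if ctype.startswith("multipart/form-data"):
        return "parse the POSTed body into its multiple parts"
    # Error path: the invalid header value is folded into the error
    # message. In vulnerable Struts 2 versions that value was
    # *evaluated* as OGNL rather than treated as inert text --
    # which is where CVE-2017-5638 lives.
    return f"error: the request did not contain multipart data: {ctype!r}"
```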

Untrusted remote content

You can probably guess where this is going: a bug in the processing of erroneous, untrusted remote content, leading to an exploitable vulnerability.

It turns out that if the Content-Type header is a fragment of OGNL code, the Jakarta parser processes it not as text but as an OGNL “program”, and OGNL code can contain components that specify commands that should be run on the server itself when dealing with the data.

In other words, you can embed server commands in a Struts 2 form POST header, thus deliberately triggering an error, and the server will run your commands, ironically while trying to deal gracefully with the error.

That sort of exploit is aptly known as RCE, or Remote Code Execution, which means exactly what it says and always spells trouble, in this case with a capital T.

Without logging in, without fetching the original web form page in the first place, and without even having any form data to upload, a crook may be able to trigger this bug simply by visiting the web page listed in the action field of any of your web forms.

For example, the well-known command-line web clients Wget and cURL can generate POST requests with headers of your choice.
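The same thing can be sketched in Python with the standard library's urllib; the URL here is hypothetical and the header value is deliberately benign, not the exploit string:

```python
import urllib.request

# Hypothetical Struts 2 form action; no request is actually sent here.
req = urllib.request.Request(
    "http://target.example/orders/upload.action",
    data=b"",                   # no form data is needed to trigger the bug
    headers={"Content-Type": "text/benign-demo"},
    method="POST",
)
```

Calling `urllib.request.urlopen(req)` would send it; the point is simply that the Content-Type header is entirely under the client's control.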

The malformed Content-Type header you need is not trivial to figure out by yourself, but unfortunately you don’t need to, because working examples written in what you might call OGNL’s “pseudo-Java” source code have already been furnished on many websites.

What to do?

Patch early, patch often!

If you use Struts 2 somewhere in your network, and still haven’t applied the latest patch, you really ought to, because this vulnerability is easy to exploit by anyone who wants to try.

Sadly, experience suggests that there are often wannabe hackers out there who fancy their “15 minutes of fame”, even if they don’t have any explicit criminal aims and wouldn’t show the same sort of vandalistic tendencies against traditional property.

The Canada Revenue Agency (CRA), for example, discovered this the hard way back in 2014, when a 19-year-old student was accused of turning the Heartbleed vulnerability against the Agency’s website.

He allegedly extracted 90 social insurance numbers over a six-hour period, just to prove he could.

This time, the CRA acted quickly, taking its systems down to apply the vital patch over the previous weekend, even though it’s tax season in Canada.

If Canada Revenue can find a way to accommodate an unscheduled outage of this sort, you can, too!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ojm5CMdWkDU/

Black Hat Review Board Spotlight: Beyond the Bio with Jamie Butler

Get to know the Black Hat Review Board in a new interview series, Beyond the Bio. In this series, Black Hat Review Board Members offer insight from their favorite exploits and pastimes to their most memorable Black Hat experiences.

In this issue, Black Hat interviews Jamie Butler, Chief Technology Officer at Endgame.

Why did you choose your profession?

I was attracted to computer security at a young age because I felt it was me against the machine.  That machine could be very expensive and owned by a corporation or a nation.  With a small investment in a personal computer, one can truly take on the world.  Hacking helped democratize the world for me.

Who’s someone in the community you’ve never met, but would like to?

I would really like to meet Steve Wozniak and learn more about the early days of hardware hacking.

What’s the biggest issue facing InfoSec that needs to be solved?

The biggest issue facing the industry today is a lack of transparency among security vendors. Organizations spent $75 billion on security last year, yet they are still being breached at an alarming rate. Nonetheless, vendors are still making promises about their technology that they can’t keep, leading to confusion and discontent among organizations. The InfoSec industry as a whole needs to commit to being more open and explicit about how its capabilities can solve problems for customers.

Have you ever been hacked?

Well, that depends on your definition of hacked. My corporate laptop was infected once in 2003 by the Blaster worm. At the time, I had two endpoint protection products running on my laptop; however, they were in detect-only mode. When I dug into the logs of the products, I saw that they had both detected the buffer overflow. I was curious about how the protection software worked and how it could be bypassed. This research led to the Phrack Volume 0x0b, Issue 0x3e, Phile #0x05 article.

How do you “unplug” in your free time?

After a long day at work, I need thirty minutes to an hour of television or movies – something to watch or have on that is completely mind-numbing. When I have a day to myself, I usually choose to read a biography or some other non-fiction book related to technology, entrepreneurship, or finance.

Article source: http://www.darkreading.com/vulnerabilities---threats/advanced-threats/black-hat-review-board-spotlight--beyond-the-bio-with-jamie-butler/d/d-id/1328399?_mc=RSS_DR_EDT

Debunking 5 Myths About DNS

From the boardroom to IT and the end user, the Domain Name System is often misunderstood, which can leave organizations vulnerable to attacks.

The Domain Name System (DNS) is the common denominator for all communication on the Internet. It touches everyone. Every online transaction – good or bad – begins with a DNS lookup. Despite its critical role in our online lives, DNS is often misunderstood and, as a result, leaves organizations more vulnerable to attacks. I’d like to address five myths about DNS.

Myth 1: DNS Is not a Boardroom Issue
If you were to walk into your average corporate executive suite and say “DNS,” most likely the executives would wonder why this technical detail is being mentioned to them. Most C-level and boardroom execs view DNS purely as an IT issue. Yet that could not be farther from the truth.

Domain names and related subdomains are critical company assets – your brand ambassadors – that need to be carefully managed and protected to ensure a healthy, profitable business. If these assets are used in phishing scams or other cyberattacks, a company’s revenue and reputation can be severely damaged.

Today, too often, it’s the organization’s legal team that truly understands the value of DNS to the corporate brand. In many companies, the IT department initially registers the domain names but leaves their oversight to the legal department. A better approach is for legal and technology teams to collaborate to ensure that all properly registered domains have policies, procedures, and tools in place to protect them.

Myth 2: DNS Drives on Auto-Pilot
A DNS architecture is not static – it is constantly evolving and requires care. Many corporate infrastructures suffer from the assumption that DNS is something you configure and leave alone because “it just works.” In reality, DNS cannot ride on auto-pilot; ongoing DNS hygiene is essential. I suspect there are many environments that never monitor their DNS traffic to see where the domain-name-to-IP-address resolution is being performed. Is the server that is giving the authoritative answers truly authoritative, or is it a malicious server impersonating an authoritative role?

DNS architectures need to be engineered with careful thought as to how long entries should be cached, and where cache miss traffic resolution should be performed. For example, users can change the DNS resolvers they go to and, thereby, significantly impact corporate business risk. Is this allowed in your environment? Robust DNS architectures need to be created that also follow and enforce DNS architecture best practices.
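The caching concern can be made concrete with a toy resolver cache: the time-to-live (TTL) on each entry decides how long an answer is trusted before the resolver must ask upstream again. Names and values below are made up:

```python
import time

class ResolverCache:
    """Toy DNS answer cache keyed by name, honouring per-entry TTLs."""
    def __init__(self):
        self._entries = {}

    def store(self, name: str, ip: str, ttl_seconds: float) -> None:
        self._entries[name] = (ip, time.monotonic() + ttl_seconds)

    def lookup(self, name: str):
        entry = self._entries.get(name)
        if entry is None:
            return None            # cache miss: must query upstream
        ip, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._entries[name]
            return None            # expired: must re-resolve
        return ip
```

Long TTLs reduce upstream traffic but slow the rollout of changes; short TTLs keep answers fresh but push more resolution traffic – and more trust – to wherever cache misses are resolved.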

Myth 3: DNS Is not a Security Issue
In 2016, DNS celebrated its 33rd birthday. In its early days, DNS was not a key security issue. In the first edition of my book, “Designing Network Security,” published in 1999, I only made passing mention of securing critical infrastructure services such as DNS. It wasn’t until 2005 that I started incorporating in-depth DNS security into my security workshops and assessments. Over the last five to 10 years, cybercriminals have increasingly utilized DNS for various malware infrastructures. Despite the rise in DNS-related cyberattacks, such as DNS Changer, companies still overlook DNS during security assessments. Today, DNS security is essential for protecting against cyberattacks. Historical and real-time visibility of the DNS can provide critical context for suspicious indicators of compromise (IoCs) for SOCs and other security teams.   

Myth 4: DNS-Related Risks Are Small
Today DNS is integral to online criminal infrastructures. Why? Because purchasing domain names is cheap and easy. In fact, upwards of tens of thousands of domains are generated per day by a single malware family, according to Trend Micro. The number of DNS-related cyberattacks is escalating across all types of industries, from healthcare to retail, as well as across all government agencies. For example, in 2016, enforcement agencies took down 4,500 domain names selling counterfeit luxury goods, sportswear, spare parts, electronics, pharmaceuticals, toiletries and other fake products. According to the APWG Phishing Trends Report Q4 2016, 2016 was the worst year for phishing ever. The total number of phishing attacks observed by the APWG in 2016 was a record 1,220,523, a 65% increase over 2015. DNS-related risks are great and can have a significant impact on a company’s finances and reputation.

Myth 5: DNS=Translating Names to Numbers
DNS is not just about mapping domain names to IP addresses. It plays a larger role in Internet communications. DNS also provides critical information, including:

  • MX records — specifies the domain name of a mail recipient’s email address;
  • SRV records — defines both the port number and the domain name used by a service;
  • DNSSEC (Domain Name System Security Extensions) records — cryptographically signs each DNS record;
  • CNAME records — maps a name to another name.
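A sketch of a raw DNS question, built with Python's struct module, makes the point that queries carry a record *type*: an MX query (type 15) uses the same wire format as the familiar A query (type 1). The query ID and domain name are arbitrary examples:

```python
import struct

def build_dns_query(name: str, qtype: int) -> bytes:
    """Assemble a minimal DNS query packet (header + one question)."""
    # Header: ID, flags (recursion desired), 1 question,
    # 0 answer / authority / additional records.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)   # QCLASS 1 = IN
    return header + question

MX, A = 15, 1   # record type codes from the DNS specification
```

Swapping the qtype is all it takes to ask for mail servers, service locations, or signatures instead of addresses.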

From the boardroom to legal and IT departments and the end user, DNS is critical to the success of every corporation. Understanding the myths about DNS and aligning corporate strategies for assessing and addressing them is an important step to improving your organization’s security posture.


Merike is the CTO of Farsight Security, responsible for developing the technical strategy and executing its vision. Prior to joining Farsight Security, Merike held positions as CISO for Internet Identity (IID), and founder of Doubleshot Security, which provided strategic and … View Full Bio

Article source: http://www.darkreading.com/debunking-5-myths-about-dns-/a/d-id/1328385?_mc=RSS_DR_EDT

New ‘PetrWrap’ Signals Intensified Rivalry Among Ransomware Gangs

PetrWrap modifies Petya ransomware so its authors can’t control unauthorized use of their malware.

Researchers at Kaspersky Lab have discovered a new ransomware family that basically steals features from the infamous Petya ransomware.

The new PetrWrap uses Petya ransomware to encrypt its victims’ data. PetrWrap’s creators built a special module that modifies the original malware “on the fly,” meaning Petya’s creators cannot take control of it.

Experts say the PetrWrap gang’s actions could be a sign of increasing competition among players in the ransomware space.

“The modification and repurposing of malware code is not a new phenomenon; exploit kits are often created and sold on the Dark Web,” says Gerben Kleijn, a security analyst with Bishop Fox. “However, the blatant hijacking of another author’s ransomware and replacing function calls to make it seem like a new ransomware version altogether has not been a common trend for ransomware.”

Petya, which was originally discovered in May 2016, encrypts data stored on a computer and overwrites the hard disk drive’s master boot record so infected PCs can’t boot into the operating system.

It’s a prime example of the ransomware-as-a-service model, where threat actors offer ransomware “on demand” to spread its use among several distributors and receive part of the profit. PetrWrap’s creators, however, managed to bypass payment to Petya’s creators by somehow cheating the protection mechanisms put in place by Petya’s authors.

Until now, ransomware authors were primarily concerned with implementing encryption correctly so users couldn’t decrypt files without paying ransom. Now, authors who don’t want their code modified may implement mechanisms to complicate reverse engineering and modification, leading to more advanced ransomware. Others may create code specifically for reuse by other threat actors and sell it on the Dark Web, Kleijn notes.

PetrWrap’s authors found a way around these protective mechanisms. Now they can use Petya to infect machines, change the code in real time to hide which malware they’re using, and avoid paying Petya’s creators.

Petya uses a strong cryptographic algorithm. The people behind PetrWrap substitute their own public and private encryption keys, which lets them decrypt victims’ machines after a ransom is paid without ever needing a key from Petya’s authors.
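The key substitution described above can be sketched with a toy key-wrapping scheme. This is purely an illustration: the XOR keystream below is not real cryptography, and the names (`infect`, `master_key`) are invented for the sketch, not taken from Petya or PetrWrap. The idea is that a fresh per-victim key encrypts the data, and that key is itself encrypted (“wrapped”) under the operator’s master key, so only the master key’s holder can build a decryptor.

```python
import hashlib
import secrets

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode, XORed over the data.
    Illustration only; real ransomware uses vetted ciphers."""
    out = bytearray()
    block = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        block += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The operator's long-term key. Swapping this value out at run time is,
# conceptually, what PetrWrap does to Petya: same machinery, new owner.
master_key = secrets.token_bytes(32)

def infect(plaintext: bytes) -> tuple[bytes, bytes]:
    victim_key = secrets.token_bytes(32)                 # fresh key per victim
    ciphertext = keystream_xor(plaintext, victim_key)    # encrypt the data
    wrapped_key = keystream_xor(victim_key, master_key)  # wrap the victim key
    return ciphertext, wrapped_key

def decrypt_after_payment(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    victim_key = keystream_xor(wrapped_key, master_key)  # only master key unwraps
    return keystream_xor(ciphertext, victim_key)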

This strong algorithm is likely what attracted PetrWrap’s authors to Petya, which has been updated after mistakes in earlier versions allowed security researchers to find ways of decrypting files. Victims’ machines are now consistently encrypted when Petya attacks, making it a strong malware family for PetrWrap’s creators to exploit, notes Kleijn.

Anton Ivenov, senior security researcher for Anti-Ransom at Kaspersky Lab, spins the trend of threat actors targeting one another in a positive light:

“We are now seeing that threat actors are starting to devour each other and from our perspective, this is a sign of growing competition between ransomware gangs,” he says in a statement.

“Theoretically, this is good, because the more time criminal actors spend on fighting and fooling each other, the less organized they will be, and the less effective their malicious campaigns will be.”

What’s worrisome, he continues, is that PetrWrap is used in targeted attacks, which are increasingly aimed at enterprise victims.

More cybercriminals are launching targeted attacks on organizations with the primary goal of encrypting data. Those employing ransomware for these attacks typically seek vulnerable servers and use special frameworks to get the access they need to install ransomware throughout the network.

Kleijn says that while the discovery of PetrWrap doesn’t pose a new risk to businesses, it does indicate the ransomware industry is evolving. Expect to see the rise of more ransomware variants, especially as authors begin to sell code to fellow attackers, he says.

Businesses can defend against ransomware attacks by using security software with behavior-based detection, which observes how malware operates on victim systems and detects unknown ransomware. They should also back up data, assess security of control networks, and educate employees, especially operational and engineering staff, on recent attacks.
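One common behavior-based heuristic can be sketched in a few lines: freshly encrypted files have near-uniform byte distributions, so a monitor that sees many files suddenly jump to high entropy has a strong ransomware signal. The sketch below is a toy illustration of that idea, not any vendor’s actual detection logic, and the 7.5-bit threshold is an assumed value chosen for the example.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte (0.0 for empty input, at most 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Ciphertext approaches 8 bits/byte; text and most documents sit far lower."""
    return shannon_entropy(data) > threshold
```

A real monitor would combine this with other signals, such as the rate of file renames and writes, before flagging a process.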


Kelly is an associate editor for InformationWeek. She most recently reported on financial tech for Insurance Technology, before which she was a staff writer for InformationWeek and InformationWeek Education.

Article source: http://www.darkreading.com/threat-intelligence/new-petrwrap-signals-intensified-rivalry-among-ransomware-gangs/d/d-id/1328401?_mc=RSS_DR_EDT

Researchers find 36 Android devices shipping with pre-installed malware

Update: Tuesday, 3/14/2017: Check Point originally listed 38 devices, but later dropped the number to 36. Nexus 5 and Nexus 5x were originally on the list of infected phones, but Check Point removed those models without explanation in an update of its blog post.

SophosLabs cited a rising tide of Android-based attacks in its 2017 Malware Forecast last month, and the problem was further illustrated last week in a report that Windows-based malware was making its way into Android apps during development. And now researchers have discovered another security issue: devices shipping with pre-installed malware.

Check Point’s Mobile Threat Prevention team says it detected malware in 36 Android devices belonging to a large telecommunications company and a multinational technology company.

The team said malicious code was already present on the devices even before they were issued to users. Just as the Windows-based malware cited above was introduced during the development process, so were the malicious apps Check Point discovered. Six infections were apparently added to the device’s ROM by bad actors using system privileges.

Most of the sinister apps steal information and display unwanted ads. The malware discovered is well-known to SophosLabs researchers. One is Loki, used by attackers to gain device system privileges. Another is a piece of ransomware known as Slocker, which relies on Tor to conceal the bad guys’ identities.

Check Point didn’t name the affected companies, but it did list the infected devices, which include:

  • A Xiaomi Mi 4i and Redmi
  • A Galaxy A5, S4 and S7
  • A Galaxy Note 2, 3, 4, 5 and 8
  • A ZTE x500
  • A Galaxy Note Edge
  • A Galaxy Tab 2 and S2
  • An Oppo N3
  • An Asus Zenfone 2
  • A Lenovo S90 and A850
  • An OppoR7 plus
  • An LG G4

The growing threat to Android users was explained in detail last month in Sophos’ malware forecast. SophosLabs analysis systems processed more than 8.5m suspicious Android applications in 2016. More than half of them were either malware or potentially unwanted applications (PUA), including poorly behaved adware.

When the lab reviewed the top 10 malware families targeting Android, Andr/PornClk was the biggest, accounting for more than 20% of the cases reviewed in 2016. Andr/CNSMS, an SMS sender with Chinese origins, was the second largest (13% of cases), followed by Andr/DroidRT, an Android rootkit (10%), and Andr/SmsSend (8%).

In addition to malware, Android has been found vulnerable to a variety of hacking techniques. In one such case, researchers found that attackers can crack Pattern Lock within five attempts by using video and computer vision algorithm software.

Last week, researchers at Palo Alto Networks discovered 132 Android apps on Google Play tainted with hidden IFrames linking to malicious domains in their local HTML pages. Interestingly, the malware was Windows-based. SophosLabs showed additional research tracing that malware back to a developer who goes by the name Nandarok.

Defensive measures

Though Android security risks remain pervasive, there’s plenty users can do to minimize their exposure.

In the case of the malware discovered by Check Point, a simple piece of advice is to scan a new phone for malware. Though it can make sense for small companies with limited budgets to purchase the devices through cheaper resellers, it’s important to research the sellers to see if they’ve sold problematic technology in the past. Trusted websites and stores remain the safest route of purchase.

In a more general sense and outside of this specific problem, there are some best practices users can follow when buying and using Android apps:

  • Stick to Google Play. It isn’t perfect, but Google does put plenty of effort into preventing malware arriving in the first place, or purging it from the Play Store if it shows up. In contrast, many alternative markets are little more than a free-for-all where app creators can upload anything they want, and frequently do.
  • Consider using an Android anti-virus. By blocking the install of malicious and unwanted apps, even if they come from Google Play, you can spare yourself lots of trouble.
  • Avoid apps with a low reputation. If no one knows anything about a new app yet, don’t install it on a work phone, because your IT department won’t thank you if something goes wrong.
  • Patch early, patch often. When buying a new phone model, check the vendor’s attitude to updates and the speed that patches arrive. Why not put “faster, more effective patching” on your list of desirable features, alongside or ahead of hardware advances such as “cooler camera” and “funkier screen”?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OKUysprRTdw/

SXSW: the real cost of free services is giving up your data

Naked Security is reporting from SXSW in Austin, Texas this week

“Even with the best will in the world, you cannot be completely anonymous online,” said Liz Kintzele, VP of sales and marketing at Golden Frog, at this year’s SXSW conference in Austin, Texas.

Kintzele took her audience on a broad journey around anonymity and privacy on today’s internet. It might have been light on technical details, but if the goal was to force the audience to stop mindlessly browsing and think about some of the issues around big data, then it was a success.

Kintzele’s message is simple: “There is no such thing as total anonymity on today’s internet.” From the moment you sign in to your smartphone or desktop computer, how you interact online has the potential to be tracked and used by others for profit.

Many will be uncomfortable with this and look for ways to hide their online footprint. The goal should not be to believe the claims of total anonymity, noted Kintzele, but to think carefully about what online services you use, what information you are giving out and how this information could be used by others.

Kintzele focused on how this situation has occurred, laying out how the internet grew to feed the “big data” machine. She highlighted three moments that led to the current attitudes and expectations of widespread data collection: the launch of Google in 1998, Facebook in 2004 and the alterations to Google’s privacy policy in 2014. These moves ushered in and gave tacit permission for the monetisation of big data.

More of our data is being stored and used by companies every day. The Investigatory Powers Act 2016 in the UK increases the amount of personal data that has to be retained, and recent changes to the FCC’s privacy regulations have expanded US telecom operators’ ability to log and process personal information.

In a world where services are free at the point of use, the real cost is giving up your personal data. From search engines and browsers to websites, apps and social media hubs, you are constantly providing companies with this data.

When you sign up for free you give away information. You hand over your personal details, usernames, birthdays and location. But you also hand over more data when you use these services. You hand over the websites you visit, the links you click, the images you look at, the people who you communicate with, your location, your device’s unique identifiers, and much more.

All of this data can and will be used and sold for profit.

Although Kintzele reminded the audience that there is no simple answer to the challenge of anonymity and privacy, the talk was decidedly light on practical examples that could be used by the audience. Tor was highlighted as a good starting point for those with concerns, but it was also used to highlight the dangers of trusting a privacy provider. Tor has to gather some of your personal data to work effectively, and when your data is being collected it has the potential to be used.

Kintzele’s message was about taking responsibility and looking beyond the advertising to see what a service will do with your data – that tends to be hidden deep in the terms and conditions, but it will be there.

Ghostery’s move towards a more transparent approach was praised. This browser extension blocks advertising trackers, tags, and beacons. Its business model does involve selling data about the blocked adverts back to advertisers so they can refine their approach. What is key in Kintzele’s mind is that Ghostery is transparent and clear with its users about what it does with the gathered data, which was not always the case.

Kintzele argues for a personal approach that limits your exposure as much as possible. Be aware of the information you are offering whenever you use a service, be it one that limits the data you hand over or one that will use as much of your data as possible for profit. Look beyond the surface of the companies and tools you use, and drill down into the terms and conditions to see not only what your data will be used for, but who actually controls the company.

Most importantly, you have to take control. Your privacy is not the responsibility of others; it is yours.

You can never hide every single action you make on the internet, but neither should you trust these companies to respect your data while profiting from it. Educate yourself, research the tools you can use and reduce your visible digital footprint. These will all make it harder for companies to track you and monetise you, and will put you back in control of your personal data.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8B_kw0VSYWY/

You could soon have to share your genetic screen results with your boss

A bill that would strip genetic privacy protections from workplace “wellness” programs, allowing companies to require employees to undergo genetic testing or risk paying stiff penalties, was approved in the US by a House committee on Wednesday.

All 22 Republicans supported passage of HR 1313, known as the Preserving Employee Wellness Programs Act. All 17 Democrats on the committee opposed the bill.

Stat, which first shone light on the bill on Friday, reports that employers are at present barred from collecting genetic data by legislation that includes GINA: the 2008 Genetic Information Nondiscrimination Act.

HR 1313 gets past GINA by stating explicitly that the law’s protections don’t apply when genetic testing is done as part of a workplace wellness program.

Such programs are voluntary. Sort of. Employers have made them hard to say no to, given that the Affordable Care Act (ACA) allows them to charge their workers 30% more for health insurance if they decline to participate. HR 1313 would up that to 50%.

The programs offer things like cholesterol and blood pressure checks; get-healthy incentives such as gym memberships; smoking cessation help; and bans on junk food in vending machines. But they also frequently include questionnaires about personal details, such as plans to become pregnant, for example. Employers hire outside companies to run these programs, and those companies in turn sometimes sell employees’ health information.

Thus, as Stat reports, employees can find themselves getting marketed at, be it for weight-loss programs or running shoes.

Nancy Cox, president of the American Society of Human Genetics (ASHG), said in a letter to the House Committee on Education and the Workforce the day before it approved the bill that the legislation “would undermine fundamentally the privacy provisions” of GINA and the 1990 Americans with Disabilities Act. From the letter:

[It would allow] employers to ask employees invasive questions about their and their families’ health, including genetic tests they, their spouses, and their children may have undergone.

According to the Kaiser Family Foundation, the average annual premium for employer-sponsored family health coverage in 2016 was $18,142. Employees who decline to share their genetic and health information can already be charged 30% more under the ACA, an average extra of $5,443 in annual premiums, and HR 1313 would raise that cap to 50%.
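The premium arithmetic is straightforward. The $18,142 average and the 30% and 50% caps come from the figures above; the dollar amounts are simple percentages of that average:

```python
AVG_FAMILY_PREMIUM = 18_142  # Kaiser Family Foundation, 2016

def penalty(premium: float, surcharge: float) -> float:
    """Extra annual cost for declining to participate in a wellness program."""
    return premium * surcharge

aca_penalty = penalty(AVG_FAMILY_PREMIUM, 0.30)     # current ACA cap
hr1313_penalty = penalty(AVG_FAMILY_PREMIUM, 0.50)  # cap under HR 1313

print(round(aca_penalty))     # 5443, the article's $5,443 figure
print(round(hr1313_penalty))  # 9071
```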

What are the chances that HR 1313 will be passed? As of Monday, opposition was growing. As Stat reported, one of the country’s leading wellness associations – the Wellness Council of America – said it would oppose the bill, calling the proposed changes “punitive”.

The association signed a statement from a new group consisting of trade associations, wellness companies and other businesses, called Ethical Wellness, that announced its opposition to the bill, which it said provides “too much opportunity for such data to be misused or misinterpreted”.

Tom Price, secretary of Health and Human Services, went on NBC’s Meet the Press on Sunday to discuss the Republicans’ efforts to repeal and replace the ACA. It sounds like he hasn’t read HR 1313, but the Trump administration might have “significant concerns” about the bill.

I’m not familiar with the bill, but it sounds like there would be some significant concerns about it. If the department’s asked to evaluate it, or if it’s coming through the department, we’ll be glad to take a look at it.

The House Ways and Means Committee and the House Energy and Commerce Committee are considering the bill. According to the National Law Review, it’s expected to be included in the larger healthcare bill replacing the Affordable Care Act.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wJItyF1JD0w/