
Advanced, Low-Cost Ransomware Tools on the Rise

New offerings cost as little as $175 and come with lots of anti-detection bells and whistles.

Malware developers keep making it easier for even the most broke and technically inept bad guys to jump on the ransomware craze with cheap and user-friendly tools that are bound to fuel plenty more computer blackmail attacks in 2017. The latest evidence of the trend comes from a report out today of a new variant offered up by Russian cybercriminals through a software-as-a-service delivery mechanism that costs criminals only $175 to get started.

According to researchers with Recorded Future, the Karmen Cryptolocker malware variant is ransomware-as-a-service (RaaS) built on top of the open-source Hidden Tear project. It follows the standard ransomware m.o., with mechanisms for taking data hostage with AES-256 encryption, accepting Bitcoin payment from victims, and automatically decrypting data upon payment. In addition to its low cost, the service gives “clients” – otherwise known as criminals in the law-abiding world – a dashboard to track infections and tally total money stolen, as well as update the software when updates are available.

RaaS is hardly a new phenomenon. Security researchers have been digging up similar examples over the last two years, ever since McAfee Labs researchers found the Tox malware kit in the wild. But the professional sheen on Karmen shows how ransomware tools continue to evolve as every type of criminal tries to make a buck on the ransomware gold rush.

Many technically minded criminals are able to crank out variants like Karmen for their less geeky brethren thanks to the wide availability of code from open-source projects like the one Karmen is built on. For example, just today, Cylance researchers reported another new variant of CrypVault ransomware, which uses the GnuPG open-source encryption tool to encrypt files.

“Unlike common ransomware, CrypVault is simply written using Windows scripting languages such as DOS batch commands, JavaScript and VBScript,” writes Rommel Ramos of Cylance. “Because of this, it is very easy to modify the code to create other variants of it. Any potential cybercriminals with average scripting knowledge should be able to create their own version of this to make money.”

For its part, the Hidden Tear project from which Karmen was derived was first developed as ‘educational ransomware’ software. But it took on a life of its own once the bad guys got their hands on it several years ago. The silver lining for security professionals is that the underlying code has vulnerabilities embedded within it, which has made it possible for ransomware researchers like Michael Gillespie to create decryptors for it. He has already taken to Twitter to offer help to anyone affected by Karmen.

Nevertheless, Karmen still shines a light on the dangerous technical evolution of ransomware, with some under-the-hood tinkering that Recorded Future researchers say is meant to deter sandbox analysis.

“A notable feature of Karmen is that it automatically deletes its own decryptor if a sandbox environment or analysis software is detected on the victim’s computer,” writes Diana Granger with Recorded Future. Her associates told Dark Reading that it’s meant to discourage security tools and researchers from learning too much about its code.

This kind of evasion technique is typical of evolving malware and is showing up across a number of notable malware families. Last month, Trend Micro reported that new Cerber variants include anti-sandbox features designed to evade machine-learning security technology.

“This is a typical game of cat and mouse. Criminals make an innovation in their techniques, so defenders follow suit,” says Travis Smith, senior security research engineer for Tripwire. “Once the criminals’ activities are being slowed by defensive measures, they continue to change their tactics. As far as the seriousness of these evasion techniques, they pose no additional risk to the end user when it comes to protecting themselves.”

Unfortunately, according to a recent study by SecureWorks, even though 76% of organizations see ransomware as a significant business threat, only 56% have a ransomware response plan. According to Keith Jarvis, senior security researcher at SecureWorks, organizations worried about ransomware need to not only make sure their backup and endpoint protection protocols are firmly in place, they’ve also got to take a second look at email filtering and patch management.

 “We see most ransomware come in through emails and browser exploit kits that rely on poorly patched environments,” he says, pointing to Adobe Flash as a common culprit. “A great first step for email defense would be to block outright the most abused file extensions used by executables and scripts. Next would be to block Word documents that contain macros. If you take these steps, you’re going to block the overwhelming majority of ransomware.”
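To make that advice concrete, here is a minimal sketch, in Python, of the kind of attachment policy Jarvis describes: block the file extensions most commonly abused for executables and scripts, and block Word documents that carry macros. This is our own illustration rather than SecureWorks’ tooling, and the extension list and helper names are assumptions.

    import zipfile

    # Extensions commonly abused to deliver executables and scripts
    # (an illustrative list, not an exhaustive or authoritative one).
    BLOCKED_EXTENSIONS = {
        ".exe", ".scr", ".js", ".jse", ".vbs", ".vbe",
        ".bat", ".cmd", ".ps1", ".hta", ".wsf", ".jar",
    }

    def has_macros(path):
        """Crude macro check: macro-enabled extensions are flagged outright,
        and modern Office files store VBA code in a vbaProject.bin part."""
        if path.lower().endswith((".docm", ".xlsm", ".pptm")):
            return True
        try:
            with zipfile.ZipFile(path) as doc:
                return any(name.endswith("vbaProject.bin") for name in doc.namelist())
        except zipfile.BadZipFile:
            return False  # legacy binary formats would need a different check

    def should_block(filename, saved_path):
        ext = ("." + filename.rsplit(".", 1)[-1].lower()) if "." in filename else ""
        if ext in BLOCKED_EXTENSIONS:
            return True
        if ext in {".doc", ".docx", ".docm"} and has_macros(saved_path):
            return True
        return False

A real mail gateway would enforce a policy like this at the filtering layer and quarantine messages for review rather than silently dropping them.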


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: http://www.darkreading.com/attacks-breaches/advanced-low-cost-ransomware-tools-on-the-rise/d/d-id/1328675?_mc=RSS_DR_EDT

Burger King triggers Google Home devices with TV ad

After using an ad to hijack the OK Google voice assistant so it would read Whopper ingredients from Wikipedia, Burger King itself has been flame-broiled by scampy Wikipedia editors.

Here’s the 15-second ad, released on Wednesday:

In it, a cheeky young actor dressed like a fast food employee says this:

You’re watching a 15-second Burger King ad, which is unfortunately not enough time to explain all the fresh ingredients in the Whopper sandwich. But I got an idea.

Then, he beckons the camera closer and says this home assistant triggering line:

OK Google, what is the Whopper burger?

As you can see in the 30-second video posted by the New York Times, it works just as Burger King planned. A home assistant device powered by OK Google lights up and reads out the ingredients list, which, as it turns out, had been edited last week by a Wikipedian who goes by the username Fermachado123.

That appears to be the username of Burger King’s marketing chief, Fernando Machado.

Before Fermachado123 injected his marketingese into it, the first line of the Whopper entry read like so:

The Whopper sandwich is the signature hamburger product sold by the international fast-food restaurant chain Burger King and its Australian franchise Hungry Jack’s.

After Fermachado123’s marketing fluff injection, that first line read like this:

The Whopper is a burger, consisting of a flame-grilled patty made with 100 percent beef with no preservatives or fillers, topped with sliced tomatoes, onions, lettuce, pickles, ketchup, and mayonnaise, served on a sesame-seed bun.

Oh, really? said other Wikipedians, who went on to edit the ingredient list to include, variously, an “often stinky combination of dead and live bacteria,” “mucus,” a “fatally poisonous substance that a person ingests deliberately to quickly commit suicide” and “a juicy 100 percent rat meat and toenail clipping hamburger product”.

Google eventually stuck a stick in the spokes of the marketing wheels. Within hours of the ad’s release and the addition of these alternative/toxic/illegal ingredients, tests run by The Verge and BuzzFeed showed that Burger King’s commercial had stopped activating OK Google devices.

Wikipedia also pulled the plug on the fun, locking the Whopper entry and allowing changes to be made only by authorized administrators.

Veteran privacy activist Lauren Weinstein took to his blog to accuse Burger King of a “direct and voluntary violation of law”:

…the federal CFAA (Computer Fraud and Abuse Act) broadly prohibits anyone from accessing a computer without authorization. There’s no doubt that Google Home and its associated Google-based systems are computers, and I know that I didn’t give Burger King permission to access and use my Google Home or my associated Google account. Nor did millions of other users. And it’s obvious that Google didn’t give that permission either.

This isn’t the first time that commercials have accidentally set off voice assistants. It happened with a Google Home ad, which aired during the Super Bowl in February, for one. “OK Google,” said people in the ad, causing devices across the land to light up.

Alexa’s had its own share of miscues: in January, San Diego’s XETV-TDT aired a story about a 6-year-old girl who bought a $170 dollhouse and 4 lbs. of cookies by asking her family’s Alexa-enabled Amazon Echo, “Can you play dollhouse with me and get me a dollhouse?”

Cute story, eh? Well, not for viewers throughout San Diego who complained that, after the news story aired, their Alexa devices tried to place orders for dollhouses in response.

One problem with these internet of things (IoT) gadgets is that while they have voice recognition, they don’t necessarily have individual voice recognition. Any voice will do, be it a neighbor talking to a device through a window and thereby letting himself into your locked house, or a little kid ordering up a pricey KidKraft Sparkle Mansion.

For its part, Apple did, in fact, add individual voice recognition to the iOS 9 version of Siri… for good, money-saving, dollhouse-avoiding reasons.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YJtjobsPnCc/

Researchers develop synthetic skeleton keys for fingerprint sensors

Those fingerprint-based security systems in your mobile phone might not be quite as secure as you wish they were. That’s the takeaway from just-published research by engineering researchers at New York University and Michigan State University.

NYU/MSU researchers started by noting that the small fingerprint sensors in your device typically don’t capture an entire fingerprint: they settle for partial fingerprints. What’s more, as MSU Today notes, many devices permit users to generate multiple partial impressions, and to enroll more than one finger:

Identity is confirmed when a user’s fingerprint matches any of the saved partial prints.

Every complete human fingerprint is commonly assumed to be unique. But what if partials were similar enough that they could generate a single “MasterPrint” which would successfully authenticate many individuals?

According to NYU’s press room, “the team analyzed the attributes of MasterPrints culled from real fingerprint images, and then built an algorithm for creating synthetic partial MasterPrints.” And their digitally simulated “synthetic partials” proved worryingly effective.

The team reported successfully matching between 26 and 65 percent of users, depending on how many partial fingerprint impressions were stored for each user and assuming a maximum number of five attempts per authentication.

Not surprisingly, vulnerability rates increased as devices stored more partial fingerprints for an individual user. As The New York Times reports:

Dr. Memon said their findings indicated that if you could somehow create a magic glove with a MasterPrint on each finger, you could get into 40 to 50 percent of iPhones within the five tries allowed before the phone demands the numeric password.
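A rough back-of-the-envelope model – our own simplification, not the researchers’ methodology – shows why more stored partials and more attempts compound the risk: if a single synthetic partial matches a given stored partial with probability p, a device storing n partials and allowing k attempts yields a success chance of 1 - (1 - p)^(n*k).

    # Back-of-the-envelope model (our simplification, not the paper's method):
    # p = chance one synthetic partial matches one stored partial,
    # n = partial impressions stored per user, k = attempts allowed.
    def attack_success_probability(p, n, k):
        return 1 - (1 - p) ** (n * k)

    # Example with assumed numbers: a 2% per-comparison match rate, 12 stored
    # partials (several enrolled fingers) and 5 attempts comes to roughly 70%.
    print(f"{attack_success_probability(0.02, 12, 5):.0%}")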

The researchers and their fellow academics have carefully noted several limitations of this research.

First, the researchers used simulators: they didn’t possess any “magic glove” to actually press a phone’s fingerprint sensor. Such fake “gloves” aren’t yet easy to create – but, as MSU Today notes:

improvements in creating synthetic prints and techniques for transferring digital MasterPrints to physical artifacts in order to spoof an operational device pose significant concerns.

Second, many aspects of fingerprint biometric systems are kept secret by their manufacturers and by the cellphone makers who implement them, and not all systems work exactly the same way. Some, like Apple’s, use complementary technological precautions that make them significantly more effective. According to The Times, Apple claims a rate of false positives of only 1 in 50,000 – assuming users only register one fingerprint, that is.

For all these limits, the finding that partials are spoofable is significant, according to Dr. Chris Boehnen, who manages the U.S. government’s IARPA (Intelligence Advanced Research Projects Activity) program on defeating biometrics.

This kind of research helps to identify areas where our security is weaker than we thought, rather than demonstrating practical forms of attack. Those may come in time, though, and according to MSU Today, the research team is now investigating potential solutions for this vulnerability.

This could entail developing effective anti-spoofing schemes; carefully selecting the number and nature of partial impressions of a user during enrollment; improving the resolution of small-sized sensors to facilitate extraction of more discriminative features; developing matchers that utilize both minutiae and texture information; and designing more effective fusion schemes to combine the information presented by multiple partial impressions of a user.

Users can improve the effectiveness of their fingerprint authentication today by enrolling only one fingerprint impression, and by using it as one part of a two-factor authentication scheme where they can.

Oh, and remember, fake fingers aren’t the only way to compromise fingerprint security (as this enterprising six-year-old and her mother recently discovered)!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CNI5-nJCaxU/

Internet routing weakness could cost Bitcoin users

Researchers have found what they claim is a way to attack the bitcoin network using a weakness in the way the Internet operates.

The exploit, created by researchers at Swiss university for science and technology ETH Zurich, relies on the fact that a key piece of the Internet’s underlying technology, called the Border Gateway Protocol (BGP), is broken.

The Internet is a network of networks, known as autonomous systems (AS). BGP is used to route traffic between them. Most users will never need to use it, but your ISP needs it to tell traffic where to go.

This all works well, assuming your ISP is trustworthy. But, what happens if it isn’t? Like much of the rest of the Internet, BGP was developed by trusting souls; collegial types, interested in solving technical problems, but operating back then in a rarified environment largely devoid of criminal activity.

These engineers developed BGP, on the back of three napkins in 1989, to solve a routing problem for a network that was expanding quickly and experiencing growing pains. It was a short-term solution based on an honor system, for which no long-term replacement ever came. Read this excellent article for a more in-depth history.

Nearly 28 years later, in a network filled with ne’er-do-wells, attackers can do some nasty things using BGP – and some routing disasters are simply accidental. Pakistan Telecom cut off YouTube for most of the Internet in 2008 when it tried using BGP to block its own users’ access to YouTube; unfortunately, the routing configuration it entered propagated across the world.

Attacks can be even more damaging if they’re intentional. BGP hijacking is common. It is a great way for an attacker with ulterior motives to get network traffic to pass through specific bits of the Internet that it might not otherwise see.
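The mechanics behind a hijack come down to route selection: routers prefer the most specific prefix they hear, so an attacker who announces a smaller, more specific block inside a victim’s address space attracts that traffic. Here is a toy longest-prefix-match illustration in Python; the addresses and AS labels are made up, and real BGP route selection involves many more rules.

    import ipaddress

    # Toy routing table: a legitimate /16 and a hijacker's more specific /24.
    routing_table = {
        ipaddress.ip_network("203.0.0.0/16"): "AS-legitimate",
        ipaddress.ip_network("203.0.113.0/24"): "AS-hijacker",
    }

    def next_hop(destination):
        dest = ipaddress.ip_address(destination)
        matches = [net for net in routing_table if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
        return routing_table[best]

    print(next_hop("203.0.113.42"))  # -> AS-hijacker, not the legitimate origin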

Totally forked

The researchers discovered that most of the traffic on the bitcoin network traverses a handful of ISPs. 60% of all bitcoin connections cross just three ISPs. Should one or more of those ISPs decide to hijack the traffic using BGP, they can engineer two kinds of attack, the paper warns.

The first temporarily carves the bitcoin network in two, by configuring BGP to cut connections between computers in the network. This is a problem for bitcoin’s blockchain algorithm, which relies on all computers reaching a consensus together and updating a network-wide shared ledger with the same information about bitcoin transactions.

Artificially creating two groups of machines means that each group will be working on its own ledger, and they will quickly become uncoordinated. In blockchain terminology, this is known as a fork, because it’s like a fork in a road – each group has happily taken its own path in the road, and there are now two.

The bitcoin network resolves forks when all computers can talk to each other again, at which point the branch representing the most accumulated proof-of-work – in practice, usually the longer chain – wins, and the alternative fork in the blockchain is discarded.

An attacker with BGP hijacking capability could use that situation to their advantage by transacting with someone in the smaller group – perhaps sending them some bitcoins in return for an online service – only to then collapse the fork and claim that the transaction never happened. This is known as a double spending attack.
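A heavily simplified sketch of the fork-resolution rule just described, treating every block as equal work (real Bitcoin nodes sum per-block difficulty, and these names are illustrative only):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Block:
        height: int
        txids: List[str]
        work: int = 1  # stand-in for difficulty-derived work

    def chain_work(chain):
        return sum(block.work for block in chain)

    def resolve_fork(branch_a, branch_b):
        """Return the winning branch. Transactions that exist only in the
        losing branch are effectively undone, which is the opening a
        double-spender exploits."""
        return branch_a if chain_work(branch_a) >= chain_work(branch_b) else branch_b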

There’s another attack, too. This one focuses on a single bitcoin node, and uses BGP hijacking to delay the delivery of bitcoin blocks.

The bitcoin network creates new blocks roughly every 10 minutes, and these contain the latest transactions that happened on the network. These blocks propagate throughout the network as individual nodes request them from others. This is how everyone on the network stays on the same page and understands who has sent bitcoins to whom.

Using BGP hijacking, an attacker could alter network routing to ensure that a victim requesting the latest bitcoin block receives an older block, which doesn’t show the latest transactions. The BGP hijacker would only allow the latest block through just short of 20 minutes later. This stops the victim from seeing the latest transactions on the network. Attackers can use this technique to spend bitcoins twice, or to disrupt the network by targeting large numbers of nodes, potentially altering the value of bitcoin by damaging confidence in the network.

Whereas network participants will eventually uncover the first attack, this second attack would go completely undetected, the researchers point out.
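That said, one simple heuristic a node operator could layer on – our own illustration, not a countermeasure proposed by the researchers or implemented in Bitcoin Core – is to watch how long the node has gone without seeing a new block and to open connections to additional, differently routed peers once the gap grows unusual. Blocks arrive roughly every 10 minutes on average, so long gaps do happen by chance and this check can only raise suspicion, not prove an attack.

    import time

    EXPECTED_BLOCK_INTERVAL = 10 * 60   # seconds, on average
    SUSPICION_THRESHOLD = 20 * 60       # assumed alert threshold

    class StalenessMonitor:
        """Track time since the last new block and flag unusual gaps, which
        could indicate a delay/eclipse attack (or simply bad luck)."""
        def __init__(self):
            self.last_block_seen = time.time()

        def on_new_block(self):
            self.last_block_seen = time.time()

        def looks_delayed(self):
            return time.time() - self.last_block_seen > SUSPICION_THRESHOLD

    # A node could poll looks_delayed() and, when True, connect to extra peers
    # on different networks before trusting its view of recent history.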

None of this is a fault in the bitcoin protocol per se. After all, the Internet and its associated protocols, such as BGP, are simply the rails on which bitcoin and many other services run. If anything, we can blame bitcoin’s economic patterns for exacerbating the problem. The concentration of bitcoin mining in China – well over half of all bitcoins are mined using Chinese mining pools – has gone a long way towards worsening what would otherwise be a theoretical issue.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VuKoda-WKCo/

Large UK businesses are getting pwned way more than smaller ones

Larger businesses in the UK are far more likely to be victims of attacks than smaller ones, according to a survey by the British Chamber of Commerce.

More than two in five (42 per cent) companies with more than 100 staff have been hit by information spillages, hackers or malware attacks, compared with 18 per cent of companies with 99 or fewer employees.

More comprehensive studies, such as the Verizon Data Breach Investigations Report, have shown that breaches often go undetected for weeks and months. So it could be that larger businesses are more proficient at detecting problems than their smaller counterparts.

The poll of 1,200 businesses, published on Tuesday, found that companies rely on IT providers (63 per cent) much more than banks and financial institutions (12 per cent) to resolve issues after an attack. Only one in 50 look to police and law enforcement (2 per cent) for help after being pwned.

From May 2018, all businesses that handle personal data will have to ensure they are compliant with General Data Protection Regulation (GDPR) legislation.

Dr Adam Marshall, director general of the British Chambers of Commerce, said: “Companies are reporting a reliance on IT support providers to resolve cyber-attacks. More guidance from government and police about where and how to report attacks would provide businesses with a clear path to follow in the event of a cyber-security breach, and increase clarity around the response options available to victims, which would help minimise the occurrence of cybercrime.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/18/bcc_security_survey/

Profit with just one infection! Crook sells ransomware for $175

Cybercrooks have begun retailing a new easy-to-use ransomware strain that promises profit with only one successful infection.

Karmen is being sold on Dark Web forums by Russian-speaking cyber-criminal DevBitox for $175. The new ransomware-as-a-service variant offers a graphical dashboard, allowing purchasers to keep a running tally of the number of infections and their earnings in real time.

The malware requires very little technical skill to deploy, according to threat intelligence company Recorded Future.

Ransomware offers infection dashboard [source: Recorded Future]

The first cases of infection with Karmen were reported as early as December 2016 by victims in Germany and the United States. Sales on underground forums began in March 2017.

The Karmen malware is derived from “Hidden Tear”, an open-source ransomware project. The seller admits he was only involved with web development and control panel design. Recorded Future reports that 20 copies of Karmen malware were sold by DevBitox, while only five copies remain available to potential buyers.

DevBitox has produced a YouTube video in a bid to promote sales of his warez.


Karmen encrypts files on the infected machine using the strong AES-256 algorithm, making them inaccessible unless victims pay the attacker for a decryption key.

Keeping up-to-date backups would obviate the need to cave in to such demands, and remains the best strategy for safeguarding against ransomware infection.
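As a minimal illustration of what “up-to-date backups” means in practice, the sketch below writes dated snapshots to a separate drive. The paths and retention choices are assumptions, and a real deployment should use dedicated backup tooling with offline or immutable copies the malware cannot also encrypt.

    import shutil
    from datetime import datetime
    from pathlib import Path

    SOURCE = Path.home() / "Documents"   # assumed source directory
    DEST = Path("/mnt/backup")           # assumed mount point for a backup drive

    def snapshot():
        """Copy SOURCE into a new time-stamped folder so older versions survive
        even if the live files are later encrypted by ransomware."""
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        target = DEST / f"documents-{stamp}"
        shutil.copytree(SOURCE, target)  # full copy; real tools deduplicate
        return target

    if __name__ == "__main__":
        print(f"Snapshot written to {snapshot()}")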

Karmen automatically deletes the decryptor if a sandbox environment or analysis software is detected on the victim’s computer, a tactic designed to make life harder for security researchers tasked with investigating the nasty. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/18/ransomware_offers_infection_dashboard/

The Implications Behind Proposed Internet Privacy Rules

The FCC’s overreach needed to be undone to protect the FTC’s authority over privacy.

If we want to protect privacy, we must be clear about why it’s important, how we can prevent confusion, and who is protecting consumers. Privacy is at risk in unprecedented ways if we don’t revisit the checks and balances around it from time to time. Sadly, the legal system is lagging behind the pace of innovation; the last major privacy law was passed in 1986.

The true privacy mission also needs to prevent business practices that are deceptive or unfair to consumers, and include things that enhance informed consumer choice and public understanding of the competitive process, all without unduly burdening legitimate business activity. This is where the Federal Trade Commission (FTC) comes in.

You may be more familiar with the FTC’s work than you think. The FTC deals with issues that touch the economic life of every American, and it’s the only federal agency with both consumer protection and competition jurisdiction in broad sectors of the economy. It has moved much faster than our congressional leaders in putting consumer protections in place.

Why Am I Telling You This?
Last year, the Federal Communications Commission (FCC) pushed through, on a party-line vote, privacy regulations designed to benefit one group of favored companies over another group of disfavored companies. The rules would have required home Internet and mobile broadband providers to get consumers’ opt-in consent before selling or sharing Web browsing history, app usage history, and other private information with advertisers and other companies. The rules, although well-intentioned, were at odds with the existing and proven privacy framework put forth by the FTC.

The FCC wanted to reclassify the Internet as a service under Title II of the Telecommunications Act, a provision that lets the FCC set rates and ensures equal access to traditional phone service, such as what you have at home. This was not permissible under US law. In making this move, the FCC stripped the FTC of the current jurisdiction it had over Internet privacy and data sharing practices.


As one of the leading voices in email protection and chairman of the Email Experience Council, I believe the FCC should never have been allowed to declare “information services” a Title II service. But the FCC passed its own regulations, which subjected Internet service providers to onerous and unnecessary restrictions, and exempted edge providers.

Once the FCC declared the Internet a common carrier service, it removed all authority of the FTC to regulate. The privacy rules the FCC had in place are geared toward phone services, not the Internet. The rules didn’t fit, so it attempted to write Internet-specific regulations.

These actions had to be undone to restore authority over privacy and data sharing to the FTC. This solution needed to happen to undo the fruits of regulatory overreach and absurdity.

What Happens Now?
First, the rules that were repealed never actually took effect, so there will be no change in whether an ISP is “allowed to sell your information.” You still have the privacy protections you had before. How, you ask?

Because President Trump signed the Congressional Review Act resolution repealing them, the FCC can’t re-create the rules unless Congress authorizes it to do so. Getting that legislation through Congress is pretty unlikely for the next couple of years. This will allow the FTC to regain the control and authority it has always had to protect consumers and regulate Internet service, as it has done successfully for years.

There are some technical things consumers should understand to protect themselves.

If you use encryption (HTTPS), as many browsers and applications do, ISPs can track which websites you visit but not specific pages or what you do there. However, most advertisers already have this information and have since the dawn of the Internet. The websites you visit tell them when you buy things on Amazon or eBay, if you’re reading this story, when you’re on Facebook, etc.
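Concretely, for an HTTPS request the hostname still leaks through DNS lookups and the TLS SNI field, while the path, query string, and page content travel inside the encrypted session. The toy breakdown below makes that split explicit; it is a simplification, since encrypted DNS and Encrypted Client Hello change the picture.

    from urllib.parse import urlparse

    def isp_visibility(url):
        """Split a URL into what an ISP can typically observe for HTTPS traffic
        (the hostname, via DNS and SNI) and what stays encrypted."""
        parts = urlparse(url)
        return {
            "visible_to_isp": parts.hostname,
            "encrypted": {"path": parts.path, "query": parts.query},
        }

    print(isp_visibility("https://www.example.com/orders/12345?item=book"))
    # {'visible_to_isp': 'www.example.com', 'encrypted': {'path': '/orders/12345', 'query': 'item=book'}}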

What’s even more interesting is that if someone wants to track which websites you visit, it’s probably a lot easier to buy that information from a tiny, low-margin service provider in a lax jurisdiction or that is under FCC regulation than to do so from a large domestic ISP.

It’s also important to know that ISPs already self-regulate on opt-in for what the FCC tried to define as the most sensitive uses. These include Web browsing, app usage history, geo-location data, financial and health information, and the content of communications. As a user of their services, you opted in when the service was purchased.

What’s Next?
The changes, if allowed to go through, would have also stifled the industry’s use of data that is used by anti-spammers and security vendors, data used to prevent viruses and malware, and many other security-related things, thus making you less safe as a user of the Internet.

Another important point: Congress is looking at a complete rewrite of the Communications Act. Everything is up for grabs if this happens.

The FCC has said it will work with the FTC to ensure that consumers’ online privacy is protected through a consistent, comprehensive framework. The FCC knows that the best way to achieve those results would be to return jurisdiction over broadband providers’ privacy practices to the FTC, with its decades of experience and expertise in this area.

Consumers must continue to educate themselves and their families about how their information can be used and how they can control it. Simply reading the privacy policies of sites and applications you use is a start.

If you’re really worried about your information not being kept private, your best option is to use a virtual private network, which anonymizes Internet activity by routing it through another system and shielding it from your ISP. However, most ISPs are open about how you can opt out of any data use, and they give you control to do so.

Knowing how to protect your information and your identity is a must in the 21st century. Here are some tips from the FTC on doing it effectively.


Dennis Dayman is the chief privacy and security officer at Return Path. He has more than 20 years of experience combating spam, addressing security, privacy, and data governance issues, and improving email delivery through industry policy, ISP relations, and technical solutions.

Article source: http://www.darkreading.com/endpoint/the-implications-behind-proposed-internet-privacy-rules/a/d-id/1328598?_mc=RSS_DR_EDT

Cybercrime Tactics & Techniques: Q1 2017


A deep dive into the threats that got our attention during the first three months of the year – and what to expect going forward.

The first quarter of 2017 brought with it some significant changes to the threat landscape, and we aren’t talking about heavy ransomware distribution either.

In our second Cybercrime Tactics & Techniques report (read the first one here), we take a deep dive into the threats that got our attention most during the first three months of the year and what we expect to happen moving into the next quarter, and we include a behind-the-scenes interview with one of our Malwarebytes Labs analysts. Here is a sneak peek at what’s in the report.

  • Cerber ransomware took over as top dog with respect to distribution and market share.
  • Locky ransomware has dropped off the map, likely due to a deliberate change by the controllers of the Necurs spam botnet. However, with no new Locky versions developed since before the beginning of the year, the fate of its creators is unknown.
  • The Mac threat landscape saw a surge of new malware and backdoors in Q1 2017, including a new Mac ransomware (FindZip).
  • On the Android side, two notable malware families have been causing a lot of trouble: HiddenAds.lck, which prevents users from removing the app so it can keep generating advertising revenue for its creators, and Jisut, a mobile ransomware family that has been spreading like wildfire.
  • In the exploit kit world, RIG continues to have the greatest market share of the few exploit kits that are still active, and we expect this to continue. RIG exploit kit remains on top mainly due to its lack of competition rather than its technical sophistication.
  • Malicious spam campaigns have also started utilizing password protected zipped files and protected Office documents to evade auto analysis sandboxes utilized by security researchers.
  • In social media scams, users were bombarded with links to WWE nude photo dumps that led to gift-card survey scams.
  • Tech support scammers, finding difficulty working with North American payment processors, have begun accepting alternate forms of payment, such as Apple gift cards and bitcoin.

Looking ahead to the second quarter of the year:

  • We expect to see continued heavy distribution of Cerber through Q2 2017 due to new developments made to the malware design, and its continued use of the ransomware as a service (RaaS) model.
  • As for Cerber losing its crown, it is unlikely that any competitor will gain enough market share to dethrone it within the next quarter, barring something happening to Cerber’s developers or their ability to develop and distribute the ransomware.
  • The continued heavy development of Mac malware throughout Q2 is highly likely.
  • The Android ransomware Jisut is expected to continue its trend of high distribution and spread. We predict the same for HiddenAds.lck.
  • Distribution mechanisms are likely going to develop new features and functionality, be it through social engineering tactics utilized by exploit kits and malicious spam or from the discovery of new exploits, potentially revitalizing the exploit kit market.
  • Finally, in the world of scams, we expect to see an uptick in ‘exit scams’ and in tech support scammers utilizing social media advertising to scam each other. At the same time, we predict increased collaboration between potentially unwanted programs (PUPs) and tech support scammers, with tech support scam advertisements being pushed alongside PUPs.

Download the full Cybercrime Tactics & Techniques report here.

Article source: http://www.darkreading.com/partner-perspectives/malwarebytes/cybercrime-tactics-and-techniques-q1-2017----/a/d-id/1328664?_mc=RSS_DR_EDT

Identity Thief Faces Potential 22-Year Prison Sentence

A foreign national pleads guilty to two criminal counts after he and his cohorts steal nearly $1.48 million in bogus tax return refunds following an identity theft hack on a Pittsburgh medical center.

A Cuban man potentially faces up to 22 years in prison and a $750,000 fine, after he pleaded guilty to two criminal counts in an identity theft case involving current and former workers at the University of Pittsburgh Medical Center (UPMC), federal authorities announced Monday.

Yoandy Perez Llanes, 33, and his cohorts stole Social Security numbers, birth dates, and other personal information from tens of thousands of current and former employees at UPMC in 2014.

Llanes and his co-conspirators hacked into UPMC’s computer databases, then turned around and used the data to file more than 900 false 2013 tax returns with the Internal Revenue Service. Rather than receive cash refunds, Llanes requested the refunds be issued as Amazon gift cards, which were then used to buy electronics. The purchased electronics were shipped to Venezuela, where Llanes and his associates snapped them up.

During the nearly two years that Llanes engaged in aggravated identity theft and money laundering, he and his cohorts received $1.48 million from the bogus tax returns. Llanes will be sentenced on Aug. 18.

Read the Department of Justice release here.

 

Article source: http://www.darkreading.com/threat-intelligence/identity-thief-faces-potential-22-year-prison-sentence/d/d-id/1328655?_mc=RSS_DR_EDT

SWIFT: System Unaffected Following Shadow Brokers Leak

SWIFT, the interbank messaging system allegedly targeted by the NSA, says there is no indication its network has been compromised.

SWIFT said yesterday that there are no signs of foul play in its network or messaging systems following last week’s data dump by the Shadow Brokers.

Shadow Brokers released several Windows exploits, which were allegedly stolen from the NSA and used to break into SWIFT systems. The information released by Shadow Brokers, which dates back several years, indicates the leaked tools could be used to access two service bureaus, or third-party providers connecting businesses to SWIFT. This would provide the NSA a view into communications between customers and organizations outsourcing their SWIFT activity.

SWIFT says it is in contact with the service bureaus involved to ensure they know about the allegations and have taken the necessary preventative measures. The EastNets Service Bureau, one of the organizations, says its customers have not been affected.

SWIFT confirmed that its systems were not targeted. “We can confirm that there is no impact on SWIFT’s infrastructure or data, and we have no evidence to suggest that there has been any unauthorised access to SWIFT’s network or messaging services,” the financial messaging service said in a press FAQ.

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: http://www.darkreading.com/vulnerabilities---threats/swift-system-unaffected-following-shadow-brokers-leak/d/d-id/1328667?_mc=RSS_DR_EDT