STE WILLIAMS

MIT to Oz: Crypto-busting laws risk banning security tests

The Australian government’s crypto-busting legislation risks blocking security research, a leading Internet policy boffin has warned.

Speaking to a parliamentary hearing into the “Assistance and Access” legislation this morning, Daniel Weitzner, a director of the MIT Internet Policy Research Initiative, said the problem arose from the secrecy provisions of the proposed legislation.

The problem, Weitzner told the Parliamentary Joint Committee on Intelligence and Security (PJCIS) today, is that if a Technical Capability Notice (TCN) requires that access be added to hardware or software, disclosing the notice’s existence is a crime.

However, organisations like service providers typically subject their systems to security assessments before deployment. Red Teams, he said “will do everything they possibly can to find weaknesses.”

What happens, Weitzner asked, if researchers find a vulnerability covered by a TCN, when they cannot know that the TCN exists and the vulnerability therefore has to be kept secret?

“If the specific features that are mandated by the TCNs are kept secret, it will be hard for security engineers to know where to look”, Weitzner said, and it will be “perilous” for service providers to engage people to run security tests.

Weitzner said any TCN regime needs transparency so as to allow security testing: “It would simply be irresponsible to keep the behavior of parts of those systems secret”.

That is, of course, assuming such capabilities exist in the real world – that, for example, “golden keys” to systems can be created and protected against misuse.


“Our view is that if those keys to unlock the system are kept for one purpose, which may be entirely legitimate and lawful, they can be exploited for another purpose,” Weitzner said. “We haven’t seen a design of a system that reduces that risk.”

The only way to know whether any proposed system is functional and secure is to test it, and that is once again at odds with the secrecy the government hopes to apply to what he called “exceptional access systems”.

“I would not claim that it’s ‘impossible’ to design such a system because we haven’t seen every possible design,” but if anyone claimed to offer such a system, “you have to subject that system to very careful study.”

Shadow attorney-general Mark Dreyfus pointed out that the government’s position is that the bill does not require “specific exceptional access systems, or to require that providers redesign their entire systems to facilitate government access”.

However, Weitzner replied, as the legislation now stands, the government has very broad discretion in what may be contained in Technical Capability Notices, and “there’s no restriction that they only be targeted to a limited set of users.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/16/oz_cryptobusting_laws/

7 Free (or Cheap) Ways to Increase Your Cybersecurity Knowledge

Building cybersecurity skills is a must; paying a lot for the education is optional. Here are seven options for increasing knowledge without depleting a budget.

Cybersecurity isn’t free. Sure, there’s little cost to users who follow best security practices in their day-to-day actions, but when it comes to learning how to defend against skilled criminals and set up secure systems, there’s generally a cost attached. For professionals who know the basics and want to sharpen their skills, costs can quickly add up. For smaller organizations, that cost can be prohibitive, but what is the alternative?

It turns out there are ways to boost knowledge and skill without dipping deeply into the operating budget for the year — and if you’re a professional who wants to increase your skills, you can do it without endangering your retirement account.

The options range from free training offered by industry groups to online classes provided by major universities. Throw in taxpayer-funded training and regional gatherings, and you have an array of possibilities that can go a long way toward boosting the value and usefulness of most security pros — or those IT professionals who want to add “security” to their portfolio.

There are costs associated with many of these – and not necessarily in dollars. For instance, few of the free offerings provide certifications of class completion: If you want a (virtual) piece of paper, you’ll have to pay up. And if you want the work to lead to a degree, you’ll have to pay more. But even in those cases, the options in this list are likely much more affordable than most commercial training courses. At the very least, these can be a good way to brush up on skills, or a way for an IT pro to find out whether security is a path they want to tread.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/cloud/7-free-(or-cheap)-ways-to-increase-your-cybersecurity-knowledge/d/d-id/1333287?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Thought you deleted your iPhone photos? Hackers find a way to get them back

Twice a year, an international contest called Pwn2Own – the Olympic Games of competitive hacking, if you like – gives the world’s top bug-hunters a chance to show off their skills.

The word pwn, if you aren’t familiar with it already, is hacker jargon for “own”, as in “owning” someone’s computer – and, with it, their data – by taking control of it behind their back.

In case you’re wondering, pwn is a deliberate mis-spelling, based on the fact that O and P are adjacent on most keyboards. In theory, therefore, it should be read aloud as own, the word it denotes, in much the same way that the word St is read aloud as saint, or Mr as mister. In practice, however, it’s pronounced pone – just treat it as own with a p- added in front.

Like the Olympics, which alternates every two years between summer and winter sports, Pwn2Own alternates between desktop hacking at the start of the year, and mobile device hacking at the end.

Even though we’re talking flippantly about hacking, pwning and breaking into other people’s computers, and even though the contest requires competitors to complete a hack live in person within a 30-minute period, Pwn2Own isn’t a free-for-all endorsement of cybercrime.

The rules are pretty clear cut – and clean-cut, for that matter.

Finding new zero-days

Brand new, genuinely exploitable zero-day bugs are hard to find these days, and vendors would dearly like to find out about them before the crooks do, so it’s fair that top bug hunters get paid for their efforts.

So, Pwn2Own winners can earn loads of money, but they only get paid out if they conform to strict guidelines of responsible disclosure.

A successful contest entry has to be practicable – participants have half an hour to show that the vulnerabilities they’ve discovered really can be chained together to form a working exploit.

Also, the details of how the attack works have to be properly written up. (Anyone who’s worked as a programmer knows that there’s nothing more frustrating than chasing down a badly-documented bug – a task that’s like searching for the right haystack in which to search for what may or may not be a needle in a haystack.)

In other words, competitors only get paid if they find a working exploit; document it properly so that it can be repeated and investigated; and then keep quiet about it while the vendor gets a fair chance to fix it.

Well, the standout winners at Mobile Pwn2Own 2018, which finished on Tuesday in Tokyo, Japan, were a team known as @fluoroacetate.

Despite their confrontational moniker (fluoroacetate is an acute and lethal toxin, sold commercially as 1080 for poisoning unwanted wild animals), the duo also go by the names Amat Cama and Richard Zhu, and look like perfectly pleasant people.

The hack that really got our attention, given the many recent controversies to do with recovering data from iPhones, was news that @fluoroacetate figured out a way to access one or more deleted files on an iPhone running the latest version of iOS.

In their live exploit demo, the file they used was a photo from the Recently Deleted directory, a holding location where deleted photos go to “rest” for a few weeks, in case you have deleter’s regret and decide you want to undelete them.

Deleted-but-not-yet-overwritten files have been a cybersecurity risk for years on most desktop operating systems, where users can, at least in theory, log in as root or an administrator and go digging for leftover data right down at disk sector level.

This opens the path to forensic recovery of data, or perhaps data fragments, by bypassing the usual hierarchical structure and controls imposed by the filing system and the operating system.

But Apple’s iOS isn’t supposed to be open to spelunking of this sort – users aren’t supposed to be able to get root powers or the ability to dig around behind the scenes, whether for deleted data or moved-out-of-the-way files.

To exfiltrate deleted photos, Cama and Zhu used exploitable bugs in the Safari browser to trick iOS into letting them at content that shouldn’t have been accessible.

The risk of browser bugs of this sort is that they can be triggered by booby-trapped web pages, and are therefore generally remotely exploitable – you only have to entice your victim to look at a website, rather than to convince them to download a file, change some settings and then launch it themselves.

That hack earned the intrepid duo $50,000, but that was less than a quarter of their total earnings.

They also bagged:

  • $30,000 for tricking a Xiaomi Mi6 phone (running Android MIUI, Xiaomi’s alternative to Google’s proprietary flavour of Android) into launching a web browser automatically, and then downloading a working exploit, all via NFC.
  • $50,000 for taking over a Samsung Galaxy S9 by exploiting a bug in the baseband firmware. (That’s vendor-provided firmware, distinct from the operating system itself, programmed to look after the mobile telephony aspects of the device such as making calls and connecting to the 4G network.)
  • $60,000 for exploiting an iPhone X via a Wi-Fi bug.
  • $25,000 for a JavaScript bug on the Xiaomi Mi6 that allowed them to exfiltrate data from the device.

The pair also had a go at hacking the iPhone X’s baseband firmware, but didn’t get their exploit to work correctly within the time limit.

Nevertheless, they took home $215,000 from five successful zero-days.

But those zero-days will now be reported to Apple, Samsung and Xiaomi and will therefore very likely be patched before they’re found by any cybercrooks.

What to do?

What to do about those not-so-deleted photos on your iPhone?

Our advice is not to panic – this bug doesn’t feel like one that will be independently rediscovered by cybercrooks before it gets patched.

However, if you’re worried about photos you thought you were rid of, remember that there’s a second “delete” stage in the iPhone Photos app.

In the list of Albums, you’ll find one called Recently Deleted, which is a sort of short-term limbo for photos you no longer want.

As far as we know, permanently deleting them from the Recently Deleted halfway house puts them beyond recovery, even using @fluoroacetate’s new hack.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sgimpebCdCo/

John McAfee is ‘liable’ for 2012 death of Belize neighbour, rules court

Infosec personality John McAfee has been found legally “liable” via a default judgment for the death of his neighbour, who was found dead from a gunshot wound to the head in his Belize home in 2012.

The ruling (PDF) was made yesterday in a Florida district court as part of a five-year legal battle by Gregory Faull’s estate against McAfee, who spent some time living in Belize before his neighbour’s demise.

Local newspaper the San Pedro Sun reported at the time of Faull’s death that the 52-year-old American retiree “was found by a housekeeper inside his living room with an apparent gunshot wound to the back of his head” in late 2012.

Though local police wanted to question McAfee as a potential witness, the millionaire, who is the founder of the antivirus software firm that still bears his name, had travelled abroad. And despite the criminal investigation stalling, a civil case was later brought against McAfee by Faull’s relatives.

“The Court will enter default judgment as to liability in favour of Plaintiff and against Defendant for the wrongful death of Gregory V. Faull,” ruled US District Judge Gregory Presnell, who had previously closed the case against Mac until a US Court of Appeal ruling forced it back open.

Mac, however, did not appear to have engaged with the legal process, judging from a striking lack of legal filings made by him in the case, as seen by The Register – to the point where he hadn’t even hired himself a lawyer. Faull’s estate thus successfully applied for default judgment against McAfee. A bench trial has been scheduled for January 2019 to determine what damages Mac will have to pay.

It’s all rather colourful

Mac’s exploits while living in Belize included an interest in “bath salts” (which he explained was a joke) and having his property raided by local police over claims that he was running a drug manufacturing operation and had unlicensed firearms. After the raid, the Belizean authorities dropped all charges against Mac.

The thing that has come to define Mac’s time in Belize, in the public eye at least, is his neighbour’s death.

As we reported in November 2012: “Faull, who was last seen alive on Saturday night, filed a formal complaint on Wednesday against McAfee over the latter man’s ‘roguish behavior,’ his habit of firing off guns, and his poor management of his dogs.”

When local cops searched McAfee’s home, wanting to question the one-time software developer over his neighbour’s demise, Mac was nowhere to be found. He duly turned up in neighbouring Guatemala some weeks later, having entered the country illegally (by his own admission), and attempted to claim political asylum. The Guatemalans gave him short shrift and Mac eventually wound up back in the United States.

The eccentric millionaire, who made a large but precisely unknown amount of money from selling his antivirus company in 1994, was last seen tweeting harmless nonsense about running for US president – as well as, er, keeping a step ahead of financial watchdogs over his inevitable involvement in some cryptocurrency concern. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/15/john_mcafee_liable_death_neighbour_belize/

Up to three million kids’ GPS watches can be tracked by parents… and any miscreant: Flaws spill pick-and-choose catalog for perverts

Parents could be unwittingly putting their children’s safety and privacy at risk, thanks to security vulnerabilities in potentially millions of kids’ GPS-tracker watches.

These cheapo watches are supposed to be worn by the youngsters, and use SIM cards to connect to cellular networks. The idea is they beam to backend servers the GPS-located coordinates of the wearer so their parents can, via a website or app, find out where the tykes are at all times.

The devices also display any messages and take calls from guardians, can listen in on a child’s activities using a microphone, and warn if the kid has strayed out of a particular area, such as the playground.

However, an investigation by British security shop Pen Test Partners has shown that the software used by a smartphone app that communicates with the watches is so poorly coded that the connections are easy to hijack. This means miscreants can snoop on kids as if they were their parents.

The probe began when a friend of one of the infosec bods bought a MiSafes Kid’s Watcher for his offspring, a snap at just £10 for the unit. But after playing around with it, they found shocking levels of insecurity. It appears that the same weak code has been reused in a lot of other GPS watches, too.

“We believe that in excess of a million smart kids tracking watches with similar vulnerabilities are being used, possibly in excess of 3 million globally,” said researcher Alan Monie on Tuesday. “These are sold under numerous brands, but all appear to use remarkably similar APIs, suggesting a common original device manufacturer or ODM.”

No encryption – what is this, the 1990s?

The key problem is that the app and the GPS watch do not encrypt their communications, and transmit virtually all data in plain text for anyone to snoop on or meddle with. This includes profile pictures, names, gender, dates of birth, height, weight, and so on, of the child. The watches talk to backend servers, and those servers pass on the info to apps used by the parents.

By simply intercepting and changing the user ID number in the phone app’s request to the backend servers for information on a child, you can gain full access to data on that particular youngster. In other words, you can make an API request using any ID number and you’ll get the photograph, whereabouts, and other details for the child of that ID. You can set the ID to anything you like, and produce a shopping catalog of potential victims for savvy predators.
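To make the flaw concrete, here is a minimal sketch of the insecure direct object reference (IDOR) pattern described above, written in Python rather than the C# the researcher actually used. The host name, endpoint path and field names are hypothetical – the full MiSafes API was not published – but the essence is that the server answers for whatever ID the client supplies, with no ownership check and no TLS.

```python
# Hypothetical illustration of the IDOR pattern described above.
# The host, path and JSON fields are invented; the real API details
# were not published in full.
import requests

BASE_URL = "http://api.example-watch-vendor.com/device/getChildInfo"  # plain HTTP: nothing is encrypted

def fetch_child_profile(child_id: int) -> dict:
    """Request a child's profile by numeric ID.

    A properly designed API would check that the authenticated parent
    owns this child_id; here the server returns data for any ID it is given.
    """
    resp = requests.get(BASE_URL, params={"childId": child_id}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. name, photo URL, last GPS fix, parents' phone number

if __name__ == "__main__":
    # Sequential IDs mean an attacker can simply walk the ID space.
    for child_id in range(1000, 1005):
        try:
            print(child_id, fetch_child_profile(child_id))
        except requests.RequestException as err:
            print(child_id, "request failed:", err)
```

The fix is as unglamorous as the bug: serve the API over TLS and derive the set of permissible IDs from the authenticated session on the server side, rather than trusting a client-supplied parameter.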

Thus, a miscreant or pervert could, for example, just buy one of these things, tamper with the backend connection using Burp Suite or a similar tool on the network, and abuse the vulnerability to request the whereabouts of strangers’ kids, who may be playing on their own. Scumbags could also send messages to kids to trick them into accepting a ride from a stranger, who happens to know exactly where they are.

Seeing as the watch communicates every five minutes, you can also track the location of a child in near-real-time.

After Monie wrote a simple C# program to automate this process, he would have been able to access the accounts of over 12,000 MiSafe watches, and also download a photo of each child, plus their name and other aforementioned personal details, as well as the phone number of the parents and of the watch itself.


To stop just anyone calling the child’s watch, the device has a white-list of approved phone numbers. But the caller ID is easy to spoof, so someone could make a call or message that appeared to come from a parent or trusted party.

It’s also child’s play to hijack the watch’s remote listening facility, turning it into a bugging device. The only indication that something is amiss is a busy sign on the watch face.

“These new attack vectors can not only be performed remotely (including capturing the IMEI remotely), but allow an attacker to build up a global picture of the location of all the children,” said Monie. “Combined with caller ID spoofing, this attack becomes really nasty.”

Attempts to contact the manufacturer have failed – by Pen Test Partners and ourselves – so it’s unlikely that the devices will ever be patched. We advise parents to make the devices safe themselves, by deleting the app and disassembling the watches with a large hammer or brick. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/15/gps_tracking_children_hack/

Cloud, China, Generic Malware Top Security Concerns for 2019

FireEye researchers unveil an extensive list of security risks waiting in the new year’s wings.

There may still be nearly seven weeks left in 2018, but security leaders are already looking ahead to the new year. Enterprise concerns, from cloud attacks to nation-states, are already piling high.

This year, on track to be the worst ever for data breaches, has already proved exhausting for the infosec community. From Jan. 1 to Sept. 30, a total of 3,676 breaches were reported, involving over 3.6 billion records – the second-highest number of breaches reported in any year.

The threats ahead are numerous, according to a new report entitled “Facing Forward: Cyber Security in 2019 and Beyond.” The report was compiled by FireEye CEO Kevin Mandia, chief security officer Steve Booth, vice president of global intelligence Sandra Joyce, and numerous analysts and strategists.

What’s top of mind for senior leaders? Nations building offensive capabilities, breaches continuing due to lack of attribution and accountability, the widening skills gap, lack of resources (particularly for SMBs), holes in the supply chain, cloud attacks, social engineering, and cyber espionage, cybercrime, and other threats targeting the aviation sector.

FireEye’s Threat Intelligence, Mandiant, and Labs teams, which have a close eye on the frontlines, are particularly worried about how Chinese cyber espionage is restructuring, the increase in Iranian activity targeting the US, attackers using publicly available malware, the increase of business email compromise, abuse of legitimate services for command-and-control, and e-commerce and online banking portals being caught in the crosshairs of cyberthreats.

China Is Changing and Other Nation-State Threats
Ben Read, senior manager of cyber espionage analysis at FireEye, says he has noticed the threat from China evolve throughout this year. It’s no longer “smashing and grabbing” intellectual property, he says. Attackers’ actions are far subtler – and more nefarious.

“They’re doing a lot, going after people’s data after it goes outside their premises,” he explains. Organizations including law and investment firms, which have troves of client data, are prime targets.

FireEye’s threat intelligence team has noticed Chinese cyber espionage restructure and believes this will drive the growth of its activity through, and beyond, 2020. Changes have been gradual and driven by high-profile events: the Obama-Xi agreement shifting Chinese cyber espionage away from intellectual property (IP) theft, the People’s Liberation Army bringing cyber functions under a Strategic Support Force (SSF), and China beginning projects for its 13th Five-Year Plan.

Analysts believe 2019 will bring an increase in state-sponsored and financially driven supply chain attacks. APT10, “a Chinese espionage group,” is focused on hitting the supply chain of major US companies to steal business data and improve targeted technology theft by “non-cyber means” to avoid violating the Xi-Obama Agreement, which prohibits cybertheft of IP.

“The supply chain is so global and so integrated … it’s more a problem in the software supply chain,” Read adds. Auto updates are good for deploying patches but “also a very attractive vector to get into lots of victim computers.” NotPetya and CCleaner are key examples. Software supply chain attacks could involve integrating backdoors into legitimate software or using stolen certificates to sign malicious files and bypass detection.
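As a defensive aside – not something the FireEye report prescribes – one basic mitigation against a tampered update is to verify the downloaded artifact against a digest published through a separate, trusted channel before installing it. A minimal sketch, assuming the vendor publishes a SHA-256 value out of band (a stolen signing certificate or a compromised publication channel, as noted above, defeats this kind of check):

```python
# Minimal sketch: refuse to install an update whose SHA-256 digest doesn't
# match a value obtained out of band. File path and digest are placeholders.
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large updates needn't fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    update_path, expected_hex = sys.argv[1], sys.argv[2]
    if sha256_of(update_path) != expected_hex.lower():
        sys.exit("Digest mismatch: refusing to install update")
    print("Digest verified; proceeding with install")
```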

“The change in China is something we’ve seen over a number of years,” Read says. “China wants to be a respectable place to do business on the world stage. That’s something you can’t be if you’re very noisily stealing stuff.”

Other nation-state threats he’s watching include Iran and North Korea. Both are in “delicate situations,” he says. Analysts anticipate Iranian cyber activity against the United States is likely to increase after the US exit from the Joint Comprehensive Plan of Action (JCPOA). North Korea, which is keeping up its standard activities – stealing money, spying on South Korea – is taking an interest in Japan ahead of the 2020 Olympics in Tokyo.


Simple Malware and Cloud-Based Threats
Another top-of-mind trend is the growing use of publicly available malware among sophisticated attackers. Financially driven espionage actors, who previously developed their own threats, are now browsing underground forums for generic tooling, Read says.

“It’s cheaper to use something off the shelf,” he explains, and a lot of pen-testing tools come at low cost. But that’s not all: “It can also give a false sense of security to defenders,” he adds.

When advanced actors use simple tactics, they obfuscate their sophistication and lull their targets into a false sense of security. It’s easy to dismiss a generic threat as something that’s not to worry about. Unfortunately, now the attackers know they’re likely to be dismissed, and they can remain anonymous while launching generic threats against several victims at once.

“There have always been espionage groups that use lower rent malware,” Read says. “What we’ve seen is it increasingly be part of the ecosystem for even the advanced groups.”

Attackers’ choices vary by geography. Russia uses a mix, he explains, with some groups using open source and others using custom malware. North Korea tends to develop its own. The adoption of generic malware is more common among Iranian and Chinese actors.

Attackers are also eyeing the cloud as more data heads there.

“Everyone in the industry is seeing huge migrations to the cloud, but most companies are not doing anywhere near as much work as they need to be doing to protect the cloud the way they used to protect their data centers — and the bad guys know this,” states Booth in the report.

The bad guys go where the money is, and throughout 2019 they will find more opportunities in the cloud because it presents a wide attack surface without advanced technology to detect malicious activity, he adds. Roughly 20% of breaches FireEye investigates involve the cloud.

One way to approach cloud security, he says, is to treat the infrastructure hosting enterprise “crown jewels” as a higher priority than the laptop belonging to the person who clicked a malicious link. Ask yourself what your greatest assets are — what you’re trying to protect.

Cyberattacks Aren’t Slowing
Mandia, who holds that security breaches are “inevitable,” points to the lack of risks or consequences for the people behind them. As a result, they will continue to act.

“The attackers are not waking up fearful that they are going to get arrested for stealing email or extorting someone for a certain amount of cryptocurrency,” he explains. “Without a deterrent, attackers are going to keep targeting networks and getting through.”


Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/risk/cloud-china-generic-malware-top-security-concerns-for-2019/d/d-id/1333283?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cyber Crooks Diversify Business with Multi-Intent Malware

The makers of malware have realized that if they’re going to invest time and money in compromising cyber defenses, they should do everything they can to monetize their achievement.

Diversification is a well-understood business principle. Nordstrom, for example, started as a shoe store — but then its founders figured out they could generate more revenue by also offering clothing. Following that came jewelry, handbags, accessories, and in-store restaurants, and the rest is history. This same evolution has occurred across countless companies and industries, from Amazon.com (started with books) to General Foods (corn flakes).

And now, we’re seeing the same dynamic with cybercrime. Malware engineers have figured out that if they’re going to invest time and money in compromising cyber defenses, they ought to do everything they can to monetize their achievement to the max. This has given rise to the growing presence of multi-intent malware.

Multi-Intent, Multibusiness
It’s no secret that malware today is mostly machine-driven, requiring minimal human touch. Creating malware in modern times requires little more than simply expressing your malicious intent (say, cryptomining), and the machine does the rest. What is relatively new, however, is that malware makers are now expressing multiple intents, which has led to the emergence of multi-intent malware. Just like Nordstrom increased the return on investment in each store by offering diverse merchandise instead of just shoes, malware creators are diversifying their businesses with multi-intent malware, where a single successful compromise can open up multiple streams of revenue.

Typically, this class of malware will begin by executing one malicious intent (e.g., cryptomining), and once it has maximized the revenue from that channel, it moves onto others (say, ransomware). It does this until it has exhausted all of the malicious intents it was designed to execute on a network or host.

Another particularly insidious feature of multi-intent malware is the ability to evaluate “business opportunities” and react accordingly. For example, if it identifies sensitive information, it can make decisions on whether to encrypt the data for a ransomware attack or exfiltrate it as a data breach. If the data does not seem particularly interesting, the malware can also choose to enslave the host as a bot, or identify if it has enough computing power for cryptomining, etc.

This class of malware effectively conducts “business research” to understand the greatest revenue potential for each compromised asset, and then acts accordingly. The malware owners may even decide there is more money to be made by reselling (or renting) the malware with the compromised hosts, based on cybercriminals’ needs. For example, they may offer cryptomining as a one-month “rental,” and then rent the malware to another buyer in need of ransomware. For maximum ROI and efficiency, they may even sell or rent the malware to multiple cybercriminals simultaneously.

One recent high-profile example of multi-intent malware was Xbash, which not only included ransomware, cryptominers, botnets, and worms but also conducted reconnaissance through port scanning to identify easily compromised assets within the host organization. To evade detection, this class of malware typically starts by executing the malicious intents that are more difficult to detect (e.g., cryptominers), and then moves into the ones where the malware must expose itself (e.g., ransomware activation).

Defense Strategy
The key to detecting multi-intent malware is to understand what it’s trying to achieve. This is done through intent classification. Unfortunately, this is still a largely manual process where humans must analyze suspicious files or behavior, which simply can’t keep pace with the rapid volume and variety of machine-generated attacks. However, we are seeing some new approaches to intent classification automation. Two particularly promising areas include:

  • The use of artificial intelligence (AI) and natural language processing (NLP). When a suspicious file is detected on a host, it can trigger an AI and NLP process to automatically collect and read relevant human threat intelligence information from third-party research centers, blogs, etc., and decipher the potential intent (or multi-intent) of the malware. All of this can be done in case the same or similar type of malware was analyzed somewhere else and is part of public intelligence data. This ability to automatically “operationalize” human-readable threat intelligence makes AI and NLP potent countermeasures to multifunction malware and other advanced attacks.
  • The use of cause-and-effect analytics. A complementary approach to automatically operationalize threat intelligence is to use cause-and-effect analytics to decipher malware intent based on the actions that are detected on the compromised host. This works particularly well because all malware actions are typically followed by a logical “next action.” For example, a keylogger infection will typically be followed by suspicious login attempts; or, in the financial industry, memory-scraping malware (harvesting credit card or Social Security numbers) will typically trigger data exfiltration; and, of course, a cryptomining infection will be followed by an increase in the host’s CPU utilization.  

These technologies are gaining prominence in the war against malware because of their ability to classify intent orders-of-magnitude faster than is possible with manual processes. In the case of multi-intent malware, they help organizations detect, prioritize, and remediate the malware early in the “diversification process,” so they can put it out of business before it has the opportunity to open multiple revenue streams.
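To make the cause-and-effect idea above concrete, here is a minimal, hypothetical sketch of a correlation rule: pair an initial infection indicator with the follow-on behaviour it typically produces, and flag any host where both appear within a short window. The event schema, rule pairs and time window are invented for illustration; a real system would work from actual telemetry and far richer rules.

```python
# Hypothetical cause-and-effect correlation: pair an initial indicator with
# the follow-on behaviour it typically triggers, then flag hosts where both
# are observed within a short window. Event schema and rules are illustrative.
from datetime import datetime, timedelta

# (initial indicator, expected follow-on effect) pairs drawn from the examples above
RULES = {
    "keylogger_detected": "suspicious_login_attempt",
    "memory_scraper_detected": "data_exfiltration",
    "cryptominer_detected": "cpu_spike",
}
WINDOW = timedelta(hours=6)

def correlate(events):
    """events: iterable of (timestamp, host, event_type), assumed time-sorted.
    Returns (host, cause, effect) triples where an effect follows its cause
    on the same host within WINDOW."""
    pending = {}  # (host, cause) -> timestamp of the cause event
    alerts = []
    for ts, host, event_type in events:
        if event_type in RULES:
            pending[(host, event_type)] = ts
        for (h, cause), cause_ts in list(pending.items()):
            if h == host and RULES[cause] == event_type and ts - cause_ts <= WINDOW:
                alerts.append((host, cause, event_type))
                del pending[(h, cause)]
    return alerts

if __name__ == "__main__":
    sample = [
        (datetime(2018, 11, 16, 9, 0), "host-42", "cryptominer_detected"),
        (datetime(2018, 11, 16, 10, 30), "host-42", "cpu_spike"),
    ]
    print(correlate(sample))  # [('host-42', 'cryptominer_detected', 'cpu_spike')]
```

Running the sample flags host-42, where a cryptominer detection is followed by a CPU spike ninety minutes later – the kind of logical "next action" pairing the approach relies on.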


Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Avi Chesla is a recognized leader in the Internet security arena internationally, with expertise in product strategy, cybersecurity, network behavioral analysis, expert systems, and software-defined networking. Prior to empow, Avi was CTO and VP of security products at … View Full Bio

Article source: https://www.darkreading.com/risk/cyber-crooks-diversify-business-with-multi-intent-malware-/a/d-id/1333249?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Congress Passes Bill to Create New Federal Cybersecurity Agency

Cybersecurity and Infrastructure Security Agency Act now headed to President Trump for signing into law.

A bill that seeks to reorganize the US Department of Homeland Security’s National Protection and Programs Directorate (NPPD) into a new cybersecurity agency has cleared Congress and is now headed to President Trump’s desk for his signature.

The Cybersecurity and Infrastructure Security Agency Act – which passed the Senate in October and the US House of Representatives this week – essentially re-designates NPPD as the Cybersecurity and Infrastructure Security Agency (CISA).

CISA will be responsible for leading cybersecurity and critical infrastructure protection programs, developing associated policy, and coordinating with federal and private sector entities on security matters. CISA also will be responsible for fulfilling DHS’ responsibilities with respect to anti-terrorism standards for chemical facilities.

The new agency will have a Cybersecurity Division, an Infrastructure Security Division, and an Emergency Communications Division. Christopher Krebs, the current NPPD Undersecretary, will head up CISA.

The reorganization will elevate and streamline the cybersecurity mission within DHS while improving the department’s ability to engage with government and industry stakeholders, Krebs said in a statement this week. “Giving NPPD a name that reflects what it actually does will help better secure the nation’s critical infrastructure and cyber platforms,” he added.

The move to spin out NPPD into a separate, operational cybersecurity agency comes amid growing threats to US critical infrastructure and industry from nation-state adversaries and increasingly sophisticated cybercrime groups. Concerns about adversaries having capabilities to physically damage critical systems and networks and to steal trade secrets and intellectual property from US companies have escalated sharply in recent months. The current geopolitical tensions between the US and countries such as China, Russia, North Korea, and Iran have only exacerbated those concerns.

Same Agency, New Look?

The big question though is whether reorganizing the NPPD into a new agency is going to make much of a difference in the US’s ability to address its cybersecurity concerns. “I’m concerned that putting a new face on the old NPPD won’t raise performance levels to what the nation needs,” says Alan Paller, founder and director of research of the SANS Institute.

There has been something of an internal battle between the NPPD and the DHS’ Science and Technology (S&T) group, which is responsible for researching, developing, testing, and evaluating technologies in support of the DHS mission, Paller notes. So far at least, “the only things that you could point to with impact were coming out of the [S&T] group,” he says.

Colin Bastable, CEO of Lucy Security, says the feds instead need an FBI-like organization in charge of cybersecurity for businesses, non-federal assets, and consumers. “The problems that private citizens and private enterprises face from rampant cybercrime are never going to be addressed by the DHS, even wearing its CISA cape,” because DHS’ primary focus is the federal government, Bastable says.

A federal bureau of cybersecurity would be focused on protecting Americans as consumers and employees, and the businesses that employ them. It would be responsible for investigating and attacking cyber criminal gangs and anticipating cyber threats.

“From a cybersecurity perspective, the DHS is always going to focus on federal infrastructure and major systems,” Bastable notes. “It is an amalgamation of 22 agencies and while it has many smart employees, it is never going to be agile enough to combat the cyber threats that we face.”


Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/risk/congress-passes-bill-to-create-new-federal-cybersecurity-agency-/d/d-id/1333286?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

France: Let’s make the internet safer! US: ‘How about NO?!’

The US, China and Russia are some of the big names missing from the list of signatories of the Paris Call for Trust and Security in Cyberspace: an initiative designed to establish international etiquette with regards to the internet, including coordinating disclosure of technical vulnerabilities.

French President Emmanuel Macron announced the agreement on Monday at the annual UNESCO Internet Governance Forum in Paris.

The document proposes rules of engagement for a slew of internet-related challenges, including cooperating to fend off interference in elections, online censorship and hate speech, intellectual property theft, malware proliferation and cyberattacks, and the use of cyberweapons to hack back… or, in the parlance of the US military, “offensive hacking,” as in, what the Department of Defense gave itself the power to do in the new military strategy it set forth in September.

The document has been endorsed by more than 50 nations, 90 nonprofits and universities, and 130 private corporations and groups.

You can see why the accord’s attitude about cyberwarfare wouldn’t fly with a lot of countries. Besides the US, some of the nations that abstained from signing on, including China and Iran, have active cyberwar programs. As we reported last week, Iran unravelled the CIA’s secret online network years ago with simple online searches, leading to informants being left vulnerable to exposure and execution worldwide.

Wired characterized the Paris Call as “lacking teeth,” with no legal requirements for governments or corporations to adhere to its principles.

It’s mostly a symbol of the need for diplomacy and cooperation in cyberspace, where it’s hard to enforce any single country’s laws.

Even some of the groups that support the Paris agreement say it’s not perfect. Access Now, an international non-profit dedicated to a free and open internet, pointed out that the accord, in promoting cooperation between industry and law enforcement when it comes to fighting cybercrime, could mean a few things, not all of them good.

Would such cooperation entail weakening encryption to enable backdoors, for example? …a crippling of security for which law enforcement has been strenuously campaigning? Access Now certainly thinks so:

Judicial orders should be the basis for any assistance between providers and law enforcement. Cooperation, on the other hand, can be interpreted to mean informal exchange of data or the intentional weakening of platforms to enable law enforcement access. As such, “cooperation” is not the proper framework for the relationship between law enforcement and companies.

The Paris Call also refers to the Budapest Convention: a cybercrime treaty that has been criticized for its broad definition of what constitutes “crime.” We can look to the US for a recent example of how that can play out: in February, the US state of Georgia drew up what critics called a “misguided” bill that could have criminalized security research.

Then too, Access Now said, the Council of Europe is developing an additional protocol that would extend law enforcement’s ability to reach data stored across borders. But will it be crafted with an eye toward protecting human rights? Or will repressive regimes be given greater latitude to unmask activists, journalists, and/or persecuted groups, such as LGBTQ people or dissidents?

In spite of these reservations, plus concern about the potential limiting of the free flow of information online in the case of zealous intellectual property protections, Access Now signed on. Others that signed on to the Paris Call include technology companies such as Microsoft, Oracle, Facebook, IBM, and HP.

Wired quoted Microsoft President Brad Smith, who also gave a speech on Monday in Paris. Smith:

It’s an opportunity for people to come together around a few of the key principles: around protecting innocent civilians, around protecting elections, around protecting the availability of the internet itself. It’s an opportunity to advance that through a multi-stakeholder process.

This is characteristic of the new responsibilities that corporations such as Microsoft are shouldering when it comes to keeping the internet secure. Wired quoted Megan Stifel, the cybersecurity policy director at Public Knowledge, a nonprofit that also signed on to the Paris Call:

If you look over the past three or four years, we’ve really seen a groundswell of private leadership. The private sector is now willing to say that we can and we will do more.

One of many examples of nation-like behavior coming from corporations is the war room that Facebook set up last week in an effort to fight misinformation on a global level and to protect election integrity. Microsoft, for its part, disrupted alleged Russian Fancy Bear election meddlers in August.

Of course, it’s in corporations’ best interests to have a safer, more predictable internet, and to avoid getting dragged in front of Congress to answer for it when it’s less than safe. Drew Mitnick, policy counsel at Access Now, said that the Paris Call might not be perfect, but it’s a step in the right direction, and for the time being, we can look forward to Paris Call 2.0:

The document is imperfect but it arrives as other governments, that did not endorse the Paris Call, have shown a competing vision for cybersecurity grounded instead in state sovereignty and control.

Look for Paris Call 2.0 to come next year, when it reconvenes in Germany.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0VQC5D_4XPg/

DARPA uses a remote island to stage a cyberattack on the US power grid

There was the sound of breakers tripping in all seven of the grid’s low-voltage substations, and then the station was plunged into darkness. It was the worst possible scenario: swaths of the country’s grid had already been offline for a month, exhausting battery backups at power plants and substations alike.

What would you do if you were in that utility command center? Turn up everything all at once? Turn up smaller pieces of the grid and put them into a protected environment to run cyberforensics and thus keep them from potentially spreading whatever malware was used in the attack?

Those are the kinds of questions that are typically confined to a lab setting. But earlier this month, on a small island 1.5 miles off the shore of Long Island, the Defense Advanced Research Projects Agency (DARPA) brought the dreaded scenario to life.

Plum Island – at 840 acres, it’s about the same size as Central Park, in Manhattan – is officially called the Plum Island Animal Disease Center. Currently run by the Department of Homeland Security (DHS), the federal facility comprises 70 mostly decrepit buildings.

The island has its own fire department, power plant, water treatment plant and security. The center was originally created in 1954, in response to outbreaks of foot-and-mouth disease in cattle. DHS took over control of Plum Island in 2003, due to the research center’s critical role in protecting the nation’s livestock from infectious animal diseases.

It’s a mixture of industrial infrastructure and isolated, unpeopled, wind-swept, undeveloped acreage with unparalleled views, as the government described in its sales listing when it tried to offload the property.

In short, you couldn’t ask for a better spot to stage an attack on the electric power grid, according to Stan Pietrowicz, a researcher at Perspecta Labs who’s working on a network analysis and threat detection tool that can be used in so-called “black-start” situations, when power has to be restored to a dead grid. Wired quotes him:

We had 18 substations, two utilities, two command centers, and we had two generation sources that we had to bring up a crank path and synchronize. It had a realism that you don’t really find in lab environments that made you rethink the approach.

A cranking path is a portion of the electric system that can be isolated and then energized to deliver electric power from a generation source to enable startup of other generating units.

The week-long exercise, dubbed “Liberty Eclipse,” was designed to throw everything imaginable at a group of DARPA-funded research projects known as Rapid Attack Detection, Isolation and Characterization Systems (RADICS). The aim of the three-year-old RADICS program is to ensure that US utilities can bounce back from a blackout brought on by a cyberattack.

And the aim of the Liberty Eclipse project was to uncover gaps in RADICS defenses under dire, black-start conditions, in which a cyberattack wrestles the power grid to its knees and forces operators to start from scratch.

Walter Weiss, a program manager for the exercise, told reporters that nobody has ever done this before.

As described by EE News – a news outlet focused on energy and the environment – this wasn’t just a simple staging of a cyberattack. The project planners tossed a variety of wrenches into the mix, including a steady onslaught of simulated cyber and physical attacks. For example, at one point, they introduced a data “wiper,” modeled on real-world cases of ransomware, which could send grid operators back to square one if they weren’t careful.

According to Wired, Plum Island’s weather also played a role. Rainy days and high winds made it difficult to take the ferry back and forth to the island and hampered physical work on the grid. The conditions also showed the limitations of one of the recovery tools being developed to survey the grid from above: balloons carrying lightweight electromagnetic radiation detectors that could be launched during a blackout to seek out simple indicators of live power, such as Wi-Fi hotspots from home routers and electromagnetic signals that could show where electrons are actually flowing.

The balloons couldn’t cut it, and the red-team hackers running the attacks never let up while those balloon-borne sensors were being buffeted. Wired:

One day, the researchers were instructed to pack overnight bags in case they couldn’t come back from the island until morning. The balloons weren’t reliable in the bad weather, so some of the researchers tried flying the sensors on a kite instead. That proved impractical with the winds. And all the while, the so-called red team kept hacking away.

According to Weiss, DARPA is working on a public after-action report that will cover any major weaknesses found in the RADICS program and map out next steps. The Department of Energy (DOE) is also drafting its own set of takeaways: according to EE News, it completed a related tabletop exercise last month and joined in on the exercise at Plum Island. Others who trekked out to the island included dozens of representatives from major utilities and industry groups.

Successful cyberattacks are real

Real-world scenarios of power grids being crippled by hackers aren’t purely hypothetical: the Ukrainian power grid was attacked in December 2015, affecting 20 substations and leaving about 230,000 people without electricity for hours.

The SANS Institute categorized the outage as a coordinated cyberattack. Malware didn’t directly cause the outage, SANS said, but it did give the attackers a foothold into the grid’s command and control, and malware was also used to thwart recovery.

The Ukrainian power grid was attacked again in December 2016, when remote terminal units (RTUs) controlling circuit breakers at Ukrenergo‘s Pivnichna power substation near Kiev suddenly shut down.

The two attacks had striking similarities, including the same BlackEnergy 3 malware, initiated by malicious spear-phishing attachments that had reportedly bounced around inside state organizations for months.

What was particularly worrisome in the case of the Ukrainian outages was the prospect that the attackers could have been using Ukraine as a playground as much as a battlefield: after all, experts pointed out, the country uses the same equipment and security protections from the same vendors as everybody else around the world.

Marina Krotofil, a researcher from Honeywell Industrial Cyber Security Lab who worked on the investigation:

 If the attackers learn how to go around those tools and appliances in Ukrainian infrastructures, they can then directly go to the west.

The fact that successful attacks have already been carried out makes testing out attacks in real-world settings vital: bring on the wind, the rain, and the darkness, and then take away the sensors that enable operators to figure out what the hell is going on. Pietrowicz:

Most of the exercise was really about trying to figure out what was going on and deal with the conditions. It wasn’t a hit and run – while we were cleaning things up the adversary was countering our moves. There was one instance on the third day of the exercise where we almost had the crank path fully established and the attacker took out one of our key substations. It was sort of a letdown and we had to just keep going and figure out our next viable path. Even that small victory got taken away from us.

The participants, split into two teams, each struggling to start up a grid labelled a top priority, succeeded in black-starting their grids. Overall, mission accomplished. But participants said that the true insights didn’t come from the successes; rather, it was the setbacks along the way that proved most valuable.

DARPA plans to run another, even more sophisticated version of the exercise on Plum Island in May, with potentially more of the same to come after that. RADICS’s Weiss told reporters that he hopes that ultimately, the DOE will take over the exercises and incorporate them into preparedness training for government workers and utilities.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/i4uLvPan_8M/