
Apple fixes that “1 character to crash your Mac and iPhone” bug

Apple has pushed out an emergency update for all its operating systems and devices, including TVs, watches, tablets, phones and Macs.

The fix patches a widely-publicised vulnerability known officially as CVE-2018-4124, and unofficially as “one character to crash your iPhone”, or “the Telugu bug”.

Telugu is a widely-spoken Indian language with a writing style that is good news for humans, but surprisingly tricky for computers.

This font-rendering complexity seems to have been too much for iOS and macOS, which could be brought to their knees trying to process a Telugu character formed by combining four elements of the Telugu writing system.

In English, individual sounds or syllables are represented by a variable number of letters strung together one after the other, as in the word expeditious.

That’s hard for learners to master, because written words in English don’t divide themselves visually into pronunciation units, and don’t provide any hints as to how the spoken word actually sounds. (You just have to know, somehow, that in this word, –ti– comes out as shh and not as tea.)

But computers can store and reproduce English words really easily, because there are only 26 symbols (if you ignore lower-case letters, the hyphen and that annoying little dingleberry thing called the apostrophe that our written language could so easily do without).

Better yet for computers, English letters always look the same, no matter what other letters they come up against.

We do have some special characters in English typography – so-called ligatures that combine letters considered to look ugly or confusing when they turn up next to each other, such as fi and fl.

These English ligatures aren’t taught in primary school when you learn your alphabet, so you can go through your life as a fluent, native, literate English speaker and not even realise that such niceties exist. It’s never incorrect to write an f followed by an l, and it’s little more than visual politeness to join them together into a combo-character.

Additionally, in English, we sometimes pronounce –e– as if it were –a– (typically in geographical names where the old pronunciation has lingered on, as in the River Cherwell in Oxfordshire, which is correctly said aloud as char-well); sometimes as a short –eh–; sometimes as a long –ay–; and sometimes as if it were several Es cartoonishly in a row, –eee–.

But many languages use a written form in which each character is made up of a combination of components that denote how to pronounce it, typically starting with a basic sound and indicating the various modifications that should be applied to it.

So each written character can convey an array of information about what you are looking at: aspirated or not (i.e. with an –h– sound in it), long vowel or short, sounds like –eee– rather than –ay-, and so on.

That’s great when you are reading aloud, but not so great when you’re a computer trying to combine a string of Unicode code points into a visual representation of a character for display.

It’s particularly tricky when you are scrolling through text.

In English, each left-arrow or right-arrow simply moves you one character along in the current line, and one byte along in the current ASCII string, but what if there are four different sub-characters stored in memory to represent the next character that’s displayed?

What if you somehow end up in the middle of a character?

Or what if you split apart a bunch of character components incorrectly, accidentally turning hero into ear hole or this into tat?
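To make the “many code points, one visible character” idea concrete, here’s a minimal Python sketch. The four code points below (consonant, combining virama, consonant, vowel sign) are a representative Telugu combination chosen purely for illustration – not necessarily the exact sequence behind CVE-2018-4124:

    # Minimal sketch (not Apple's code): one on-screen Telugu "character"
    # can be several Unicode code points under the hood.
    import unicodedata

    # Illustrative four-element cluster: consonant + virama + consonant + vowel sign.
    cluster = "\u0C1C\u0C4D\u0C1E\u0C3E"

    print("Rendered:", cluster)           # shows as a single glyph in a capable font
    print("Code points:", len(cluster))   # 4 separate code points in memory

    for ch in cluster:
        print(f"U+{ord(ch):04X}", unicodedata.name(ch))

    # Naive "one code point = one character" slicing lands mid-cluster,
    # exactly the kind of splitting described above:
    print(cluster[:1])                    # just the bare consonant, not the full cluster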

For that reason, unusual (or perhaps merely unexpected) combinations of characters sometimes cause much more programmatic trouble than you’d expect, as when six ill-chosen characters brought Apple apps down, back in 2013…

…or the recent CVE-2018-4124, when Macs or iPhones froze up after encountering a message containing four compounded Telugu symbols that rendered as a single character.

(For an intriguing overview of complexities of rendering Telugu script, take a look at Microsoft’s document entitled Developing OpenType fonts for Telugu script.)

The February 2018 “Telugu bug” was particularly annoying because a notification containing the dreaded character could cause the main iOS window to crash and restart, and to crash and restart, and so on.

Unsurprisingly, given the ease of copying and pasting the treacherous “crash character” into a message and sending it to your friends (or, perhaps, your soon-to-be-ex-friends), Apple really needed to get a patch out quickly.

And now it has.

What to do?

For your iPhone, you’ll be updating to iOS 11.2.6; for your Mac, you need the macOS High Sierra 10.13.3 Supplemental Update.

To make sure you’re current (or to trigger an update if you aren’t), head to Settings > General > Software Update on iOS, and to Apple Menu > About This Mac > Software Update... on your Mac.

Oh, and if you will forgive us a moment of sanctimoniousness: if you were one of those people who sent your (ex-)friends a message containing the Telugu bug because you thought it would be hilarious… please don’t do that sort of thing again.

With cyberfriends like you, who needs cyberenemies?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xqryPr5EWlI/

Facebook to verify election ad buyers by snail mail

Facebook’s come up with a way to avoid being used by the Russians like a tinkertoy in the upcoming US mid-term elections: snail mailed postcards.

Katie Harbath, Facebook’s global director of policy programs, described the plan to verify political ad buyers at a conference held by the National Association of Secretaries of State over the weekend. She didn’t say when the program would start, but she did tell Reuters that it would be before the congressional midterms in November.

The unveiling of the plan, which is meant to verify ad buyers and their locations, came a day after Robert S. Mueller III filed an indictment describing a Russian conspiracy to interfere in the 2016 US presidential election. It alleges that 13 Russians and three Russian companies conducted a criminal and espionage conspiracy using social media to pump up Donald Trump and to vilify Hillary Clinton.

Lawmakers, security experts and election integrity watchdog groups have been dissecting the social network’s failure to detect Russia’s use of Facebook and other social media platforms, including Twitter and Google, and its sluggishness in dealing with its fake news problem.

Facebook isn’t the only media outlet to turn to nice, flat, analog paper to try to keep Russians from meddling in the 2018 election.

As the Boston Globe reported on Monday, state election officials around the country are hoping to use paper ballots for voting, to evade the Russian hacking attacks aimed at the US e-voting system.

For their part, online platforms were beset by a swarm of meddling aimed at influencing the 2016 US presidential election: the Russian troll farm that infiltrated Twitter, the fake social media accounts, the fake news they planted, and the rallies they sponsored to sow discord.

…and then there are the advertisements that Russian conspirators purchased to promote their posts and those rallies. Facebook has cited 10 million US users who saw Kremlin-purchased ads. But there were far more who saw Russia-backed posts. According to the company’s prepared testimony, submitted to the Senate judiciary committee before hearings at the end of October, Russia-backed Facebook posts actually reached 126 million Americans during the US election.

According to Mueller’s indictment, Facebook ads for a “Florida Goes Trump” rally reached more than 59,000 users and were clicked on by more than 8,300.

That was just one out of eight rallies mentioned in the indictment.

All of this grilling, criticism and introspection has pushed Facebook to snap out of its once lackadaisical attitude. For example, there was founder Mark Zuckerberg’s initial reaction to suggestions that misleading/misinformative Facebook posts influenced the outcome of the election: a reaction that was basically a shrug. It was a “pretty crazy idea,” he said, though he later conceded that he’d been unduly dismissive.

Wired has an excellent piece regarding 1) the tumult at Facebook over the past two years, as it’s slowly come to realize that it’s not a benevolent jeans-as-dress-code Silicon Valley do-gooder company but is, rather, a powerful platform that yes, has the ability to be used to influence opinions, even change the course of an election, and 2) how Mark Zuckerberg is now planning to fix it all.

To fix the problem of not knowing who in the world is buying ads or promotion of posts, Harbath said that Facebook will send postcards that contain a code required for advertising purchases that mention a specific candidate running for a federal office. The process won’t be required to buy issue-based political ads, she said.

Reuters quoted her:

If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States.

It won’t solve everything.

No, it won’t solve everything, but it was the best that Facebook could come up with, she said.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xiYw_cmZKJI/

UK local gov: 37 cyber attacks a minute but little mandatory training

Local government was hit by almost 100 million cyber attacks in the last five years, while one in four councils’ systems were successfully breached, according to research.

Privacy campaign group Big Brother Watch sent Freedom of Information requests to all the UK’s local authorities, asking for details of cyber attacks and data breaches from 2013-17.

Of the 395 councils (94.5 per cent) that responded, some 29 per cent reported at least one cyber security incident, which is defined as an actual breach of their systems.

Tonbridge and Malling Council reported the most – a total of 62 incidents over the five years. Herefordshire said it had experienced 22; Rhondda Cynon Taf reported 18; the City of Edinburgh, 11; and Leicestershire, 10.

Some 25 councils said that there had been a data breach or loss as a result of such incidents, with the councils of Merton and Westminster each saying this had happened three times.

Despite this, 56 per cent of these local authorities admitted they had not reported the incidents – of the two examples above, Merton said it had reported no incidents and Westminster made one report to the police.

Overall, the councils estimated they had been hit by 98 million cyber attacks – defined here as a malicious attempt to damage, disrupt or gain unauthorised access to systems, networks or devices. Most common were malware and phishing.

Big Brother Watch argued that these numbers would only increase as councils continue to build “ever-expanding troves of personal information… under the banner of data-driven government”.

In a bid to provide better, more efficient public services – that also cost the councils less money – authorities are looking to gather more data on people’s habits and movements.

But Big Brother Watch warned that “zealous data sharing comes with real risks”, as the troves of information councils amass are “attractive targets for criminals”.

This should mean staff in councils are well versed in cyber security threats, the group said, but three-quarters of councils admitted they don’t provide mandatory training, while 16 per cent said there was no training at all.

It also seems cash-strapped councils are keeping the purse strings tight, with more than half saying they had no specific budget for cyber security training or had spent nothing on it.

Pointing out that the councils had experienced the equivalent of 37 attacks a minute, Big Brother Watch slammed the councils for this lack of investment.

“Considering that the majority of successful cyber attacks start with phishing emails aimed at unwitting staff, negligence in staff training is very concerning and only indicative of the low priority afforded to cyber security issues,” the group said.

It called for increased staff training that included refresher courses for all staff, rather than just a one-off when they join the authority.

In addition, Big Brother Watch urged councils to establish simple protocols for reporting incidents that use the National Cyber Security Centre’s definitions to ensure reports are consistent. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/20/local_government_98_million_cyber_attacks_five_years_big_brother_watch/

Proactive Threat Hunting: Taking the Fight to the Enemy


Pulling together everything your security team needs to be effective at threat hunting is not easy but it’s definitely worthwhile. Here’s why.

If you haven’t implemented a cyber threat hunting capability yet, 2018 is the time to start. Anyone who has paid attention to recent data breaches will know that attackers have become dangerously good at breaking into and hiding on enterprise networks for long periods of time. Often, organizations do not realize they have been breached for months, and in some cases years, after an initial intrusion, and even then only when informed of it by a third party.

Security strategies solely focused on blocking attacks at the perimeter and responding to incidents after they have occurred are no longer enough for dealing with targeted, stealthy and persistent attackers. You also need measures for proactively hunting down and neutralizing threats on your network before they materialize.

With threat hunting, your incident response team is going out and engaging with the enemy, rather than passively waiting for them to show their hand. As the SANS Institute describes it, “threat hunting is a focused and iterative approach to searching out, identifying and understanding adversaries that have entered the defender’s networks.” 

It involves the use of external threat intelligence, internal telemetry and other data to uncover adversaries who have the intent, the ability, and the opportunity to do harm. If implemented correctly, a threat hunting capability can reduce attacker dwell time on your network, and also your exposure to new risks.

Threat hunting requires a very different mindset from one that is focused primarily on reacting to security incidents and alerts. Its main objective is finding the hidden human adversary on your network rather than just their tools and malware. It requires you to think like the adversary and to know what systems and data on your network an attacker will most likely target so you can start protecting those assets first.

A good incident response team is vital to threat hunting; unless you can quickly mitigate and recover from the threats you find, there’s not a whole lot to be gained from going out and finding them in the first place. This is particularly true as your threat hunting team’s capabilities mature. As the hunters get quicker and better at finding new threats, your IR team needs to be able to prioritize and respond to the identified threats equally effectively.

To be effective at hunting, security teams also need access to a lot of internal and external telemetry and threat intelligence. For example, to find a hidden adversary, you need to know what to look for, where and when. Often, that task requires correlating data from multiple sources, which in turn requires a high degree of automation. You need tools for automatically collecting and aggregating data from multiple sources and for quickly cross-referencing and analyzing the data.
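As a very rough illustration of the kind of correlation just described, the Python sketch below cross-references a feed of known-bad IP addresses against an internal log of outbound connections. The file names and log format are invented for the example; in practice the data would come from your SIEM or threat-intelligence platform:

    # Hypothetical sketch: correlate external IOCs with internal connection telemetry.
    # File names and formats are assumptions made for this example.
    import csv

    # iocs.txt        : one known-bad IP address per line (threat intel feed)
    # connections.csv : internal telemetry with columns timestamp, host, dest_ip

    with open("iocs.txt") as f:
        bad_ips = {line.strip() for line in f if line.strip()}

    hits = []
    with open("connections.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_ip"] in bad_ips:
                hits.append(row)

    # Surface matches for a human hunter to triage, most recent first.
    for row in sorted(hits, key=lambda r: r["timestamp"], reverse=True):
        print(row["timestamp"], row["host"], "->", row["dest_ip"])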

Having trained analysts with a diverse range of skills on your team is another necessity. These include security operations, incident response, forensics, and malware analysis.

Clearly, pulling together everything you need to be effective at threat hunting is not easy, but it’s definitely worthwhile. Many forward-looking organizations have already adopted threat-hunting practices and more are following suit daily. In an April 2017 SANS Institute survey, 27% of the 306 enterprises that participated had a defined threat-hunting program in place and were following it. Another 45% engaged in threat hunting, even if only on an ad hoc basis and without any formal processes. Organizations that achieved measurable improvements in security as a result of threat hunting most often cited speed and accuracy as their biggest gains.

Hunting is about taking a proactive approach to dealing with threats on your network. Make adoption of at least some of its approaches a priority this year.


Laurence Pitt is the Strategic Director for Security with Juniper Networks’ marketing organization in EMEA. He has over twenty years’ experience of cyber security, having started out in systems design and moved through product management in areas from endpoint security to …

Article source: https://www.darkreading.com/partner-perspectives/juniper/proactive-threat-hunting-taking-the-fight-to-the-enemy-/a/d-id/1331084?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Vulnerabilities Broke Records Yet Again in 2017

Meanwhile, organizations still struggle to manage remediation.

Last year was another one for the record books when it came to software vulnerabilities: published security flaws jumped by 31% in 2017.

The number shot up to 20,832 for the year, with nearly 40% of them carrying CVSSv2 severity scores of 7.0 or higher, according to new data from Risk Based Security.

“Organizations that track and triage vulnerability patching saw no relief in 2017, as it was yet another record-breaking year for vulnerability disclosures,” said Brian Martin, vice president of vulnerability intelligence for Risk Based Security, which published its findings last week in a new report. “The increasingly difficult task of protecting digital assets has never been so critical to businesses as we continue to see a rise in compromised organizations and data breaches.”

Forrester analyst Josh Zelonis says ineffective vulnerability management is one of the top five concerns security and risk professionals should be focusing on for 2018. Forrester’s 2017 global security survey showed that software vulnerabilities played a hand in 41% of external data breaches last year.

Last year’s massive WannaCry and NotPetya outbreaks, which followed the patching of the vulnerability exploited by EternalBlue, offer an illuminating example of how important it is for organizations to close their vulnerability windows more rapidly, according to Zelonis.

“While remediation was listed as ‘critical’ by Microsoft, these attacks created global damage months after patch availability,” Zelonis explained in a recent report.

He detailed the fact that WannaCry wreaked havoc on 300,000 systems 60 days after the patch was released, and 30 days later NotPetya started another round of mayhem that caused serious damage worldwide. For example, he cited losses at pharmaceutical company Merck & Co totaling over $270 million as a result of NotPetya.

“Organizations should really be aiming to fix vulnerabilities on their systems as rapidly as is feasible,” says Tim Erlin, vice president of product management and strategy for Tripwire. “Any gap in applying a patch to a vulnerability provides an opportunity for hackers to access systems and steal confidential data.”

Last month, a Tripwire survey found that almost a quarter of enterprises still take a month or longer to remediate known vulnerabilities in their systems. What’s more, 51% of organizations admit that fewer than half of their systems are automatically discoverable by vulnerability scanning tools – meaning that remediation teams may not even know whether more than half of their systems are susceptible to a known vulnerability at any given time.

Meantime, the number of new vulnerabilities and their severity continues to mushroom. Organizations’ vulnerability management practices may also be suffering from a visibility gap when it comes to new vulnerabilities coming down the pike, according to Risk Based Security. The firm said that it published over 7,900 more vulnerabilities than those catalogued by the more widely used MITRE Common Vulnerabilities and Exposures (CVE) list and the National Vulnerability Database (NVD).

Visibility gaps notwithstanding, many CISOs may first need to straighten out the procedures in place to remediate once they receive reports of vulnerabilities, no matter the source of that intelligence. 

“The sad truth is that vulnerability management programs have either no or extremely limited ability to actively correct the flaws that they find,” explained Mike Convertino, CISO for F5 Networks, in a recent commentary piece for Dark Reading. “Even when completely accurate vulnerability scans are delivered, there aren’t enough people to patch or correct the systems in a timeframe that is relevant to prevent attack.”


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/vulnerabilities---threats/vulnerability-management/vulnerabilities-broke-records-yet-again-in-2017/d/d-id/1331087?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Year-old vuln turns Jenkins servers into Monero mining slaves

Here’s a salutary reminder why it pays to patch promptly: a Jenkins bug patched last year became the vector for a multi-million-dollar cryptocurrency mining hijack.

A campaign security researchers dubbed “JenkinsMiner” exploited CVE-2017-1000353, a deserialisation bug first disclosed with fixes by the Jenkins team in April 2017.

According to Check Point researchers, that bug helped an attacker, believed to be from China, use Jenkins servers as mining rigs – after they’d already garnered US$3 million of Monero using the XMRig miner on exploited Windows machines.

On unpatched systems, just two commands sent to the Jenkins CLI trigger CVE-2017-1000353.


Next, they wrote, the attacker sends a request containing two objects, “Capability” and “Command”. It’s the second of these that contains the Monero miner payload.

Once the Jenkins server is compromised, the attack launches a hidden PowerShell instance so the script can run in the background, and the attack sets a variable to a web-client object, with scrambled case to try and confuse security products.

That command fetches the miner’s executable and the script starts the miner.
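Because the case scrambling only defeats exact, case-sensitive string matching, this behaviour is straightforward to hunt for. Here’s a hedged Python sketch – our own illustration, not Check Point’s detection logic – that flags PowerShell command lines which are both hidden and fetching remote content; the patterns and sample command line are invented:

    # Illustrative hunt for hidden, case-scrambled PowerShell downloaders.
    # The patterns and sample command line are assumptions for this example,
    # not signatures taken from the JenkinsMiner research.
    import re

    # Case-insensitive matching defeats case scrambling such as "nEt.wEbCLiEnT".
    downloader = re.compile(r"net\.webclient|downloadfile|downloadstring", re.IGNORECASE)
    hidden = re.compile(r"-w(indowstyle)?\s+hidden", re.IGNORECASE)

    def looks_suspicious(cmdline: str) -> bool:
        """Flag PowerShell launches that are hidden and pull remote content."""
        return bool(hidden.search(cmdline) and downloader.search(cmdline))

    sample = "powershell.exe -WindowStyle Hidden (New-Object nEt.wEbCLiEnT).DownloadFile(...)"
    print(looks_suspicious(sample))   # True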

Check Point’s estimated income came from a detail of how the attacker works: funds from their different operations are sent to a single Monero wallet.

Earlier this year, an old bug in Oracle’s WebLogic server was also exploited to plant XMRig. That attack was discovered by Morpheus Labs’ Renato Marinho and disclosed in a post at the SANS Institute. The SANS Dean of Research Johannes Ullrich noted that XMRig itself is considered a legitimate miner. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/20/unpatched_jenkins_servers_mining_monero/

Google reveals Edge bug that Microsoft has had trouble fixing

Google has again decided to disclose a flaw in Microsoft software before the latter company could deliver a fix. Indeed, Microsoft has struggled to fix this problem.

Detailed here on Google’s Project Zero bug-tracker, the flaw impacts the just-in-time compiler that Microsoft’s Edge browser uses to execute JavaScript and makes it possible to predict the memory space it is about to use. Once an attacker knows about that memory, they could pop their own code in there and have all sorts of naughty fun as Edge executes instructions of their choice rather than JavaScript in the web page the browser was rendering.

News of the flaw was posted to Project Zero on November 17th, 2017, with the usual warning that “This bug is subject to a 90 day disclosure deadline. After 90 days elapse or a patch has been made broadly available, the bug report will become visible to the public.”

Google later gave Microsoft 14 more days to sort things out.

But last week, on February 15th, came a post that said Microsoft “replied that ‘The fix is more complex than initially anticipated, and it is very likely that we will not be able to meet the February release deadline due to these memory management issues. The team IS positive that this will be ready to ship on March 13th, however this is beyond the 90-day SLA and 14-day grace period to align with Update Tuesdays’.”

The next post stated simply “Deadline exceeded — automatically derestricting”. The latest post in the thread said Microsoft has advised Google that “because of the complexity of the fix, they do not yet have a fixed date set as of yet.”

Which is just great news – NOT – seeing as Google’s original post explains the flaw in great detail and is now visible to anyone who feels like some evil fun.

This is not the first time Project Zero has revealed flaws before Microsoft has been able to fix them, and Redmond doesn’t like it one little bit.

In October 2017, for example, Microsoft criticised Google on grounds that disclosure can endanger users. That outcome looks to be possible in this case.

Also worth considering is Google’s behaviour in the revelation of the Meltdown/Spectre CPU design flaws, as on that occasion it listed the problems in June 2017 but didn’t disclose until January 2018. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/20/google_reveals_edge_bug_that_microsoft_has_had_trouble_fixing/

Google drops new Edge zero-day as Microsoft misses 90-day deadline

Google’s Project Zero team has dropped a Microsoft Edge bug for the world to see.

Google originally shared details of the flaw with Microsoft on 17 November 2017, but Microsoft wasn’t able to come up with a patch within Google’s non-negotiable “you have 90 days to do this” period.

Ironically, Google may give you a 14-day grace period to extend the deadline to 104 days, but if you admit you aren’t going to make it within 104 days, you don’t get any of the extra 14 days of non-disclosure.

Last week, right at the 90-day deadline, Google quoted Microsoft as saying:

The fix is more complex than initially anticipated, and it is very likely that we will not be able to meet the February release deadline due to these memory management issues. The team IS positive that this will be ready to ship on March 13th [2018-03-13], however this is beyond the 90-day SLA [service level agreement] and 14-day grace period to align with Update Tuesdays.

As a result, Google published details of the bug immediately, so Microsoft Edge users are now adrift without a patch for nearly a month.

How bad is it?

Fortunately, this bug isn’t a remote code execution exploit all on its own.

It’s a security bypass that could allow an attacker who has already wrested control from your browser to vault over Microsoft’s second layer of defence, known as ACG, short for Arbitrary Code Guard.

ACG is supposed to head off remote code execution attacks before they can make any headway.

Even if a booby-trapped web page, image, or script manages to wrest the CPU away from Edge in an effort to grab control, ACG means that the attack can’t easily transfer control to malware of its own choice.

That’s a bit like having a backup security system at home that throws a net over crooks who manage to pick your front door lock and get into your house: they’re already in, which is bad, but their hands are pinned to their sides, so they can’t pick anything up or open any more doors, which is good.

Very simply put, ACG works by locking down the memory that Edge uses to run its own software code.

In theory, an attacker who gets control via a web page that Edge just loaded:

  • Can’t modify executable code that’s already in memory.
  • Can’t allocate new memory blocks in which to store rogue code.

But today’s browsers aren’t that simple: Edge itself (in common with Chrome, Firefox, Safari and others) includes what’s called a Just-In-Time compiler, or JIT, that can convert remotely-provided JavaScript programs from interpreted source code into native binary format.

This conversion happens on demand while you’re browsing, so the JIT-generated code is created and executed after the browser itself has loaded.

In other words, the browser needs to be able to allocate new memory blocks for executable code, and to modify those blocks at runtime…

…but it also needs to stop an attacker who has already compromised the browser from creating their own blocks of executable code.

Microsoft therefore elected to separate Edge’s own “shove new code into memory and run it” JIT feature from the rest of the browser by running the JIT compiler in a separate process.

But Google researchers nevertheless found a way to guess roughly where Edge’s JIT compiler was going to allocate new memory, and to exploit it that way.

Just to be clear, though: you need to find a remote code execution vulnerability (RCE) in Edge first.

This ACG bypass doesn’t give you remote code execution on its own.

Should Google have waited?

Google’s approach is that 90 days ought to be enough for anyone to fix any security bug, so after 90 days it’s OK to reveal publicly how the bug works.

The theory is that by ‘dumping’ bugs according to an inflexible algorithm, you can never be accused of favouritism by giving some companies more time than others.

This means that the pressure of unwanted disclosure – the stick that you’re wielding in the hope of forcing software vendors not to sweep bugs under the carpet – is unswervingly objective.

But there’s a somewhat inhuman aspect to bug-dumping-by-numbers.

What you might call Google’s soulless approach doesn’t differentiate between a company that’s not trying and has missed the deadline because it simply doesn’t care about security, and one that has been trying hard but hasn’t quite made it in time.

Of course, rules are rules, so you can argue that Google is right to apply this one without fear or favour.

On the other hand, you can argue that Google is being high-handed by applying its own opinion in the first place as if it were an objective industry-wide standard.

Should you stop using Edge?

As explained above, this hole doesn’t give crooks a direct way to take over your browser immediately.

Simply put, you can regard it as a vulnerability that could make a bad thing worse, rather than a bad thing in the first place.

Nevertheless, keep your eyes open for Microsoft’s forthcoming patch.

For all you know, Microsoft might yet get a fix out before next Patch Tuesday, so watch this space, and grab the patch as soon as it’s available.

PS. We were wondering if turning off Edge’s Just-In-Time compiler would prevent this bug from being exploitable – because the sequence of operations on which it depends would then never arise. We can’t find out how to turn off the JITter, or even if it’s possible, let alone whether it would work if it were. If you have any hints for a workaround, please let us know in the comments.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gwHpj6uUm-s/

Hackers sentenced for SQL injections that cost $300 million

Heartland Payment Systems: remember that decade-old breach?

What was then the sixth-largest payments processor in the US announced back in 2009 that its processing systems had been breached the year before.

Within days, it had been classified as the biggest ever criminal breach of card data. One estimate claimed 100 million cards and more than 650 financial services companies were compromised, at a cost of hundreds of millions of dollars. Prosecutors have said that three of the corporate victims reported $300m in losses.

The “biggest ever” designation applied to Heartland, but it was one of many corporate victims in a worldwide hacking and data breach scheme that targeted major networks. In total, the hacking ring responsible for the Heartland attack compromised 160 million credit card numbers: the largest such scheme ever prosecuted in the United States. Individual consumers also got hit, incurring what court documents said were “immeasurable” losses through identity theft, including costs associated with stolen identities and false charges.

It might be an old breach, but it hasn’t been collecting dust.

On Wednesday, the US Attorney’s office of New Jersey announced that two Russians belonging to the hacking ring that gutted Heartland, other credit card processors, banks, retailers, and other corporate victims around the world have been sent to federal prison.

Both had pleaded guilty in 2013.

Russian national Vladimir Drinkman, 37, had previously pleaded guilty to one count of conspiracy to commit unauthorized access of protected computers and one count of conspiracy to commit wire fraud. He’s been sentenced to 12 years in prison. Dmitriy Smilianets, 34, of Moscow, had previously pleaded guilty to conspiracy to commit wire fraud against a financial institution and was sentenced to 51 months and 21 days in prison: time served.

So that makes it three down: The infamous American “superhacker” and mastermind of the mammoth hacking ring behind the breach, Albert Gonzalez, was sentenced in March 2010 to 20 years in prison.

Three down, three more to go. On the fugitive list: Alexandr Kalinin, who, along with Drinkman, allegedly specialized in penetrating network security and gaining access to the corporate victims’ systems; Roman Kotov, another Russian hacker who allegedly specialized in mining corporate networks to steal valuable data; and Mikhail Rytikov, a Ukrainian who allegedly provided the gang with anonymous web-hosting services.

The conspirators handed the ripped-off data to Smilianets to sell; it was also his job to parcel out the proceeds from selling the ill-gotten data.

The gang targeted companies including NASDAQ, 7-Eleven, Carrefour, JCP, Hannaford, Heartland, Wet Seal, Commidea, Dexia, JetBlue, Dow Jones, Euronet, Visa Jordan, Global Payment, Diners Singapore and Ingenicard.

They turned the financial data – card numbers and associated data that they called “dumps” – into profit by selling it either through online forums or directly to individuals and organizations. Prosecutors said Smilianets sold the data exclusively to identity theft wholesalers.

The going rate was $10 for each stolen American credit card number and its data, $50 for each European card number and data, and about $15 a pop for Canadian credit cards and data. Repeat customers and those who bought in bulk got a discount. Then, the purchasers would encode each data dump onto the magnetic strip of a blank plastic card and cash it out by withdrawing money from ATMs or buying stuff with the cards.

To cover their tracks, Rytikov allegedly allowed his internet service provider (ISP) clients to hack away, ostensibly safe in the knowledge that he’d never keep records of what they were up to nor rat them out to police.

The conspirators pried open corporate networks by using an attack that’s as old as dirt: SQL injection.

It wasn’t only SQL injection that pierced the hide of all those companies, though SQL injection vulnerabilities exposed their tender bellies quite nicely. After penetrating networks, the attackers would avoid detection by tweaking settings on company networks so that security mechanisms couldn’t log their actions, or they managed to figure out how to slip past the protection of security software entirely.

The hackers also used sniffers – programs that identify, collect and steal network data. Once they had it, they sent it to an array of computers located around the world, storing it until they ultimately sold it.

So no, it wasn’t just SQL injection vulnerabilities that led to companies and consumers being bled for hundreds of millions of dollars. Sloppiness played its part, both on the part of the companies that left those vulnerabilities open and on the part of the hackers themselves. These weren’t elite hackers, after all: they were caught thanks in no small part to having posted their holiday snaps online and letting their mobile phones broadcast their location to the cops on their trail.

But it shows how far you can go if a company exposes its soft and fleshy parts to the internet.

As Naked Security’s Mark Stockley has noted, coding a website so it’s protected from the kinds of attack it’s most likely to face (SQL injection is a perennial favorite on Akamai’s State of the Internet Security Report) is an old story. Mostly, hardening defenses to protect against them isn’t fancy work: it’s just about doing a lot of tedious work, but doing it thoroughly.

If websites are properly coded then anything anyone enters in an input field is scrubbed and cleaned until it can do no harm. If websites were properly coded then SQL injection and XSS attacks would have disappeared long ago.

SQL injection can be killed stone dead by the simple expedient of using parameterised database queries – but only if you have the discipline to use them everywhere, all the time.
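Here’s what that looks like in practice. This is a minimal sketch using Python’s built-in sqlite3 module and a made-up users table; the same placeholder idea applies to any database driver:

    # Minimal sketch of string-built SQL (injectable) versus a parameterised query.
    # The table and data are invented for the example.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "alice' OR '1'='1"   # classic injection payload

    # DANGEROUS: the input is pasted into the SQL text, so OR '1'='1'
    # becomes part of the query and matches every row in the table.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print("string-built query:", rows)     # the row comes back regardless

    # SAFE: the placeholder keeps the input as data, never as SQL, so the
    # payload is treated as an oddly named user that simply doesn't exist.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print("parameterised query:", rows)    # []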


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/O2mjOoNsEHY/

US and UK condemn Russia for NotPetya worm attack

When it comes to pointing the finger for last year’s historically-disruptive NotPetya cyberattack, nobody could accuse the US and UK of dodging the issue.

First the UK, and then the US, named their chief suspect – Russia – in near-synchronised statements that set out to dissolve the secrecy and confusion that cloaks many cyber-incidents.

UK Defence Secretary Gavin Williamson said at the time:

Russia is ripping up the rule book by undermining democracy, wrecking livelihoods by targeting critical infrastructure, and weaponising information.

Which echoed White House Press Secretary Sarah Sanders:

This was also a reckless and indiscriminate cyberattack that will be met with international consequences.

In a possible first, the three other members of the Five Eyes intelligence alliance – Australia, Canada and New Zealand – also put out statements blaming Russia.

We’ve heard US-led condemnations before. Examples include the accusations that Russia hacked the Democratic National Committee in 2016 and that North Korea was behind WannaCry, plus, further back in time, a lot of fuss about China’s APTs stealing intellectual property from US companies.

The problem is that accusations only get you so far: no technical evidence against Russia has been offered beyond noting that NotPetya appeared to have been aimed at Russia’s arch-foe, Ukraine.

Inevitably – whether Russia was behind the attack or not – it can dismiss the accusation as “Russiaphobia” in a way that makes that defence sound plausible.

To onlookers, a cyberattack that happened over six months ago (and whose central software exploit has been patched) will sound like old news. Cyberattacks are a regular occurrence after all.

That would be to underestimate NotPetya’s deeper significance, which was unlike any other cyberattack yet recorded, bar perhaps the WannaCry attack which preceded it by mere weeks.

NotPetya should be the last attack the US would want to remind the world of given that it exploited the EternalBlue Windows SMB vulnerability leaked to The Shadow Brokers hacking group from none other than the US National Security Agency (NSA) itself.

In other words, the US and the world had been attacked using its own cyberweapons loaded with a home-made exploit, which is as embarrassing as cyberwar gets.

The US and its allies probably calculate they have little to lose by warning alleged perpetrator Russia about its conduct after the event.

But it seems only fair to point out that had the NSA secured its cyberweapons more competently, the attacks would not have been possible.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RFnonoRHSmg/