
Apache Hadoop spins cracking code injection vulnerability YARN

The “Zip Slip” vulnerability that first emerged in June has claimed another victim – the Apache Hadoop YARN NodeManager daemon.


Apache’s Akira Ajisaka disclosed the bug. Zip Slip affects all Apache Hadoop versions except 3.1.1, 3.0.3, 2.8.5 and 2.7.7, as well as JBoss Fuse 6.0 and 7.0.

In the Hadoop case, as well as the NodeManager daemon, the vulnerability affects implementations that use public archives in the distributed cache.

According to the disclosure, the bug “allows a cluster user to publish a public archive that can affect other files owned by the user running the YARN NodeManager daemon. If the impacted files belong to another already localised, public archive on the node then code can be injected into the jobs of other cluster users using the public archive.”

As we explained when Zip Slip was first disclosed, the bug affects any code that unpacks compressed archives. Attackers exploit inadequate filename sanitisation: an archive entry whose name contains directory-traversal sequences (such as ../) can be written outside the intended extraction directory, overwriting an existing folder or file elsewhere on the target system.
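The defence is conceptually simple: resolve each entry’s destination path before extraction and refuse anything that would land outside the target directory. Below is a minimal Python sketch of that idea – it is not the Hadoop patch itself, and the archive name and destination path are illustrative.

    import os
    import zipfile

    def safe_extract(archive_path, dest_dir):
        """Extract a zip archive, refusing any entry that would escape dest_dir."""
        dest_dir = os.path.realpath(dest_dir)
        with zipfile.ZipFile(archive_path) as zf:
            for entry in zf.infolist():
                # Resolve where this entry would actually land on disk.
                target = os.path.realpath(os.path.join(dest_dir, entry.filename))
                # A name such as "../../home/user/.bashrc" resolves outside dest_dir.
                if not target.startswith(dest_dir + os.sep):
                    raise ValueError("blocked path traversal entry: " + entry.filename)
            zf.extractall(dest_dir)

    # Example (paths illustrative): safe_extract("upload.zip", "/tmp/unpacked")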


The attacker’s file could therefore overwrite existing data, anywhere on a system, and as we noted in June: “That would allow a miscreant to inject arbitrary commands in script files, or change executables, to do nefarious things.”

Apache had already mitigated Zip Slip in another package in June. Fixing YARN was harder, it seems, since the organisation’s CVE list entry said it was first notified of the issue in April.

It’s been a rough week for YARN, with Netscout revealing its role as a vector for Mirai attacks earlier this week. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/23/apache_hadoop_yarn_zip_slip_vulnerability/

Reddit helps admin solve mystery of rogue Raspberry Pi

Finding a mysterious circuit board plugged into a network that you are tasked with managing is always going to be a disconcerting moment for any sysadmin.

Now imagine the device isn’t just connected to the network but plugged directly into a LAN switch located inside a cabinet in a supposedly secure, locked room.

Who put the device there? What was the equipment doing before it was found?

It’s a mystery that faced a sysadmin, geek_at, at a college in Austria earlier this week. According to The Register, the sysadmin took to Reddit to find answers.

The primary evidence was the device itself, an original Raspberry Pi Model B revision 1 from 2011 – a bit of a collector’s item these days.

Plugged into one of the Pi’s USB ports was a dongle enabling Wi-Fi and Bluetooth, the former connecting to an unknown SSID.

This dongle, it later transpired, was an nRF52832 system-on-a-chip development board of the sort that might be popular in environments for tinkering with (a clue here) the Internet of Things (IoT).

The boot image on the Pi’s SD card turned out to be from balena.io, an IoT development platform, and it loaded Docker containers that were being updated every 10 hours.

An important detail: the device’s communications with whoever was controlling it suspiciously travelled across a VPN.

Unidentified Network Object

The setup looked like an unauthorised and rather irresponsible experiment in IoT, but the possibility of something rogue couldn’t be ruled out.

Reddit being Reddit, there was no shortage of theories:

  • Perhaps it was a spot of pen-testing by a red team.
  • Or a sophisticated attempt to gain backdoor access to Bluetooth or Wi-Fi traffic.
  • Or perhaps the test was to see whether admins noticed it sitting in plain sight in the first place.
  • Did the organisation whose network it was connected to do anything that might interest hackers?

Replied geek_at:

We’re in the educational field so I don’t think it’s what’s IN our network but rather the network itself. Maybe to obfuscate some traffic the attacker creates.

Other commenters fretted that perhaps the sysadmin should call the police and pass the problem to someone on a higher pay grade.

It’s easy to understand why finding a Raspberry Pi connected to your network cabinet could be unsettling, but wouldn’t a professional criminal have taken more care to disguise it?

Eventually, geek_at was able to shed some light on matters:

At the moment it looks like a former employee (who still has a key because of some deal with management) put it there. I found his username trying to log in to Wi-Fi (blocked because user disabled) at 10pm just a few minutes before our DNS server first saw the device. Still no idea what it actually does except for the program being called ‘logger’, the Bluetooth dongle and it being only feet away from secretary/CEO office.

There are several lessons here, starting with the obvious one: asking Reddit for an opinion can leave you with plenty of helpful insight, but perhaps more than you expected, or indeed wanted.

The other is the power wielded by insiders, even ones who have left an organisation.

Just because they’re gone doesn’t mean they’ve left, especially if someone has unwisely given them a key to the network room.

 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bpdH1lOq5Lw/

Cybercriminal techniques – SophosLabs 2019 Threat Report

Cyberattackers are successfully evading detection on Windows computers by abusing legitimate admin tools commonly found on the operating system.

This is a pivotal finding of the SophosLabs 2019 Threat Report, which traces how the technique has risen from the fringes of the cybercriminal playbook to become a common feature in a growing number of cyber attacks.

Known in security parlance as ‘living off the land’, or ‘LoL’, because it avoids the need to download dedicated tools, the technique’s favourite target is PowerShell, a powerful command line shell that ships by default on all recent Windows computers even though few users have heard of it.

Alternatives include Windows Scripting Host (WScript.exe), the Windows Management Instrumentation Command line (WMIC), as well as popular external tools such as PsExec and WinSCP.

It’s a simple strategy that makes detection a puzzle. Removing the tools is an option but comes with disadvantages few admins would be happy with, notes the report:

PowerShell is also an integral component of tools that help administrators manage networks of almost any size, and as a result, must be present and must be enabled in order for those admins to be able to do things like, for example, push group policy changes.

Attackers, of course, know this and often feel brazen enough to chain together a sequence of scripting and command interfaces, each running in a different Windows process.

According to SophosLabs, attacks might start with a malicious JavaScript attachment, in turn invoking wscript.exe, before finally downloading a custom PowerShell script. Defenders face a challenge:

With a wide range of file types that include several “plain text” scripts, chained in no particular order and without any predictability, the challenge becomes how to separate the normal operations of a computer from the anomalous behaviour of a machine in the throes of a malware infection.
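One way defenders approach that challenge is to look at process ancestry rather than individual files: an Office application spawning a script host, or one script interpreter launching another, is rare in normal use. The Python sketch below is only a toy illustration of that heuristic; the event format and process names are illustrative rather than any particular product’s telemetry schema.

    # Toy heuristic: flag parent->child process chains typical of living-off-the-land
    # attacks, e.g. winword.exe -> wscript.exe -> powershell.exe.
    SCRIPT_HOSTS = {"wscript.exe", "cscript.exe", "powershell.exe", "mshta.exe", "wmic.exe"}
    OFFICE_APPS = {"winword.exe", "excel.exe", "outlook.exe"}

    def flag_suspicious(events):
        # events: iterable of dicts with "parent" and "child" process image names.
        for ev in events:
            parent, child = ev["parent"].lower(), ev["child"].lower()
            if (parent in OFFICE_APPS and child in SCRIPT_HOSTS) or \
               (parent in SCRIPT_HOSTS and child in SCRIPT_HOSTS):
                yield ev

    sample = [
        {"parent": "explorer.exe", "child": "winword.exe"},
        {"parent": "winword.exe", "child": "wscript.exe"},
        {"parent": "wscript.exe", "child": "powershell.exe"},
    ]
    for hit in flag_suspicious(sample):
        print("suspicious chain:", hit["parent"], "->", hit["child"])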

Macro attacks 2.0

Meanwhile, attackers show no signs of giving up on Microsoft Office macro attacks, and keep finding new variations on this route to launching exploits without the need for conventional executables.

In recent years, protections such as disabling macros inside documents or using preview mode have blunted this technique.

Unfortunately, attackers have developed ways of persuading people to turn these protections off, using macro builder tools that package Office, Flash, and other exploits inside documents that throw up sophisticated social engineering prompts.

Compounding this, cybercriminals have swapped out their older stock of software flaws in favour of more recent, more dangerous equivalents – SophosLabs’ analysis of malicious documents found that only 3% of the exploits inside builders dated from before 2017.

With well-used filetypes now blocked or monitored by endpoint security, the trend is to use more exotic filetypes to launch attacks, especially apparently innocuous ones that can be called from a Windows shell such as .cmd (Command File), .cpl (Control Panel), .HTA (HTML Application), .LNK (Windows Shortcut), and .PIF (Program Information File).
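As a rough illustration of what a mail gateway or download filter might do with that list, this Python sketch flags attachments whose names carry any of those shell-launchable extensions, including double-barrelled names; the extension set simply mirrors the report’s examples and is not a complete blocklist.

    from pathlib import Path

    # Shell-launchable extensions named in the report (illustrative, not exhaustive).
    RISKY_EXTENSIONS = {".cmd", ".cpl", ".hta", ".lnk", ".pif"}

    def risky_attachment(filename):
        suffixes = [s.lower() for s in Path(filename).suffixes]
        # Catches both "invoice.pif" and double-barrelled names like "invoice.pdf.lnk".
        return any(s in RISKY_EXTENSIONS for s in suffixes)

    print(risky_attachment("invoice.pdf.lnk"))   # True
    print(risky_attachment("report.docx"))       # False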

Moving sideways

The EternalBlue exploit (CVE-2017-0144) has surprisingly become a popular staple for malware writers, despite Microsoft issuing a patch in advance of its first use by WannaCry in May 2017.

Cryptominers have been enthusiastic users of EternalBlue, using it to move laterally through networks to infect as many machines as possible.

The combination of these innovations – Windows LoL tools, macro attacks, novel exploits and cryptomining – represents a challenge because it often confounds the assumptions of defenders.

Their uptake of these more complex and esoteric approaches has been driven, ironically, by the success of the cybersecurity industry at curbing traditional malware. Concludes Sophos CTO, Joe Levy:

We expect we’ll eventually be left with fewer, but smarter and stronger, adversaries.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HRbD1a8VLsk/

Update now! Adobe Flash has another critical security vulnerability

Adobe’s Flash Player for Windows, Mac and Linux has a critical vulnerability that should be patched as a top priority.

Flash has a dismal history of critical vulnerabilities – so what’s the hurry this time? The answer to that question is buried in the brief Adobe advisory explaining the issue:

Technical details about this vulnerability are publicly available.

That’s a warning that although no exploits have been detected so far, they are unlikely to be far off and might even be underway.

The SANS Institute’s Johannes B. Ullrich makes an interesting point about the flaw’s imminent exploitation:

This is of course, in particular, worrying ahead of the long weekend (in the US) with many IT shops running on a skeleton crew.

The flaw

The vulnerability was made public last week by a researcher on the same day Adobe released its monthly patch, which means it’s been in the public realm for at least that long.

Identified as CVE-2018-15981, the problem is a type confusion bug that could lead to remote code execution (RCE), triggered via a malicious Flash file on a boobytrapped website.

The affected versions are 31.0.0.148 and earlier running on all platforms, which means the desktop runtime as well as the versions running inside the Chrome (and Chromebook), Edge, Firefox and Internet Explorer browsers.

The updated version is 31.0.0.153. Windows 10 consumer users should receive this update automatically from Microsoft.
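If you need to check a fleet, the test is just a dotted-version comparison against 31.0.0.153. A minimal Python sketch follows; how you obtain the installed version string depends on your platform and is not shown here.

    PATCHED = "31.0.0.153"

    def version_tuple(version):
        return tuple(int(part) for part in version.split("."))

    def needs_update(installed):
        return version_tuple(installed) < version_tuple(PATCHED)

    print(needs_update("31.0.0.148"))  # True  - affected, update required
    print(needs_update("31.0.0.153"))  # False - already on the fixed build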

Taming Flash

Flash is heavily locked down in browsers (Chrome, Firefox, Edge, Safari) that now require users to activate it each time it is used.

That’s not a perfect defence because users could be tricked into enabling it, which is why it’s also possible to disable it completely (after installing any patches just in case it gets re-enabled later).

Better still, with Flash on its last legs before the 2020 end of life cut-off, remove it completely.

Recent figures suggest that under 5% of websites use it, so losing it shouldn’t be noticed.

However, history teaches us to be realistic. Most likely Flash will continue as a zombie technology well into the future and long after Adobe has washed its hands of a favourite target for the internet’s bad guys.

Make sure you’re not one of the holdouts.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/G7A6cFz_LXw/

The passwordless web explained

On 20 November 2018, Microsoft announced that its 800 million Microsoft account holders could now log in to services like Outlook, Office, Skype and Xbox Live without using a password.

The announcement is part of an apparent acceleration in the march towards a passwordless web, and comes at the end of a year when Mozilla Firefox, Google Chrome and Microsoft Edge all rolled out support for WebAuthn, one of the keystone technologies.

Passwordless authentication means ditching usernames and passwords in favour of biometrics, like fingerprints and face recognition, or other forms of authentication compatible with the FIDO2 specification, such as YubiKeys or Titans.

In security terms, it’s great news, but there’s no guarantee that people will embrace it just because it’s more secure. In fact, if the slow uptake of two-factor authentication is anything to go by, I imagine getting users to adopt it will be an uphill struggle.

One of the reasons that passwords have hung around for so long is that they’re very, very easy to understand. Passwordless authentication works in a different way, isn’t as easy to grasp, and comes with a lexicon of new (or relatively new) acronyms and standards like FIDO2, WebAuthn and CTAP.

It also comes with misunderstandings. Users who come at it assuming it works something like password authentication are left wondering if using their fingerprints to sign on to a website means sharing their fingerprint data (it doesn’t). Passwords might not be great, they reason, but at least they can change them if there’s a breach.

Passwordless authentication is a Good Thing and it deserves some explanation.

In this article I’ll try to explain, in simple terms, how it all works, what some of the important acronyms mean and how they fit together.

How to log in without a password

Normally, when you sign up for a website you have to tell it who you are and what password you’re going to use. Once you’ve shared that password you have no control over what the website does with it, you simply have to trust that the site will store it safely.

A quick glance at the number of data breaches in the news will tell you how well that’s working out.

With passwordless authentication you don’t have to trust the website with a password, or any other kind of secret.

That’s because it uses public key cryptography, which authenticates you using a pair of cryptographic keys: a private key that’s a secret, and a public key that isn’t.

You keep the secret, private, key and you give the public key to the website when you sign up. Because the public key isn’t a secret you don’t have to worry about whether the website will keep it safe, lose it in a data breach or leave it in the back of a taxi.

The public key can only unlock things that were locked using the corresponding private key, and the private key never has to leave your possession.

You authenticate yourself by using your private key to encrypt a challenge (a very large random number) sent by the website and then having the website decrypt it with the public key. If the encryption/decryption sequence works and the web server gets its challenge back then congratulations, you’ve proved you’re the owner of the private key.

That’s the theory.
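Under the hood, the “locking” is a digital signature: the authenticator signs the challenge with the private key, and the website verifies the signature with the public key. Here is a minimal Python sketch of that round trip using an Ed25519 key pair from the cryptography library; real WebAuthn exchanges wrap this in attestation data and CBOR messages, so treat it only as an illustration of the principle.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Sign-up: the authenticator creates a key pair and shares only the public half.
    private_key = Ed25519PrivateKey.generate()   # never leaves the authenticator
    public_key = private_key.public_key()        # handed to the website

    # Login: the website sends a large random challenge...
    challenge = os.urandom(32)

    # ...the authenticator signs it with the private key...
    signature = private_key.sign(challenge)

    # ...and the website checks the signature with the stored public key.
    try:
        public_key.verify(signature, challenge)
        print("challenge verified: user holds the private key")
    except InvalidSignature:
        print("verification failed")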

To make it work in practice you need something that can create and store keys, an Authenticator, and a set of rules that allows your computer, your browser and the websites you visit to cooperate and make it all work.

WebAuthn

WebAuthn is a recently minted set of rules, an API (Application Programming Interface), that websites and web browsers can use to enable authentication using public key cryptography instead of passwords.

All the major browsers already understand how to handle WebAuthn, so now it’s up to individual website owners to do their bit and change their code.

Instead of providing a login form where you type your username and password, websites can authenticate users using JavaScript code embedded in their web pages.

The code uses the WebAuthn API to ask browsers to create credentials, when you want to sign up to a website, or to get credentials, when you want to log in.

Although the JavaScript code is downloaded with the web page and is running on the user’s machine, in their browser, it’s still considered part of the website so it isn’t trusted with access to your private key, or any other secrets.

Instead, it just acts as a go-between, asking your browser to do the secret stuff, and then relaying information back and forth between the browser and the website’s web server.

Although the browser accepts requests to do the secret stuff it doesn’t actually do them. It’s a go-between too and actually outsources the secret stuff to the authenticator.

Authenticators

By design, a website you’re authenticating to doesn’t know or care how you generate or secure your private keys, so you’re free to do that in whatever way suits you.

You can use authentication that’s built in to your operating system, such as Microsoft’s Windows Hello facial recognition, or a remote authenticator like a cell phone or a security key.

To support a wide variety of remote authenticators efficiently, we need to agree a set of rules about how web browsers and authenticators are going to talk to each other. The rules free browsers from having to know or care about different authenticator types, manufacturers or versions, and make it easier for companies to enter the market with new authentication solutions (something that has been difficult in the past).

Those rules are called CTAP, the Client to Authenticator Protocol, and they define how a client like a web browser can talk to a remote authenticator over USB, Bluetooth or NFC (Near-field Communication).

(While we’re talking acronyms, let’s get another one out of the way: FIDO2. FIDO stands for Fast IDentity Online and FIDO2 is just an umbrella term for the combination of WebAuthn and CTAP.)

The authenticator provides the cryptographic know-how in the whole transaction, generating and storing your keys, and encrypting the website’s WebAuthn challenge on behalf of your browser.

Crucially, the authenticator must ask your permission before doing its crypto-magic. From the CTAP specification:

In order to provide evidence of user interaction, a roaming authenticator implementing this protocol is expected to have a mechanism to obtain a user gesture. Possible examples of user gestures include: as a consent button, password, a PIN, a biometric or a combination of these.

And that’s where all the different forms of authentication come in, and why if you choose to log in to a website using your fingerprints they aren’t shared with that website – they’re only ever used to unlock the authenticator.

Your secrets, like your fingerprint data (or face, PIN, iris etc) and your private keys, are only ever shared with the authenticator, a device that’s in your possession.

Putting it all together

Let’s say you’re logging on to a website using your fingerprints.

You arrive at the website and your web browser requests the login page. The code in the login page uses the WebAuthn API to ask your browser to sign a challenge using your private key. Your browser passes the challenge to the authenticator, and the authenticator asks you to grant it permission to sign the challenge, which you do by putting your finger on its fingerprint scanner.

The authenticator checks that, yes, it’s your finger, signs the challenge and passes it back to the browser, which passes it back to the website’s client-side JavaScript code, which hands it back to the website’s server. The server checks the signed challenge against the (non-secret) public key you provided when you signed up, proving who you are.

Although behind the scenes it’s more complex than simply sending the website a password, you’ve done almost nothing.

You’ve not had to type anything or remember anything. You’ve proved who you are without anything secret leaving your possession, and if you want to upgrade your security tomorrow with a newer, shinier, more secure authenticator that doesn’t rely on fingerprints, you can.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/UNEyBpQzWXg/

German e-government SDK patched against ID spoofing vulnerability

Germany has patched a key “e-government” service against possible impersonation attacks, and both private and public sector developers have been told to check their logs for evidence of exploits.

In July, SEC Consult warned the country’s federal computer emergency team at CERT-Bund that software supporting the government’s nPA ID card had a critical vulnerability (the ID cards themselves have not been breached).

The Governikus Autent SDK allows web developers to check users’ identities against the nPA. Because of a quirk of HTTP, the system could be tricked into authenticating the wrong person, SEC Consult said.

SEC Consult has published a full disclosure and explained the exploit process in an accompanying blog post.

Online authentication is carried out using a smartcard reader and electronic ID (eID) client software such as the government’s AusweisApp 2. To authenticate a citizen, a web application (which could be a government service such as tax, or a private service such as a bank or insurer) sends a request to the eID client.

“It requests a PIN from the user, communicates with an authentication server (eID-Server or SAML-Processor), the web application and the RFID chip, and finally sends a response to the web application. This response contains the data retrieved from the ID card, eg, the name or date of birth of the citizen,” the company said.

To prevent manipulation, the authentication server applies a digital signature to its response, but the SDK’s authors didn’t take into account a characteristic of HTTP that allowed impersonation.

HTTP allows more than one parameter to have the same name. “When the method HttpRedirectUtils.checkQueryString creates a canonical version of the query string, it parses the parameters from it and generates a new query string with the parameters placed in a specific order. The case that a parameter can occur multiple times is not considered,” SEC Consult wrote.

This meant an attacker could “arbitrarily manipulate the response [from the server] without invalidating the signature”.
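The core of the problem is that two parsers can see two different versions of the “same” request. A hedged Python sketch of the effect follows; the parameter names and values are illustrative, not the real Autent SDK fields. A canonicaliser that keeps one value per name checks the signed value, while code that lets the last occurrence win acts on the attacker’s value.

    from urllib.parse import parse_qs, parse_qsl

    # A query string in which the signed response parameter appears twice.
    query = "Response=legitimate_signed_blob&Response=attacker_blob&SigAlg=rsa-sha256"

    # A canonicaliser that keeps one value per name sees only the signed blob...
    canonical = {name: values[0] for name, values in parse_qs(query).items()}
    print(canonical["Response"])          # legitimate_signed_blob

    # ...while code that lets the last occurrence win processes the attacker's value.
    last_wins = dict(parse_qsl(query))
    print(last_wins["Response"])          # attacker_blob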

“An attacker is therefore able to arbitrarily modify an authentic query string. By obtaining such a string (e.g. by providing a web application with nPA login and then checking the access log), he is able to authenticate as any citizen against any vulnerable web application that also trusts the issuer of the signature,” the disclosure explained, demonstrating the attack in an accompanying video.


CERT-Bund told SEC Consult the bug was patched at the end of October. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/22/german_egovernment_sdk_patched_against_id_spoofing_vulnerability/

Laptop search unravels scheme to fake death for insurance cash

After faking one’s own death to defraud a life insurance company, it’s best to avoid being photographed alive and well, particularly when border agents may be reviewing those photos.

That’s a lesson learned too late by Igor Vorotinov, age 54, who was just arrested in the Republic of Moldova and extradited to the US State of Minnesota – a prior place of residence – to face charges of mail fraud brought three years ago.

According to court documents, Igor and his wife Irina Vorotinov, age 51, between September 2011 and March 2012 devised a scheme to defraud life insurance company Mutual of Omaha by faking the husband’s death so the wife could collect a $2m life insurance policy.

“On October 1, 2011, Igor staged his death in Moldova by arranging for the corpse of an unknown person to be placed between two bushes at the entrance of the Cojusna village in Moldova and placing in the clothes of the corpse Igor’s passport and other identification documents,” the indictment states.

Irina travelled from her residence in Minnesota to Moldova to identify the body, at which point she obtained a death certificate from local authorities and had the body cremated. Returning to the US with ashes of something or someone in an urn, she then asked Mutual of Omaha to pay Igor’s $2m life insurance policy.

The company paid out in March 2012, after which Irina involved an unidentified third party to open an account at a US Bank branch to deposit the funds. This third party then transferred $1.5m to an account in the name of her son, Alkon, age 28, and the insurance proceeds were subsequently transferred to accounts in Switzerland and Moldova.

The criminal complaint against Irina – who in 2016 pleaded guilty to mail fraud and engaging in a monetary transaction involving criminal proceeds, and was sentenced that year to 37 months in prison – explains that in June 2013, someone told the FBI that Igor was alive.

That may explain why US Customs and Border Protection agents paid attention to Alkon and his fiancée. When the pair returned to the US from a trip to Moldova in November 2013, CBP agents seized their laptops. A warrant was issued shortly thereafter. On one of the devices, a Sony VAIO laptop, investigators found pictures of Igor taken in April and May 2013 in which he looked very much alive.

Metadata attached to the pictures indicated that they were taken with a Canon EOS Rebel T4i, which was released in June 2012, nine months after Igor had supposedly died.
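Photo metadata of that kind is easy to inspect. A minimal Python sketch using Pillow is shown below; the filename and output are hypothetical, and some timestamp tags live in the Exif sub-IFD, so on newer Pillow versions you may also need exif.get_ifd() to reach them.

    from PIL import Image
    from PIL.ExifTags import TAGS

    def camera_metadata(path):
        # Return the EXIF tags most useful for dating a photo.
        exif = Image.open(path).getexif()
        wanted = {"Make", "Model", "DateTime"}
        return {TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()
                if TAGS.get(tag_id) in wanted}

    # Example (hypothetical file and output):
    # camera_metadata("holiday.jpg")
    # {'Make': 'Canon', 'Model': 'Canon EOS REBEL T4i', 'DateTime': '2013:04:12 15:02:11'}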

The son, said to have become aware of the fraud no later than April 2013, pleaded guilty to concealing and failing to report his family’s fraud scheme and was sentenced to three years of probation. Along with his mother, he is required to return the fraudulently obtained funds.

Igor, if convicted of mail fraud, faces a maximum possible penalty of 20 years in prison. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/22/laptop_search_unravels_scheme_to_fake_death_for_insurance_cash/

Malware scum want to build a Linux botnet using Mirai

Diligent hackers have decided routers and cameras aren’t enough, and have reportedly crafted Mirai variants targeting Linux servers.

That unwelcome news came from Netscout, whose Matthew Bing wrote: “This is the first time we’ve seen non-IoT Mirai in the wild.”

Bing’s post explained that the botmasters are trying to use a Hadoop vulnerability as the vector to spread Mirai. Proof-of-concept exploit code was first published on GitHub eight months ago, and it attacks the platform’s YARN resource management technology with command injection.

Netscout, Bing wrote, has seen tens of thousands of exploit attempts against Hadoop YARN each day, and among the 225 binaries the attackers are trying to inject into victims’ machines “at least a dozen of the samples we’ve examined are clearly variants of Mirai”.

Because the attack is specific to YARN, the Mirai variant Netscout analysed is simpler than its predecessors. Older Mirai versions designed to infect Internet of Things devices had to identify whether the platform they were attacking was x86, x64, Arm, MIPS and so on.

This variant is only interested in x86 machines, Bing wrote.

The “VPNFilter” variant “still tries to brute-force factory default usernames and passwords via telnet”, he continued. If any are found, the attacker doesn’t install malware on the victim, but rather “phones home” with the target’s IP address, username and password.

Last week, Radware’s Pascal Geenens warned that Hadoop YARN exploit traffic was running at a very high rate – around 350,000 events per day.

Geenens warned systems are particularly at risk of cryptomining abuse, and added that if a Hadoop system has a publicly exposed YARN service, “it is not a matter of IF but a matter of WHEN your service will be compromised and abused”. ®
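For admins wondering whether their own clusters fall into that category, the tell-tale is a ResourceManager REST API that answers unauthenticated requests on its default port. A hedged Python sketch follows; the hostname is illustrative, and you should only probe infrastructure you own.

    import requests

    def yarn_publicly_exposed(host, port=8088, timeout=5.0):
        # True if the YARN ResourceManager REST API answers without authentication.
        url = "http://{}:{}/ws/v1/cluster/info".format(host, port)
        try:
            resp = requests.get(url, timeout=timeout)
        except requests.RequestException:
            return False
        return resp.status_code == 200 and "clusterInfo" in resp.text

    # print(yarn_publicly_exposed("hadoop-rm.internal.example.com"))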

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/22/mirai_for_linux_on_x86/

3 is the magic number (of bits): Flip ’em at once and your ECC protection can be Rowhammer’d

Researchers in the Netherlands have confirmed that error-correcting code (ECC) protections can be thwarted to perform Rowhammer memory manipulation attacks.

The Vrije Universiteit Amsterdam crew of Lucian Cojocar, Kaveh Razavi, Cristiano Giuffrida, and Herbert Bos today said they have developed a viable method to precisely alter bits in server RAM chips without triggering ECC’s correction mechanism. This gives them the ability to tamper with data, inject malicious code and commands, and change access permissions so that passwords, keys, and other secrets can be lifted.

The findings are significant because while ECC was once considered a reliable method for thwarting Rowhammer-style attacks, it was thought to be theoretically possible to bypass the defense mechanism. Now an attack has been demonstrated.

The upshot is that a baddie who can leverage the team’s technique on servers to sidestep ECC could extract information from these high-value targets using Rowhammer. Said miscreant would have to first get into a position where they can flip bits on the vulnerable machines, likely using malware already on the device.

What’s Rowhammer?

Back in 2015, Google’s Project Zero found that it was possible to alter the values of individual memory cells by repeatedly charging and discharging the cells in adjacent rows. If an attacker knew precisely which locations to target, they could alter specific locations to inject instructions or commands into memory or grant access to restricted portions that contain sensitive information.

ECC protection (which was developed before Rowhammer to deal with memory errors) theoretically stopped this by detecting and correcting changes to individual bit values.

The magic number

The VU Amsterdam team confirmed that the way ECC checks for errors suffers from an exploitable loophole: when one bit was changed, the ECC system would correct the error. When two were found, ECC would crash the program.

But if three bits could be changed simultaneously, ECC would not catch the modification. This much people have known about, though the key thing here is that it can be shown to allow Rowhammer attacks through.
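To see why three is the magic number, here is a toy Python simulation of a SECDED code – Hamming(7,4) plus an overall parity bit, far smaller than the (72,64) codes on real DIMMs but with the same one/two/three-flip behaviour. One flip is silently corrected, two are detected as uncorrectable, and a carefully chosen three look like a single parity-bit error, so the controller “corrects” the wrong thing and hands back corrupted data without complaint.

    from functools import reduce
    from operator import xor

    def encode(data_bits):
        # data_bits: four bits. Returns an 8-bit codeword; index 0 is the overall parity bit.
        c = [0] * 8
        c[3], c[5], c[6], c[7] = data_bits
        c[1] = c[3] ^ c[5] ^ c[7]
        c[2] = c[3] ^ c[6] ^ c[7]
        c[4] = c[5] ^ c[6] ^ c[7]
        c[0] = reduce(xor, c[1:])
        return c

    def decode(codeword):
        # Returns (status, data_bits) the way a SECDED memory controller would.
        c = list(codeword)
        syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7]) * 1 |
                    (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2 |
                    (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4)
        parity_ok = reduce(xor, c) == 0
        if syndrome == 0 and parity_ok:
            status = "clean"
        elif not parity_ok:
            status = "corrected"          # assumed to be a single-bit error
            c[syndrome] ^= 1              # syndrome 0 here means the parity bit itself
        else:
            status = "uncorrectable"      # double-bit error: alarm / crash
        return status, [c[3], c[5], c[6], c[7]]

    data = [1, 0, 1, 1]
    word = encode(data)
    for flips in [(3,), (3, 5), (3, 5, 6)]:
        corrupted = list(word)
        for pos in flips:
            corrupted[pos] ^= 1
        status, recovered = decode(corrupted)
        print(len(flips), "flip(s):", status, "- data intact:", recovered == data)

    # 1 flip(s): corrected     - data intact: True
    # 2 flip(s): uncorrectable - data intact: False  (but at least the error is detected)
    # 3 flip(s): corrected     - data intact: False  (silent corruption - the bypass)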

Crucially, the researchers found a timing side channel that would let them check whether a RAM address could be usefully manipulated by the triple-flip technique.

“Simply put: it will typically take measurably longer to read from a memory location where a bitflip needs to be corrected, than it takes to read from an address where no correction was needed,” the team explained.

“Thus, we can try each bit in turn, until we find a word in which we could flip three bits that are vulnerable. The final step is then to make all three bits in the two locations different and hammer one final time, to flip all three bits in one go: mission accomplished.”

The researchers said they were able to test and recreate the vulnerability on four different server systems: three running Intel chips and one using AMD. They declined to single out any specific memory brands.

Fortunately, while the attack would be extremely difficult to prevent, it also looks to be very difficult to actually pull off in the wild. Between combing through the various addresses to find vulnerable lines and then actually carrying out the Rowhammer attacks, the VU Amsterdam team said a successful attack in a noisy system can take as long as a week.

The boffins said that their findings should not be taken as a condemnation of ECC either. Rather, it should show admins and security professionals that ECC is just one of several protection layers they should use in combination with things like optimised hardware configurations and careful logging and monitoring.

“ECC cannot stop Rowhammer attacks for all hardware combinations. If the number of bit flips is sufficiently high, ECC will only slow down the attack.”

A paper describing the technique, Exploiting Correcting Codes: On the Effectiveness of ECC Memory Against Rowhammer Attacks, will be presented next year at the Symposium on Security and Privacy.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/21/rowhammer_ecc_server_protection/

Talk in Trump’s tweets tells whether tale is true: Code can mostly spot Prez lies from wording

Boffins from the Netherlands and France claim that the word choices and sentence construction in President Donald Trump’s tweets can be used more often than not for lie detection.

In a paper distributed through ArXiv earlier this month, researchers Sophie van der Zee, Ronald Poppe, Alice Havrileck, and Aurelien Baillon – from Erasmus University, Utrecht University, and École Normale Supérieure de Cachan – describe how they found significant linguistic differences between factually accurate and inaccurate Trump tweets, and used this finding to construct a language-based lie detection model.

The accuracy of their model was about 73 per cent, making it better than a coin-toss, but far from foolproof in its evaluation.

For their data set, the researchers used a set of tweets from President Trump that had been fact checked by the Washington Post and could be characterized either as accurate or not. They began with a data set of 605 presidential tweets from the Twitter account @realDonaldTrump between February and April 2018. They then winnowed that down by removing retweets and web links. The result was a data set of 447 tweets.

Of these, almost 30 per cent were deemed factually incorrect by fact checkers.

Using a statistical technique known as multivariate analysis of variance (MANOVA), the researchers evaluated the language of the tweets to see whether their model’s characterization reflected the established accuracy or inaccuracy of the statements.

Their hypothesis, that veracity shows up in language, was supported by their findings. They detected linguistic differences between accurate and inaccurate tweets and used these differences to classify the tweets correctly as true or false about 73 per cent of the time.


The researchers assume that language does not differ with mistakes, because mistakes represent unintentional inaccuracies. Rather, they say, it’s lies that distort language.

“Being wrong should not affect language use because there is no difference in the perception or intention of the sender,” the paper explains. “In contrast, when deliberately presenting false statements as truths, one would expect a change in language use, according to the deception hypothesis. Lying can cause behavioral change because it is cognitively demanding, elicits emotions, and increases attempted behavioral control.”

When the researchers applied this model to a second dataset of 464 tweets (about 22 per cent of which were deemed factually inaccurate), covering the period between November 2017 and January 2018, their predictions conformed with the ground truth established by fact checkers about 73 per cent of the time.

The boffins found that correct statements contained more positive feelings while incorrect statements were more evasive and had more negations, tentative words, and comparisons. Also, fewer # and @ symbols appeared in incorrect tweets.

Tweets with money-related words were found to be more likely to be false while tweets with religious terminology were less likely to be false.
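The researchers’ own model was built with MANOVA over standard psycholinguistic word categories, but the flavour of the features is easy to reproduce. Below is a hedged Python sketch; the word lists are illustrative stand-ins rather than the categories used in the paper, and the resulting counts could be fed to any off-the-shelf classifier.

    import re

    NEGATIONS   = {"no", "not", "never", "none", "cannot", "don't", "won't", "didn't"}
    TENTATIVE   = {"maybe", "perhaps", "possibly", "probably", "seems", "apparently"}
    COMPARISONS = {"than", "more", "less", "better", "worse", "bigger", "smaller"}
    POSITIVE    = {"great", "good", "wonderful", "tremendous", "best", "happy", "win"}

    def tweet_features(text):
        words = re.findall(r"[#@\w']+", text.lower())
        return {
            "negations":   sum(w in NEGATIONS for w in words),
            "tentative":   sum(w in TENTATIVE for w in words),
            "comparisons": sum(w in COMPARISONS for w in words),
            "positive":    sum(w in POSITIVE for w in words),
            "hashtags":    text.count("#"),
            "mentions":    text.count("@"),
            "words":       len(words),
        }

    print(tweet_features("Maybe the worst deal ever, far worse than anyone thought. No money!"))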

This technique could be used to help journalists and fact checkers evaluate the veracity of social media content, the researchers suggest, and they believe it can be made more accurate by combining it with other lie detection methods such as keystroke analysis.

However, they caution that anyone could use this approach to construct a lie detector for a specific person. “Therefore, these results also constitute a warning to all posting a wealth of private information online,” they conclude. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/21/trump_tweets_lies/