Emotion-detection in AI should be regulated, AI Now says

What has AI done to you today?

Perhaps it’s making a holy mess of interpreting your physical attributes? Has technology that supposedly reads your micro-expressions, tone of voice, or the way you walk to determine your inner emotional states been used to figure out whether you’d be a good hire? Whether you’re in pain and should get medication? Whether you’re paying attention in class?

According to Professor Kate Crawford, a co-founder of New York University’s AI Now Institute, AI is increasingly being used to do all of those things, in spite of the field having been built on “markedly shaky foundations”.

AI Now is an interdisciplinary research institute dedicated to understanding the social implications of AI technologies. It’s been publishing reports on such issues for a number of years.

The institute is calling for legislation to regulate emotion detection, or what it refers to by its more formal name, “affect recognition,” in its most recent annual report:

Regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities…. Given the contested scientific foundations of affect recognition technology – a subclass of facial recognition that claims to detect things such as personality, emotions, mental health, and other interior states – it should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school.

How do you even know if you’re being AI-ized?

Whether we realize it or not – and it’s likely “not” – AI is being used to manage and control workers or to choose which job candidates are selected for assessment, how they’re ranked, and whether or not they’re hired. AI has, in fact, been a hot technology in the hiring market for years.

Without comprehensive legislation, there’s no transparency into how the technology is used or into the research and algorithms that go into these products – it’s all hush-hush, despite the fact that AI isn’t some purely mathematical, even-handed set of algorithms.

Rather, it’s been shown to be biased against people of color and women, and biased in favor of people who look like the engineers who train the software. And, to its credit, 14 months ago Amazon scrubbed plans for an AI recruiting engine after its researchers found out that the tool didn’t like women.

According to AI Now, the affect recognition sector of AI is growing like mad: at this point, it might already be worth as much as $20 billion (£15.3 billion). But as Crawford told the BBC, it’s based on junk science:

At the same time as these technologies are being rolled out, large numbers of studies are showing that there is … no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.

She suggests that part of the problem is that some AI software makers may be basing their software on dusty research: she pointed to the work of Paul Ekman, a psychologist who proposed in the 1960s that there were only six basic emotions expressed via facial expressions.

Ekman’s work has been disproved by subsequent studies, Crawford said. In fact, there’s far greater variability in facial expressions with regard to how many emotions there are and how people express them.

It changes across cultures, across situations, and even across a single day.

Companies are selling emotion-detection technologies – particularly to law enforcement – in spite of the fact that often, it doesn’t work. One example: AI Now pointed to research from ProPublica that found that schools, prisons, banks, and hospitals have installed microphones purporting to detect stress and aggression before violence erupts.

It’s not very reliable. It’s been shown to interpret rough, higher-pitched sounds, such as coughing, as aggression.

Are any regulatory bodies paying attention to this mess?

Kudos to Illinois. It’s often an early mover on privacy matters, as evidenced by the Illinois Biometric Information Privacy Act (BIPA) – legislation that protects people from unwarranted facial recognition or storage of their faceprints without consent.

Unsurprisingly, Illinois is the only state that’s passed legislation that pushes back against the secrecy of AI systems, according to AI Now. The Artificial Intelligence Video Interview Act, scheduled to go into effect in January 2020, mandates that employers:

  • notify job candidates when artificial intelligence is used in video interviewing,
  • explain how the AI system works and what characteristics it uses to evaluate an applicant’s fitness for the position,
  • obtain the applicant’s consent to be evaluated by AI before the video interview starts,
  • limit access to the videos, and
  • destroy all copies of the video within 30 days of an applicant’s request.

Don’t throw out the baby

AI Now mentioned a number of emotion-detection technology companies that are cause for concern. But at least one of them, HireVue, defended itself, telling Reuters that the hiring technology has actually helped to reduce human bias. Spokeswoman Kim Paone:

Many job candidates have benefited from HireVue’s technology to help remove the very significant human bias in the existing hiring process.

Emotion detection is also being used by some call centers to determine when callers are getting upset or to detect fraud by voice characteristics.

Meanwhile, there are those working with emotion detection who agree that it needs regulating… with nuance and sensitivity to the good that the technology can lead to. One such company is Emteq – a firm working to integrate emotion-detecting technology into virtual-reality headsets that can help those trying to recover from facial paralysis brought on by, for example, strokes or car accidents.

The BBC quotes founder Charles Nduka:

One needs to understand the context in which the emotional expression is being made. For example, a person could be frowning their brow not because they are angry but because they are concentrating or the sun is shining brightly and they are trying to shield their eyes. Context is key, and this is what you can’t get just from looking at computer vision mapping of the face.

Yes, let’s regulate it, he said. But please, lawmakers, don’t hamper the work we’re doing with emotion detection in medicine.

If things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MWAhLktWa8c/

Google adds Verified SMS and anti-spam feature to Messages app

If webmail, WhatsApp and IM are killing the SMS text message, someone might want to tell Google.

Far from losing interest in a dying communication medium, it’s as if Google is a new convert to the cause.

Last week, rather unexpectedly, it announced new security tweaks designed to make old-fashioned SMS communication more appealing to both companies and consumers alike.

The first of these is Verified SMS for Messages, which as its name suggests works with the company’s Android messaging app.

Available in the US, the UK, Canada, Mexico, India, Brazil, France, the Philippines, and Spain, this allows users to verify that text messages sent by companies are genuine and not fakes or scams.

From Google’s brief description, every text sent to the Messages app by a participating company embeds a hash-based message authentication code (HMAC) which is compared with an equivalent hash sent to Google.

This is unique to each person’s device rather than the company itself, which should make it impossible to spoof.
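
Google hasn’t published the precise scheme, but conceptually the check resembles the sketch below. This is a rough illustration only, with the key-agreement step glossed over and every name invented for the purpose:

    import hashlib
    import hmac

    def message_tag(device_key: bytes, message: str) -> bytes:
        # Both the phone and the sender (via Google) can compute this tag if they
        # share a per-device key; the SMS text itself never has to leave the phone.
        return hmac.new(device_key, message.encode("utf-8"), hashlib.sha256).digest()

    def looks_verified(device_key: bytes, message: str, tag_from_google: bytes) -> bool:
        # Constant-time comparison of the locally computed tag with the one
        # relayed by Google on the sender's behalf.
        return hmac.compare_digest(message_tag(device_key, message), tag_from_google)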

If all is as it should be, users see the company’s name and logo plus a verification badge.

In theory it could also help verify SMS 2FA codes, although Google’s official position is that users should be looking to migrate to more secure forms of authentication.

Any catches?

A few – some minor and one or two that might prove more difficult to overcome.

As well as being specific to the Messages app, companies must also be part of the Verified SMS system for it to work. So far, that only runs to 1-800-Flowers, Banco Bradesco, Kayak, Payback, and SoFi.

Presumably, this list will expand in time – because why wouldn’t companies sending SMS messages to customers want them to be verified?

The question surrounding this is whether Android users will see any value in the verification of text messages that many of them might not be that keen to receive at all.

More generally, in some countries, bogus SMS spam has never been that big a problem. Even when it has been, Verified SMS will only authenticate known good senders rather than stopping unknown bad ones.

Anti-spam

This might explain why Google has added a second feature to Messages, Spam protection for Messages. Assuming you’ve received last week’s Messages app update, this can be found by tapping on Settings > Advanced > Spam Protection (the default is ‘on’).

With this feature in use, any message arriving from, or sent to, a number not in the user’s contacts list is temporarily stored and checked against the numbers of any known spammers:

This data is not linked to you or to identifiers like your name or phone number, which means Google doesn’t know who you’re messaging. Your message content is not seen or stored by Google as part of this feature.

If the message looks suspect, it’s blocked – though it’s not crystal clear whether this happens automatically or whether the user is asked first. It’s also possible to report spam manually.

It doesn’t sound that far away from a similar feature in Microsoft’s SMS Organizer app, announced in August 2019.

Google is also pushing something called Rich Communication Services (RCS) as a chat-centric replacement for old-fashioned SMS rustled up in conjunction with large mobile carriers.

While Verified SMS doesn’t appear to have any direct bearing on that, it does give the impression that the company wants to plant itself in revenue- and data-generating mobile channels.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bOkj5uWtjPM/

Npm patches two serious bugs

The keeper of the internet’s most-used JavaScript packages has warned users to update due to serious bugs that could enable an attacker to infect them with malicious applications.

Npm is a management service that organises software packages written in the JavaScript language. It is the official package manager for Node.js, which is a framework for JavaScript code that runs outside the browser (on the server, for instance). Developers manage their npm packages via a piece of software called the npm command line interface (CLI).

A package’s package.json file can declare executable commands in a field called bin. Entries in that field map a command name to a file inside the package, which npm links into the ./node_modules/.bin/ directory of any project that installs the package. Npm can overwrite those files with new versions as part of its management activities.

Security researcher Daniel Ruf found two vulnerabilities in the npm CLI after some lateral thinking, exploring how malicious packages might harm a system. He published the results of his work in a blog post last Thursday that highlighted two vulnerabilities.

One of these flaws (see the official npm advisory) allowed an attack known as binary planting. Versions of the npm CLI prior to 6.13.3 allow packages to access folders outside the intended folder by manipulating paths in the bin field. It allows an attacker to overwrite a clean file with a malicious one anywhere on the user’s system, or to create a new file where one didn’t exist before.
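
For illustration only, here is what a bin field might look like: the first entry below is the normal case, while the second is a made-up example of the kind of path manipulation the advisory describes, which vulnerable npm versions would resolve to a location outside ./node_modules/.bin/:

    {
      "name": "example-tool",
      "version": "1.0.0",
      "bin": {
        "example-tool": "./bin/cli.js",
        "planted": "../../../../usr/local/bin/some-command"
      }
    }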

A second vulnerability exists in bin-links, an npm package that manages links from the bin field to the files in ./node_modules/.bin/, and which the npm CLI also includes. It uses a symlink (symbolic link) to manage these files. A symlink is a file that links to another file or directory using its file path. Bin-links allowed packages to overwrite the symlink, even if they hadn’t created it.

According to Ruf, that’s especially bad for a package manager called pnpm that companies often use for managing JavaScript packages in larger environments because it stores a package only once, rather than storing a separate copy of the package for each project that’s using it. It uses symlinks to link a project to that file, he explains, enabling a globally installed file to alter others anywhere in the user’s /usr/local/bin directory. That’s significant because this folder houses most programs that a normal user might run.

To exploit these vulnerabilities, an attacker would have to persuade a user to install a file using an appropriately crafted bin entry, but this is entirely possible, according to npm.

What to do

These vulnerabilities have the IDs CVE-2019-16775, CVE-2019-16776, and CVE-2019-16777.

Npm fixed them and warned people in a blog post that they should update their npm CLI now to version 6.13.4.

It also said that while it had scanned all packages in its npm registry for bugs and found them clean, that doesn’t give every package a clean bill of health, explaining:

We cannot scan all possible sources of npm packages (private registries, mirrors, git repositories, etc.), so it is important to update as soon as possible.

It might also be worth checking the bin field in your projects’ package.json files for any dodgy-looking file paths.
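
A rough way to do that across an installed project is sketched below in Python; the heuristics are only examples of ‘dodgy-looking’, not an exhaustive or authoritative check:

    import json
    import os

    # Walk node_modules and flag bin entries whose targets are absolute paths
    # or climb out of their package directory with "..".
    for root, _dirs, files in os.walk("node_modules"):
        if "package.json" not in files:
            continue
        try:
            with open(os.path.join(root, "package.json"), encoding="utf-8") as fh:
                manifest = json.load(fh)
        except (OSError, ValueError):
            continue
        bin_field = manifest.get("bin")
        if isinstance(bin_field, str):
            # A bare string maps the package's own name to that single file.
            bin_field = {manifest.get("name", "?"): bin_field}
        if not isinstance(bin_field, dict):
            continue
        for command, target in bin_field.items():
            if os.path.isabs(str(target)) or ".." in str(target).replace("\\", "/").split("/"):
                print(f"suspicious bin entry in {root}: {command} -> {target}")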

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Yb7pRyvF7Uo/

Police get “unprecedented” data haul from Google with geofence warrants

Before you commit arson, do you leave your phone at home?

If not, you’re not a subtle arsonist. If you own an Android device, and if you happen to be behind any of the arsons carried out across the city of Milwaukee, in the US state of Wisconsin in 2018 and 2019, there’s a good chance that Google has handed over your location history to police.

Forbes has discovered that Google has complied with so-called geofence warrants that have resulted in an “unprecedented” data haul for law enforcement: one in which Google combed through its SensorVault to find 1,494 device identifiers for phones in the vicinities of the fires and then handed them over to the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF).

The publication obtained two search warrants that demanded to know which specific Google customers were located in areas covering 29,387 square meters – 3 hectares – during a total of nine hours for four separate incidents.

If you’ve got the Location History setting turned on, Google stores your whereabouts in its vast Sensorvault database – a database that stores location data that flows from all its applications. Stuffed with detailed location records from what The New York Times reports to be at least hundreds of millions of devices worldwide, the enormous cache has been called a “boon” for law enforcement.

To investigators, this is gold: a geofence demand enables them to pore through location records as they seek devices that may be of interest to an investigation.

Geofence data demands are also known as reverse location searches. Investigators stipulate a timeframe and an area on Google Maps and ask Google to give them the record of each and every Google user who was in the area at the time.

When they find devices of interest, they’ll ask Google for more personal information about the device owner, such as name, address, when they signed up for Google services and which services – such as Google Maps – they used.
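
Conceptually, a reverse location search is a spatial and temporal filter over stored location records. The sketch below illustrates the shape of such a query; the record format is invented purely for illustration and bears no resemblance to Google’s internal systems:

    from dataclasses import dataclass

    @dataclass
    class LocationRecord:
        device_id: str
        timestamp: float   # seconds since the Unix epoch
        lat: float
        lon: float

    def devices_in_geofence(records, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
        # Return the set of device IDs seen inside the bounding box during the window.
        return {
            r.device_id
            for r in records
            if t_start <= r.timestamp <= t_end
            and lat_min <= r.lat <= lat_max
            and lon_min <= r.lon <= lon_max
        }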

Privacy advocates say that geofence warrants are overly broad and endanger privacy. Forbes spoke to Jerome Greco, a public defender in the Digital Forensics Unit of the Legal Aid Society, who said that the warrants are dragnets that harm the rights of the innocents whose movements Google so assiduously tracks:

The number of phones identified in that area shows two key points: One, it demonstrates a sample of how many people’s minute-by-minute movements Google is precisely tracking.

Two, it shows the unconstitutional nature of reverse location search warrants because they inherently invade the privacy of numerous people, who everyone agrees are unconnected to the crime being investigated, for the mere possibility that it may help identify a suspect.

Unfortunately, just because you’re innocent doesn’t mean you don’t have anything to worry about. Case in point: in 2018, Phoenix police arrested a warehouse worker, Jorge Molina, in connection with a murder investigation.

They had two pieces of evidence: blurry surveillance footage in which you couldn’t make out the license plate number on the shooter’s car, and data tracking his phone to the murder site, thanks to a reverse location search warrant. He was in jail for nearly a week before he was exonerated. As of July 2019, Molina was threatening to sue police for allegedly using inaccurate Google data.

In fact, Google’s location history data is routinely shared with police. Detectives have used these warrants as they investigate a variety of crimes, including bank robberies, sexual assaults, arsons, murders, and bombings.

How to turn off Google’s location history

If you don’t like the notion of Google being able to track your every movement, you can turn off location history.

To do so, sign into your Google account, click on your profile picture and then the Google Account button. From there, go to Data & personalization and select Pause next to Location History. To turn off location tracking altogether, you also have to do the same for Web & App Activity in the same section.

For its part, Apple told The New York Times that, at least as of April 2018, it didn’t have the ability to perform these kinds of searches.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uI3cDCVNbEk/

Plundervolt – stealing secrets by starving your computer of voltage

The funky vulnerability of the month – what we call a BWAIN, short for Bug With an Impressive Name – is Plundervolt, also known as CVE-2019-11157.

Plundervolt is a slightly ponderous pun on Thunderbolt (a hardware interface that’s had its own share of security scares), and the new vulnerability has its own domain and website, its own HTTPS certificate, its own pirate-themed logo, and a media-friendly strapline:

How a little bit of undervolting can cause a lot of problems

In very greatly simplified terms, the vulnerability relies on the fact that if you run your processor on a voltage that’s a little bit lower than it usually expects, e.g. 0.9V instead of 1.0V, it may carry on working almost as normal, but get some – just some – calculations very slightly wrong.

We didn’t even know that was possible.

We assumed that computer CPUs would be like modern, computer-controlled LED bicycle lights that no longer fade out slowly like the old incandescent days – they just cut out abruptly when the voltage dips below a critical point. (Don’t ask how we know that.)

But the Plundervolt researchers found out that ‘undervolting’ CPUs by just the right amount could indeed put the CPU into a sort of digital twilight zone where it would keep on running yet start to make subtle mistakes.

The undervoltages required varied by CPU type, model number and operating frequency, so they were found by trial and error.

Interestingly, simple CPU operations such as ADD, SUB (subtract) and XOR didn’t weird out: they worked perfectly right up to the point that the CPU froze.

So the researchers had to experiment with some of Intel’s more complex CPU instructions, ones that take longer to run and require more internal algorithmic effort to compute.

At first, they weren’t looking for exploitable bugs, just for what mathematicians call an ‘existence proof’ that the CPU could be drawn into a dreamy state of unreliability.

They started off with the MUL instruction, short for multiply, which can take five times longer than a similar ADD instruction – a fact that won’t surprise you if you ever had to learn ‘long multiplication’ at school.

Don’t worry if you aren’t a programmer – this isn’t a real program, just representative pseudocode – but a code fragment along these lines did the trick:
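
Reconstructed here as Python-flavoured pseudocode from the description that follows – the names are ours and the details are inferred, not copied from the researchers’ code:

    correct = 0xDEADBEEF * 0x1122334455667788    # the product the CPU *should* produce

    def multiply_until_wrong():
        # On a correctly behaving CPU this loop never exits; on an undervolted
        # CPU it eventually computes a bad product and falls through.
        while True:
            answer = 0xDEADBEEF * 0x1122334455667788
            if answer != correct:
                wrong = correct ^ answer         # XOR exposes exactly which bits flipped
                return wrong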

(The numbers 0xDEADBEEF and 0x1122334455667788 were a largely arbitrary choice – programmers love to make words and phrases out of hexadecimal numbers, such as using the value D0CF11E0 to denote Word document files, or CAFEBABE for Java programs.)

The above code really ought to run forever, because it calculates the product of two known values, and stops looping only if the answer isn’t the product of those two numbers.

In other words, if the loop runs to completion and the function exits, then the CPU:

  • has made a mistake,
  • has failed to notice, and
  • has carried on going rather than freezing up.

By the way, the reason that the above function returns the correct answer XORed with the mistake is to flush out the individual bits that were incorrect.

Because X XOR X = 0, any individual bit that is the same in both answer and correct ends up set to zero, with the result that wrong will end up with a 1-bit wherever there was a mistake.

This makes it easy to look for possible patterns in which bits got flipped if the multiplication went haywire.

After a series of tests, dropping the CPU voltage a tiny bit each time, the researchers suddenly started getting the answer 4, showing not only that the answer was wrong, but that the error was predictable – the third bit in the output was flipped, while all the others were correct.
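
As a tiny worked illustration of what a result of 4 means, with the faulty value simulated rather than produced by real undervolting:

    correct = 0xDEADBEEF * 0x1122334455667788
    answer  = correct ^ 0b100          # pretend the CPU flipped bit 2 of the product
    wrong   = correct ^ answer
    print(wrong, bin(wrong))           # -> 4 0b100: only the third bit differs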

Existence proof!

Undervolting the CPU can, indeed, trick it into a torpid state in which it gets internal calculations wrong, but doesn’t realise.

That’s not supposed to happen.

Why bother?

But what use is all this?

In order to ‘undervolt’ the CPU (you can push the operating voltage up or down as much as 1V either way in steps of approximately 1 millivolt), your code needs to be running inside the operating system kernel.

And in every operating system we know of, the kernel code that gives regular programs access to the voltage regulator only lets you call upon it if you are already an administrator (root in Linux/Unix parlance).

In theory, then, you don’t need to use an undervolting exploit to attack another process indirectly because you could simply exploit your administrative superpowers to modify the other process directly.

In short, we’ve just described a method that might, just might, let you use a complex and risky trick to pull off a hack you could do anyway – a bit like shimmying up the outside of your apartment block and picking the lock on your balcony door to get in when you could simply go into the lobby, take the elevator and use your door key.

Bigger aims

The thing is that the researchers behind Plundervolt had designs beyond the root account.

They wanted to see if there was a way to bypass the security of an Intel feature called SGX, short for Software Guard Extensions.

SGX is a feature built into most current Intel processors that lets programmers designate special blocks of memory, called enclaves, that can’t be spied on by any other process – not even if that process has admin rights or is the kernel itself.

The idea is that you can create an enclave, copy in a bunch of code, validate the contents, and then set it going in such a way that once it’s running, no one else – not even you – can monitor the data inside it.

Code inside the enclave memory can read and write both inside and outside the enclave, but code outside the enclave can’t even read inside it.

For cryptographic applications, that means you can pretty much implement a ‘software smart card’, where you can reliably encrypt data using a key that never exists outside the ‘black box’ of the enclave.

If you read the SGX literature you will see regular mention of the phrase “abort page semantics”, which sounds mystifying at first. But it is jargon for “if you try to read enclaved memory from code running outside the enclave, you’ll get 0xFFFF…FFFF in hex”, because all bits in your memory buffer will be set to 1.

What about Rowhammer?

As it happens, there are already electromagnetic tricks, such as rowhammering, that let you mess with other people’s memory when you aren’t supposed to.

Rowhammering involves reading the same memory location over and over so fast and furiously that you generate enough electromagnetic interference to flip bits in other memory cells next door on the silicon chip – even if the bits you flip are already assigned to another process that you aren’t supposed to have access to.

But rowhammering fails against SGX enclaves because the CPU keeps a cryptographic checksum of the enclave memory so that it can detect any unexpected changes, however they’re caused.

In other words, rowhammering gets detected and prevented automatically by SGX, as does any other trick that directly alters memory when it shouldn’t.

And that’s where Plundervolt comes in: sneakily flipping data bits in enclave memory will get spotted, but sneakily flipping data bits in the CPU’s internal arithmetical registers during calculations in the enclave won’t.

So, attackers who can undervolt calculations while trusted code is running inside an enclave may indirectly be able to affect data that gets written to enclave memory, and that sort of change won’t be detected by the CPU.

In short, writing any data to the enclave from the wrong place will be blocked; but writing wrong data from the right place will go unnoticed.

Is it exploitable?

As the researchers quickly noticed, one place where software commonly makes use of the MUL (multiply) instruction, even if you don’t explicitly code a multiplication operation into your program, is when calculating memory addresses.

They found a place where Intel’s own enclave reference code accesses a 33-byte chunk of data stored in what’s known as an array – a table or list of 33-byte items stored one after the other in a contiguous block of memory.

To access a specific item in the array, the code uses compiler-generated instructions (represented here as pseudocode) along these lines:

    offset ← load desired array element number
    size   ← load size of each array element (33 bytes)
    offset ← MULtiply offset by size to convert offset to actual distance in bytes
    base   ← load base address of array
    base   ← ADD offset to base to compute final memory address to use

In plain English, the code starts at the memory address where the contiguous chunk of array data begins and then skips over 33 bytes for every element you aren’t interested in, thus calculating the memory address of the individual array item you want.

As long as the code verifies up front that the relevant array item will lie inside enclave memory, it can assume that the data it fetches can’t have been tampered with, and therefore that it can be trusted.

Why 33?

The researchers had already found by trial and error that they could reliably undervolt multiplication calculations when the multiplier was the number 33, for example by coming across this ‘plundervoltable’ combination, where…

    527670 × 33 = 17,413,110

…came out with a predictable pattern of bitflips that rendered the result incorrectly as:

    527670 × 33 = -519,457,769

In the context of memory address calculations, this class of error meant they could trick code inside the enclave into reading data from outside the enclave, even after the code had checked that the array item being requested was – in theory, at least – inside the secure region.

The code thought it was skipping to data a safe distance ahead in memory, e.g. 17 million bytes forward; but the undervolted multiplication tricked the CPU into reading memory from an unsafe distance backwards in memory, outside the memory region assigned to the enclave, e.g. 519 million bytes backward.
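
Plugging the article’s example numbers into that address calculation shows the effect; the base address below is arbitrary and purely for illustration:

    base     = 0x7F0000000000            # hypothetical start of the in-enclave array
    intended = base + 527_670 * 33       # = base + 17_413_110, safely inside the enclave
    faulty   = base + (-519_457_769)     # the undervolted product, taken as a signed offset
    print(hex(intended), hex(faulty))    # the faulty address lands ~519 million bytes *before* base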

In the example they presented in the Plundervolt paper, the researchers found SGX code that is supposed to look up a cryptographic hash in a trusted list of authorised hashes – a list that is deliberately stored privately inside the enclave so it can’t be tampered with.

By tricking the code into looking outside the enclave, the researchers were able to feed it falsified data, thus getting it to trust a cryptographic hash that was not in the official list.

AES attacked too

There was more, though we shan’t go into too much detail here: as well as attacking the MUL instruction, the researchers were able to undervolt the AESENC instruction, which is a part of Intel’s own, trusted, inside-the-chip-itself implementation of the AES cryptographic cipher.

AESENC isn’t implemented inside the chip just for speed reasons – it’s also meant to improve security by making it harder for an attacker to get any data on the internal state of the algorithm as it runs, which could leak hints about the encryption key in use.

By further wrapping your use of Intel’s AES instructions inside an SGX enclave, you’re supposed to end up not only with a correct and secure implementation of AES, but also with an AES ‘black box’ from which even an attacker with kernel-level access can’t extract the encryption keys you’re using.

But the researchers figured out how to introduce predictable bitflips in the chip’s AES calculations.

By encrypting about 4 billion chosen plaintext values both correctly and incorrectly – essentially using two slightly divergent flavours of AES on identical input – and monitoring the difference between the official AES scrambling and the ‘undervolted’ version, they could reconstruct the key used for the encryption, even though the key itself couldn’t be read out of enclave memory.

(4 billion AES encryption operations sounds like a lot, but on a modern CPU just a few minutes is enough.)

What to do?

Part of the reason for the effort the researchers are now putting into publicising their work – the logo, website, domain name, cool videos, media-friendly content – is that they’ve had to wait six months to take credit for it.

And that’s because they gave Intel plenty of time to investigate it and provide a patch, which is now out. Check Intel’s own security advisory for details.

The patch is a BIOS update that turns off access to the processor instruction used to produce undervoltages, thus stopping any software, including code inside the kernel, from fiddling with your CPU voltage while the system is running.

Further information you may find useful to assess whether this is an issue for you:

  • Not all computers support SGX, even if they have a CPU that can do it. You need a processor with SGX support and a BIOS that lets you turn it on. Most, if not all, Apple Macs, for example, have SGX-ready CPUs but don’t allow SGX to be used. If you don’t have SGX then Plundervolt is moot.
  • Most, if not all, computers with BIOS support for SGX have it off by default. If you haven’t changed your bootup settings, then there’s no SGX for Plundervolt to mess with in the first place and this vulnerability is moot.
  • All computers with BIOS support for SGX allow you to turn it off when you don’t want it. So even if you already enabled it, you can go back and change your mind if you want.

Yes, there’s an irony in neutralising a vulnerability that might leak protected memory by turning off the feature that protects that memory in the first place…

…but if you aren’t planning on using SGX anyway, why have it turned on?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_fEjGFVk1pA/

Chinese e-commerce site LightInTheBox.com bared 1.3TB of server logs, user data and more

Exclusive Infosec researchers have uncovered a data breach affecting 1.3TB of web server log entries held by Chinese e-commerce website LightInTheBox.com.

Noam Rotem and Ran Locar, of VPN comparison site VPNmentor’s research team, uncovered the breach in late November.

The data was “unsecured and unencrypted”, accessible from an ordinary web browser and was being held on an Elasticsearch database, which, as the two noted, “is ordinarily not designed for URL use”.

“The database [we found] was a web server log – a history of page requests and user activity on the site dating from 9th of August 2019 to 11th of October,” said the two researchers in a statement about their findings shared with The Register, adding that it appeared to contain around 1.5bn entries.

The server logs included user email addresses, IP addresses, countries of residence and pages each visitor viewed on LightInTheBox’s website. It also contained data from the company’s subsidiary sites including MiniInTheBox.com.

LightInTheBox is a typical online retailer selling retail goods such as gadgets, clothing and small accessories. The site has very few clues about its Chinese origins other than sponsored links at the bottom of its homepage with a distinctly Chinese theme.

Screenshot from bottom of LightInTheBox.com homepage

Code snippets shared with The Register showed precisely how users’ email addresses were exposed.

How LightInTheBox.com exposed customers' details. Pic from VPNmentor

LightInTheBox itself is said to have around 12m monthly visitors to its site.

Opining that the breach is a big problem for LightInTheBox, Rotem and Locar said: “This data breach represents a major lapse in LightInTheBox’s data security. While this data leak doesn’t expose critical user data, some basic security measures were not taken. This is a time of the year with a lot of online shopping: Black Friday, Cyber Monday, Christmas. Even a large leak with no user Personally Identifiable Information data could be a threat to both the company and its customers.”

With access to users’ email addresses and precise browsing history, a malicious group could easily craft targeted phishing emails pretending to be, for example, followup messages from LightInTheBox itself offering discounts on previously-viewed products.

Buried in LightInTheBox’s online privacy policy is the line: “While our business is designed with safeguarding your personal information in mind, please remember that 100% security does not presently exist anywhere, online or offline.”

Too true. Full details of vpnMentor’s findings will be on their website.

Although LightInTheBox did not respond to Rotem and Locar’s enquiries, the breach was closed off shortly after the New York Stock Exchange-listed Chinese company was told about it. El Reg was unable to find a method of contacting LITB other than by registering a customer account and opening a support ticket. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/16/lightinthebox_data_breach_1_5bn_customer_records/

Your workmates might still be reading that ‘unshared’ Slack document

Security researchers have uncovered a flaw in messaging app Slack that allows a file shared in a private channel to be viewed by anyone in that workspace – even guests.

Folk from Israeli cloud security outfit Polyrize uncovered the vuln, which they say exposes files shared through the IRC-for-millennials application, which boasts millions of users.

“If you share your file once, even if you later unshare it, that file can still be exposed to other people, without any indication to you,” said Polyrize, adding that the vuln includes the viewing of files through API queries.

It works through Slack’s implementation of file-sharing. Posts on a Slack workspace can be in a public channel, or conversation, where anyone with an account on that workspace can join and view messages and files, or a private conversation (invite-only). Files are shared with conversations which can have one or more participants; if you’re in a conversation where a private file is shared, you can view it. Should you leave that private conversation, you can’t view files from within it.

That’s how it’s meant to work, anyway. According to Polyrize, however, if someone in a private conversation shares a file from it to a different conversation, that bypasses the controls.

“Due to the fact that Slack users can only be aware of private conversations that they are members of, file owners have no way to tell that their files were shared in other private conversations,” Polyrize told The Register.

A YouTube video accompanying the original article demonstrates the vulnerability, with the screen split in half vertically.

Polyrize told The Register that the vuln can be verified not only through the Slack GUI (graphical user interface) but also by making API calls to Slack for a file shared, re-shared and de-shared in this way and inspecting the results.
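
A check of that sort could look roughly like the sketch below, which calls Slack’s public files.info Web API method; the token and file ID are placeholders, and the response fields consulted here are an assumption about what a still-visible Snippet or Post would report:

    import os

    import requests

    # Hedged sketch: ask Slack about a file you arguably should no longer see.
    # SLACK_TOKEN is a user token; FILE_ID identifies the Snippet or Post.
    resp = requests.get(
        "https://slack.com/api/files.info",
        params={"file": os.environ["FILE_ID"]},
        headers={"Authorization": f"Bearer {os.environ['SLACK_TOKEN']}"},
    )
    data = resp.json()
    if data.get("ok"):
        shared = data["file"].get("channels", []) + data["file"].get("groups", [])
        print("Still visible; shared in:", shared)   # field names are an assumption
    else:
        print("Not accessible:", data.get("error"))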

A Slack spokesman told The Register: “We understand how important file security is for Slack’s customers. The behavior described only applies to two types of files in Slack, Snippets and Posts (two options for sharing and collaborating around longer form content in Slack). Most files shared in Slack are not these types of files.”

The spokesman continued:

When you share Snippets and Posts in private channels or messages, only the included people can see those Snippets and Posts or find them in search. When you share Snippets and Posts in public channels, anyone in the workspace can see those Snippets and Posts or find them in search. This is intended functionality.

We appreciate that the presence of the unshare button is confusing since we changed the way commenting works for Snippets and Posts. We are grateful to Polyrize for bringing this usability issue to our attention. We are planning to correct the interface but the security model for sharing Snippets and Posts on Slack will continue to operate as it does today.

Duncan Brown, chief security strategist of infosec biz Forcepoint, told The Register that this is an all too familiar refrain: “This vulnerability in Slack is an another example of the ways malicious actors can steal sensitive data. Companies often have a very poor visibility of how their sensitive data is being stored, used and manipulated. With the adoption of multi-cloud services of all kinds, we’ve seen this data sprawl and confusion only increase.” He added: “Organisations need to make sure they have a strong visibility of the data they have, and where it’s going, at all times. Looking at activity at the level of individual users is one way to do that. While this particular vulnerability is unfortunate, it’s more a symptom of the wider issue of data governance.”

As described, working around the vulnerability is fairly easy: don’t use Slack to share sensitive files. If you must use Slack to do that thing, only share files with people whom you trust not to reshare them into different conversations. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/16/slack_filesharing_vulnerability_post_sharing/

Why Enterprises Buy Cybersecurity ‘Ferraris’

You wouldn’t buy an expensive sports car if you couldn’t use it properly. So, why make a pricey security investment without knowing whether it will fit into your ecosystem?

Throughout my career, I’ve taken part in cybersecurity investigations in many different Fortune 500 companies. Too often, I see organizations that own advanced cybersecurity technologies that are being utilized for only a fraction of what they’re capable of doing. Often, these are good products, but the buyers either don’t know the full extent of what they’re buying or don’t fully understand the workload required before and after implementation. It’s like buying a Ferrari and not knowing how to drive.

When acquiring big-ticket cybersecurity solutions, especially those that have hardware attached, buyers must remember that these solutions require a lot of coordination and advanced skills to utilize them correctly. Deploying a sophisticated cybersecurity solution doesn’t take place in a matter of days. You must build out advanced use cases, baseline the technology in your environment, then update and configure it to the risks your business is most likely to face. It’s a process that takes several weeks or even months. And much like when considering a high-end vehicle, a person shouldn’t look at only the sticker price. Organizations must also account for the cost and time associated with ongoing maintenance in their specific environment.

You must also assess the skills and expertise of your team members to determine whether they have what’s needed to configure the solution, not only to get it operational but to optimize it and use it to its full capabilities. It is no small undertaking, and even veteran security team members may quickly find themselves overwhelmed if they have never worked with a similar technology or have never been involved in a deployment project of that magnitude.

I see this often with cybersecurity technologies like endpoint detection and response (EDR) solutions, behavioral analytics, deception technologies, and artificial intelligence (AI)-driven solutions. Many large enterprises have EDR solutions, but very few are actually doing managed detection and response. They’re simply collecting events on the EDR and bypassing deeper investigations or threat analysis necessary for responding quickly to incidents.

The descriptions of a technology’s ability to detect, contain, and eradicate threats can sound impressive, and it can be easy for security professionals to be moved to buy a solution because of its capabilities. But if your team doesn’t have the resources to maintain and drive it effectively, there is no sense buying it in the first place. It will just end up as wasted budget.

Develop a Security Maturity Framework — and Stick to It
The companies that I’ve seen fall victim to this common problem typically did not have a full business justification for buying that cybersecurity solution. They may have seen a need, or they may have been enticed by the idea that a particular solution would give them immediate visibility, but they never took it further and asked themselves how that product would fit into their security ecosystem. Visibility only goes so far if you don’t have the capability — either on your own team or through a partner — to review that visibility and take action.

To get the most out of cybersecurity investments, organizations should begin by creating a security maturity framework. This framework will help your organization assess where it stands today in its security capabilities, identify weaknesses and strengths, and provide a path forward for developing a more advanced cybersecurity program. Begin by assessing your organization’s risk tolerance. The lower the risk tolerance, the higher your security maturity will need to be.

Next, evaluate your people, processes, and technologies by comparing your program with the requirements of proven industry frameworks such as the NIST Cybersecurity Framework and the Cybersecurity Capability Maturity Model (C2M2). The latter was developed by the US government for use in the energy sector, but the basic model can be applied to any sector.

Once you’ve built a security maturity framework that extends three to five years in the future, you will be able to determine where you have gaps or areas of risk, and then be able to prioritize technologies or services to fill those gaps. The security maturity framework helps an organization focus on the technologies or products that fit its plan and not get distracted or tempted into buying a technology solution because it’s new and exciting.  

Assess Your Team’s Ability to Drive
After creating a security maturity framework, assess your team’s capability to manage and continually optimize the technology products in your plan. Ask yourself whether your team can take on this task or whether it would be more effective to garner support using outside resources. Ask yourself whether the newly acquired capabilities are now core to operations and whether it’s important to retain expertise specific to those capabilities. If so, be prepared to invest in training and continued education to grow the skill sets of your current and future team members.

With every cybersecurity product purchase, you should be conducting a full skills and services assessment. No exceptions. Only then will you be able to ensure you are optimizing and maximizing leading-edge cybersecurity technologies, steering your cybersecurity program straight down the fast lane to its full potential.

Chris Schueler is senior vice president of managed security services at Trustwave, where he is responsible for managed security services, the global network of Trustwave Advanced Security Operations Centers and Trustwave SpiderLabs Incident Response.

Article source: https://www.darkreading.com/vulnerabilities---threats/why-enterprises-buy-cybersecurity-ferraris-/a/d-id/1336582?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

VMware warning, OpenBSD gimme-root hole again, telco hit with GDPR fine, Ring camera hijackings, and more

Roundup Here’s your Register security roundup of infosec news about stuff that’s unfit for production but fit for print.

Yet another OpenBSD bug advisory

Another week, another OpenBSD patch. You’re not having deja vu.

This time, it’s CVE-2019-19726, a local elevation of privilege flaw that could let users grant themselves root clearance.

The bug was discovered by researchers at Qualys, and has been patched prior to public disclosure.

“We discovered a Local Privilege Escalation in OpenBSD’s dynamic loader (ld.so),” the report reads, “this vulnerability is exploitable in the default installation (via the set-user-ID executable chpass or passwd) and yields full root privileges.”

In some good news for OpenBSD, though, the necessary mechanisms to restrict Firefox’s access to the underlying system, in case it gets compromised, have been added, a la Chromium on the free software platform.

VMware issues advisory for critical ESXi bug

Admins running VMware ESXi will want to make sure they have updated their software to protect against this OpenSLP remote code execution vulnerability.

The flaw, caused by a heap overwrite error, would potentially allow an attacker to take over the underlying host. Both ESXi and Horizon DaaS should be updated to protect against attacks.

Ring speaks out on camera hacks

Following a series of reports of customers having their Ring cameras attacked by credential stuffing, the Amazon-owned biz has issued a guide to help punters keep their gear safe.

As Ring notes, various frightening camera takeovers, in which hackers compromised the internet-connected gear and yelled at victims through the gadgets in their own homes seemingly for a sick podcast, were not the result of a network or software security breach on its end, but rather customers re-using login details that had been stolen from other sites.

In other words, people were using the same username and passwords for their home Ring kit as profiles on websites that had been hacked, allowing miscreants to get their hands on credentials and log into the Ring boxes over the ‘net and cause trouble.

“Customer trust is important to us, and we take the security of our devices and services extremely seriously,” Ring says. “As a precaution, we highly encourage all Ring users to follow security best practices to ensure your Ring account stays secure.”

These steps include enabling two-factor authentication, picking unique passwords, and adding shared users rather than giving out your password to others.

NordVPN opens bug bounty program

Following a flood of bad press for its security policies, NordVPN is putting the final touches on its infosec overhaul with the opening of a bug bounty program with HackerOne.

Researchers who uncover and report security flaws in the NordVPN software or network will be eligible to collect payouts ranging from $100 to $5,000.

“NordVPN accepts findings related to its applications, servers, backend services, website, and more,” the VPN provider says. “Bug bounty hunters do not need to worry about possible legal action against them as long as they keep their penetration testing ethical.”

Coffee company brews up MageCart infection alert

Bad news for customers of gourmet cup of Joe shippers CoffeeAM.

The online store for the caffeine infusion service was host to a MageCart infection that sipped customer payment card details for more than eight months.

Unfortunately, it looks like the script was able to collect full payment card and account information, including card numbers, security codes, expiration dates, passwords, contact details, and shipping address.

Customers who were exposed will be eligible to get credit monitoring and insurance against identity theft. It would also be a good idea to get a new bank card and keep a close eye on your statements for a while.

FBI warning over IoT attacks

The FBI’s Portland office has issued an alert to users on the dangers of IoT malware. There was no one incident that triggered the alert, but the Feds are offering some tips and best practices.

“Unsecured devices can allow hackers a path into your router, giving the bad guy access to everything else on your home network that you thought was secure,” the FBI warned. “Are private pictures and passwords safely stored on your computer? Don’t be so sure.”

The tips range from basic stuff everyone should know, like changing default passwords and picking unique logins, to more advanced things like creating a separate network for your IoT devices and your personal computing gear.

German telco hit with fine for lax login protections

A European cable internet and cellular telco has been fined €9.6m ($10.5m, £8m) for its overly accommodating customer service.

German giant 1&1 Telecommunications was issued the penalty after authorities in Germany found its support agents were not properly verifying the identities of callers before giving them access to account information.

This is a major security concern, particularly with the rise in SIM-jacking attacks that rely on lax identity verification policies to take over mobile phone accounts. As such, it was ruled that 1&1 had violated data privacy laws.

Amazon Blink cameras found to have command injection flaws

Hackers with Tenable have found a trio of security holes in Amazon’s Blink cameras.

The three flaws range from physical-access vulnerabilities (easily accessible command ports) to man-in-the-middle flaws and network vulnerabilities that would let hackers on the local Wi-Fi send arbitrary commands.

“In short, Tenable Research discovered three-ish vectors of attack that allow a full compromise of the sync module, which could potentially allow attackers to take further action against an end user’s entire account and associated cameras,” the firm writes.

Sorry to drone on but… a database of drone flights, including those of police-owned drones, in the US was inadvertently left facing the public internet. The system was removed from view after it was flagged up to its operator, DroneSense, by a security researcher.

US streamers take guilty plea

Two men from the US have pleaded guilty to creating and running separate illegal streaming services.

Darryl “djppimp” Polo, 36, admitted to five counts of copyright infringement and money laundering as the admin of iStreamitall, a TV and movie streaming site. Meanwhile, Luis Villarino, 40, took a guilty plea to one count of conspiracy to commit copyright infringement. He was among the team that created illegal streaming site Jetflicks.

Both are due to be sentenced next March. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/16/roundup_dec13/

Valuable personal info leaks from Facebook – not Zuck selling it, unencrypted hard drives of staff data stolen

Facebook has lost a copy of the personal details of more than 20,000 of its employees after hard drives containing unencrypted payroll information were stolen from an employee’s car.

The antisocial network said it is in the process of informing those who were exposed, though so far there is no indication of the purloined details being used for fraud, it is claimed.

“We worked with law enforcement as they investigated a recent car break-in and theft of an employee’s bag containing company equipment with employee payroll information stored on it,” a Facebook spokesperson told The Register. “We have seen no evidence of abuse and believe this was a smash and grab crime rather than an attempt to steal employee information.”

“Out of an abundance of caution, we have notified the current and former employees whose information we believe was stored on the equipment – people who were on our US payroll in 2018 – and are offering them free identity theft and credit monitoring services. This theft impacts current and former Facebook employees only and no Facebook user data was involved.”

A report from Bloomberg today cites an internal email explaining that last month an employee in the payroll department had their car broken into, and among the items stolen were unencrypted hard drives containing corporate records. The report also notes that the worker was not authorized to have the drives in their car, and has been disciplined.

The lifted records were said to include employee names, bank account numbers, and partial social security numbers.

So far, Facebook has yet to file a data breach notification with the state of California, as is required by law.

This is certainly a unique situation for Facebook, as the data-slurping biz usually finds itself on the other side of egregious violations of personal privacy. Facebook has made something of a habit of letting outside developers play fast and loose with user profile information. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/13/facebook_data_loss/