STE WILLIAMS

How Can I Help My Team Manage Security Alerts?

Smart prioritization, great staff and supportive tools are a good start.

Question: We can’t handle all these security alerts! What can I do?

Chris Morales, head of security analytics at Vectra — Security operations must focus on three key areas: detection, response, and prediction.

Security analysts must continuously hunt for attackers already inside the network. They need to be able to respond immediately, and correctly, to the threats that can cause real damage, since not all attacks are the same. Finally, an organization needs to be equipped to learn from attacks, understand its own attack surface and exposure, know the types of attacks it is at risk from, and then combine all this knowledge to predict where an attack could happen next.

In short: where is the exposure, what is the attacker’s motive, and where should the team focus? Doing all of the above consistently every day is clearly not easy. Doing it quickly while staying consistent enough to stay ahead of attackers is borderline crazy.

Enterprises have three choices here. Hire lots of highly skilled people able to perform security processes consistently at speed day in and day out, use AI to augment existing analysts to be more effective and automation functions to respond in real time, or give up. I believe the most achievable option is to augment security analysts with AI to scale security operations effectively.
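The “smart prioritization” part of the advice above can be made concrete with a simple triage score that ranks alerts before an analyst ever touches the queue. This is a minimal sketch; the signals and the multiplicative weighting are hypothetical, not Vectra’s actual model:

```python
# Minimal alert-triage sketch: combine a few signals into one score and
# work the queue top-down. All fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    severity: int            # 1 (low) .. 5 (critical), from the detector
    asset_criticality: int   # 1 .. 5, how important the affected host is
    confidence: float        # 0.0 .. 1.0, detector's certainty it's real

def triage_score(alert: Alert) -> float:
    # Multiplicative, so a low value on any axis drags the score down.
    return alert.severity * alert.asset_criticality * alert.confidence

def prioritise(alerts: list) -> list:
    return sorted(alerts, key=triage_score, reverse=True)
```

Real products weight many more signals (threat certainty, privilege of the account involved, peer behaviour), but even a crude composite score beats working alerts in arrival order.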

The Edge is Dark Reading’s home for features, threat data and in-depth perspectives on cybersecurity.

Article source: https://www.darkreading.com/how-can-i-help-my-team-manage-security-alerts/d/d-id/1336300?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Warrant lets police search online DNA database

In May, genealogy site GEDmatch, stung by a revolt over its approach to DNA privacy, changed its privacy policy.

The new policy required users who upload their DNA to explicitly opt in – or out – of having their profiles used in police investigations. According to the New York Times, GEDmatch co-founder Curtis Rogers said that as of last week, only a fraction – 185,000 – of the site’s 1.3 million users had opted in.

Its privacy switch sharply disappointed many in law enforcement. Before the change, GEDmatch had become a favorite for investigators, not because it’s the biggest database – it’s far overshadowed by Ancestry.com and 23andMe – but because it’s been the most open.

One of the disappointed was Detective Michael Fields of the Orlando Police Department in Florida. He’d successfully used GEDmatch to identify a suspect in the 2001 murder of a 25-year-old woman that he’d spent six years trying to solve. So, because Fields didn’t want to stop using DNA records – he was searching for suspects in the case of a serial rapist who attacked a number of women decades ago – he took his disappointment to the court.

As Fields reportedly announced at a police convention last week, he won what he was after: a warrant to search GEDmatch’s full database. As the Times reports, he’s now working with the forensic consulting firm Parabon to try to find a DNA match that will lead him to that rapist.

Legal experts told the Times that overriding a site’s policies in this way is a “huge game changer” for genetic privacy. The newspaper quoted Erin Murphy, a law professor at New York University:

The company made a decision to keep law enforcement out, and that’s been overridden by a court. It’s a signal that no genetic information can be safe.

Everybody wants to see that warrant

Fields described his methods at the International Association of Chiefs of Police conference in Chicago last week. In July, he’d asked a Florida judge to approve a warrant that would let him skirt GEDmatch’s user privacy settings and get into its full database – one that the Times says has DNA records of 1.2 million users.

Logan Koepke, a policy analyst at Upturn, a nonprofit in Washington that studies how technology affects social issues, was in the audience. He told the Times that after his talk, Fields was approached by a number of detectives and officers who wanted to get a copy of the warrant.

They don’t need your spit to DNA-trace you

GEDmatch is the same database that was used to identify suspected serial killer Joseph James DeAngelo, the alleged Golden State Killer, in 2018. After DeAngelo’s arrest, law enforcement agencies started using GEDmatch to investigate violent crimes, making it what’s been called the “de facto DNA and genealogy database” for all of law enforcement.

As of April 2019, GEDmatch had been used in at least 59 cold case arrests and in 11 Jane and John Doe identifications across the US.

Policy experts say they’ll be keeping a close eye on how Fields’ successful pursuit of a warrant may embolden other law enforcement agencies to try to penetrate DNA databases and their privacy policies with court-ordered warrants.

Will there be backlash over the legal spurning of privacy preferences? Will it be enough to kill the goose that laid the golden egg? If people have no real say in whether their family trees can be accessed by police, will they refrain from uploading their genetic data?

It’s not just genealogy buffs and people searching for insight into what their DNA may tell them about their medical makeup – for example, whether they may have a gene that predisposes them to breast cancer – that are affected by the privacy implications of DNA profiling.

We don’t have to spit into a tube and submit it to a genealogy database to have it made public. Because we share much of our DNA with relatives, all it takes is one of them to submit their DNA, thus making much of our own genetic information available to the police without our knowledge or consent.

The more people who submit DNA samples to these databases, the more likely it is that any of us can be identified. According to Columbia University research published in October 2018, at the time, the US was on track to have so much DNA data on these databases that 60% of searches for individuals of European descent would result in a third cousin or closer match, which can allow their identification using demographic identifiers.

As far as Detective Fields is concerned, he’s hoping that he does get a chance to go after the motherlode: Ancestry.com, with its 15 million person database, and 23andMe, with 10 million records. The Times quotes him:

You would see hundreds and hundreds of unsolved crimes solved overnight. I hope I get a case where I get to try.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nR9RfKVhMTA/

Facebook confesses 100 devs may have accessed leaked Groups data

Even after Facebook locked down its Groups API in April 2018 to keep developers from accessing user data – including the names and profile pictures of people in specific, sometimes secret, groups – roughly 100 developers might still have gotten at that user information, the platform said on Tuesday.

Konstantinos Papamiltiadis, Facebook’s director of platform partnerships, said in a News for Developers post that this access had inappropriately been left open and that some developers may have been accessing the data for over a year. “At least” 11 partners accessed group members’ information in the last 60 days, he said.

When it made the change in April 2018, Facebook explained that at the time, apps needed the permission of a group admin or member to access group content for closed groups, and the permission of an admin for secret groups.

The apps help admins do things like easily post and respond to content in their groups. Facebook said that it wanted to better protect information about group members and conversations, so it changed things around: with the newly locked-down Groups application programming interface (API), any third-party app would need approval from Facebook and an admin to ensure that the apps were actually benefitting the group.

It shut down the apps’ ability to access the member list of a group and removed personal information, such as names and profile photos, attached to posts or comments that the approved apps could access. After April 2018, if an admin authorized an app’s access, it would only get information such as the group’s name, the number of users, and the content of posts.

An app could still access information such as name and profile picture, but only if group members opted in to that data sharing.
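The lockdown described above amounts to field-level filtering on group-member records: a few fields are always available to approved apps, and personal fields only appear if the member opted in. A minimal sketch of that behaviour, with illustrative field names rather than the actual Graph API schema:

```python
# Sketch of the post-April-2018 behaviour described above: personal fields
# on a group-member record are stripped unless the member opted in.
# Field names here are illustrative, not the real Graph API schema.

ALWAYS_ALLOWED = {"group_name", "member_count", "post_content"}
OPT_IN_ONLY = {"name", "profile_picture"}

def filter_member_record(record: dict, opted_in: bool) -> dict:
    # Personal fields join the allowed set only with an explicit opt-in.
    allowed = ALWAYS_ALLOWED | (OPT_IN_ONLY if opted_in else set())
    return {k: v for k, v in record.items() if k in allowed}
```

The point of the 2018 change was that the restrictive branch became the default; the bug Facebook disclosed is, in effect, that some apps were still being served the permissive one.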

Well, anyway, that’s the way it should have been.

During an ongoing review, Facebook found that some apps were still getting information such as group members’ names and profile pictures.

Most of the apps are for social media management and video streaming: they’re designed to help group admins manage their groups and to do things like help members share videos to their groups. Facebook gave the example of a business that manages a large community whose members span multiple groups: such a business could use a social media management app to provide customer service, including customized responses, at scale.

Papamiltiadis said that the number of developers that actually accessed the supposedly off-limits data is likely to be fewer than 100, and that the number has likely decreased over time.

Facebook hasn’t seen any evidence that the developers have abused their data access. Still, it’s asking them to delete any member data they may have retained and plans to conduct audits to confirm that it’s been scrubbed.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/R8nvSYc8u9M/

Pilot presses the wrong button, triggers airport hostage alarm

There’s a much-quoted, if often inexactly remembered, exchange in The Hitchhiker’s Guide to the Galaxy that goes like this:

ARTHUR: This is my idea of a spaceship! 
        All gleaming white, flashing lights, everything. 
        What happens if I press this button?

FORD: I wouldn’t...

[ARTHUR presses button]

ARTHUR: Oh!

FORD: What happened?

ARTHUR: A sign lit up saying, 
        "Please do not press this button again."

We’ve all been there – not in a spaceship, of course, but in front of a button that made us wonder.

And we’ve all been faced with the unexpected after-effects of pressing or clicking on one of those “she’ll be right” options.

The washing machine that was working fine but now won’t even open the door to let us extract the half-washed clothes; the computer that won’t turn on at all any more, let alone reboot; the hotel room at an unbelievably low online price that came back at $375 a night after we refreshed the page.

So spare a thought for the Air Europa crew member who pressed the wrong button, metaphorically at least, while getting ready to depart on a flight from Amsterdam to Madrid yesterday afternoon.

We don’t know what they hoped the metaphorical button was for – play a welcome message, perhaps; thank the eagerly-waiting passengers for not being selfish about the overhead locker space (we can but dream!); remind everyone who’d forgotten to buy gifts for their children about the amazingly expensive range of toys available for in-flight purchase; explain the byzantine protocol for using your oxygen mask…

…but that’s not the button they actually pressed.

Instead, someone – apparently the pilot – activated the secret alert that quietly sent off a message along the lines of:

Dear Schiphol Airport,

Don't look now and give the game away, 
but we have a hostage situation here on board.

Your earliest assistance would be greatly appreciated.

Sincerely yours,

The Crew.

The good news is that there wasn’t a problem and no one was threatened.

The even better news is that even though the mistake caused some disruption, with news reports saying that some parts of the airport were evacuated because of precautions that turned out to be unnecessary, there don’t seem to have been any damaging or lasting side-effects.

False alarms, obviously, are best avoided, not least because they breed complacency, but a system that can take even disastrous-sounding news in its stride without making the cure worse than the disease is to be applauded.

This isn’t the first false attack alert in recent times – in the past two years we’ve written variously about:

  • An incoming missile alert in Hawaii. In this case, the blame was placed on a computer system that put the [Send out a real alert] button – which you’d hope never to need – and the [Test the alert system] button – one that you’d hope would be used regularly and frequently – confusingly close together. Apparently, the button to [Cancel an alert] was locked down against misuse to the point that the person who was able to activate the fake alert was not trusted to cancel it. This led to a 38-minute delay in rescinding the bogus warning.
  • A warning in Japan saying, “North Korea Missile Alert. Take Cover.” The warning was followed five minutes later with the brief but reassuring words, “Mistake. There is no alert.”
  • A hacked Nest security camera that broadcast a fake nuclear attack warning to a Californian couple. After a few minutes of panic, during which their 8-year-old son thoughtfully but ineffectively took cover under a rug, the parents happily realised that, tellingly, play had not stopped in the live football game on TV.
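The Hawaii mix-up above suggests an obvious design remedy: never let a test alert and a live alert share the same gesture, and gate the live path behind an explicit, distinct confirmation. A minimal sketch (the confirmation phrase and function names are hypothetical):

```python
# Sketch: a live alert must never be sendable by the same gesture as a
# test alert. The confirmation phrase and names are hypothetical.

def send_alert(channel, message, *, live, confirmation=None):
    """channel is any callable that accepts the final message text."""
    if live:
        # The live path demands a deliberate, typed confirmation.
        if confirmation != "CONFIRM LIVE ALERT":
            raise PermissionError("live alert requires explicit confirmation")
        channel(message)
    else:
        # Test messages are always visibly marked as such.
        channel(f"[TEST ONLY] {message}")
```

The same separation argues for letting whoever can send an alert also cancel it: Hawaii’s 38-minute delay came from locking the cancel path down tighter than the send path.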

The first two fake alerts above, like the Air Europa incident, were honest mistakes, and you can imagine how they could be made even by someone with everyone’s very best interests at heart.

But the last case boggles the mind – of all the pranks that a hacker could play on a family whose home network they’d broken into, simulating a nuclear attack warning seems peculiarly and horribly heartless.

That’s one of the reasons why hacking into other people’s networks is illegal even if you didn’t intend to cause trouble or to commit any further crimes.

You weren’t invited; you shouldn’t be there; it isn’t funny; and there’s too much at stake to treat any sort of computer break-in as a mere prank.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PvuU6HZFtWo/

Linux users warned to update libarchive to beat flaw

Every now and again, a security vulnerability is discovered in a program with little fanfare, despite the fact that it’s buried in plain sight inside software lots of people depend on.

A good example is libarchive, which has a flaw discovered by Google researchers in May using the ClusterFuzz and OSS-Fuzz automated ‘fuzzing’ tools, and fixed by libarchive’s maintainers on 12 June in version 3.4.0.

Libarchive, for those not familiar with it, is a compression and archiving library originally developed for FreeBSD that has achieved widespread popularity because it functions like a do-everything compressed archive handler supporting file and compression formats including ZIP, gzip, tar, uuencode, 7z, Microsoft CAB, ISO9660 (CD images) and many more.

It’s also used by Debian, Ubuntu, Gentoo, Arch Linux, and Chrome OS on Chromebooks, as well as tools such as the Samba Linux-Windows interoperability suite, all of which are now receiving the June patch.

It’s even part of Apple’s macOS and Microsoft’s Windows 10, although neither are thought to be affected by the vulnerability.

The bug is identified as CVE-2019-18408, a high-priority ‘use-after-free’ flaw triggered when handling a failed archive.

No real-world exploits have been detected, but if one existed it would use a malicious archive to induce a denial-of-service state or arbitrary code execution.

This sets a low bar for an attacker, which earns the bug a CVSS rating of 7.5. However, the real nuisance here is simply the sheer volume of software using the library, all of which must now be patched.

Given that Google discovered the issue, we suspect the Chrome OS will have quietly been patched over the summer but that still leaves Debian, Ubuntu and many others to get busy.

Given the range of software using libarchive, there’s a lot for attackers to aim at if there are any laggards.

It’s also not the first security issue libarchive has suffered in recent times. A similar vulnerability cropped up in 2016 that led to CVE-2016-1541.

As with that bug, there are no short-term mitigations – so the answer is to update as soon as possible.
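If you want to check whether a system’s libarchive predates the fixed release, comparing the version string against 3.4.0 is enough for a first pass. A minimal sketch, assuming you can obtain the version string (e.g. from your package manager); it deliberately ignores vendor backports, which can make an older-numbered package safe:

```python
# Sketch: flag a libarchive version string older than the fixed 3.4.0.
# Naive parsing; a distro backport may fix the bug without bumping the
# version, so treat a "True" here as "go check", not "vulnerable".
import re

def needs_libarchive_patch(version: str) -> bool:
    # Keep only the leading digits of each dotted component, so strings
    # like "3.3.3-1ubuntu2" still parse.
    nums = [int(m.group()) for m in
            (re.match(r"\d+", p) for p in version.split(".")[:3]) if m]
    parts = tuple(nums) + (0,) * (3 - len(nums))   # pad "3.4" to (3, 4, 0)
    return parts < (3, 4, 0)
```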

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/r2L9Nrq5wnQ/

WordPress sites hit by malvertising

An old piece of malware is storming the WordPress community, enabling its perpetrators to take control of sites and inject code of their choosing.

According to WordPress security company Wordfence, which published a detailed white paper on the malware earlier this week, WP-VCD isn’t a new piece of malware. It dates back to February 2017, but it has recently become even more successful. The company says that it has topped its list of WordPress malware infections since August this year. New features have been added to the malware, but its core functions have remained the same.

The malware spreads through pirated versions of WordPress themes and plugins that the attackers distribute through a network of rogue sites.

If administrators looking for free WordPress functionality download these assets and use them in their own WordPress sites, then they’ve essentially infected their own servers.

This is an ingenious attack vector because the criminals distributing the plugins don’t have to worry about finding new exploits in WordPress code or hacking legitimate extensions. Instead, as Wordfence explains, the crooks are exploiting human greed:

The campaign’s distribution doesn’t rely on exploiting new software vulnerabilities or cracking login credentials, it simply relies on WordPress site owners seeking free access to paid software.

Once it has infected one site, the malware installs a backdoor for its operators and communicates with its command-and-control (C2) server before spreading to other sites hosted on the same infrastructure. Finally, it removes the malicious code from the installed plugin to cover its tracks.

The backdoor lets the attackers update the site with new malicious code, which makes money for its criminal peddlers in two ways. First, it uses search engine poisoning techniques to manipulate search results and lure unsuspecting users to malicious sites.

Second, it pushes malicious adverts (malvertising) into the pages that victims visit, enabling the attackers either to inject rogue JavaScript into their browsers, or to redirect them to other websites.

Why has the WP-VCD WordPress malware been so effective? Wordfence explains that its attackers can use infected sites to propagate their malware:

Malvertising code is deployed to generate ad revenue from infected sites, and if the influx of new WP-VCD infections slows down, the attacker can deploy [search poisoning] code to drive up search engine traffic to their distribution sites and attract new victims.

The WP-VCD malware is tricky to clean because it injects malicious code into other files on the system, and keeps an eye on infected files to reinfect them automatically if the admin tries to clean them up.

What to do?

Naked Security’s plugin advice for WordPress administrators is:

  • Minimise the number of plugins you have. Always remove plugins if you aren’t using them any more. Keep your attack surface area as small as you can.
  • Keep your plugins up to date. Blogging software such as WordPress can keep itself updated, but you need to keep track of the plugins yourself.
  • Get rid of plugins that aren’t getting any more love and attention from their developers. Don’t stick with ‘abandonware’ plugins, because they’ll never get security fixes.
  • Learn what to look for in your logs. Know where to go to look for a record of what your web server, your blogging software and your plugins have been up to. Attacks often stand out clearly and early if you know what to look for, and if you do so regularly.
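Beyond log-watching, a quick way to spot gross infections is to sweep plugin files for patterns commonly seen in injected PHP. This is a rough sketch using generic indicators, not actual WP-VCD signatures, and the absence of a hit proves nothing:

```python
# Sketch: flag plugin files containing patterns often seen in injected
# PHP malware. Generic indicators only, not WP-VCD signatures; a clean
# result does not mean a clean site.
import os
import re

SUSPICIOUS = [
    re.compile(rb"eval\s*\(\s*base64_decode"),  # packed/obfuscated payloads
    re.compile(rb"eval\s*\(\s*gzinflate"),
]

def scan_plugins(plugin_dir: str) -> list:
    hits = []
    for root, _dirs, files in os.walk(plugin_dir):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(root, name)
            with open(path, "rb") as fh:
                data = fh.read()
            if any(p.search(data) for p in SUSPICIOUS):
                hits.append(path)
    return hits
```

A scanner like this is a tripwire, not a cleaner: because WP-VCD reinfects files it is watching, removal needs to happen on all infected files at once, which is a job for a proper security plugin or a rebuild from known-good sources.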

Oh, and don’t steal software.

Technically, there’s no reason why pirating software should be more dangerous than acquiring it lawfully – an exact copy is, after all, an exact copy. But the shady nature of rogue software download sites means that the only thing you can be sure of is that you’re dealing with crooks.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/G42iCdpslDg/

Facebook scam steals famous faces and BBC branding

A brand-thieving email scam that first showed up in January 2019 has resurfaced…

…this time on Facebook.

We received this one from Naked Security reader Rajan Sanhotra who urged us to warn other people, given the high-profile names and brands that were fraudulently exploited in the scam.

The stolen images and logos used in the attack make what a marketing expert would call an enticing “call to action”, with no SHOUTING CAPITAL LETTERS, no obvious mis-spellings (other than the word “I” written in lower case), no grammatical errors, and no REPEATED EXCLAMATION POINTS!!!

Instead of the old-school giveaways, you’ll see an unexceptionable-looking sponsored post on your Facebook timeline, like this:

Even if you’re not from Europe, or not interested in sport, the article looks both harmless and at least vaguely interesting, featuring as it does the world-famous football manager Sir Alex Ferguson.

Arguably the best sports team manager ever, winner of the most football trophies, a Knight Bachelor of the United Kingdom, still well-known and globally recognisable several years into retirement – clicking through to see what Sir Alex is up to at the moment seems innocent and harmless enough.

And harmless it was, when we visited the link given in the Facebook post by copying it directly into the address bar of our browser, rather than clicking through from Facebook.

Disappointing, perhaps; dull, yes; but directly harmful or obviously scammy?

No.

The site we visited claimed to be a blog offering free tips to enhance your relationship, wasn’t directly selling anything, had a moderately professional look, had an HTTPS certificate as you would expect, and the subject matter of the page matched the ad that brought us to it.

All in all, the site gave off an unexceptionable and ‘mostly harmless’ feel.

In truth, the site quickly revealed itself to be both narrow (only seven articles on the entire site) and shallow (the articles were all short and uninformatively basic, and none of the contact details gave anything away)…

…but even though you’d be very unlikely to recommend the website to anyone else, there was nevertheless nothing that immediately screamed, “Beware! Report this page to a scamwatch site! Warn your friends about this page! Get out of here now and run a virus check for safety’s sake!”

Likewise, a search engine visiting the site would see nothing obviously bad: no malware; no aggressive ads; no popover password dialogs; no autoplaying videos; no wild inaccuracies or falsehoods.

You’d be excused, even if you were carefully scanning the internet looking for cybercriminality, for just letting this one go by.

Simply put, it looked like a small-time, average-quality, say-no-more website that was effectively hiding in plain sight – in one word, “Meh.”

But when our unlucky reader clicked the same link from the Sponsored post in Facebook, they got a very different result:

This time, the page was a fairly convincing ripoff of the BBC website, with an article featuring a picture of Sir Alex at an event or a post-game wrapup, but with the image background hacked to make it look as if he were addressing a Bitcoin conference.

The page claims to describe an episode of the venerable BBC TV programme Panorama entitled “Who wants to be a Bitcoin millionaire?”, offering a sidebar where the “live” episode can be viewed.

Ironically, Panorama did produce an investigative documentary with that very title, back in 2018, but the BBC’s programme was not advertised with a tagline insisting that “this is one train not to be missed,” as the imposter site insists.

Our reader was, of course, offered an easy way to join the bitcoin revolution – below the illegally modified photo of Alex Ferguson, below the stolen BBC identity, and below the bogus claims that Sir Alex “revealed today” that he’s amassed a Bitcoin fortune of his own, was a signup box.

By investing just $193 to open an account with a cryptocoin business, our reader would be joining a money-making rollercoaster that only ever went up.

(We invented that non-downhill rollercoaster metaphor ourselves, but you get the idea: pay in a modest amount of money now, before it’s too late, and you too will be wealthy, just like Sir Alex Ferguson – though without needing to put in any of the time and effort that he did, or to possess any of the unparalleled skill and ability that made him famous and successful.)

What to do?

You’d probably back yourself to spot this sort of scam any day of the week if it arrived in an old-fashioned phishing email.

Most of us have experienced so much spam over the years that we’re well-tuned to detect emails that don’t belong.

And that’s one reason that the crooks love social media – using a sponsored post, or by posting from a compromised account of someone we know, they can bypass our “spamtennae” and suck us into scams that we’d otherwise avoid with ease.

  • Think before you click. The crooks don’t always get their ducks in a row. Is it really likely that an account devoted to Fast Cars would sponsor a post linking to a story about a famous footballer?
  • Don’t be tricked by logos and images. Creating a website that looks like an official BBC page isn’t technically difficult, because all the needed logos and web content can be stolen from the real site and republished easily. Fraudulently altering the background of a picture convincingly enough for a website image can be done with free tools.
  • No one can guarantee you financial returns on cryptocurrency. If you really want to buy into the Bitcoin scene, do your homework, and pick a cryptocurrency exchange based on your own research. Take your time – investment opportunities that put you under time pressure are aiming to get you to make a hasty decision.
  • If it sounds too good to be true, it is. Enough said.

While we’re here, please be a responsible social networker, too: don’t forward things to your friends if you aren’t sure of them yourself, even if it feels like a fun thing to do – that’s how fake news and internet hoaxes get a foothold.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WGY1bNlnPAA/

S2 Ep16: BlueKeep, ransomware and sextortion – Naked Security Podcast

Mass ransomware hit Spain earlier this week, BlueKeep’s back and there’s yet another twist in the sextortion saga – we discuss all this and more in the latest episode of our podcast.

I hosted the show this week with Sophos experts Mark Stockley, Peter Mackenzie and Paul Ducklin.

Listen wherever you get your podcasts – just search for Naked Security.


We also have a brand new Naked Security YouTube channel. We’ll be sharing full-length videos of the podcast plus lots of other new concepts, so subscribe now!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eQT-finaN3E/

Morrisons tells top court it’s not liable for staffer who nicked payroll data of 100,000 employees

Brit supermarket Morrisons is arguing in the Supreme Court that it shouldn’t be held vicariously liable for the actions of a rogue employee who stole and leaked the company’s payroll.

In a world where nobody’s quite sure where data protection law ends and traditional civil law torts begin, the outcome of the case may well determine for years to come whether companies should be blamed and made to pay compensation if one of their employees breaks the law.

Morrisons is fighting off a lawsuit from around 5,000 current and former employees as it tries to overturn an earlier Court of Appeal ruling.

Arguing on Morrisons’ behalf yesterday, Lord Pannick QC, the Supreme Court’s favourite barrister*, said: “In relation to vicarious liability, we say the legal test is whether there is a sufficiently close connection between the wrongful conduct of the employee and what he was employed to do, assessed by ref to job function, time, when did he carry out the acts, the geography, where did he carry out the acts and motive.”

At the heart of the case is a deceptively simple question: was former Morrisons auditor Andrew Skelton acting “in the course of his employment” when he copied nearly 100,000 people’s payroll data to a USB stick and dumped it on a hidden Tor site? The supermarket, naturally, argues that he wasn’t – and therefore shouldn’t be held liable for his actions.

If Skelton’s actions formed an “unbroken thread”, as Lord Pannick put it, between what he was authorised to do as an employee and things that were not part of his job description, that will be enough to hold the supermarket liable for his criminal actions – prompting a hefty series of payouts.

“It’s not sufficient for the claimants to show that the employment provided the opportunity for the wrongdoing,” insisted the barrister, who went on to describe a number of past cases where employees had done wrong and their employers hadn’t been held liable. Broadly, he was saying, this is what other courts found in similar circumstances so why should Morrisons be held vicariously liable now?

“When Mr Skelton downloaded the data onto his personal USB he had metaphorically taken off his uniform. He wasn’t acting or purporting to act on behalf of his employer or for the purpose of his employment,” added Lord Pannick, who also argued that the Data Protection Act 1998 (which applied when the original incident happened) excludes vicarious liability for Morrisons in this case.

Lady Hale, president of the Supreme Court – wearing a purple jumper with a poppy brooch – commented: “There was a series of thefts from judges’ rooms in the Royal Courts of Justice some years ago. That was an employee of the RCJ using the pass that he had in order to get into the judges’ rooms and steal things. I don’t think anybody’s suggesting the courts and tribunals service was vicariously liable.”

Up against Lord Pannick is barrister Jonathan Barnes of legal chambers 5RB. He will argue that the Data Protection Act 1998 doesn’t exclude vicarious liability for Morrisons and will say that the Court of Appeal’s previous findings should be upheld in full.

The case continues today and The Register will be covering the claimants’ legal arguments in full. ®

Bootnotes

* Lord Pannick was the lawyer who convinced the Supreme Court to rule that Prime Minister Boris Johnson had broken the law by advising the Queen to dissolve Parliament.

** A Twitter thread of legal commentary on Lord Pannick’s submissions by media law barrister Greg Callus can be found here.


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/07/morrisons_supreme_court_payroll_data_appeal/

Black Hat Q&A: Hacking a ’90s Sports Car

Security researcher Stanislas Lejay offers a preview of his upcoming Black Hat Europe talk on automotive engine computer management and hardware reverse engineering.

Communicating with your car and building your own tools is easier than you think, and well worth the effort, says Stanislas Lejay, who will be briefing attendees in London at Black Hat Europe next month on Unleashing the Power of My 20+ Years Old Car. It’s a fun and fascinating look at Lejay’s efforts to bypass the speed limiter (set at ~180 km/h) and still pass inspection.

Lejay opens up to Dark Reading about the process, what he learned, and what Black Hat attendees can look forward to in his Briefing.

Alex: Tell us a bit about how you got into cybersecurity, and what you’re currently working on.

Stanislas Lejay: I went to a computer engineering school in France (EPITA) and followed the normal 5-year course. However, in the middle of my second year, a senior showed me a book called “Hacking: The Art of Exploitation” that I started reading “just for fun.” But as I was reading, I found it fascinating to try to think the other way around to break code, and make it do stuff it was never designed to do.

So I started learning reverse engineering and exploitation in my free time. (We didn’t have any class related to that until the fourth year, if you choose the infosec specialization.) I started participating in a few capture the flag competitions (CTFs), ROPing in my own code, and just trying to see how far I could go. I played with console hacking, emulation, firmware, and eventually started working on cars.

A few years, projects and conferences later, I work as an automotive computer security engineer near Tokyo and fiddle with my own cars’ engine control units (ECUs) in my spare time.

Alex: What inspired you to pitch this talk for Black Hat Europe?

Stanislas Lejay: This talk is a result of a real-life project I had going on, with a real purpose. I think that talking about a project with successes and failures, and a clear goal in sight, is the best way to actually get people interested in stuff they wouldn’t bother learning about otherwise. People seemed to enjoy my last talk about “car hacking,” so while writing an article about it is nice, being able to show it to an audience and exchange thoughts on the subject afterward sounds even better.

Alex: Any fun anecdotes about fiddling with your cars in Japan?

Stanislas Lejay: Well, so far it can still pass “Shaken” (the mandatory car inspection every two years) because my system doesn’t modify the ECU and is basically just a bypass circuit that I can activate or not with a switch. So, in regard to the law, my car is still 100% stock but for “a few additional wires and microcontrollers.” All my cars are still road-legal, so far, as it is one of my main concerns when modifying them. So no, sorry, no fun anecdote on that side!

Alex: What do you hope Black Hat attendees will get out of seeing your talk?

Stanislas Lejay: While this talk doesn’t expose anything new, even less knowing that the car is 20 years old, it should still let people get an idea of how fun it is to play with cars, what you can do with them, and that most aftermarket tools you can buy for pretty high prices are not witchcraft. Communicating with your car and building your own tools for it is actually not that hard and can help you get a lot of insights, for cheap, on what’s going on in your car when you actually drive it.

Get more information on Lejay’s Briefing and lots of other cutting-edge content in the Briefings schedule for Black Hat Europe, which returns to ExCeL London December 2-5, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/application-security/black-hat-qanda-hacking-a-90s-sports-car/d/d-id/1336283?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple