STE WILLIAMS

Gmail users, here’s how (and why) you should set up prompt-based 2FA

Last week, Google rolled out two-factor authentication prompts to its updated Gmail app, all in the hopes that more people using Google products will use two-factor authentication to protect their accounts, and that users will choose prompt-based authentication over less secure methods, like SMS codes.

Why turn on two-step verification (also known as two-factor authentication, or 2FA)? Because a password, even a strong one (which you aren’t using anywhere else, are you?), isn’t enough to keep your account secure.

If the service you’re using offers 2FA, you should enable it — it’s another layer of protection on your account that stops someone who can steal or guess your password from getting access.

The beauty of what the Gmail app offers is that it makes two-step authentication easier to use.

Instead of waiting for an email or SMS to arrive on your phone, or setting up an authentication code on a third-party code generator and then typing in the code you receive or generate, it’s just one touch to authenticate.
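For context, those third-party code generators typically implement the TOTP algorithm from RFC 6238: a short code derived from a shared secret and the current time. Here’s a minimal, illustrative Python sketch (the secret below is the RFC’s published test value, not anything real):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then
    dynamic truncation (RFC 4226) down to a short decimal code."""
    counter = timestamp // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test vector: ASCII secret "12345678901234567890", T = 59s
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Prompt-based 2FA skips all of this: the code exchange happens behind the scenes, and you just confirm with a tap.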

In this case, you simply open the Gmail app, which will ask if it’s you trying to sign in on a new device. You just hit a button to confirm that, yes, it’s actually you trying to sign in to your account on that computer.

Ease of use is important because, for all the security benefits that 2FA brings, Gmail users just haven’t been using it.

The prompt-based approach to 2FA is something many organizations, including Google, have been pushing for a few years, as the SMS-based 2FA method can be vulnerable to fraud. It is better than nothing, but push-based methods—like the Google prompt—are more secure, and easier to use.

If this is something you’ve held off on doing, here’s how to get the prompt-based 2FA set up on your Google account. (Note that the setup is slightly different for Android and iOS users.)

Android users: Google Play Services deliver the prompt on your phone, so make sure your version is updated for this feature to work.
iOS users: The Google prompt works on the iPhone 5s and later via the Google app, and now the Gmail app as well.

First, you’ll need to navigate to the two-step verification setting on your Google account on a computer (for Android or iOS users), or via the settings within your Google app (for iOS users). To find the 2FA setting from either a computer or the app, go to the settings of your Google profile and select “signing in to Google” under the Sign-in and Security area.

The screenshots below are from iOS on an iPhone 7, but it’s very similar when going through this process on a computer.

In the “signing in to Google” section, click the “two-step verification” option and hit the “try it now” prompt.

You’ll now see what the prompt looks like:

If it was you trying to sign in, hit “Yes”.

You’re not done yet though! The app will ask you to confirm that you want to turn this feature on, so tap “turn it on.”

Now you should be ready to go with the prompts on your Google account, and the 2-step verification screen will show you that Google prompts are enabled, along with any other prior 2FA methods you may have enabled (like the Authenticator app, SMS or physical keys).

If you have notifications enabled for the Google app, next time you (or anyone else!) tries to sign in to your Google account on a new device, you’ll be pinged to open the app and verify that it’s you. If you don’t have notifications enabled, you’ll need to open the Google app yourself to verify the login.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cFdRpPn5u-A/

Yahoo fined $35m for staying quiet about mega breach

The US Securities and Exchange Commission (SEC) on Tuesday announced that Altaba – a holding company that swept up Yahoo’s remains after Verizon took over its internet business last year – has agreed to pay a $35 million fine for Yahoo having waited more than two years to tell investors about a breach it knew of as early as December 2014.

Which breach? Good question. The fine pertains to the 2014 breach, in which half a billion accounts were plundered by Russian thieves.

The intruders made off with what Yahoo’s internal security team referred to as the “crown jewels”. The stolen data included usernames, email addresses, phone numbers, birthdates, encrypted passwords (encrypted after a fashion, at any rate, with creaky old MD5 password hashing), and security questions and answers.
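To see why unsalted MD5 counts as “creaky”: it’s fast, and the same password always produces the same hash, so an attacker can crack leaked hashes with a plain dictionary lookup. An illustrative Python sketch (the password and mini wordlist are made up for the example):

```python
import hashlib

def md5_hex(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# With no salt and a fast hash, identical passwords hash identically,
# so a leaked hash can be cracked by a simple dictionary lookup.
leaked_hash = md5_hex("letmein")  # stand-in for a value from a breached database
wordlist = ["password", "123456", "letmein", "qwerty"]  # made-up mini wordlist
cracked = next((w for w in wordlist if md5_hex(w) == leaked_hash), None)
print(cracked)  # → letmein
```

Modern password storage uses salted, deliberately slow hashes precisely to defeat this kind of precomputed or bulk guessing.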

At the time, the thinking was… Huh, how come it took two years to uncover this huge breach?

It turns out that Yahoo’s security team had actually discovered the intrusion within days of it happening in December 2014, not two years later. The breach was, in fact, reported to Yahoo’s senior management and legal department.

Be that as it may, Yahoo didn’t properly investigate the breach, and it didn’t give much thought to whether it should be disclosed to investors – until, that is, Verizon came calling, according to the SEC’s order (PDF):

The fact of the breach was not disclosed to the investing public until more than two years later, when in 2016 Yahoo was in the process of closing the acquisition of its operating business by Verizon Communications, Inc.

Yahoo has neither confirmed nor denied the SEC’s findings.

The fine has nothing to do with the data breach itself, nor with subpar security practices, nor with Yahoo’s failure to inform users. Rather, the SEC is miffed that investors weren’t told: huge breaches can have huge financial and legal repercussions, which makes them material information – something Yahoo itself acknowledged in filings to investors.

Steven Peikin, Co-Director of the SEC Enforcement Division, was quoted in the SEC’s order:

We do not second-guess good faith exercises of judgment about cyber-incident disclosure. But we have also cautioned that a company’s response to such an event could be so lacking that an enforcement action would be warranted. This is clearly such a case.

Jina Choi, Director of the SEC’s San Francisco Regional Office, said that Yahoo’s investors were left “totally in the dark” by the company’s failure to tell them about the breach:

Yahoo’s failure to have controls and procedures in place to assess its cyber-disclosure obligations ended up leaving its investors totally in the dark about a massive data breach. Public companies should have controls and procedures in place to properly evaluate cyber incidents and disclose material information to investors.

The SEC noted that earlier this year, it released guidance to help public companies figure out what to disclose about data breaches.

The SEC says its investigation is continuing.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nTKdvuBT2h0/

Access denied! World’s largest denial of service site busted

The world’s biggest market for distributed-denial-of-service attacks (DDoSes) – Webstresser.org – has been dismantled, Europol announced on Wednesday.

At least four of the attack-for-hire site’s admins have been arrested, and police are knocking on the doors of its users, some of whom have been arrested and some of whom are just receiving warnings.

Europol says that the service’s top users are in the Netherlands, Italy, Spain, Croatia, the UK, Australia, Canada and Hong Kong. The illegal service was shut down and its infrastructure seized in the Netherlands, the US and Germany.

In its announcement, Europol said that Webstresser is behind more than four million cyberattacks worldwide, though the number may be considerably higher.

On Wednesday, after Croatian police said they’d arrested a 19-year-old man as one of the alleged operators of the attacks-for-hire service, Forbes said that police had boosted the estimated number of Webstresser attacks to six million.

Webstresser had over 136,000 registered users as of this month. Investigators working on the so-called Operation Power Off arrested the four alleged Webstresser admins on Tuesday in the UK, Canada, Croatia and Serbia.

This was a complex investigation, Europol said. It was led by the Dutch Police and the UK’s National Crime Agency with the support of Europol and a dozen law enforcement agencies from around the world.

The Webstresser attacks targeted critical online services for banks, government institutions, police forces, and people in the gaming industry.

DDoS attacks are blunt instruments that work by overwhelming targeted sites with so much traffic that nobody can reach them. They can be used to render competitor or enemy websites temporarily inoperable out of malice, lulz or profit: some attackers extort site owners into paying for attacks to stop.
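On the defensive side, one common building block for shedding flood traffic is a token-bucket rate limiter, which caps how many requests get through per second. A minimal, illustrative Python sketch (the rate and burst values are arbitrary examples):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill at a steady rate,
    each request spends one, and a flood is mostly turned away."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=10.0, burst=5)        # example: 10 req/s, bursts of 5
results = [bucket.allow() for _ in range(100)]  # simulate a burst of 100 requests
print(results.count(True))                      # only the first handful get through
```

Of course, volumetric attacks at hundreds of gigabits per second have to be absorbed far upstream of any single server, but the throttling principle is the same.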

Hiring a service to paralyze your enemies’, your competition’s and/or your targets’ sites is as easy as handing over the money – no technical skill required. Gert Ras, head of the Netherlands National High Tech Crime Unit, told Forbes that Webstresser accepted payments via PayPal and Bitcoin. These guys liked it best when customers paid in virtual currency: Bitcoin payments got users a 15% discount.

DDoS stressers are sometimes marketed as legal testing tools, although they are anything but.

It was cheap, as you can see from Webstresser’s pricing table, archived on 19 April before the site was taken down: memberships started at the “bronze” level, for EUR 15 or USD $18.99/month, went up to $49.99/month for the “platinum” service, and topped out at $102/month for “lifetime bronze.”

It might well have been cheap, but it was professional.

Ras told Forbes that Webstresser is “the most professional” DDoS service he’s seen. It advertised attacks up to 350Gbps. That’s a sizable hit. For the sake of comparison, last month we saw what was deemed to be the largest DDoS to date. The attack peaked at 1350Gbps with a follow-up reaching 400Gbps.

Europol said that some people with technical acumen may get involved in “seemingly low-level fringe cybercrime activities, unaware of the consequences that such crimes carry.”

But those penalties can be severe:

If you conduct a DDoS attack, or make, supply or obtain stresser or booter services, you could receive a prison sentence, a fine or both.

If you’ve got that kind of skill set, Europol says, put it toward something a bit more positive that will earn you an honest buck instead of time behind bars:

Skills in coding, gaming, computer programming, cyber security or anything IT-related are in high demand and there are many careers and opportunities available to anyone with an interest in these areas.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nZZyE84k-78/

20 years ago today! What we can learn from the CIH virus…

It was 20 years ago today…

…as far as we can tell, anyway, that a Taiwanese university student called Chen Ing Hau set out to create a computer virus that would show the world just how jolly clever he was.

Chen’s virus was dubbed CIH, simply because those three letters were visible inside the programming code of the malware.

In 1998, when we analysed the virus for the first time, we didn’t know what CIH stood for, but it didn’t make a rude word in any language we could think of, so it was as good a moniker as any.

A lot has changed in the cybercrime scene since 1998, and CIH, or W95/CIH-10xx to use the full name that Sophos products use to identify it, is in some ways little more than a museum curiosity now.

It targeted Windows 95, which is extinct in the wild these days, so the CIH code has nowhere in the real world to live in 2018.

But there is still lots worth remembering, and plenty of lessons we can learn (or perhaps re-learn) from how CIH worked.

So CIH is far more than a museum curiosity when it comes to cybersecurity awareness.

These days, most malware takes the form of what’s called a Trojan Horse: a standalone program that looks like any other on the outside, but on the inside is malicious.

CIH, however, is a true computer virus – a piece of parasitic program code that can’t run on its own, but that needs a host file as a carrier.

If you run an infected file by mistake, then the carrier file runs the parasitic CIH code first, after which the virus transfers control back to the original program, which then runs as usual.

That’s an astonishingly effective disguise!
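The control-flow trick described above can be sketched, in deliberately harmless Python (this is an illustration of the idea, not malware), as a wrapper that runs its own step first and then hands control back to the host, whose visible behaviour is unchanged:

```python
# Harmless illustration of a parasitic carrier's control flow: the
# "carrier" runs the hidden step first, then the host, so the host's
# output -- the part the user sees -- is completely unchanged.
log = []

def host_program():
    return "normal output"

def infect(host):
    def carrier():
        log.append("parasitic code ran first")  # the hidden extra step
        return host()                           # control handed back to the host
    return carrier

infected = infect(host_program)
print(infected())  # → normal output  (the disguise: same output as before)
```

From the user’s point of view, the infected program behaves identically, which is exactly why this style of infection was so hard to spot by eye.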

Viruses spread automatically

As well as hiding in files that you expect to be there, viruses spread automatically (it’s the self-spreading that makes them viruses rather than Trojan Horses), and once active, the CIH malware seeks out and infects all the other programs on your computer.

In other words, a computer infected by CIH didn’t just have one copy of the virus: it typically had tens or hundreds – or, in the case of a file server, perhaps hundreds of thousands – of independently dangerous copies.

Those infected files didn’t have genuine looking names like 2018-04-26-invoice.PDF, they had genuinely genuine filenames, such as NOTEPAD, CALC, Winword and any other software you might have installed.

Worse still, CIH cleanup wasn’t just a question of sluicing out the infected files, because they were files you needed afterwards – they had to be converted back to a safe and uninfectious form.

Disinfection meant that you were metaphorically picking the fly out of the ointment, with an attention to detail sufficient to leave behind ointment that could still be used afterwards.

You also had to identify and disinfect all the infected jars of ointment: if you left just one of them behind, the virus might get reactivated at any time.

What’s worse than ransomware?

CIH was about showing off, not about making money by scamming victims out of paying up to get out of trouble.

Part of the showing off was that on 26 April every year – the anniversary of Chen’s creation – CIH stopped being a virus.

Instead of spreading as widely as it could, on 26 April it went into “warhead mode”, overwriting your computer BIOS with garbage.

That’s right: an unauthorised firmware update that aimed to leave your computer completely unbootable, and in many cases unrepairable, at least by software alone.

The BIOS chip contains the startup code that runs at the very instant the computer is powered up, so garbage in the BIOS means your computer hangs instead of booting.

Intel CPUs fire up with all bits in their CPU registers set to zero, except for CS (the code segment register), which gets all its bits set to 1, so that the very first instruction executed comes from the 20-bit memory address FFFF:0000, just 16 bytes short of the old-school PC memory limit of 1MB.

That address is mapped into the BIOS chip, to ensure that there is something useful there at startup time, and the FFFF:0000 real-mode address usually contains a JMP instruction backwards into the very part of the BIOS chip that CIH overwrites. So, unless you were very lucky indeed, a PC trashed by CIH would hang at the second machine code instruction, a nanosecond or so after every restart.
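The arithmetic behind that startup address is easy to check: in real mode, the physical address is segment × 16 + offset, so FFFF:0000 lands 16 bytes below the 1MB boundary:

```python
# Real-mode address arithmetic: physical = segment * 16 + offset.
segment, offset = 0xFFFF, 0x0000
physical = segment * 16 + offset
print(hex(physical))          # → 0xffff0
print((1 << 20) - physical)   # → 16 (bytes short of the 1MB real-mode limit)
```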

In those days, there was no cryptographic verification of firmware updates, so anyone could write anything if they knew the trick to enable write access.

Also, write access to the BIOS was managed using “security through obscurity”, with many flash chips automatically activating write access after a special pattern of memory accesses that was unlikely to happen by mistake.

Once you found the chip maker’s documentation that showed the “secret” pattern, you – and everyone you felt like telling – would know the “secret”, too.

If you’re a network hacker who has ever experimented with port knocking on a network device, this is a similar idea for memory chips, except that you don’t get to choose your knocking sequence – it’s hard-wired into every chip.
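That unlock dance can be modelled in a few lines of Python. The toy chip below ignores writes until it sees a magic (address, value) sequence; the 0x5555/0x2AAA pattern is modelled on the classic JEDEC command sequence used by many parallel flash parts of the era, though real chips varied in the details:

```python
# Toy model of "security through obscurity" write enable: the chip
# ignores writes until it sees a magic (address, value) sequence.
UNLOCK = [(0x5555, 0xAA), (0x2AAA, 0x55)]  # JEDEC-style example pattern

class ToyFlash:
    def __init__(self):
        self.history = []
        self.writable = False

    def write(self, addr: int, value: int) -> str:
        if self.writable:
            return "written"
        self.history.append((addr, value))
        if self.history[-len(UNLOCK):] == UNLOCK:
            self.writable = True   # the "secret" pattern flips write enable
        return "ignored"

chip = ToyFlash()
print(chip.write(0x0000, 0xFF))   # → ignored (chip is still locked)
for addr, value in UNLOCK:        # anyone who knows the "secret" can do this
    chip.write(addr, value)
print(chip.write(0x0000, 0xFF))   # → written
```

Once the pattern is public knowledge, as the article notes, the “protection” protects nothing.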

In 1998, some motherboards still had BIOS chips plugged into sockets, so a techie user with the right sort of chip reprogrammer could reflash the BIOS externally and revive the computer.

But many motherboards had already adopted the modern, compact technique of soldering the BIOS chip directly to the surface of the board, making it as good as impossible to desolder, reprogram and replace a CIH-trashed chip.

Those victims were stuck with the time and expense of replacing their motherboards, all on account of Chen’s deliberate cybervandalism.

These days, many hackerspaces might be able to help you out, using a fine soldering iron to detach the chip, and a small pizza oven – seriously, search engine it! – to “reflow” the reflashed chip into its rightful place on the board.

As a side-branch to this story, we modified all the vulnerable malware detonation PCs in SophosLabs after we’d figured out this virus.

We carefully disconnected the write enable line on their BIOS chips, wiring it out through a switch on the front panel instead, so we could run malware tests with a hardware write block on the BIOS chip.

This protected the BIOS from modifications that might leave us with permanently broken computers, or, even worse, with research hardware that was subtly but non-obviously compromised, even after a reboot and full disk reimage.

Fortunately, the BIOS-trashing code in CIH never caught on, and apart from a few copycat CIH variants, we never faced an onslaught of computer-nuking malware.

Code caving

The final notable feature of CIH that we’ll look at here, a technique that is very much back in fashion amongst penetration testers and cybercriminals alike, is the trick of adding malicious code to an existing file without changing its size.

Early parasitic viruses were usually prependers or appenders, meaning that they inserted their modifications into the victim file at the very start, or added them at the very end, something that made the malware coding easier but unavoidably increased the size of the file, too.

CIH, in contrast, is a cavity infector, meaning that it finds unused or unimportant parts of the host file and insinuates itself there instead, so that the size of the infected file doesn’t change.

These days, this sort of code caving isn’t usually used for virus spreading, but instead to produce one-off Trojanised versions of well-known utilities that still work as they used to, using the utility as a cover story for some sort of malicious activity.
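A cavity infector’s first job is finding the cave: a run of unused bytes (often zero padding) big enough to hold the injected code. An illustrative Python sketch of that scan (the 16-byte threshold and the sample blob are made up for the example):

```python
def find_code_cave(data: bytes, min_size: int = 16):
    """Return (offset, length) of the first run of zero bytes at least
    min_size long -- the kind of slack space a cavity infector reuses."""
    start = None
    for i in range(len(data) + 1):
        if i < len(data) and data[i] == 0:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_size:
                return start, i - start
            start = None
    return None

# Made-up sample: 10 filler bytes, a 32-byte zero "cave", then 5 more bytes.
blob = b"\x90" * 10 + b"\x00" * 32 + b"\xC3" * 5
print(find_code_cave(blob))  # → (10, 32)
```

Real infectors also have to patch the file’s entry point to reach the cave and jump back afterwards, but the size-preserving trick starts with a scan like this one.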

What happened to Chen?

In April 1999, a year after its creation, the CIH virus was still around, and on 26 April 1999, many motherboards did get zapped by Chen’s anniversary code.

Fortunately, the “secret” write-enable sequence used by Chen didn’t work on all chipsets (we estimated at the time that about 75% of computers in the UK were immune to its BIOS-wiping warhead), which reduced its impact, but plenty of users around the world were nevertheless keen to see the malware creator brought to book.

Once Chen Ing Hau was outed as the CIH author, we assumed he would end up in serious trouble, probably facing a prison sentence as well as other criminal sanctions such as a fine or a restitution payment.

As far as we know, however, he was detained and investigated in 2000, but due to the nature of cybersecurity laws in Taiwan at the time, he was never tried for or convicted of any crime.

What to learn?

  • Don’t base your malware disaster recovery plans entirely around worms and Trojans. Even fast-spreading malware like 2017’s WannaCry and NotPetya outbreaks weren’t parasitic viruses, so the total number of infected objects across an affected network was much lower than after a true virus attack. Give some thought to how you would cope with the mass modification caused by an old-school virus outbreak. Cybercrooks still unleash viruses from time to time, so they are still a realistic type of attack.
  • Don’t rely on security through obscurity. If you’re an Internet of Things vendor, this warning is for you. Hidden “secrets” such as weak protocols listening on unusual ports, or undocumented, hard-wired passwords, are not only dangerous, they are disrespectful to your customers. As soon as someone knows your “secret”, everyone knows it and anyone can use it. The US Congress is proposing minimum standards for IoT devices that require vendors to publish firmware updates to fix bugs *and* to provide a safe and secure way of delivering them – so get with it now!
  • Don’t bank on getting off if you’re caught. We don’t think Chen Ing Hau would be quite so fortunate today. We suspect he’d face a stiff prison sentence, a large fine, a long period of supervised release, and quite possibly a slew of court cases seeking restitution for the damage he caused.

Stay safe out there, learn from the past, and if you can guess what cybersecurity will look like 20 years from now…

…please let us know in the comments!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Rm5vhCmkV1Y/

20 years ago today! What we can learn from the CIH virus…

It was 20 years ago today…

…as far as we can tell, anyway, that a Taiwanese university student called Chen Ing Hau set out to create a computer virus that would show the world just how jolly clever he was.

Chen’s virus was dubbed CIH, simply because those three letters were visible inside the programming code of the malware.

In 1998, when we analysed the virus for the first time, we didn’t know what CIH stood for, but it didn’t make a rude word in any language we could think of, so it was as good a moniker as any.

A lot has changed in the cybercrime scene since 1998, and CIH, or W95/CIH-10xx to use the full name that Sophos products use to identify it, is in some ways little more than a museum curiosity now.

It targeted Windows 95, which is extinct in the wild these days, so the CIH code has nowhere in the real world to live in 2018.

But there is still lots worth remembering, and plenty of lessons we can learn (or perhaps re-learn) from how CIH worked.

So CIH is far more than a museum curiosity when it comes to cybersecurity awareness.

These days, most malware takes the form of what’s called a Trojan Horse: a standalone program that looks like any other on the outside, but on the inside is malicious.

CIH, however, is a true computer virus – a piece of parasitic program code that can’t run on its own, but that needs a host file as a carrier.

If you run an infected file by mistake, then the carrier file runs the parasitic CIH code first, after which the virus transfers control back to the original program, which then runs as usual.

That’s an astonishingly effective disguise!

Viruses spread automatically

As well as hiding in files that you expect to be there, viruses spread automatically (it’s the self-spreading that makes them viruses rather than Trojan Horses), and once active, the CIH malware seeks out and infects all the other programs on your computer.

In other words, a computer that was infected by CIH didn’t just have one virus, it typically had tens, or hundreds – or, in the case of a file server, perhaps hundreds of thousands – of independently dangerous copies of the virus on it.

Those infected files didn’t have genuine looking names like 2018-04-26-invoice.PDF, they had genuinely genuine filenames, such as NOTEPAD, CALC, Winword and any other software you might have installed.

Worse still, CIH cleanup wasn’t just a question of sluicing out the infected files, because they were files you needed afterwards, converted back to a safe and uninfectious form.

Disinfection meant that you were metaphorically picking the fly out of the ointment, with an attention to detail sufficient to leave behind ointment that could still be used afterwards.

You also had to identify and disinfect all the infected jars of ointment: if you left just one of them behind, the virus might get reactivated at any time.

What’s worse than ransomware?

CIH was about showing off, not about making money by scamming victims out of paying up to get out of trouble.

Part of the showing off was that on 26 April every year – the anniversary of Chen’s creation – CIH stopped being a virus.

Instead of spreading as widely as it could, on 26 April it went into “warhead mode”, overwriting your computer BIOS with garbage.

That’s right: an unauthorised firmware update that aimed to leave your computer completely unbootable, and in many cases unrepairable, at least by software alone.

The BIOS chip contains the startup code that runs at the very instant the computer is powered up, so garbage in the BIOS means your computer hangs instead of booting.

Intel CPUs fire up with all bits in their CPU registers set to zero except for CS (the code segment register), which gets all its bits set to 1, so that the very first instruction executed comes from the 20-bit memory address FFFF:0000, 16 bytes short of the old-school PC memory limit of 1MB. That address is mapped into the BIOS chip, to ensure that there is something useful there at startup time, and the FFFF:0000 real-mode address usually contains a JMP instruction backwards to the very part of the BIOS chip that CIH overwrites. So, unless you were very lucky indeed, a PC trashed by CIH would hang at the second machine code instruction, a nanosecond or so after every restart.

In those days, there was no cryptographic verification of firmware updates, so anyone could write anything if they knew the trick to enable write access.

Also, write access to the BIOS was managed using “security through obscurity”, with many flash chips automatically activating write access after a special pattern of memory accesses that was unlikely to happen by mistake.

Once you found the chip maker’s documentation that showed the “secret” pattern, you, and everyone you felt like telling, would know the “secret”, too.

If you’re a network hacker who has ever experimented with port knocking on a network device, this is a similar idea for memory chips, except that you don’t get to choose your knocking sequence – it’s hard-wired into every chip.
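As a toy illustration of the idea (our own sketch, not CIH’s actual code, and with the classic JEDEC-style addresses 0x5555/0x2AAA standing in for whatever sequence a given chip really used), the “knock to unlock” scheme behaves like a tiny state machine:

```python
# Toy model of "security through obscurity" write-enable: the chip only
# accepts writes after seeing a hard-wired knock sequence on the bus.
# The addresses/values echo the JEDEC-style unlock used by many parallel
# flash chips of the era, shown here purely as an illustration.

UNLOCK_SEQUENCE = [(0x5555, 0xAA), (0x2AAA, 0x55)]

class ToyFlashChip:
    def __init__(self):
        self._progress = 0          # how much of the knock we've seen
        self.write_enabled = False

    def bus_write(self, address: int, value: int) -> None:
        if self.write_enabled:
            return                  # already unlocked
        if (address, value) == UNLOCK_SEQUENCE[self._progress]:
            self._progress += 1
            if self._progress == len(UNLOCK_SEQUENCE):
                self.write_enabled = True   # anyone who knows the knock gets in
        else:
            self._progress = 0              # wrong knock: start over

chip = ToyFlashChip()
chip.bus_write(0x5555, 0xAA)
chip.bus_write(0x2AAA, 0x55)
print(chip.write_enabled)   # True -- the "secret" was the only protection
```

The point is that the sequence is the same on every chip of that type: once published, it protects nothing.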

In 1998, some motherboards still had BIOS chips plugged into sockets, so a techie user with the right sort of chip reprogrammer could reflash the BIOS externally and revive the computer.

But many motherboards had already adopted the modern, compact technique of soldering the BIOS chip directly to the surface of the board, making it as good as impossible to desolder, reprogram and replace a CIH-trashed chip.

Those victims were stuck with the time and expense of replacing their motherboards, all on account of Chen’s deliberate cybervandalism.

These days, many hackerspaces might be able to help you out, using a fine soldering iron to detach the chip, and a small pizza oven – seriously, search engine it! – to “reflow” the reflashed chip into its rightful place on the board.

As a side-branch to this story, we modified all the vulnerable malware detonation PCs in SophosLabs after we’d figured out this virus.

We carefully disconnected the write enable line on their BIOS chips, wiring it out through a switch on the front panel instead, so we could run malware tests with a hardware write block on the BIOS chip.

This protected the BIOS from modifications that might leave us with permanently broken computers, or, even worse, with research hardware that was subtly but non-obviously compromised, even after a reboot and full disk reimage.

Fortunately, the BIOS-trashing code in CIH never caught on, and apart from a few copycat CIH variants, we never faced an onslaught of computer-nuking malware.

Code caving

The final notable feature of CIH that we’ll look at here, a technique that is very much back in fashion amongst penetration testers and cybercriminals alike, is the trick of adding malicious code to an existing file without changing its size.

Early parasitic viruses were usually prependers or appenders, meaning that they inserted their modifications into the victim file at the very start, or added them at the very end, something that made the malware coding easier but unavoidably increased the size of the file, too.

CIH, in contrast, is a cavity infector, meaning that it finds unused or unimportant parts of the host file and insinuates itself there instead, so that the size of the infected file doesn’t change.

These days, this sort of code caving isn’t usually used for virus spreading, but instead to produce one-off Trojanised versions of well-known utilities that still work as they used to, using the utility as a cover story for some sort of malicious activity.
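To make the idea concrete, here’s a minimal, hypothetical sketch of the first step any cavity infector or code-caver needs: finding unused slack space (“caves”) in a file. Real tools parse PE or ELF section headers as well; this simply scans raw bytes for long runs of zeros:

```python
# A minimal "code cave" scan: find long runs of zero bytes in a binary
# blob -- the sort of unused slack space a cavity infector could hide
# its code in without changing the file's size.

def find_caves(blob: bytes, min_size: int = 16):
    caves = []
    run_start = None
    for i, b in enumerate(blob):
        if b == 0:
            if run_start is None:
                run_start = i       # a run of zeros begins here
        else:
            if run_start is not None and i - run_start >= min_size:
                caves.append((run_start, i - run_start))
            run_start = None
    # handle a run of zeros that reaches the end of the blob
    if run_start is not None and len(blob) - run_start >= min_size:
        caves.append((run_start, len(blob) - run_start))
    return caves  # list of (offset, length) pairs

sample = b"MZ" + b"\x90" * 8 + b"\x00" * 32 + b"\xcc" * 4
print(find_caves(sample))   # [(10, 32)]
```

An infector that fits its code into such a cave, and redirects the entry point to it, leaves the file exactly the same size as before.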

What happened to Chen?

In April 1999, a year after its creation, the CIH virus was still around, and when 26 April came round, many motherboards did indeed get zapped by Chen’s anniversary code.

Fortunately, the “secret” write-enable sequence used by Chen didn’t work on all chipsets (we estimated that about 75% of computers in the UK at the time were immune to its BIOS-wiping warhead), which reduced its impact, but plenty of users around the world were nevertheless keen to see the malware creator brought to book.

Once Chen Ing Hau was outed as the CIH author, we assumed he would end up in serious trouble, probably facing a prison sentence as well as other criminal sanctions such as a fine or a restitution payment.

As far as we know, however, he was detained and investigated in 2000, but due to the nature of cybersecurity laws in Taiwan at the time, he was never tried for or convicted of any crime.

What to learn?

  • Don’t base your malware disaster recovery plans entirely around worms and Trojans. Even fast-spreading malware like 2017’s WannaCry and NotPetya outbreaks weren’t parasitic viruses, so the total number of infected objects across an affected network was much lower than after a true virus attack. Give some thought to how you would cope with the mass modification caused by an old-school virus outbreak. Cybercrooks still unleash viruses from time to time, so they remain a realistic type of attack.
  • Don’t rely on security through obscurity. If you’re an Internet of Things vendor, this warning is for you. Hidden “secrets” such as weak protocols listening on unusual ports, or undocumented, hard-wired passwords, are not only dangerous, they are disrespectful to your customers. As soon as someone knows your “secret”, everyone knows it and anyone can use it. The US Congress is proposing minimum standards for IoT devices that require vendors to publish firmware updates to fix bugs *and* to provide a safe and secure way of delivering them – so get with it now!
  • Don’t bank on getting off if you’re caught. We don’t think Chen Ing Hau would be quite so fortunate today. We suspect he’d face a stiff prison sentence, a large fine, a long period of supervised release, and quite possibly a slew of court cases seeking restitution for the damage he caused.

Stay safe out there, learn from the past, and if you can guess what cybersecurity will look like 20 years from now…

…please let us know in the comments!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Rm5vhCmkV1Y/

Know what Instagram knows – here’s how you download your data

Instagram, the visual story-centric social media platform owned by Facebook, has now added a long-requested feature: the ability for users to download their data – including images, posts and comments.

Not to be cynical, but Instagram is not making this move out of the kindness of its heart: the compliance deadline for GDPR is in a month and data portability is one of its many requirements.

What’s data portability? From the GDPR articles:

The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format

To decode this a bit, the “data subject” means the user of the service – in this case anyone who uses Instagram.

While GDPR is meant to apply to any service that holds or uses the personal data of anyone in the European Union, many online services are making GDPR-mandated features available to all their users regardless of where they live. This seems like a pretty good thing for everyone, and a victory for user rights and privacy around the world.

So if you happen to be an Instagram user who’d like to quit the platform and delete your account, but would like to save the photos, videos and stories you’ve posted, the new Instagram “data download” option will allow you to take all your data with you when you leave.

To access this service, log in to your account, go to your profile, click “edit profile,” and then go to the “privacy and security” area. You’ll see a “Data download” header with a link to request a download of your data.

Once you make the request for your data, keep in mind that you won’t receive it right away. When I made my request, it took a few hours to get an email that my data was ready to download. Instagram says that it could take up to two days in some cases.

 

My data was split up into multiple parts. I’ve been a frequent Instagram user since 2012 or so, so I do have a lot of data I suppose.

The GDPR mostly specifies, in somewhat vague terms, what needs to happen with user data, and says rather less about how it should happen.

The data portability requirement, for example, says users should be able to get their data in a “commonly used and machine-readable format.” If you download your data from Facebook, it’s presented as HTML, a machine-readable format that opens in a web browser, so you can see and click around the data the company holds on you.

Note that the GDPR regulation says “machine-readable” and not “human-readable,” though. Given that Instagram is part of Facebook, I had high hopes for what I’d see when downloading my Instagram data – that it would be readable and navigable, as my Facebook data was. Unfortunately, this was not the case.

While my images and videos were easy to view in JPG and MP4 formats respectively, all my other data was in JSON (JavaScript Object Notation) format, and while I know how to read it in a text editor, it’s not the friendliest option for the average Instagram user.

The good news is that JSON is a very well understood and commonly used format for storing and exchanging data, particularly in apps and websites. With Instagram data now available in this format, a host of programs to read it, or to import it into other apps and services, probably isn’t far away.
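In fact, a few lines of Python are enough to turn such a dump into something readable. The field names below are made up for illustration, and the real export’s structure may well differ:

```python
import json

# Hypothetical sample in the rough shape of an Instagram export file.
# The actual field names in the real export may differ; this just shows
# how little work it takes to make the JSON dump human-readable.
raw = '''
{
  "comments": [
    {"timestamp": "2018-04-20T12:00:00", "text": "Great shot!"},
    {"timestamp": "2018-04-21T09:30:00", "text": "Love this."}
  ]
}
'''

data = json.loads(raw)
for comment in data["comments"]:
    # print each comment with its timestamp on one line
    print(f'{comment["timestamp"]}  {comment["text"]}')
```

Anyone comfortable with a scripting language can do this today; for everyone else, friendlier viewers will no doubt appear.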

Opening up a JSON file, it’s clear that it’d take a little bit of formatting and parsing work to make this data dump really readable.

While it’s not terribly “human-readable”, this data checks the GDPR box of being “machine-readable,” and realistically, it’s probably not as interesting to most users as their pictures and videos. And let’s not lose sight of the big picture here – the good news is that your data is no longer walled up in the Instagram platform, and if you decide you want to leave, you can finally take your data with you.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Lg8VV8DODXA/


The Default SAP Configuration That Every Enterprise Needs to Fix

Nine out of ten organizations are vulnerable to a 13-year-old flaw that puts their most critical business systems at risk of complete criminal takeover.

A new report out today shows that 90% of SAP systems in the enterprise are exposed to complete system compromise via a 13-year-old configuration vulnerability that few organizations have taken action on. This exposure puts business-critical systems like ERP, HR, finance and supply chain all at risk.

Detailed in a report published today by ERP security firm Onapsis, the flaw in question is a configuration problem in SAP NetWeaver that makes it possible for a remote, unauthenticated attacker with nothing more than network access to the system to gain unrestricted access to all SAP systems. While the attack scenario is not completely trivial – it requires the attacker to know SAP’s architecture and coding standards – it isn’t difficult to carry out either. And the payoff is big.

As the underlying platform for all SAP deployments, SAP NetWeaver is used by 378,000 customers worldwide, including 87% of the Global 2000. The configuration insecurity is present by default in all versions of SAP NetWeaver, including cloud and next-generation digital business suite S/4HANA.

“It’s not something that organizations need to patch – it’s something that they need to change in their actual SAP implementation,” explains JP Perez-Etchegoyen, CTO at Onapsis.  “Basically this is a configuration setting in SAP applications that is configured wide open by default. It was well documented in 2005, but we still find it in nine out of 10 SAP implementations today.”

The insecurity makes it possible for an attacker to register a rogue application server and start receiving client connections from the SAP system, basically pretending to be a part of the trusted application servers that make up an impacted organization’s SAP ecosystem.

“Typically, organizations have their existing implementation in a flat network, meaning that all the SAP services are available and reachable,” Perez-Etchegoyen explains. “So this will allow an attacker without username and password to basically access all the information stored and processed within the system.”

These kinds of systems are a treasure trove for corporate espionage, data theft and any other kind of cyber grift imaginable. The digital assets at stake include detailed information about vendors and customers, financial records and operational blueprints. What’s more, it’s not just privacy or confidentiality that’s at stake: the integrity of the entire system is at risk, as an attacker could easily start generating fake purchase orders payable to themselves, manipulate data, or even sabotage the nerve center of an enterprise’s business-critical systems by taking them down.

“They can access the data, modify the data, pretty much anything they want,” he says. “In the biggest organizations in the world, pretty much all of the business processes are supported by SAP and pretty much the most important information is stored there. We do believe that this is a very big risk that needs to be addressed.”
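The article doesn’t spell out the exact setting, but registration of external server programs is governed by the SAP gateway’s access-control files (reginfo and secinfo). Purely as an illustration – the hostnames are hypothetical, and the precise syntax should be taken from SAP’s own gateway documentation – a locked-down reginfo entry might look something like this:

```
# reginfo: control which hosts may register external server programs
# at the gateway (hypothetical hostnames; an open default behaves
# like a blanket "permit TP=*" for any host)
P TP=* HOST=appserver1.internal.example,appserver2.internal.example
D TP=*
```

The principle is the one Onapsis describes: explicitly permit the trusted application servers, and deny registration from everywhere else, rather than leaving the default wide open.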


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/vulnerabilities---threats/the-default-sap-configuration-that-every-enterprise-needs-to-fix/d/d-id/1331641?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple