
FBI Warns of Cyber Extortion Scam

Spear-phishing techniques are breathing new life into an old scam.

Extortion is a very old crime that’s being given new life in the cyber world. A recent public service announcement from the FBI warns computer users to be on the lookout for threats that use stolen information to tailor extortion demands to specific email addresses.

The sheer volume of stolen email addresses, names, and other pieces of personally identifiable information (PII) makes extortionists’ jobs much easier. Typically, the criminals send an email (or even a paper letter) that leads with some of the victim’s personal information and threatens to expose visits to pornography sites, marital infidelity, or other potentially embarrassing behavior unless a fee is paid.

This being 2018, the demanded payment is almost always in Bitcoin, generally within a 48-hour window; miss the deadline and the extortionists threaten to share the victim’s alleged behavior on social media.

The FBI recommends declining to pay the extortion demand and instead contacting local law enforcement and the IC3 (Internet Crime Complaint Center). It also recommends taking standard email and online precautions to avoid becoming a victim of this scam.

For more, read here.

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/fbi-warns-of-cyber-extortion-scam/d/d-id/1332538?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

UK cyber cops: Infosec pros could help us divert teens from ‘dark side’

UK police are looking to cybersecurity firms to help implement a strategy of steering youngsters away from a life in online crime.

The National Crime Agency’s Prevent campaign sits within the wider five-year UK National Cyber Security Strategy of 2016-2021. The NCA’s scheme aims to point teenagers towards careers in cyber security rather than following a path that may lead them towards more serious offending.

Cyber Security Challenge is running the programme in partnership with the UK’s National Crime Agency.

“The Prevent strategy is new,” Jez Rogers, a police cyber prevention officer from the South East Regional Organised Crime Unit told El Reg. “As a team we are only about six months old. We and the national network are still finding our feet. Our primary audience is 13 to 19 years old as the average age of arrest for cybercrime is 17 (one in four teenagers have committed some form of cybercrime).”

One key aspect of the scheme is organising “cybercrime intervention workshops”, a day-long event for young hackers who are straying into criminality. The focus is on showing youngsters that there’s a lucrative legitimate career for their interests and skills if they change tack.

A spokeswoman for Cyber Security Challenge said that so far three workshops each with about a dozen youngsters have been run nationally (two in Bristol and one in Newcastle). Only one girl has featured among the groups so far. Most of the youngsters have strayed into criminality after getting involved in the shadier aspects of gaming, some have committed hacks against their school’s systems and others have engaged in lower-level forms of other online criminality or hacktivism. Attendees under 18 are accompanied on the day by a parent or guardian.

The common thread is youngsters who have shown technical talent and interest in computers but poor judgment. It hasn’t reached this stage as yet but Cyber Security Challenge wants the scheme to be regarded as an alternative to a police caution for minor offending and comparable to workshops for adult drivers who commit speeding offences.

“The numbers of young people ‘on our books’, as in managed offenders, is low, however through referrals our interventions are gathering pace,” Rogers told us.

“In essence we will identify, intervene, divert, and if necessary manage, those with cyber talent. We are aiming to educate young people, even those that are on the cusp, or committing lower-level offences against the Computer Misuse Act, to turn them away from the ‘dark side’ and illuminate the positive opportunities that exist for their talents as long as they wear a ‘white hat’.”

He added: “We preach the expected global shortfall figures for cyber security professionals by 2022 as a potential motivator.”

The shortfall in skilled infosec professionals is expected to reach 1.8 million worldwide by 2022, or 350,000 in Europe, according to infosec training and professional certification org (ISC)2.

Rogers wants to draw attention to the scheme as well as canvass for assistance from private sector firms.

“We are a little ‘chicken and egg’ but we do need industry to step up to provide mentorships/apprenticeships/work experience and the like to show positive diversions and pathways for those young people we deal with.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/13/cybercrime_prevent_strategy/

Former NSA top hacker names the filthy four of nation-state hacking

DEF CON Rob Joyce, the former head of the NSA’s Tailored Access Operations hacking team, has spilled the beans on which nations are getting up to mischief online.

Joyce gave one of the first talks at the DEF CON hacking conference in Las Vegas and interest was intense – the lines to get in stretched around the hall. Joyce congratulated the crowd on their work in hacking systems to make them safer but warned tougher times were to come.

Nation state hacking is nothing new, but Joyce warned that the practice is increasingly being weaponized so as to cause maximum disruption. Everyone is going to have to be a lot more careful in the future to avoid chaos, he said.

According to Joyce there are four primary actors when it comes to states hacking states: Russia, China, Iran and North Korea. Notably missing from the list was the US, but let’s face it, he wasn’t going to go into detail about that.

Investigations into possible Russian hacking of the 2016 US election and the UK’s Brexit vote are still ongoing but that wasn’t the half of it, Joyce said. Russian hackers are constantly trying to penetrate key US networks, he claimed, adding that it is a constant struggle to keep them out as they are very persistent and motivated.


Hacking by China used to be more common, he said, but had a different focus. Middle Kingdom meddlers were more interested in harvesting American intellectual property to kickstart their own industries. This activity has dropped off recently, he said, but he predicted they may restart if Sino-US relationships worsen.

Iran, the third big player, has also slackened off its attacks on the US recently, said Joyce. However, it has also been setting up attacks in its home turf of the Middle East, particularly against Saudi Arabian targets.

The final player is North Korea, which remains very backward but has a high degree of hacking skill thanks to dedicated training programmes for talented youth. “Best Korea” is unusual in that its hackers actively try to steal money, something the cash-strapped state certainly needs.

Joyce also applauded the pioneering work by DEF CON in showing the glaring security flaws in voting machines. Election hacking is real, he said, and there are active campaigns to hack the US vote. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/13/former_nsa_top_hacker_names_the_filthy_four_of_nationstate_hacking/

Prank ‘Give me a raise!’ email nearly lands sysadmin with dismissal

Who, Me? Welcome again to Who, Me?, where we invite Reg readers to begin the week crossing their fingers that it will go better than it did for our featured techies.

This week, meet “Damian”, whose tale is a warning not to get too cocky when demonstrating a security glitch.

Damian’s tale is of a time when he was working as an admin maintaining server backup software in the European region.

“We used BackupExec at that time,” he said. “This software had the ability to email particular recipients if a backup job was successful/failed/pending, and so on.”

As he was setting this up, Damian came across an undocumented feature or, rather, a security glitch.

“Basically, when you were setting up the email feature you had to manually enter the address for the ‘From’ field,” he told us.

“Usually you would just put the servername followed by @companyname.com, and when the recipient would get an email it would show that it came from the server in question.”

However, Damian had spotted that you could put any email address into this field and the message would appear to have come from that address – something that could obviously be exploited by a miscreant or mischief-maker.
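In other words, the notification engine trusted whatever “From” value it was given – the same reason any unauthenticated mail submission can be spoofed. A minimal sketch of the general idea (addresses are hypothetical, and the message is only built and printed, never sent):

```python
# Demonstrates that an email's "From" header is just a string the sender chooses;
# nothing in the message itself proves who actually wrote it.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@example.com"          # arbitrary, unverified claim about the sender
msg["To"] = "joe.bloggs@example.com"
msg["Subject"] = "Give me a raise"
msg.set_content("Backup job notification test.")

print(msg)  # if delivery to "To" fails, the bounce goes to whoever is named in "From"
```

Receiving-side checks such as SPF, DKIM and DMARC exist precisely to flag messages whose claimed sender doesn’t match the sending infrastructure.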


Upon telling a US colleague about this (whom we’ll call Joe Bloggs), Damian was asked to demonstrate the issue.

“So, I typed in the recipient name, ‘[email protected]’, and in the Subject Field I put ‘Give me a raise’,” said Damian. “And in the From field I stupidly put in the email address of our CEO.”

All should have been well – but after hitting send, Joe didn’t get the email.

“Upon closer inspection,” Damian recalled, “I had made a typo in Gerry’s email address.”

Of course by this time, the cogs of email technology were in motion.

“The email servers did what they were supposed to do, and returned the email to the original sender, saying ‘address not found’,” Damian said.

“Needless to say, I hadn’t made a typo in the CEO’s fucking email address… so he promptly received an email with ‘Give me raise’ in the subject line!”

And, although it was misspelled, there was a pretty big smoking gun in the failed email – and as Reg readers will know, once the CEO is involved, someone must be blamed.

“The CEO calls senior IT and Joe gets hauled in because they figured the email was meant for him,” said Damian.

Joe cracked under the pressure, but pointed out that Damian was illustrating a security issue with the software – and saved his colleague’s head from rolling.

“I got bollocked but kept my job,” Damian said. “Never sweated so much in my life!”

What’s had you mopping your brow lately? Tell us about the time you took a security demo too far by emailing us here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/13/who-me/

Criminal justice software code could send you to jail and there’s nothing you can do about it

DEF CON American police and the judiciary are increasingly relying on software to catch, prosecute and sentence criminal suspects, but the code is untested, unavailable to suspects’ defense teams, and in some cases provably biased.

In a presentation at the DEF CON hacking conference in Las Vegas, delegates were given the example of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, which is used by trial judges to decide sentencing times and parole guidelines.

“The company behind COMPAS acknowledges gender is a factor in its decision-making process and that, as men are more likely to be recidivists, so they are less likely to be recommended for probation,” explained Jerome Greco, digital forensics staff attorney for the Legal Aid Society.

“Women [are] thus more likely to get probation, and there are higher sentences for men. We don’t know how the data is swaying it or how significant gender is. The company is hiding behind trade secrets legislation to stop the code being checked.”

These so-called advanced systems are often trained on biased data sets, he said. Facial recognition software, for example, is often trained on data sets filled with predominantly white men, making it less effective at correctly matching people of color, according to academic research.


“Take predictive policing software, which is used to make decisions for law enforcement about where to patrol,” Greco said. “If you use an algorithm based on data from decades of racist policing you get racist software. Police can say ‘It’s not my decision, the computer told me to do it,’ and racism becomes a self-feeding circle.”
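The “self-feeding circle” is easy to see in a toy simulation (entirely hypothetical numbers): if patrols follow recorded arrests, and patrolled areas generate more recorded arrests, a small initial skew only widens.

```python
# Toy model of the feedback loop: patrols go wherever recorded arrests are highest,
# and patrol presence surfaces more recorded arrests, even though the underlying
# crime rate here is identical in both districts.
recorded = {"district_a": 12, "district_b": 10}   # historical data, already slightly skewed

for year in range(1, 6):
    patrolled = max(recorded, key=recorded.get)   # the algorithm picks the "hottest" district
    for district in recorded:
        recorded[district] += 10 if district == patrolled else 2
    print(f"year {year}: patrol sent to {patrolled}, recorded arrests = {recorded}")
```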

It’s not just manufacturers who are fighting disclosure around their crime-fighting tools – the police are too. The use of stingray devices, which mimic cellphone towers to catch and analyse data, was kept quiet for years – the New York Police Department used such a device over 1,000 times between 2008 and 2015*, and never mentioned it, Greco said.

While the use of Stingray devices is now well known, the equipment has been upgraded, and similar kit can now also analyse mobile messages and data streams, he said. Police are also using password-cracking code for mobile phones that hasn’t been assessed and cannot be independently assessed – because it is only ever sold to law enforcement, he claimed.

“Software needs an iterative process of debugging and improvement,” said Dr Jeanna Matthews, associate professor and fellow of Data and Society at Clarkson University. “There’s a huge advantage to independent third party testing, and it needs teams incentivised to find problems, not those with an incentive to say everything’s fine.” ®

* The statistic is backed by information obtained from the NYPD via a Freedom of Information Law request by the New York Civil Liberties Union.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/13/criminal_justice_code/

The off-brand ‘military-grade’ x86 processors, in the library, with the root-granting ‘backdoor’

Black Hat A forgotten family of x86-compatible processors still used in specialist hardware, and touted for “military-grade security features,” has a backdoor that malware and rogue users can exploit to completely hijack systems.

The vulnerability is hardwired into the silicon of Via Technologies’ C3 processors, which hit the market in the early to mid-2000s.

Specifically, the chip-level backdoor, when activated, allows software to feed instructions to a hidden coprocessor that has total control over the computer’s hardware. This access can be exploited by normal programs and logged-in users to alter the operating system kernel’s memory, gain root, or administrator-level, permissions, and cause other mischief.

This weird and wonderful piece of semiconductor history was uncovered by Christopher Domas, an adjunct instructor at Ohio State University in the US, who presented his findings on Thursday at the 2018 Black Hat USA security conference in Las Vegas. He offered further in-depth details in a companion GitHub repository that also contains code for detecting and closing the backdoor if found.

“The backdoor allows ring 3 (userland) code to circumvent processor protections to freely read and write ring 0 (kernel) data,” according to Domas. “While the backdoor is typically disabled (requiring ring 0 execution to enable it), we have found that it is enabled by default on some systems.”

Here’s a demonstration of an exploit executing a special sequence of instructions to make the coprocessor alter kernel memory and escalate a program’s privileges to root on a Linux-flavored vulnerable machine:

If the backdoor is enabled, when the x86 CPU encounters two particular bytes, it passes a payload of non-x86 instructions, pointed to in the eax register, to the coprocessor to execute. This code reaches into kernel memory and upgrades the running program’s access rights to superuser status.

Domas codenamed the backdoor “Rosenbridge,” and described the coprocessor as a non-x86 RISC-like CPU core embedded alongside the x86 core in the processor package. He differentiates it from other coprocessors where vulnerabilities have been identified, such as Intel’s Management Engine, by noting that it is more deeply embedded. It has access to not just the CPU’s main memory, but also to the register file and execution pipeline, he said.

In theory, backdoor access should require kernel-level privileges, but according to Domas, it is available by default on some systems, which means userland code can use the feature to tamper with the operating system.

Intel left a fascinating security flaw in its chips for 16 years – here’s how to exploit it

READ MORE

Not everyone agrees “backdoor” is the right term. Thilo Schumann, an electrical engineer based in Germany, in a tweet argued the exceptional access is a documented feature of the Via C3 in that it allows non-x86 software instructions to be executed alongside x86 code. In other words, it is used to extend the x86 core’s instruction set with bonus instructions, which are executed by the coprocessor.

Bit 0 in the C3’s Feature Control Register (FCR) can be set to enable an alternate instruction set, according to the C3 Nehemiah data sheet. The default setting uses the x86 instruction set; setting the bit to 1 enables the alternate instruction set (ALTINST).

“This alternate instruction set includes an extended set of integer, MMX, floating-point, and 3DNow! instructions along with additional registers and some more powerful instruction forms over the x86 instruction architecture,” the data sheet explained. “For example, in the alternate instruction set, privileged functions can be used from any protection level, memory descriptor checking can be bypassed, and many x86 exceptions such as alignment check can be bypassed.”

The data sheet stated this is intended for testing, debugging, and special applications. It advises customers who need access to contact Via, because the coprocessor’s instruction set appears not to be publicly documented. Therefore, while you can enable the hidden CPU yourself, you’ll need help writing code for it. Enabling it also means programs can bypass the x86 core’s security mechanisms, so it’s not ideal for general-purpose systems.
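For those with hardware to hand, Domas’s repository contains detection and mitigation code; a rough Linux sketch of the same style of check is below. It assumes root, the msr kernel module, and that the Feature Control Register lives at MSR 0x1107 – that address is taken from the Linux kernel’s Via definitions rather than from the article, so treat it as an assumption.

```python
# Sketch: read the Via Feature Control Register and report whether bit 0 (ALTINST,
# the alternate instruction set described in the data sheet) is enabled.
# Requires the 'msr' kernel module (modprobe msr) and root privileges.
import struct

MSR_VIA_FCR = 0x1107   # assumed address, per the Linux kernel's MSR_VIA_FCR definition

def read_msr(msr, cpu=0):
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(msr)                     # the msr device is addressed by MSR number
        return struct.unpack("<Q", f.read(8))[0]

fcr = read_msr(MSR_VIA_FCR)
print("ALTINST enabled" if fcr & 1 else "ALTINST disabled")
```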

The access technique described by Domas works with Via C3 Nehemiah chips, which were made in 2003. The C3 line was aimed at industrial hardware, healthcare equipment, ATMs, sales terminals, and the like, yet also powered some consumer desktop and mobile computers.

The data sheet, however, says special access is available in all C3 processors, not just the Nehemiah family.

“While all VIA C3 processor processors contain this alternate instruction feature, the invocation details (e.g., the 0x8D8400 ‘prefix’) may be different between processors,” the docs explain.

Domas downplayed the impact of his findings, noting that subsequent generations of the fifteen-year-old chip don’t have the backdoor. He considers the work primarily of interest to researchers. But for those who happen to know where a cash machine running a 15-year-old C3 might be found, the flaw might merit more than academic interest.

Via Technologies, a Taiwan-based chip designer, did not respond to a request for comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/via_c3_x86_processor_backdoor/

Snap code snatched, Pentagon bans bands, pacemakers cracked, etc

Roundup This week, the infosec world descended on Las Vegas for Black Hat and DEF CON to share stories of bug hunting, malware neural nets, hefty payout offers, and more.

Meanwhile, outside of the desert…

Snapchat source sourced

Photo-slinging biz Snapchat had a pretty rough week, as a mystery code dump on GitHub turned out to be a chunk of the source for its iOS mobile app. The internal source code stayed up for a few days, and some users speculated as to whether it was genuine.

That question was answered when Snapchat filed a DMCA takedown notice to get it scrubbed from the site. While this got the code promptly yanked from GitHub, it also confirmed to everyone that the plundered source was, in fact, actual code from Snapchat’s mobile app.

The code snippet was reportedly taken from a buggy update issued for the iOS app back in May.

NoDaddy, no!

Web hosting biz GoDaddy accidentally left open to the world an Amazon Web Services (AWS) S3 bucket that exposed details of 31,000 of its servers, such as their specifications and storage capacity. This was, apparently, due to a misconfiguration caused by an AWS salesperson.
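For admins wondering how to avoid a repeat, here is a minimal sketch of the sort of audit that flags buckets without a public-access block. It assumes boto3 and AWS credentials with permission to read bucket public-access settings – nothing here reflects GoDaddy’s actual setup.

```python
# List the account's S3 buckets and flag any that lack a full public-access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        needs_review = not all(conf.values())       # any of the four settings left off?
    except ClientError:
        needs_review = True                         # no public-access-block configured at all
    if needs_review:
        print(f"review bucket: {name}")
```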

Pacemakers, insulin pumps ‘hackable’

Medical device maker MedTronic is under fire for declining to fix some security weaknesses discovered in its pacemakers and insulin pump equipment.

Infosec bods Billy Rios and Jonathan Butts reported the flaws over a year ago to the manufacturer, and this week spoke about their experiences in dealing with the biz, and the slow rate of progress in getting things fixed, at Black Hat USA 2018.

We’re told miscreants can, over the air, stop vulnerable pumps from delivering insulin, or inject unexpected doses. Hackers can also insert malware into the firmware of a vulnerable pacemaker to disrupt its operation. Such attacks in the real world would be rather debilitating for a patient.

The insulin pumps can be screwed around with by someone within wireless range. The pacemaker was infected by reprogramming it using a terminal that doctors use to monitor and configure patients’ devices. The software on the terminal had to be altered to achieve this, which required physical access. Alternatively, someone on the local network could intercept and tamper with the firmware as it was downloaded to the programmer via the internet.

MedTronic said the insulin pump in question, apart from not being generally available at least in the US anymore, does not accept over-the-air commands by default, requires replaying radio signals to exploit, and will alert the user of the change in dosage. Rios and Butts argued that the equipment should in any case implement stronger authorization mechanisms for wireless-issued orders.

Similarly, the duo said the pacemakers should only accept firmware cryptographically signed by MedTronic, rather than any old code, when being updated via their reprogramming terminal. MedTronic dismissed malicious reprogramming as an impractical attack, and “low risk,” adding that patients should be safe if they and their doctors ensure the reprogramming terminals remain unhacked. Tell that to the hospitals hit by ransomware.
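What Rios and Butts are asking for amounts to checking a cryptographic signature on the firmware before it is applied. A generic sketch of that idea – not MedTronic’s actual update mechanism – using a detached Ed25519 signature and Python’s cryptography package:

```python
# Verify a detached Ed25519 signature over a firmware image before installing it.
# Hypothetical illustration: key handling and update flow are simplified.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def firmware_is_genuine(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
    key = Ed25519PublicKey.from_public_bytes(vendor_pubkey)
    try:
        key.verify(signature, image)    # raises InvalidSignature if the image was tampered with
        return True
    except InvalidSignature:
        return False

# A programmer terminal would refuse to flash anything for which this returns False.
```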

The manufacturer has emitted a bunch of advisories lately for its products regarding the pair’s discoveries:

  • MyCareLink Patient Monitor 24950 and 24952: “Successful exploitation of these vulnerabilities may allow an attacker with physical access to obtain per-product credentials that are utilized to authenticate data uploads and encrypt data at rest. Additionally, an attacker with access to a set of these credentials and additional identifiers can upload invalid data to the Medtronic CareLink network.” Changes have been made server-side to mitigate this. (CVE-2018-10626 and CVE-2018-10622)
  • MiniMed 508 Insulin Pump: “Successful exploitation of these vulnerabilities may allow an attacker to replay captured wireless communications and cause an insulin (bolus) delivery. This is only possible when non-default options are configured. Additionally, the pump will annunciate this by providing a physical alert, and the user has the capability to suspend the bolus delivery.” No mitigations planned. (CVE-2018-10634 and CVE-2018-14781)
  • N’Vision Clinician Programmer: “The 8840 Clinician Programmer executes the application program from the 8870 Application Card. An attacker with physical access to an 8870 Application Card and sufficient technical capability can modify the contents of this card, including the binary executables. If modified to bypass protection mechanisms, this malicious code will be run when the card is inserted into an 8840 Clinician Programmer.” No mitigations planned. (CVE-2018-10631 plus CVE-2018-8849)
  • 2090 Carelink Programmer: “Successful exploitation of these vulnerabilities may allow an attacker with physical access to a 2090 Programmer to obtain per-product credentials to the software deployment network. These credentials grant access to the software deployment network, but access is limited to read-only versions of device software applications.” That means a miscreant could download copies of the software using hardcoded login details.

    “Additionally, successful exploitation of these vulnerabilities may allow an attacker with local network access to influence communications between the Programmer and the software deployment network.” That means it’s possible for someone on the local network to tamper with pacemaker firmware as it is downloaded to the programmer. Changes were made server-side to thwart this meddling. (CVE-2018-5446, CVE-2018-544, and CVE-2018-10596)

Whether exploiting these is easier or harder than just stabbing, shooting, or poisoning a victim is an exercise we’ll leave to, er, well, hopefully no one.

Ionescu unveils low-level Windows 10 debug kit

Shortly before everyone headed off to catch their Vegas flights, an interesting new security and debugging tool was dropped for Windows 10.

Alex Ionescu’s r0ak utility lets users with an admin-level account get past all of Microsoft’s pesky security controls and execute code in ring 0, kernel-mode.

Ionescu, who has made something of a habit of cracking open Windows protections, says the tool is designed to help admins get a better handle on system-level events.

“For advanced troubleshooting, IT experts will typically use tools such as the Windows Debugger (WinDbg), SysInternals Tools, or write their own,” the guru explained. “Unfortunately, usage of these tools is getting increasingly hard, and they are themselves limited by their own access to Windows APIs and exposed features.”

Speaking of Windows, a desktop sandboxing feature may have been spotted in a public Insider beta this week.

There’s a new bug market in town

Researchers looking to make a living from bug discoveries will have one more place to do business this fall.

Exploit broker Crowdfense announced plans to launch a new service called the Vulnerability Research Platform. The program aims to streamline the process of testing, building, and selling proofs of concept for both individual and chained exploits.

“Through the VRP, Crowdfense experts work in real time with researchers to evaluate, test, document and refine their findings,” said Crowdfense director Andrea Zapparoli Manzoni.

“The findings can be both within the scope of Crowdfense public Bug Bounty Program or freely proposed by researchers (for a specific set of key targets).”

From the sound of things, Crowdfense wants to make it easier for researchers to report and get top dollar for their discoveries. While this might bring to mind images of covert government exchanges, more likely the buyers will be the companies themselves or security firms looking to tout protection from the latest high-profile security holes.

Comcast (again) irks customers (again), this time with a data leak

Stop us if you’ve heard this one: Comcast has done something else to piss off customers who will have little recourse.

This time, the cable giant has managed to let slip portions of customers’ home addresses or the last four digits of their social security numbers thanks to flaws in both its customer and dealer web log-in portals.

Apparently the holes have both been patched, and even when open an attacker would have been unable to get anything more than partial data for either the address or the social security number. But this is going to be yet another bit of bad press for a company that already has an awful reputation with customers.

The Pentagon’s latest security menace: fitness trackers

If you’re stationed abroad, you may no longer be able to post humblebrags about your daily workouts.

That’s because the Department of Defense has issued a directive that troops and department personnel in sensitive areas (i.e. any place where you wouldn’t want to be tracked) quit sharing their fitness data.

The reason is easy enough to understand: trackers and exercise apps often share GPS coordinates and other location data that could allow a hostile party to track – in some cases even pinpoint – a person’s location at any given time.
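How little data is needed is easy to show: one published workout coordinate plus a bit of spherical trigonometry tells an observer exactly how far a user is from a sensitive site. A toy example with made-up coordinates:

```python
# Great-circle (haversine) distance between a shared workout point and a
# hypothetical restricted location. Both coordinate pairs are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))    # Earth radius ~6371 km

restricted_site = (34.0000, 44.0000)   # hypothetical sensitive location
workout_point = (34.0020, 44.0015)     # a point scraped from a shared activity

print(f"{haversine_km(*restricted_site, *workout_point):.2f} km from the restricted site")
```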

The Pentagon says discretion on the ban will be given to military commanders and department heads, who will get to determine just how much info about the day’s workout their subordinates can safely share.

Que malo! MongoDB screw-up by Mexican health provider exposes patient data

This week in “forgot to set any sort of security on the cloud database”, we have Hova Health, a medical provider and unwitting records dealer from Mexico.

According to researcher Bob Diachenko, someone at Hova neglected to restrict access to the company’s MongoDB records database, leaving the entire collection exposed to the open internet (Diachenko discovered the cache via Shodan).
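A minimal sketch of the check Hova could have run against its own host: if an unauthenticated client can list databases, the instance is wide open. The hostname below is hypothetical and pymongo is assumed to be installed.

```python
# Returns True if the MongoDB instance accepts unauthenticated queries.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def is_wide_open(uri: str) -> bool:
    client = MongoClient(uri, serverSelectionTimeoutMS=3000)
    try:
        client.list_database_names()   # only succeeds when no authentication is enforced
        return True
    except (OperationFailure, ServerSelectionTimeoutError):
        return False

print(is_wide_open("mongodb://db.example.internal:27017"))
```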

Diachenko estimates that, in total, around 2.3 million people in Mexico had their name, gender, national ID number, insurance details, date of birth, and home address left sitting out in the open. Diachenko said he notified the company and it is reviewing the incident. Hopefully that includes learning how to set access policies on its databases. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/11/security_roundup/

Hackers can cook you alive using ‘microwave oven’ sat-comms – claim

Black Hat Four years ago, IOActive security researcher Ruben Santamarta came to Black Hat USA to warn about insecurities in aircraft satellite-communication (SATCOM) systems. Now he’s back with more doom and gloom.

During a presentation at this year’s hacking conference in Las Vegas this week, he claimed he has found a host of flaws in aircraft, shipping, and military satellite comms equipment.

These security shortcomings can, it is alleged, be exploited to snoop on transmissions, disrupt transportation, infiltrate computers on military bases, and more – including possibly physically directing antennas at nearby fleshy humans and using the high-frequency microwave-band electronics to bathe them in unwanted amounts of electromagnetic radiation.

“It’s pretty much the same principle as a microwave oven,” he told The Register. “The flaws allow us to ramp up the frequency.”

The vulnerabilities stem from a variety of blunders made by SATCOM hardware manufacturers. Some build backdoors into their products for remote maintenance, which can be found and exploited, while other equipment has been found to be misconfigured or using hardcoded credentials, opening them up to access by miscreants. These holes can be abused by a canny hacker to take control of an installation’s antenna, monitor the information the data streams contain, and in some cases change where it is pointing.

“Some of the largest airlines in the US and Europe had their entire fleets accessible from the internet, exposing hundreds of in-flight aircraft,” according to Santamarta. “Sensitive NATO military bases in conflict zones were discovered through vulnerable SATCOM infrastructure. Vessels around the world are at risk as attackers can use their own SATCOM antennas to expose the crew to radio-frequency radiation.”

Essentially, think of these vulnerable machines as internet-facing or network-connected computers, complete with exploitable remote-code-execution vulnerabilities. Once you’ve been able to get control of them – and there are hundreds exposed to the internet, apparently – you can disrupt or snoop on or meddle with their communications, possibly point antennas at people, and attack other devices on the same network.


This is all particularly worrying for military antennas. Very often these are linked to GPS units, and an intruder could use this data to divine the location of military units, as well as siphon off classified information from the field. Similar SATCOM systems are often used by journalists in trouble spots; unwelcome press interest could be targeted, perhaps terminally.

In satellite-communications units for the shipping industry, Santamarta said he found flaws that could be used to identify where a particular vessel was, and also damage installations by overdriving the hardware. Malicious firmware could be installed to interfere with positioning equipment, and lead ships astray, it was claimed.

Santamarta also postulated crews and passengers on container and cruise ships could be harmed by directing microwave-band antennas at them. There are safeguards to stop equipment from being pointed at people and effectively used as radio-frequency weapons, but those could be overridden, he claimed. The amount of harm caused, if any, of course, depends on the power of the system.

Mitigations

Some of these software flaws remain unpatched, as manufacturers continue to develop updates, while others, privately disclosed to vendors, have been fixed.

He also claimed it is possible to take over an aircraft’s satellite-communications system from the ground, depending on the model, and then potentially not only commandeer the in-flight Wi-Fi access point but also menace devices of individual passengers. The in-flight wireless network could also be hacked while onboard the airplane, we’re told, if you’d rather not go the SATCOM route.

It would not be possible for him to hijack the aircraft’s core control systems, though, as these are kept strictly separate and locked down. The aircraft SATCOM holes have since been fixed, he told the conference. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/satellite_communications_microwave_oven_hacking/

Google Spectre whizz kicked out of Caesars, blocked from DEF CON over hack ‘attack’ tweet

Updated At midnight on Thursday, Matt Linton, a senior Google bod who was one of the key players in sorting out the Spectre CPU security hole mess, went to his hotel room in Caesars Palace, Las Vegas – and found his key no longer worked.

When he went to reception to find out what the problem was, he was met by two security guards who took him to the room, told him to pick up his stuff, and escorted him off the premises. He was also given a written warning that he would be prosecuted if he set foot in the hotel again, which, considering it’s the venue for this year’s DEF CON hacking conference in the US, is an almighty embuggerance. The event is the one conference all hardcore hackers try to get to, and now he’s effectively barred from attending.

The hotel’s security director told him “they don’t take kindly to threats,” according to Linton. “Sir, your speech has consequences, so you better think about that in the future before you threaten,” was another comment from the security team, the Googler recalled.

This all stemmed from a jokey tweet emitted by Linton on Wednesday, which you can read in full below:

While somewhat off-color, anyone with an ounce of sense could see that this was a joke about how hackers prefer to go after lucrative targets. And by “attack,” the Googler – whose job title is “chaos specialist” – meant “hack,” not physically beat someone up. Yet, it was enough to earn him a visit from the Las Vegas Police Department the next day, hours before his eviction from Caesars.

By the account of one person who was there, the matter was quickly and amicably resolved between Linton and the cops. Once the techie explained the context of the quip, the officers were completely satisfied, and even liked and retweeted his tweet clarifying his earlier comment.

So the matter appeared settled. Then, at around 12am on Friday, Linton was booted out by Caesars. He was charged the half-a-day rate for his room before being unceremoniously ejected onto the Las Vegas Strip in the early hours of the morning, with little hope of finding another room.

Linton told The Register that “[the hotel] definitely told me that the conference organizers were worried about my ‘threat to their venue’.” This seems highly unlikely: you’d have thought the DEF CON organizers would be able to see the tweet for what it was, and understand the joke even though it was poking fun at DEF CON attendees.

What’s more likely is that the tweet collided with tensions in Las Vegas over gun violence. On October 1 last year, the city suffered one of the worst mass shootings in American history when a scumbag whose name isn’t worth remembering shot and killed 58 people and injured 851 others from his window in the Mandalay Bay hotel – which coincidentally hosts the Black Hat USA conference Linton spoke at earlier this week.

The atrocity hit Sin City hard, and inspired the #vegasstrong movement. It also put the police on high alert to prevent any repeat of the slayings. Noted security journalist and author Kim Zetter, who was also attending this year’s conferences, had her room at the Mandalay Bay forcibly searched by hotel security after she declined to allow housekeeping in.

Blowback

Given it’s the wee hours of the morning here in Las Vegas, at time of writing, there has been no response from the hotel’s PR bods about the situation.

DEF CON organizers told El Reg they haven’t seen nor made any complaints about Linton’s tweet. “I don’t actually think anyone at DEF CON complained – I think [the hotel employee] was just trying to make me feel like nobody was on my side so I would stop asking for escalations,” Linton told The Reg.

In the opinion of your humble vulture, someone at Caesars probably panicked, and decided to kick Linton out just to be on the safe side. This is, after all, the land of the lawsuit, and corporations are risk averse.

After the Mandalay Bay mass-murder, the litigation paperwork started flying, and MGM, which runs the hotel, actually sued the survivors to shield itself from liability – the first time such a tactic had been seen.

Linton, a member of Google’s security incident response and forensics team, is respected in the infosec industry – not just for his Spectre cleanup effort, but also because he does important work mentoring younger security talent. He is also a volunteer emergency medical technician who heads to disaster zones when the need arises. Locking him out of DEF CON threatens to cast a shadow over the conference, and won’t help convince those who attend that they are in a friendly environment. ®

Updated to add at 2230 UTC (1530 PT)

Linton has been unbanned from Caesars’ properties, and thus will be allowed to attend DEF CON within the hotel complex, The Register understands.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/google_matt_linton_caesars_def_con/
