STE WILLIAMS

Apple iOS iBoot Secure Bootloader Code Leaked Online

Lawyers for Apple called for the source code to be removed from GitHub.

Apple has taken legal action to remove its iBoot source code from GitHub after the bootloader code showed up there this week, Naked Security reports.

iBoot, which runs when an iOS device is powered on and before the OS kernel loads, ensures the integrity of the iOS device software.

Lawyers for Apple issued a takedown order under the Digital Millennium Copyright Act (DMCA), and the DMCA notice now replaces the code dump on GitHub. It’s unclear how or why the source code was uploaded online.

Read more about the leak here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/apple-ios-iboot-secure-bootloader-code-leaked-online/d/d-id/1331015?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New POS Malware Steals Data via DNS Traffic

UDPoS is disguised to appear like a LogMeIn service pack, Forcepoint says.

Researchers at Forcepoint have discovered new point-of-sale (POS) malware disguised as a LogMeIn service pack that is designed to steal data from the magnetic stripe on the back of payment cards.

The malware, which Forcepoint is calling UDPoS, is somewhat different from the usual POS tools in that it uses UDP-based DNS traffic to sneak stolen credit and debit card data past firewalls and other security controls. It is also one of the few new POS malware tools to surface in some time, according to the company.
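DNS tunneling of this sort typically packs stolen data into the hostname portion of lookups to an attacker-controlled domain. As an illustration only (the thresholds below are assumptions for the sketch, not Forcepoint’s detection logic), a defender might flag suspiciously long, high-entropy query labels:

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of a DNS label, in bits per character."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_tunnel(qname: str, max_label_len: int = 30,
                      entropy_cutoff: float = 3.5) -> bool:
    """Flag query names whose leftmost label is unusually long and random-looking."""
    first = qname.split(".")[0]
    return len(first) > max_label_len and label_entropy(first) > entropy_cutoff

# A normal lookup vs. an encoded, exfiltration-style query name:
print(looks_like_tunnel("www.example.com"))                                      # → False
print(looks_like_tunnel("abcdefghijklmnopqrstuvwxyz0123456789abcd.evil.example"))  # → True
```

Real detections would combine several signals (query volume per host, label length distribution, destination domain reputation) rather than a single entropy cutoff.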

In recent years, the US, like many other countries, has switched from magnetic-stripe cards to chip-and-PIN cards based on the Europay, Mastercard, and Visa (EMV) standard. The transition has made it harder for criminals to steal payment card data using POS malware, as they did in the massive theft at Target in 2013.

However, malware like UDPoS suggests that criminals still see an opportunity to steal data from POS systems. For instance, Trend Micro last year reported on MajikPOS, a POS malware family that was used to steal data on more than 23,300 payment cards. Retailer Forever 21, which is investigating a data breach first reported last November, recently disclosed finding malware on some of its POS systems.

Luke Somerville, head of special investigations at Forcepoint, says there’s no evidence to show that UDPoS is currently being used to steal credit or debit card data. But Forcepoint’s tests have shown that the malware is indeed capable of doing so successfully.

In addition, one of the command and control servers with which the malware communicates was active and responsive during Forcepoint’s investigation of the threat. “[This] implies that the authors were at least prepared to deploy this malware in the wild,” Somerville says.

Among the likely targets of the malware are POS systems in hotels and restaurants and any other location with handheld devices for swiping credit and debit cards.

“This malware targets Windows-based systems,” Somerville notes. “Legacy POS systems are often based on variations of the Windows XP kernel. Large retailers who have not recently updated their systems could potentially have hundreds or even thousands of vulnerable machines.”

Forcepoint discovered the malware when investigating an apparent LogMeIn service pack that was generating a notable amount of unusual DNS requests. The company’s analysis of the malware showed it contacting a command and control server that also had a LogMeIn-themed identity.

There is no evidence that LogMeIn’s remote access service or products have been abused in any way as part of the malware deployment process, says Somerville. Instead, the authors of UDPoS appear to be simply using the LogMeIn brand as a sort of camouflage. “Using the name of a legitimate product as the theme of the file and service names is effectively an attempt to limit suspicion over the presence of these artifacts on infected machines,” he says.

Forcepoint itself has no insight into the process that the malware authors have used or plan to use to deliver UDPoS on point-of-sale systems. But the use of the LogMeIn brand to disguise the malware is not accidental. Many retailers and other organizations use LogMeIn’s software to enable remote management of their POS systems.

Given the filenames that have been chosen, it is clear that the authors of the malware are hoping to sneak their malware into these systems in the guise of a LogMeIn software update, Somerville says.

LogMeIn itself issued an alert this week warning its users not to fall for the scam. “According to our investigation, the malware is intended to deceive an unsuspecting user into executing a malicious email, link or file, possibly containing the LogMeIn name,” the company noted.

The alert reminded organizations of LogMeIn’s policy of never using an attachment or a link for updating its software.

“UDPoS appears to have drawn inspiration from several other POS malware families,” Somerville says. “While none of the individual features are entirely unique, the combination of them appears to be a deliberate attempt to draw together successful elements of other campaigns.”

Related content:

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/new-pos-malware--steals-data-via-dns-traffic/d/d-id/1331022?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Uber data breach aided by lack of multi-factor authentication

For Uber CISO John Flynn, having to explain the company’s massive 2016 data breach to a Senate hearing was never going to be an easy day out.

There are two strands to this incident – the company’s handling of the breach of 57 million customer and driver records once it found out about it, and the technical failings that allowed it to happen in the first place.

The first we already know a bit about, principally that the company realised it had been breached in November 2016 when it was sent a $100,000 ransom note. That ransom was paid through the company’s bug bounty programme, allegedly in the hope nobody would notice.

Uber then failed to tell anyone about the breach for a year, finally owning up to what had happened last November.

In an attempt to limit further criticism of Uber’s ethical failings, Flynn tried in his testimony this week to come clean about the security weaknesses that led to the breach.

According to Flynn, the hackers were able to access backup files on an Amazon AWS bucket after finding the credentials to access it inside code that had been posted to a weakly-secured GitHub repository.
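Leaks like this are often catchable before code is ever pushed. As a hedged sketch (the regex covers only the well-known `AKIA`/`ASIA` access-key-ID prefixes, a common heuristic rather than an official AWS specification), a minimal secret scan might look like:

```python
import re

# AWS access key IDs follow a widely documented shape: a four-letter prefix
# such as "AKIA" or "ASIA" followed by 16 uppercase alphanumerics. This is a
# heuristic; real secret scanners check many more patterns.
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_leaked_keys(source: str):
    """Return any substrings of `source` that look like AWS access key IDs."""
    return [m.group(0) for m in ACCESS_KEY_RE.finditer(source)]

# AWS's own documentation uses this deliberately invalid example key ID:
snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_leaked_keys(snippet))  # → ['AKIAIOSFODNN7EXAMPLE']
```

A scan like this wired into a pre-commit hook or CI job is exactly the kind of cheap control that would have flagged the credentials before they reached the repository.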

But how had they accessed the repository?

Presumably by brute-forcing the password, which was a viable attack method because multi-factor authentication (which GitHub offers in several forms) had not been turned on.

As Flynn states:

We immediately took steps to implement multi-factor authentication for GitHub and rotated the AWS credential used by the intruder.

Standard practice, of course, although to anyone listening this will have sounded like the perfect definition of how to bolt a stable door with the horse miles down the road.

He concludes:

Despite the complexity of the issue and the limited information with which we started, we were able to lock down the point of entry within 24 hours.

As far as Flynn and Uber are concerned, the company’s technical reflexes were good. Its security people got busy, closed the weakness, locked out the bad guys and stopped using GitHub except to post open source code.

And yet in the same testimony, Flynn admits that “to the best of Uber’s knowledge” the hackers gained access to the AWS bucket on October 13, 2016, a full month before Uber received the ransom note telling it the bad news.

This means that the only reason the company knew it had suffered a calamitous data breach was because the hackers told it so.

We don’t know what sort of logging Uber was using to track access to its external data, but this might have provided an early warning had it been in use.

Ditto, multi-factor authentication, which should have been implemented at the highest level available across all external services (with someone checking to make sure this was being done correctly).

Companies reading of Uber’s woes would do well not to gloat, however. The correct reaction should be to learn from its public mistakes to avoid making the same ones.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jtWJhZtSBF8/

From July, Chrome will name and shame insecure HTTP websites

Three years ago, Google’s search engine began favoring in its results websites that use encrypted HTTPS connections.

Sites that secure their content get a boost over websites that use plain-old boring insecure HTTP. In a “carrot and stick” model, that’s the carrot: rewarding security with greater search visibility.

Later this year comes the stick. This summer, Google will mark non-HTTPS websites as insecure in its Chrome browser, fulfilling a plan rolled out in September 2016.

Starting with Chrome 68, due to hit the stable distribution channel in July 2018, visiting a website over an HTTP connection will prompt the message “Not secure” in the browser’s omnibox – the display and input field that accepts both URLs and search queries.

“Chrome’s new interface will help users understand that all HTTP sites are not secure, and continue to move the web toward a secure HTTPS web by default,” Google explained in a draft blog post due to be published today and provided in advance to The Register.


Because Chrome holds something like 56 per cent of the global browser market share across mobile and desktop platforms, Google’s name-and-shame label is likely to be noticed by a great many Chrome users – and by any websites those users stop visiting over security concerns.

While many websites will be affected, plenty are already in compliance. According to Google, 81 of the top 100 websites use HTTPS by default, over 68 per cent of Chrome traffic on Android and Windows occurs over HTTPS, and over 78 per cent of Chrome traffic on Chrome OS and macOS and iOS travels securely.

Google offers a free security auditing tool called Lighthouse that can help developers identify which website resources still load using insecure HTTP.
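Lighthouse aside, the basic check is simple: walk a page’s markup and list any resources still referenced over `http://`. A minimal sketch using Python’s standard-library HTML parser (this is not how Lighthouse itself works, just an illustration of the idea):

```python
from html.parser import HTMLParser

class InsecureResourceFinder(HTMLParser):
    """Collect src/href attribute values that load over plain HTTP."""

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                self.insecure.append(value)

page = ('<img src="http://cdn.example.com/logo.png">'
        '<script src="https://example.com/app.js"></script>')
finder = InsecureResourceFinder()
finder.feed(page)
print(finder.insecure)  # → ['http://cdn.example.com/logo.png']
```

Even with the page itself served over HTTPS, resources like the image above would trigger mixed-content warnings, so both need fixing before July.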

The Chocolate Factory’s shunning scheme follows a similar tack to the warnings the company has issued to websites that rely on dodgy Symantec digital certificates. ®

PS: You can get free legit SSL/TLS certificates to make your site HTTPS from Let’s Encrypt.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/08/google_chrome_http_shame/

Now that’s taking the p… Sewage plant ‘hacked’ to craft crypto-coins

Infosec bods say they have uncovered what’s thought to be the first case of a major industrial control system network infected with cryptocurrency-mining malware.

SCADA security outfit Radiflow claimed today it found the software nasty lurking in computer systems at a water treatment facility. Several servers used to monitor and regulate critical water supplies were found to have been infected with code that quietly harvested Monero cyber-dosh and sent the coins over the internet to its masterminds, we’re told.

The malicious software was, we’re told, chewing up processor time, noisily shifting data over the network, and exploiting the fact that industrial networks tend not to be running the latest security patches – typically because they oversee critical processes that cannot be interrupted or knocked out by bad updates.

“Cryptocurrency malware attacks involve extremely high CPU processing and network bandwidth consumption, which can threaten the stability and availability of the physical processes of a critical infrastructure operator,” said Yehonatan Kfir, chief tech officer at Radiflow.

“PCs in an OT [operational technology] network run sensitive HMI [human-machine interface] and SCADA [supervisory control and data acquisition] applications that cannot get the latest Windows, antivirus and other important updates and will always be vulnerable to malware attacks.”


The malware family caught on the water utility’s equipment wasn’t named, and it sounds relatively sophisticated – more than a JavaScript miner running on a webpage on someone’s laptop. It used obfuscation techniques, we’re told, such as shutting down any installed antivirus tools, and was designed to be stealthy to maximize its moneymaking before it could be discovered.

The software nasty was apparently spotted thanks to researchers noticing unusual spikes in unexpected HTTP communications from the infiltrated hardware, and after the computers tried to send data to servers already identified as malware command-and-control machines. The hidden miner has since been removed from the sewage plant’s systems, it is claimed.
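The second signal mentioned here – traffic to already-identified command-and-control machines – boils down to matching outbound destinations against an indicator list. A toy sketch, with entirely hypothetical addresses pulled from the reserved documentation ranges:

```python
# Hypothetical indicator list; a real deployment would pull these from a
# threat-intelligence feed and match on domains and URL patterns too.
C2_BLOCKLIST = {"203.0.113.7", "198.51.100.23"}

connections = [
    ("10.0.0.5", "93.184.216.34"),  # workstation -> ordinary web server
    ("10.0.0.8", "203.0.113.7"),    # SCADA host -> blocklisted C2 address
]

def flag_c2_traffic(conns, blocklist):
    """Return (source, destination) pairs whose destination is blocklisted."""
    return [(src, dst) for src, dst in conns if dst in blocklist]

print(flag_c2_traffic(connections, C2_BLOCKLIST))
# → [('10.0.0.8', '203.0.113.7')]
```

On an OT network, where legitimate destinations are few and predictable, even this crude allow/deny matching catches a surprising amount.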

Currency mining infections are fast becoming the preferred method for online scumbags to make a fast buck. Even ransomware is losing ground to mining infections, thanks in part to people keeping better backups and antivirus tools blocking extortionware.

There’s no word on how the malware got onto the SCADA network in the first place. It may have been planted by a rogue employee, introduced via an open hardware port, or possibly delivered through a network service left open by a careless admin.

We’ve pinged Radiflow, based in New Jersey, USA, for more information – we’ll let you know if they get back to us. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/08/scada_hackers_cryptocurrencies/

20 Signs You Need to Introduce Automation into Security Ops

Far too often, organizations approach automation as a solution looking for a problem rather than the other way around.

I’ve always been a big fan of efficiency. During the period in my career when I built a number of different security operations and incident response programs, I always tried to identify areas where valuable analyst cycles were being spent on time sinks. These time sinks typically involved manual, labor-intensive, often repetitive tasks. If a time sink did not add value to security operations, it could simply be eliminated. If a time sink was deemed essential to security operations, it was an ideal candidate for automation.

Today, talk of automation is everywhere in security. Automation can be a great way to bring about efficiencies, but only if it is done right. Far too often, organizations approach automation as a solution looking for a problem rather than the other way around. How can organizations approach automation intelligently and identify areas that are good candidates for automation? To answer this question, I offer 20 additional questions:

Image Credit: DuMont Television/Rosen Studios. Public domain, via Wikimedia Commons.

  1. Does your strategic intelligence come in the PDF format? We all need to understand the risks and threats to our organizations at a strategic level, but combing through pages and pages of free-form text isn’t the best way to arrive at something actionable that adds value.
  2. Do you make the rounds of several different security news and information sites each day? A noble effort for sure. But first take time to understand exactly what you are looking for and how to integrate it into the security workflow.
  3. Where do you get your intelligence? If you mainly source it from emails and portals that require a manual login, it’s time to examine what can be done to automate this activity.
  4. How much time do you spend manually cutting and pasting indicators and other such data each day? 
  5. Do you constantly flip between multiple tools and screens? Sometimes this is unavoidable. But other times, technology can help alleviate the “swivel chair” effect.
  6. Do you find yourself manipulating data in Excel (or another such tool) when vetting, qualifying, and/or investigating alerts? 
  7. How often do you cut and paste queries between tools? This is one of the classic time sinks.
  8. How often do you cut and paste query results into and out of incident tickets? 
  9. Do you need to visit numerous log or data sources to get the visibility you need when investigating an alert? This is probably a good opportunity to think about consolidation.
  10. What do I mean by question 9? Is there a more generalized data source that can subsume the data provided by numerous highly specialized data sources?
  11. Do you find yourself running the same queries over and over again? 
  12. Do you know what is on your network, and how it is expected to communicate? Or do you find yourself needing to identify that manually, time and time again?
  13. When a vulnerability is announced, do you find yourself manually performing a set intersect between the vulnerability announcement, your asset database, endpoint data, log data, and network data?
  14. Do you find yourself repeatedly running manual reports for management and executives? Better to take time to understand what they are looking for and to figure out how to get it to them automatically.
  15. Do you find yourself perpetually renewing that consulting contract? Technology has its limitations, and consultants can certainly help in certain situations. But these should be stopgap measures rather than permanent solutions.
  16. When performing vendor risk assessment, do you find yourself buried under a pile of spreadsheets? 
  17. When customers attempt to assess the risk you as a vendor introduce to them, do you find yourself filling out what seems to be the same spreadsheet over and over again?
  18. Do standards and regulations compel you to sit for days on end with auditors and consultants? Perhaps it’s worth thinking about automating the way in which you manage and navigate this process.
  19. When wrapping up an incident investigation, do you have to manually check if systems have been remediated and that the issue has indeed been eradicated?
  20. Do you find yourself manually generating post-incident reports on a continual basis? 
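Question 13’s “set intersect” is a natural fit for automation, since it is literally set arithmetic. A toy sketch, with hypothetical package inventories standing in for the vulnerability announcement and asset database:

```python
# Hypothetical data: package/version pairs named in a vulnerability
# announcement, and a per-host software inventory from an asset database.
vulnerable_versions = {("openssl", "1.0.2k"), ("apache-httpd", "2.4.27")}

asset_db = {
    "pos-01": {("openssl", "1.0.2k"), ("apache-httpd", "2.4.41")},
    "pos-02": {("openssl", "1.1.1g")},
    "web-01": {("apache-httpd", "2.4.27"), ("openssl", "1.0.2k")},
}

def affected_hosts(assets, vulns):
    """Map each host to the vulnerable packages it runs (set intersection)."""
    hits = {host: pkgs & vulns for host, pkgs in assets.items()}
    return {host: pkgs for host, pkgs in hits.items() if pkgs}

affected = affected_hosts(asset_db, vulnerable_versions)
print(sorted(affected))  # → ['pos-01', 'web-01']
```

Feed the same logic from live inventory data and the manual spreadsheet exercise disappears entirely.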

There is certainly no shortage of opportunity to introduce automation into the security operations and incident response workflow. Although a blanket approach to automation is seldom productive, targeting automation to specific manual, labor-intensive, and repetitive processes can introduce large efficiencies into a security organization. If done right, the effort will pay large dividends in time and cost savings.

 

Josh is an experienced information security leader with broad experience building and running Security Operations Centers (SOCs). Josh is currently co-founder and chief product officer at IDRRA. Prior to joining IDRRA, Josh served as vice president, chief technology officer, …

Article source: https://www.darkreading.com/operations/20-signs-you-need-to-introduce-automation-into-security-ops/a/d-id/1331005?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Tennessee Hospital Hit With Cryptocurrency Mining Malware

Decatur County General Hospital is notifying 24,000 patients of cryptocurrency mining software on its EMR system.

Decatur County General Hospital (DCGH) in Parsons, Tennessee, recently discovered cryptocurrency mining malware on its Electronic Medical Record (EMR) server. The hospital began informing 24,000 patients of the attack on January 26.

On November 27, 2017, the hospital received a security incident report from its EMR system vendor, which said unauthorized software, designed to mine cryptocurrency, had been installed on the server supported by the vendor. An ongoing investigation has indicated an unauthorized attacker accessed the server with the EMR system and injected the software.

The hospital’s EMR server contained data including patient names, addresses, birthdates, and social security numbers, as well as diagnosis and treatment data. There is no evidence either type of data was taken or viewed, and so far it doesn’t seem data theft was the attacker’s goal. However, the hospital cannot definitively prove data was not compromised and is therefore notifying patients.

DCGH has not named the EMR system vendor and is offering patients the myTrueIdentity online credit monitoring service for one year. Read more details here.


Article source: https://www.darkreading.com/attacks-breaches/tennessee-hospital-hit-with-cryptocurrency-mining-malware/d/d-id/1331014?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook HOAX! New algorithm will NOT only show you 26 friends

Have you noticed something different about your Facebook newsfeed lately? Have you wondered if it has a new algorithm?

As in, a new algorithm that features your friends’ frantic posts about a new algorithm that throttles Newsfeed posts down to only those from 26 Facebook-selected friends?

Guess what, friends…. Facebook’s algorithm now chooses your 26 FB friends. If you can read this, please leave me a “hi,” whatever, so you will appear in my news feed.

Feel free to copy and paste on your wall, too, if you want to see more than FB’s algorithmic selection. FB shouldn’t choosing my friends.

Don’t worry. There’s no new algorithm, at least not that we know of. Your friends have just been taken in by the latest Facebook hoax. Don’t give in to their pleas to just post a quick “hi, whatever” – anything, really, just because they MISS YOU! and don’t want Facebook to choose which friends they see.

The viral rumor has been debunked by Snopes.

Snopes had skin in the game with this one: As the hoax slayers described on Monday, one version of the rumor added another layer of flame-broiled bogus onto the whopper – that a friend had checked Snopes and found that “yes it’s TRUE,” (no, it’s NOT), that Newsfeed was showing only posts from the “same few people, around 25, repeatedly the same, because Facebook has a new algorithm.”

As Snopes points out, the algorithm hoax followed on the heels of a real Facebook announcement from 11 January about a major overhaul in how newsfeed works.

It wasn’t about squeezing out your friends, though. In fact, Facebook had the opposite in mind: it said it was working on turning the tables on the explosion of posts from businesses, brands and media that had been squeezing out personal content from friends and family.

Snopes contacted Facebook to ask whether the claim of limiting personal interactions had merit. A representative said no, it does not. In fact, the rumor, which has gone viral by lying its way to the top of newsfeeds, has done so by tricking users into liking and sharing it, not because it has even a vague aroma of truth about it.

Don’t like the hoax posts. Don’t share them. Don’t add any comments to these posts, either: that will just increase their ranking. Rather, send a private message to those who’ve fallen for the hoax, gently talking friends and family out of sharing or participating in any way.

Let’s not be pawns in these pranksters’ games.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wVBoHUNb3TE/

Deepfake porn videos banned by Reddit, Twitter, Pornhub

The home of deepfakes – videos with celebrities’ or civilians’ faces stitched in via artificial intelligence (AI) – has kicked them out.

After keeping mum while the issue exploded over the past few weeks, on Wednesday Reddit banned deepfakes.

Or, rather, Reddit clarified that its existing ban on involuntary pornography, which features people whose permission has neither been sought nor granted, covers faked videos.

Up until Wednesday, Reddit had a single policy for two rule breakers: involuntary pornography and sexual or suggestive content involving minors. It’s split that single policy into two distinct policies, setting the involuntary pornography ban to stand on its own and adding language that specifies deepfakes:

Reddit prohibits the dissemination of images or video depicting any person in a state of nudity or engaged in any act of sexual conduct apparently created or posted without their permission, including depictions that have been faked.

The platforms that put up with the involuntary induction of people into becoming porn stars are steadily dwindling in number: both Twitter and the giant pornography site Pornhub have also banned deepfakes, likewise calling them nonconsensual porn that violates their terms of service (TOS).

From an email statement a Pornhub spokesperson sent to Motherboard:

We do not tolerate any nonconsensual content on the site and we remove all said content as soon as we are made aware of it. Nonconsensual content directly violates our TOS and consists of content such as revenge porn, deepfakes or anything published without a person’s consent or permission.

Pornhub previously told Mashable that it’s already taken down deepfakes flagged by users.

Corey Price, PornHub’s vice president, told Mashable that users have started to flag deepfakes and that the platform is taking them down as soon as it encounters the flags. Price encouraged anyone who finds nonconsensual porn to visit Pornhub’s content removal page to lodge an official takedown request.

As for Twitter, a spokesperson said that the platform isn’t just banning deepfakes; it’s also going to suspend user accounts of those identified as original posters of nonconsensual porn or accounts dedicated to posting the content.

You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.

We will suspend any account we identify as the original poster of intimate media that has been produced or distributed without the subject’s consent. We will also suspend any account dedicated to posting this type of content.

Reddit, Twitter and Pornhub join earlier deepfake bans by chat service Discord and Gfycat, a former favorite for posting deepfakes from the Reddit crowd. Reddit is where the phenomenon first gained steam. Make that a LOT of steam: the r/deepfakes subreddit had over 90,000 subscribers as of Wednesday morning before it was taken down.

It’s not only celebrities who can be cast as porn stars, of course, and it’s not just porn that’s being fabricated. We’ve seen deepfakes that cast Hitler as Argentine President Mauricio Macri, plus plenty of deepfakes featuring President Trump’s face on Hillary Clinton’s head.

As far as the legality of deepfakes goes, Charles Duan, associate director of tech and innovation policy at the R Street Institute, an advocacy think tank, told Motherboard that the videos infringe the copyrights of both the porn performers whose bodies are used in the underlying videos and the celebrities whose faces, taken from interviews or copyrighted publicity photos, are glued onto those bodies. These groups could seek protection by filing a Digital Millennium Copyright Act (DMCA) report to initiate a takedown notice.

But he says it’s the video makers whose work is being appropriated, not the celebrities, who would have the solid copyright claim:

The makers of the video would have a perfectly legitimate copyright property case against the uploader, and they would be able to take advantage of the DMCA to have the website take the vid down. Chances are, a similar practice would work as well for these sorts of videos … Not on behalf of the victim [whose face is being used] but the maker of the video. Which is the weird thing about that whole situation.

Reddit said that this is about making Reddit a “more welcoming environment for all users.”

In Reddit’s announcement of the deepfakes ban, one admin, landoflobsters, praised the r/deepfakes moderators for already having their own verification process in place to prevent people posting images without permission.

Earlier on Wednesday, r/deepfakes moderators informed subscribers that deepfakes featuring the faces of minors would be banned without warning and that the posters would be reported to admins. The subreddit’s rules had also included a ban on using the faces of non-celebrities.

Good! But given that there was already a user-friendly app released that would generate deepfakes with a few button presses, the subreddit’s rules didn’t mean that non-celebrities’ and children’s faces wouldn’t be used to generate the videos. They just wouldn’t be tolerated on r/deepfakes… which itself is no longer tolerated on Reddit.

…or on Discord, Twitter, Pornhub, Gfycat, Imgur (which is also known to remove deepfakes, according to what was a running list on r/deepfakes), and… well, there will likely be more deepfake-banning services by the time this article is posted, at the rate this is going.

Is the genie back in the bottle? Doubtful! R/deepfakes members had already been discussing taking the whole kit and caboodle off Reddit et al. and setting up a distributed platform where they could happily AI-stitch away.

Reddit’s move is likely just a bump in the road. This technology isn’t going anywhere: it’s here to stay. And that’s not necessarily a bad thing.

The main problem here hasn’t been the technology; it’s the fact that the video creators are using it to produce explicit content without the permission of those involved.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/-P9QRX1uiLU/

Apple’s top-secret iBoot firmware source code spills onto GitHub for some insane reason

The confidential source code to Apple’s iBoot firmware in iPhones, iPads and other iOS devices has leaked into a public GitHub repo.

The closed-source code is top-secret, proprietary, copyright Apple, and yet has been quietly doing the rounds between security researchers and device jailbreakers on Reddit for four or so months, if not longer. Where exactly it came from, no one is sure for now.

Crucially, within the past day or so, someone decided to dump a copy of this secret sauce on popular developer hangout GitHub for all to find. Links to the files began circulating on Twitter in the past few hours.

The source was swiftly taken down following a DMCA complaint by Apple, which means the code must be legit or else Cupertino would have no grounds to strip it from the website. However, at least one clone of the software blueprints has re-emerged on GitHub, meaning you can find it if you look hard enough.

We’re not going to link to it. Also, downloading it is not recommended. Just remember what happened when people shared or sold copies of the stolen Microsoft Windows 2000 source code back in the day.

According to those who have looked through the leaked iBoot source, the blueprints look legit. They include low-level system code written in 32 and 64-bit Arm assembly, drivers, internal documentation, operating system utilities, build tools, and more. Every file of the code is marked “this document is the property of Apple Inc,” and: “It is considered confidential and proprietary. This document may not be reproduced or transmitted in any form, in whole or in part, without the express written permission of Apple Inc.”


iBoot is a second-stage bootloader that’s responsible for providing iOS’s Recovery Mode to fix kit that gets screwed up. It presents an interface on-screen and over a physical USB or serial connection. When not in the recovery position, it verifies that a legit build of iOS is present, and if so, it starts up the software when the iThing is powered on or rebooted. The bootloader is highly protected, is stored in an encrypted form on devices, and is key to maintaining the integrity of the operating system.
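The verified-boot idea can be illustrated in a few lines. This is a gross simplification for illustration only: real iBoot checks public-key signatures chained back to a key burned into the boot ROM, not a mutable table of hashes.

```python
import hashlib

# Simplified stand-in for a root of trust. In real devices this would be an
# immutable public key in the boot ROM, not a dictionary anyone can edit.
TRUSTED_DIGESTS = {}

def enroll_image(name: str, image: bytes) -> None:
    """Record the known-good SHA-256 digest of a boot-chain component."""
    TRUSTED_DIGESTS[name] = hashlib.sha256(image).hexdigest()

def verify_and_boot(name: str, image: bytes) -> str:
    """Boot only if the image matches its enrolled digest; else drop to recovery."""
    digest = hashlib.sha256(image).hexdigest()
    if TRUSTED_DIGESTS.get(name) == digest:
        return "booting " + name
    return "recovery mode"  # integrity check failed, akin to iBoot's Recovery Mode

kernel = b"\x7fELF...kernel bits..."
enroll_image("kernelcache", kernel)
print(verify_and_boot("kernelcache", kernel))           # → booting kernelcache
print(verify_and_boot("kernelcache", kernel + b"mod"))  # → recovery mode
```

The point of the sketch: any single-bit change to the image changes the digest, so a tampered OS never gets control.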

It can be abused to jailbreak iOS devices to install unofficial customizations and applications. Thus, releasing the source code does not put people directly at risk – it simply makes it easier for folks to find exploitable bugs, and leverage them to hijack iBoot and jailbreak iThings.

Specifically, what we’re talking about here is the source code to iOS 9’s iBoot, which was first released in 2015, although some of the files have a 2016 date in them. How this affects today’s devices running the latest software is unclear – some parts of the code may possibly linger on in iOS 11. No cryptographic keys or secrets have been found so far in the spilled documents.

For now, don’t panic.

No one’s going to hack your iPhone or iPad over the air, nor via a webpage or an app, from this leak. The code is useful for the tight-knit crowd of eggheads who like rummaging through firmware code looking for holes to exploit to jailbreak devices using a tethered physical connection to a computer. Apple has recently-ish stepped up its security, with its secure coprocessors and other measures, to thwart jailbreaks. Perhaps fans will now be able to find new ways to jailbreak and customize their iGear, now that the blueprints for the bootloader are sitting on the internet in plain sight.

Instead, we recommend you just sit back, relax, and marvel at how Apple somehow managed to lose control of such a central, critical and hush-hush component of its software stack. And wonder what else has leaked from Cupertino’s highly secretive idiot-tax operations.

Apple could not be reached for immediate comment. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/08/apple_iboot_source_code_leaked/