Facebook facial recognition: class action suit gets court’s go ahead

Yes, yet another US court has reaffirmed that Facebook users can indeed sue the company over its use of facial recognition technology.

The US Court of Appeals for the Ninth Circuit on Thursday affirmed the district court’s certification of a class action suit – Patel v. Facebook – that a steady progression of courts has allowed to proceed since it was first filed in 2015.

Though a stream of courts has refused to let Facebook wiggle out of this lawsuit – and boy oh boy, has it tried – this is the first decision of an American appellate court that directly addresses what the American Civil Liberties Union (ACLU) calls the “unique privacy harms” of ever-more ubiquitous facial recognition technology that’s increasingly being foisted on the public without our knowledge or consent.

The lawsuit was initially filed by some Illinois residents under Illinois law, but the parties agreed to transfer the case to the California court.

What the suit claims: Facebook violated Illinois privacy laws by “secretly” amassing the biometric data of the plaintiffs – Nimesh Patel, Adam Pezen and Carlo Licata – without their consent, collecting it and squirreling it away in what Facebook claims is the largest privately held database of facial recognition data in the world.

Specifically, the suit claims that Facebook didn’t do any of the following:

  • Properly inform users that their biometric identifiers (face geometry) were being generated, collected or stored.
  • Properly inform them, in writing, what it planned to do with their biometrics and how long the company planned to collect, store and use the data.
  • Provide a publicly available retention schedule and guidelines for permanently destroying the biometric identifiers of users who don’t opt out of “Tag Suggestions”.
  • Receive a written release from users to collect, capture, or otherwise obtain their biometric identifiers.

The Illinois law in question – the Illinois Biometric Information Privacy Act (BIPA) – bans collecting and storing biometric data without explicit consent, including “faceprints.” This is one of the first tests of the powerful biometrics privacy law. Another test of BIPA is a class action suit, proposed in September 2018, brought against the US fast-food chain Wendy’s over its use of biometric clocks that scan employees’ fingerprints to track them at work.

Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, had this to say about the court’s decision to let the Facebook facial recognition class action go ahead:

This decision is a strong recognition of the dangers of unfettered use of face surveillance technology.

The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.

In her opinion, Judge Sandra Segal Ikuta wrote that the court concludes that Facebook’s development of a “face template” using facial recognition, allegedly without consent, could well invade an individual’s privacy rights:

The facial-recognition technology at issue here can obtain information that is ‘detailed, encyclopedic, and effortlessly compiled,’ which would be almost impossible without such technology.

In short, yes, the court concluded: the plaintiffs have made a case for having allegedly suffered sufficient privacy injuries to have standing to sue.

Rebecca Glenberg, senior staff attorney at the ACLU of Illinois, said that with this court go-ahead, Illinois’s BIPA law has passed legal muster. Citizens can let the lawsuits fly for having their faceprints taken without consent, even if nobody has actually stolen or misused the data:

BIPA’s innovative protections for biometric information are now enforceable in federal court. If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court.

As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/En8PlzNDHD0/

Apple will hand out unlocked iPhones to vetted researchers

It’s been called an iPhone jailbreaker’s golden egg: a so-called “dev-fused” iPhone created for internal use at Apple, prized by outsiders because it can be used to extract and study the Secure Enclave Processor (SEP).

That golden yolk of a processor handles data encryption on the device that oh so many law enforcement and hacker types spend so much time, respectively, complaining about or cracking for fun, fame and profit.

Those rare, developer-only, “pre-jailbroken” iPhones have many security features disabled – a convenient feature for researchers looking to see how they tick and to discover iPhone zero days, which can be worth millions of dollars.

Well, here’s some good news for a select group of researchers: at the Black Hat 2019 security conference on Thursday, Apple’s head of security, Ivan Krstic, unveiled a new program through which the company is offering some form of pre-dev iPhones, specifically for security researchers.

CNet quoted Krstic:

This is an unprecedented, fully Apple-supported iOS security research platform.

As CNet reports, Apple is calling it the iOS Security Research Device Program. The program will launch next year.

Apple’s only handing out a limited number of the iPhones, and only to qualified researchers.

These are not exactly like the phones that Apple gives its own security researchers. They’ll come with what Krstic said are advanced debugging capabilities, but they won’t be as wide open as the jailbroken phones Apple insiders use – the kind that sometimes wind up on the black market as iPhones that either haven’t completed the production process or have been reverted to a development state.

Krstic said that the iPhones, while not being that open, will still provide ample details that can be used to hunt for vulnerabilities.

Sources told Forbes that one of the things that may turn these iPhones into a “lite” version of the jailbroken pre-dev phones is that Apple’s not likely to let researchers decrypt the iPhone’s firmware.

The vetted researchers who wind up getting their hands on one of the phones will, however, be able to do a whole lot more than they could with the commercially available version of Apple’s famously locked-down operating system. Forbes’s sources told the publication that one possible feature would be the ability to stop the phone’s processor and inspect memory for vulnerabilities, enabling researchers to see what’s going on at the code level when they attempt an attack.

This might not just be about boosting iPhone security. This could be an attempt to stem the black market trade in dev-fused iPhones: a market that came to light after Motherboard conducted a months-long investigation into how security researcher Mathew Solnik (presumably) got his hands on a dev-fused phone. Motherboard’s curiosity had been piqued after Solnik teased his 2016 Black Hat talk by tweeting a screenshot of a terminal window that showed that he’d obtained the SEP firmware. Motherboard’s sources had said that Solnik must have gotten a dev-fused iPhone to get at the SEP.

In other bug-hunting news

Speaking of very valuable bugs, also on Thursday, Apple announced that it’s now offering up to $1 million for a vulnerability that’s persistent, can get kernel code execution, and doesn’t require victims to click on anything.

It’s about time.

For quite a while, Apple ran an invitation-only bug bounty program for iOS, but offered nothing at all for macOS. It was a baffling approach to bugs, and one that miffed German bug hunter Linus Henze, whom Apple didn’t reward when he found, and published, a proof of concept he called KeySteal. KeySteal was a zero-day bug that could be exploited by attackers using a malicious app to drain passwords out of Apple’s Keychain password manager.

Henze initially refused to give Apple bug details, in protest of the company’s invite-only/iOS-only bounties. He eventually relented because, as he said, he cared about the security of macOS users.

This is a no-brainer. Mårten Mickos, CEO of bug bounty platform HackerOne, says the expanded program will garner attention and respect from ethical hackers:

Apple is known for its solid security practices. Increasing the bug bounties and broadening the scope is a natural step in strengthening their security posture and making it attractive for security researchers to spend time looking for vulnerabilities in Apple’s products (essentially their operating systems). Across the industry, we consistently see more engagement from ethical hackers when higher bounties are offered.

Getting on the million-dollar bandwagon

After all, when Apple isn’t handing out $1m bills, others step in to do the job and grab the goodies.

In 2015, a company called Zerodium offered up to $3,000,000 for iOS 9 jailbreak exploits. Within weeks, it reportedly paid $1,000,000 to a team that accomplished one of the remote browser-based iOS 9.1/9.2b jailbreaks that Zerodium wanted to buy.

Then, in August 2016, exploit broker Exodus Intelligence offered 2.5 times the bounty ($500,000 for major exploits in iOS 9.3 and above) that Apple was promising (up to $200,000) for serious iOS bugs.

Now, macOS bugs are up there in the seven-figure range for bug hunters. Good!

Kudos to Linus Henze for calling out this discrepancy: he made the right choice in the end by giving Apple details of his KeySteal attack, but at the same time, he managed to call attention to the puzzling lack of a bug bounty program for one of the world’s most ubiquitous operating systems.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SFMBPoyjIsk/

SELECT code_execution FROM * USING SQLite: Eggheads lift the lid on DB security hijinks

DEF CON At the DEF CON hacking conference in Las Vegas on Saturday, infosec gurus from Check Point are scheduled to describe a technique for exploiting SQLite, a database used in applications across every major desktop and mobile operating system, to gain arbitrary code execution.

In a technical summary provided to The Register ahead of their presentation, Check Point’s Omer Gull sets out how he and his colleague Omri Herscovici developed techniques referred to as Query Hijacking and Query Oriented Programming (QOP) in order to execute malicious code on a system. QOP is similar to return-oriented programming, which assembles malicious code from blocks of CPU instructions already present in a program’s RAM; the difference is that QOP does the assembling with SQL queries.

SQLite is built into all sorts of things, from web browsers to embedded devices to Android, Windows, iOS, various BSDs, and commercial software. An exploitable security hole found in SQLite would therefore be rather bad news because it could open up lots of stuff to potential attack.

It must be stressed, though, that to pull off Check Point’s techniques to hack a given application via SQLite, you need file-system access permissions to alter that app’s SQLite database file, and that isn’t always possible. If you can change a program’s database file, you can probably get, or already have achieved, code execution on the system by some other means anyway.

Nonetheless, it’s a fascinating look into modern methods of code exploitation, and a neat set of discoveries.

Inside the hack

SQLite databases include a master table that describes the database and its objects. One of the fields in that master table holds the Data Definition Language (DDL) statements that define the structure of the SQLite database. And because these DDL statements exist as plain text in the database file, they can easily be replaced if the file is accessible.
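
To make that concrete, here’s a minimal sketch – not Check Point’s tooling – showing that the CREATE statements really are stored as text in SQLite’s standard sqlite_master catalogue. The file and table names are hypothetical; Python’s built-in sqlite3 module is used purely for illustration.

    import sqlite3

    # Hypothetical demo database; any SQLite file would do.
    conn = sqlite3.connect("demo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.commit()

    # Each object's CREATE statement is kept verbatim, as text, in sqlite_master.
    # Anyone with write access to the file can therefore rewrite this DDL.
    for kind, name, ddl in conn.execute("SELECT type, name, sql FROM sqlite_master"):
        print(kind, name, ddl)
    # -> table users CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)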

Master table DDL statements, Gull explains in the paper, have to begin with the CREATE command. With that limitation in mind, the researchers found that they could change the CREATE command to a CREATE VIEW command and hijack any future queries. CREATE VIEW, essentially, can be used to trap an app’s legit queries and inject extra commands into them.

With the ability to patch the DDL and have it fire off extra subqueries, there’s the opportunity to interact with vulnerable code within SQLite. In other words, it is possible to alter an SQLite database file so that when it is accessed by an application or operating system, the SQL queries the software wanted to run are intercepted, thanks to CREATE VIEW, and arbitrary queries that exploit holes within SQLite are triggered instead.
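
Here’s a benign sketch of that hijack, assuming an attacker who can rewrite the database file before the application opens it. The names (hijacked.db, messages, real_messages) are made up for illustration, and the injected subquery merely tags the returned data – in the real attack it would instead poke at vulnerable SQLite internals.

    import sqlite3

    # --- what the attacker writes into the file beforehand ---
    attacker = sqlite3.connect("hijacked.db")
    attacker.executescript("""
        CREATE TABLE IF NOT EXISTS real_messages (id INTEGER, body TEXT);
        INSERT INTO real_messages VALUES (1, 'hello');

        -- The object the app expects ("messages") is now a VIEW, so every
        -- legitimate "SELECT ... FROM messages" silently runs the subquery below.
        CREATE VIEW IF NOT EXISTS messages AS
            SELECT id, body || ' [injected]' AS body FROM real_messages;
    """)
    attacker.commit()

    # --- what the unsuspecting application later runs, unchanged ---
    app = sqlite3.connect("hijacked.db")
    print(app.execute("SELECT body FROM messages").fetchall())
    # -> [('hello [injected]',)]  -- the app's query was trapped by the view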

Seeing as SQLite is loaded as a library, or statically built in, once you have code execution in SQLite, you can hijack the running program or operating system component.

Demonstration

Gull and Herscovici chose to focus on Web SQL, an abandoned web API for interacting with client-side databases via a variant of SQL and JavaScript, which can still be found in browsers. To demonstrate code execution, they turned to a still-unfixed four-year-old SQLite bug, CVE-2015-7036, an untrusted pointer dereference that can be triggered by abusing the fts3_tokenizer() function. Basically, they found it was possible to abuse this function from a hijacked SQL query to defeat ASLR and hijack the CPU into executing arbitrary code.

In another demonstration of the potential of their approach, Gull describes how they replaced the iOS Contacts database, “AddressBook.sqlitedb,” with a malicious version that crashes. Another version could be crafted to potentially achieve code execution within the applications querying the address book, if you’re able to replace the address book SQLite file.

“Contacts, Facetime, Springboard, WhatsApp, Telegram and XPCProxy are just some of the processes querying it,” Gull explains in his paper. “Some of these processes are more privileged than others. Once we proved that we can execute code in the context of the querying process, this technique also allows us to expand and elevate our privileges.”

The findings, he said, were responsibly disclosed to Apple, which assigned CVEs (CVE-2019-8600, CVE-2019-8598, CVE-2019-8602, and CVE-2019-8577) and patched the holes in May. The SQLite team also patched its software in April. App developers should ensure they push builds of their software that include the updated database code, in order to protect users.

“Given the fact that SQLite is practically built in to almost any platform, we think that we’ve barely scratched the tip of the iceberg when it comes to its exploitation potential,” Gull concludes. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/10/memory_corruption_sqlite/

I could throttle you right about now: US Navy to ditch touchscreens after kit blamed for collision

The US Navy is ditching touchscreens and going back to physical throttles after an investigation into the USS John S McCain collision partly blamed poor design of control systems for the incident.

The first throttles will be fitted to DDG-51 class destroyers from next summer. Contracting for the new kit is already under way and the equipment is expected to come in a kit form and not require major maintenance. New ships will be built with physical throttles already in place.

Sailors told the probe that they found touchscreen control systems overly complex.

The review is also pushing for more commonality between ships of the same class, so that functions and information on screens appear in the same place and on the same menus.

The John S McCain helm included a physical wheel but also had two touchscreens to run other functions.

The McCain crashed into a chemical tanker in a shipping lane off Singapore in August 2017. The investigation found multiple causes, but among them was confusion created when throttle and steering functions were split between two different consoles. Control of the port and starboard throttles was split between two helm stations, so when a helmsman thought he was slowing both throttles he was in fact only slowing one, causing a sharp turn into the tanker.

Another issue raised was ships’ AIS (Automatic Identification Systems) receivers. These are currently based on laptops relying on a cable connection to other systems. Sailors complained that the laptops were often stuck behind other equipment and hard to access.

As part of wider helm design changes, this should be addressed too. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/12/us_navy_ditching_touchscreens/
