STE WILLIAMS

Blindly accepting network update texts could have pwned your mobe, say researchers

Over-the-air provisioning is the latest attack vector threatening your innocent Android mobe, according to Check Point today.

The Israeli threat intel biz reckons that a single malicious SMS can pwn a targeted device, allowing an attacker to do such nefarious things as intercepting emails, text messages and so on.

“Given the popularity of Android devices, this is a critical vulnerability that must be addressed,” thundered Slava Makkaveev, a Check Point researcher. “Without a stronger form of authentication, it is easy for a malicious agent to launch a phishing attack through over-the-air (OTA) provisioning.”

OTA provisioning, in Gemalto’s explanation of the term, is used to “communicate with, download applications to, and manage a SIM card without being connected physically to the card”. If you’ve ever received a text message from your mobile network telling you to reboot your phone or that new settings have been applied to your SIM, you’ve received an OTA update.


Check Point reckons that malicious folk could spoof these OTA provisioning updates. By exploiting the simple authentication measures in the industry-standard Open Mobile Alliance Client Provisioning (OMA-CP) spec, Check Point said, certain mobile handsets from Samsung could be targeted with a legit-looking message that appears to come from one’s mobile network – without requiring any authentication at all.

“When the user receives an OMA CP message, they have no way to discern whether it is from a trusted source. By clicking ‘accept’, they could very well be letting an attacker into their phone,” Makkaveev said.

Provided an attacker has the target’s IMSI (perhaps through deploying an IMSI catcher), the researchers claimed that Huawei, LG and Sony phones could also be pwned in the same way, as they authenticate the received message with the handset’s IMSI number.
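The IMSI-keyed check the researchers describe can be sketched in a few lines. This is an illustrative sketch, not the actual OMA-CP implementation: the function name, the test IMSI, and the use of raw IMSI digits as an HMAC-SHA1 key are assumptions made to keep the example self-contained. The point is that a MAC keyed only by the IMSI offers no protection against anyone who has already captured that IMSI.

```python
import hmac
import hashlib

def omacp_mac(provisioning_body: bytes, imsi: str) -> str:
    """Illustrative sketch of IMSI-keyed OMA-CP authentication: the
    message MAC is keyed only by the subscriber's IMSI, so anyone who
    learns the IMSI (e.g. via an IMSI catcher) can forge a message
    that looks network-authenticated."""
    # Assumption for this sketch: the raw IMSI digits serve as the key.
    key = imsi.encode("ascii")
    return hmac.new(key, provisioning_body, hashlib.sha1).hexdigest()

# An attacker with the victim's IMSI computes the same MAC the
# handset expects, so a spoofed message passes the check.
body = b"<wap-provisioningdoc>proxy and homepage settings</wap-provisioningdoc>"
victim_imsi = "001010123456789"  # hypothetical test IMSI

legit_mac = omacp_mac(body, victim_imsi)
forged_mac = omacp_mac(body, victim_imsi)
assert legit_mac == forged_mac  # nothing distinguishes network from attacker
```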

Samsung patched the flaw after Check Point disclosed it in March; LG followed in July. Sony said its phones follow the published OMA-CP specs, according to the infosec biz, while Huawei is said to be pondering a fix.

Check Point claimed the vulns affected billions of devices. While possibly true from a theoretical point of view back in March when discovered, the majority of those will have incorporated the patches, either through routine updates or updates pushed (legitimately) from mobile networks. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/04/android_mobe_ota_sim_vuln/

Splunk Buys Microservices Monitoring Firm Omnition

The purchase is intended to boost Splunk’s capabilities in microservices architectures.

Splunk has announced its acquisition of Omnition, which provides monitoring for microservices applications. Financial terms of the deal were not released.

Still in stealth mode, Omnition has built its distributed tracing applications as software-as-a-service (SaaS) offerings. According to the blog post announcing the acquisition, Omnition’s technology is expected to provide Splunk with additional capabilities for customers deploying their applications with a combination of Docker, Kubernetes, Terraform, Envoy/Istio, and other microservices-enabling technologies.

The acquisition also adds more than a dozen engineers with microservices expertise to Splunk’s development team. The purchase of Omnition follows Splunk’s August acquisition of cloud-monitoring company SignalFx for just over $1 billion in cash and stock.

Read more here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “‘It Takes Restraint’: A Seasoned CISO’s Sage Advice for New CISOs.”


Article source: https://www.darkreading.com/analytics/splunk-buys-microservices-monitoring-firm-omnition/d/d-id/1335725?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

A Tale of Two Buzzwords: ‘Automated’ and ‘Autonomous’ Solutions Aren’t the Same Thing

Enterprises must learn the difference between the two and the appropriate use cases for each.

There are many buzzwords used to describe various technologies marketed as tools that will make our lives easier. For describing security solutions, two words that come up often are “automated” and “autonomous.” These words sound similar but have very different meanings. Often, confusion about the differences between the two types of tools leads IT professionals to mistrust both concepts and avoid using them even in instances where they can provide great value.

Let’s explore the differences between automated and autonomous technologies, why so many IT pros are wary of solutions that tout these capabilities, and what specific applications actually warrant their use.

Autonomous Solutions
An autonomous system learns and adapts to dynamic environments and makes decisions (or takes actions) based on ever-changing data. Such systems use machine learning (ML) and artificial intelligence (AI) to learn from data, and the more data they ingest, the better they learn. In certain applications, autonomous systems eventually will become more reliable than humans and will perform tasks at an efficiency level not humanly possible.

Automated Solutions
Automated systems run within a well-defined set of parameters that consistently execute the steps as defined. The decisions made or actions taken by an automated system are based on predefined rules, and the system will perform those decisions/actions perfectly every time, eliminating the possibility of human error.

Fear of Autonomous Systems
The biggest issue with autonomous systems arises when they’re deployed for the wrong purpose. For example, if you’re building a system that’s highly predictable and performs the same function repeatedly, then an automated system provides value because it is simpler, easier to maintain, and requires fewer resources to continue working. Leveraging autonomous systems for these types of solutions may wind up being overly complex relative to the job being performed and introduces unnecessary risks, such as the system learning incorrectly and performing the wrong action in the future. The possibility that an autonomous system will make the wrong call and implement a change in the company’s IT environment on its own is terrifying.

For example, an autonomous system that checks for improperly configured storage instances, such as S3 buckets, may not have the proper insight into compensating controls and incorrectly quarantine or remove the instances. The downstream effects could involve applications that are no longer able to run, causing a widespread outage. This is not a flaw in the way the autonomous system runs per se but an error by the developer who created the system.

The possible repercussions (misconfigurations, data breaches, fines for falling out of compliance, numerous false positives resulting in service outages, etc.) are so great that many companies have decided not to implement autonomous or automated systems in any form because of the widely held misconception that autonomous and automated systems are synonymous.

Companies that write off autonomous and automated solutions entirely are missing out on significant benefits. When used in the right environment and for the proper tasks, these solutions greatly increase efficiency and eliminate human error.

When to Use Autonomous Systems
Autonomous solutions are best used when the full spectrum of possible scenarios is unknown, and therefore there are no predefined rules for how to respond to new situations. Self-driving cars are the go-to example of why autonomous solutions are necessary, because there are too many different variants for a rules-based approach.

In the world of cybersecurity, these solutions are important because hackers are constantly coming up with new attack methods. Suspicious activity that has never been seen before (and therefore no rules exist for it) could slip by an automated system, but this is what autonomous solutions are built to identify and respond to.

Specific examples of use cases for autonomous systems include:

● Detecting anomalous activity in very large, complex data streams (e.g., network intrusion detection)

● Identifying unknown threats (e.g., zero-day exploits)
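To make the distinction concrete, here is a toy sketch (hypothetical class and thresholds, not any vendor’s product) of the “learn what normal looks like” behaviour that separates autonomous detection from a fixed rule:

```python
from collections import deque
from statistics import mean, stdev

class StreamAnomalyDetector:
    """Toy stand-in for the detection half of an 'autonomous' system:
    it learns what normal looks like from the data itself rather than
    from predefined rules. (Hypothetical sketch; real systems use far
    richer models than a rolling z-score.)"""

    def __init__(self, window: int = 100, threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if value looks anomalous against recent history."""
        anomalous = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

detector = StreamAnomalyDetector()
# Normal traffic volume hovers around 100 requests/sec...
for v in [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 101, 99]:
    detector.observe(v)
# ...so a sudden spike stands out without any hand-written rule for it.
print(detector.observe(500))  # True
```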

When to Use Automated Systems
Automated systems are best used in highly predictable scenarios and tasks for which a best practice already exists. A company can easily leverage its own talented IT team to build a perfect process for performing certain tasks, and then implement automated tools that will perform those tasks precisely, every time. Automation is especially needed in cloud environments, where the rate of change in configurations is immense. In an hour, it’s not uncommon for there to be a million changes in a company’s cloud services.

Human IT teams know how to determine whether a change is harmless or if it needs to be corrected, but they can’t keep up with the rate of change. An automated solution can take the knowledge of the IT teams and apply it instantaneously across the cloud environment and determine which of those million changes per hour are harmless, which require an easily automated remediation, and which are perhaps so far outside the normal expectations that they require a human to review and address.
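A minimal sketch of that triage, with a hypothetical rule set and change format, might look like:

```python
# Minimal sketch of the triage described above: predefined rules sort
# each cloud configuration change into one of three buckets. The rule
# set, field names, and change format are hypothetical illustrations.

HARMLESS_FIELDS = {"tags", "description"}
AUTO_REMEDIATE = {
    "s3_public_access": "set to blocked",
    "security_group_open_to_world": "restrict to VPC CIDR",
}

def triage(change: dict) -> str:
    """Classify a change as 'harmless', 'auto-remediate', or 'human-review'."""
    if change["field"] in HARMLESS_FIELDS:
        return "harmless"
    if change["field"] in AUTO_REMEDIATE:
        return "auto-remediate"
    # Anything outside the rules is escalated to a person.
    return "human-review"

changes = [
    {"resource": "bucket-a", "field": "tags"},
    {"resource": "bucket-b", "field": "s3_public_access"},
    {"resource": "db-1", "field": "master_password_policy"},
]
print([triage(c) for c in changes])
# ['harmless', 'auto-remediate', 'human-review']
```

The design choice is the point: every outcome is traceable to a rule the IT team wrote, which is why automated systems leave companies in full control.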

Specific examples of use cases for automated systems include:

● Correlating data streams to provide actionable guidance (e.g., unified visibility)

● Implementing protections consistently, in real time, at any scale (e.g., policy-driven automation)

● Running infrastructure and application-level compliance checks within a corporation’s environment

It’s important to remember that with automated solutions, companies maintain full control over their environments because their IT teams set the rules for how those solutions will perform certain tasks. With autonomous solutions, companies relinquish much of that control and trust that the AI/ML capabilities of that tool are learning from the constantly changing variables in their environments and making the best decision possible when faced with new scenarios.

While automated and autonomous solutions have distinct differences, and unwise deployments of each have sparked uncertainty around their use in IT, both types of systems can provide immense value if used appropriately. Additionally, both types of solutions will continue to advance and become more intelligent, and thus offer increased benefits to enterprises that are using automated and autonomous solutions in the proper settings.


Scott Totman brings more than two decades of experience in enterprise application development to DivvyCloud. As VP of engineering, he is responsible for the ongoing development and delivery of DivvyCloud’s software. Prior to joining DivvyCloud, Totman was the vice …

Article source: https://www.darkreading.com/cloud/a-tale-of-two-buzzwords-automated-and-autonomous-solutions-arent-the-same-thing/a/d-id/1335661?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Rising Fines Will Push Breach Costs Much Higher

The cost of breaches will rise by two-thirds over the next five years, exceeding an estimated $5 trillion in 2024, primarily driven by higher fines as more jurisdictions punish companies for lax security.

Equifax, $700 million. British Airways, $221 million. Marriott, $120 million. Companies are seeing much heftier fines in 2019, and the near future holds little respite, according to experts.

The pace and size of fines are predicted to keep climbing, resulting in rising costs for companies for any data loss, according to a market forecast published by Juniper Research. The analyst firm predicts that global companies will face $5 trillion in breach costs, recovery fees, and damages in 2024, up from approximately $3 trillion today.

The impact on businesses will be extreme, says James Moar, a lead analyst with the firm.

“It will change the way that smaller businesses operate, as the fines will likely be ruinous and they will be unable to afford insurance to cover the cost of the fines,” he says. “Larger businesses will make moves to comply, but so far they have been measures that are required by law or regulation. Where they are not specified, there will be enough room to argue that breaches occurred in spite of strong security measures being in place.”

This year has already seen some record-setting fines. Equifax settled with the US Federal Trade Commission to pay up to $700 million for allowing online attackers to access sensitive information in 2017 on more than 148 million people. The UK’s Information Commissioner’s Office notified British Airways of its intent to fine the company £183 million (approximately US$221 million) under the European Union’s General Data Protection Regulation. And the same authority is planning to fine hotelier Marriott £99 million (US$120 million) for the breach of its Starwood Hotels subsidiary.

The fines are putting increasing pressure on companies to get their data security right, says Chris Scott, global remediation lead at IBM.

“As those fines increase, I think you are going to see a lot more board awareness of the need to secure data,” he says. “The executives will become much more involved in understanding the security of the information their company retains, and that is a good thing.”

Juniper predicts that data breach costs will grow at 11% each year. The Ponemon Institute’s “Cost of a Data Breach” report, sponsored by IBM, pegs growth at 12% between 2014 and 2019. Rising fines will likely cause that increasing trend to accelerate, says IBM’s Scott.

“You add the fines in there, and I can see that increasing significantly,” he says. “I just don’t know yet where those fines will play in that process.”

In addition, most companies do not take into account the multiyear nature of breach costs, Scott says. Only about half the costs of a data breach are seen in the first year — a third of the costs come in the second year, as companies rearchitect and pay for monitoring and other remediation, and a sixth of the costs come in the third year, he says. Unaccounted-for costs include, for example, paying lawyers to make sure that the company is in compliance with the more than 50 jurisdictions’ breach laws, paying for monitoring, forensic analysis, and a variety of steps necessary to fully remediate the risk.

“All of that takes time to implement, and that is where the costs start to extend into the second and third year,” he says.
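The figures quoted above are easy to sanity-check; this sketch assumes simple annual compounding on Juniper’s numbers:

```python
# Sanity-checking the forecast (assumption: simple annual compounding).
base = 3.0   # trillion USD, approximate 2019 breach costs per Juniper
rate = 0.11  # Juniper's predicted annual growth

projected_2024 = base * (1 + rate) ** 5
print(round(projected_2024, 2))  # ~5.06, consistent with the $5 trillion forecast

# Scott's multiyear cost split (half, a third, a sixth) sums to the whole:
year_shares = [1/2, 1/3, 1/6]
assert abs(sum(year_shares) - 1.0) < 1e-9
```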

The increasing fines will drive companies to improve their cybersecurity. Because cybersecurity professionals are still in short supply, the main beneficiaries of the higher breach costs will be managed security service providers (MSSPs), especially for smaller businesses, which otherwise run the risk of being driven out of business by one bad breach, Juniper Research’s Moar says.

The MSSPs “can manage all elements of cybersecurity and make the patchwork comprehensible for businesses,” he says. “There are also promising moves in security awareness training to combat the human element of cybersecurity, which will go a long way to ensuring secure networks.”

Not all companies will increase their cybersecurity spending, because the businesses themselves are typically not at risk from data breaches, Juniper’s Moar says.

“This element of moral hazard will incline several companies, particularly large ones, to continue to underinvest in cybersecurity unless they are compelled by regulation and compliance that makes cybersecurity a condition of doing business at all,” he says, “rather than a cost that can be weighed against the cost of a data breach fine.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/attacks-breaches/rising-fines-will-push-breach-costs-much-higher/d/d-id/1335726?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cartoon Contest: Bedtime Stories

Feeling creative? Submit your caption in the comments, and our panel of experts will reward the winner with a $25 Amazon gift card.

We provide the cartoon. You write the caption!

Submit your caption in the comments, and our panel of experts will reward the winner with a $25 Amazon gift card. The contest ends Oct. 4. If you don’t want to enter a caption, help us pick a winner by voting on the submissions. Click thumbs-up for those you find funny; thumbs-down, not so. Editorial comments are encouraged and welcomed.

Click here for contest rules. For advice on how to beat the competition, check out How To Win A Cartoon Caption Contest.

John Klossner has been drawing technology cartoons for more than 15 years. His work regularly appears in Computerworld and Federal Computer Week. His illustrations and cartoons have also been published in The New Yorker, Barron’s, and The Wall Street Journal.

Article source: https://www.darkreading.com/edge/theedge/cartoon-contest-bedtime-stories/b/d-id/1335720?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

EFF and Mozilla scold Venmo over app’s privacy failings

The increasingly tense stand-off between privacy campaigners and the popular mobile payment app Venmo has taken another turn for the worse.

The latest salvo is an open letter by the Electronic Frontier Foundation (EFF) and Firefox makers The Mozilla Foundation to Dan Schulman and Bill Ready, respectively the CEO and COO of Venmo owner, PayPal.

Their complaint has three strands to it, the first of which is the long-running gripe that transactions made using Venmo are still not private by default.

The second worry is that anyone using the app can see who someone is connected to through their friends’ list.

Together these create the third problem – it’s likely that many Venmo users don’t realise the privacy effect of these settings, which means they might be giving away data about their personal habits they’d rather not. As the EFF/Mozilla letter puts it:

It appears that your users may assume that, like their other financial transactions, their activity on Venmo is both private and secure.

How we got here

Founded a decade ago, Venmo is a digital wallet app that people use to send money to other users – for example, to conveniently split restaurant bills or bar tabs. It can also be used to buy things from participating merchants.

In practice, Venmo is also used to pay for everything from rent and personal debts to illegal drugs and prostitutes.

We know this because Venmo transactions between friends (with helpful user descriptions) are public through the software’s developer API, which has allowed researchers to deploy scraping tools to infer how it’s used.
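A hedged sketch of what such scraping involves: the sample record below is made up, but the public feed returned JSON in roughly this shape, and extracting who paid whom plus the memo text takes only a few lines:

```python
import json

# Hypothetical sample of a public-feed record; field names are
# illustrative, not a documented Venmo API contract.
sample_feed = json.loads("""
{"data": [
  {"actor": {"name": "Alice"},
   "transactions": [{"target": {"name": "Bob"}}],
   "message": "rent for march",
   "created_time": "2019-08-01T12:00:00Z"}
]}
""")

def extract_payments(feed: dict) -> list:
    """Pull out the privacy-sensitive bits: payer, payee, memo, time."""
    records = []
    for item in feed["data"]:
        records.append({
            "from": item["actor"]["name"],
            "to": item["transactions"][0]["target"]["name"],
            "memo": item["message"],
            "when": item["created_time"],
        })
    return records

print(extract_payments(sample_feed))
```

A memo field like “rent for march” is exactly the kind of habit data the researchers were able to aggregate at scale.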

Why is Venmo so fixated on making transactions public? Because it’s really a kind of social network and the whole point of such platforms is to do things in the open. As a company official told CNET last year:

We make it default because it’s fun to share with friends in the social world.

Oversharing

To critics, this is akin to asking users to willingly participate in a breach of their own data going back to the first transaction they sent or received on the app:

As a result, they are vulnerable to stalking, snooping, or hacking with so much of their data available to anyone on the web.

Venmo users bothered by this can turn off ‘public by default’ through Settings > Privacy, selecting Private, then Change All to Private (not forgetting to do the same for Past Transactions).

However, as last week’s EFF/Mozilla letter also points out, there is still currently no way to turn off public access to friends’ lists.

So far, neither Venmo nor PayPal has responded to the EFF/Mozilla complaint.

Until that setting changes, the best advice is for Venmo users to be careful how they describe transactions and to limit who they use the app to transact with.

Paying towards a meal with friends is probably not going to be a big privacy reveal – doing the same after a visit to a therapist, doctor, or clinic might be very different.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dIdmnOeegug/

YouTube reportedly to be fined up to $200m over COPPA investigation

Google has reportedly agreed to pay between $150 million and $200 million to resolve the FTC’s investigation into YouTube and its allegedly illegal tracking and targeting of kids who use the video streaming service.

In June, people familiar with the matter told news outlets that the Federal Trade Commission (FTC) was nearing the end of an investigation into YouTube’s alleged failure to protect the kids who use the Google-owned service.

That was followed by letters sent to the FTC about the matter from children’s privacy law co-author Senator Edward Markey and two consumer privacy groups. They urged the FTC to do whatever it takes to figure out if YouTube has violated the law protecting children and, if so, to make it shape up and stop it.

That “stop it” recommendation included Markey’s request that the FTC force Google to establish “a $100 million fund to be used to support the production of noncommercial, high-quality and diverse content for children.”

In July, the Washington Post was the first to report on the finalization of the settlement. Sources familiar with the issue told the newspaper that the FTC’s investigation concluded that Google hasn’t properly protected kids who use YouTube and has suctioned up their data, in violation of the Children’s Online Privacy Protection Act (COPPA), which outlaws tracking and targeting kids younger than 13.

Now, sources have put forward a number: they told Politico that Google has indeed agreed to pay between $150 million and $200 million to resolve the FTC’s investigation into YouTube.

News outlets including Politico and the New York Times last week reported that their own sources had confirmed that the FTC has voted 3-2 along party lines to approve the settlement. Next step is on to the Department of Justice (DOJ) for its review and approval of the settlement.

If the reported amount of the fine turns out to be accurate, it will dwarf the FTC’s largest fine to date for COPPA violations: $5.7 million announced in February 2019 for video streaming app TikTok, for allegedly collecting names, email addresses, pictures and locations of children younger than 13.

Though TikTok’s fine was much smaller than the one Google is reportedly facing, it set off ripples: in July, UK information commissioner Elizabeth Denham told a parliamentary committee that the FTC’s fine had triggered a UK probe into how TikTok handles the safety and personal data of underage users.

The wrath of regulators when it comes to technology companies bungling user data and privacy is coming with ever steeper price tags: also in July, the FTC announced a $5 billion fine for Facebook for losing control of users’ data.

Sure, $5 billion may sound like a lot of money to you and me, but a chorus of people called it a slap on the wrist. An early Christmas present. A drop-in-the-bucket penalty. Chump change. A mosquito bite.

Senator Elizabeth Warren, for one, pointed out that Facebook made $5 billion in profits in just the first three months of last year.

Critics say a fine in the range of $150 million to $200 million for YouTube breaking children’s privacy law is more like a gnat bite: change that’s even chumpier.

Several of the groups behind the original COPPA complaint against YouTube said that the reported amount of the fine wouldn’t even be a bump in the road for the financial behemoth that is Google.

From a statement from Josh Golin, executive director of coalition leader the Campaign for a Commercial-Free Childhood:

They should levy a fine which both levels the playing field, and serves as a deterrent to future COPPA violations. This fine would do neither. [A fine in this range is] the equivalent of two to three months of YouTube ad revenue.

Here’s Golin telling Yahoo Finance that his organization is hoping not just for a stiff penalty, but most importantly, for changes in how YouTube operates. Otherwise, we “need legislative solutions, because clearly, children need to be protected in this environment.”

Jeff Chester, executive director of the Center for Digital Democracy, told Politico that the punishment should be at least half a billion dollars. A lesser penalty is “scandalous,” he said, and sends the message that “you in fact can break a privacy law and get away largely scot-free.”

Marc Rotenberg, president of the Electronic Privacy Information Center (EPIC) – one of the groups listed on the original complaint – said the key will be the terms the FTC imposes on YouTube under the settlement:

The critical challenge for the FTC is whether it has the ability to restrain business practices that violate privacy. Imposing large fines does not address that problem.

This is the full list of things that Senator Markey has said he wants the FTC to get YouTube to do:

  • Order Google to immediately stop collecting data on users it knows are under 13, and delete any data it’s collected on kids, even if those kids are now over the age of 13.
  • Set up a way to tell if a user is under 13, and deny them access until Google updates its processes to be compliant with COPPA.
  • Get rid of targeted marketing on the YouTube Kids platform, and tell users what data it’s collecting, what it’s doing with it, and who it’s sharing it with.
  • Subject Google to a yearly audit.
  • Keep Google from rolling out any new child-focused products or services until they’re approved by an independent panel that includes FTC-appointed experts on child development and privacy.
  • Require Google to conduct a consumer education campaign that warns parents that kids shouldn’t use YouTube.
  • Require Google to retain documentation about its compliance with any consent decree that comes out of this investigation.
  • Require Google to establish a fund to produce non-commercial, quality content for children.

We haven’t yet received official news, or details, about the settlement. Markey issued a statement saying he’s looking forward to those details, but that for now, he finds the reported settlement disappointing:

I look forward to reviewing the requirements placed upon Google in this settlement, but I am disappointed that the Commission appears poised to once again come out with a partisan settlement that falls short of the Commission’s responsibility to consumers and risks normalizing corporate bad behavior.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/shNXqHcY-RY/

QR codes need security revamp, says creator

Museums use them to bring their paintings to life. Restaurants put them on tables to help customers pay their bills quickly. Tesco even deployed them in subway stations to help create virtual stores. QR codes have been around since 1994, but their creator is worried. They need a security update, he says.

Engineer Masahiro Hara dreamed up the matrix-style barcode design for use in Japanese automobile manufacturing, but, as many technologies do, it took off as people began using it in ways he hadn’t imagined. His employer, Denso, made the design available for free. Now, people plaster QR codes on everything from posters to login confirmation screens.

If you thought QR codes were just a passing marketing gimmick, think again. They’re hugely popular in China, where people used them to make over $1.65 trillion in payments in 2016 alone, and Hong Kong too has just launched a QR code-based faster payments system.

The codes generated enough interest that Apple even began supporting them natively in iOS 11’s camera app, removing the need for third-party QR scanning apps.

Hara is a little spooked by all these new uses for a design that originally just helped with production control in manufacturing plants. In a Tokyo interview in early August, he reportedly said:

Now that it’s used for payments, I feel a sense of responsibility to make it more secure.

He’s right to be concerned. Attackers could compromise people in various ways using QR codes.

One example is QRLjacking. Listed as an attack vector by the Open Web Application Security Project (OWASP), this attack is possible when someone uses a QR code as a one-time password, displaying it on a screen. The organisation warns that an attacker could clone the QR code from a legitimate site to a phishing site and then send it to the victim.

Another worry is counterfeit QR codes. Criminals can place their own QR codes over legitimate ones. Instead of directing the user’s smartphone to the intended marketing or special offer page, the fake code could take users to phishing websites or those that then deliver JavaScript-based malware.

They could also exploit the growing use of QR codes for payments. A fraudster could replace a QR code taking people to a legitimate payment address with their own fake payment URL.

There have already been some proposals for security measures in QR codes, as laid out in an MIT course document by researchers there. One suggestion uses encryption to stop a third-party from snooping and cloning QR codes used for logging people in. To do this, the online app would send an encrypted QR code to an already-logged in (and therefore trusted) mobile device. Only the logged-in device can decrypt the QR code, which it then displays for the second device to read. The QR code contains a URL which logs them into the app. There are also several encrypted QR code login systems now in production.

Another proposal embeds digital signature information into the code to confirm its authenticity but uses more of the code’s available space for the extra data.
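The signature-embedding idea can be sketched as follows. Real proposals use asymmetric signatures; this sketch substitutes an HMAC with a hypothetical shared issuer key purely to stay self-contained, but it shows both the verification step and the space cost of carrying the tag inside the payload:

```python
import hmac
import hashlib
import base64

KEY = b"issuer-secret-key"  # hypothetical issuer key (stand-in for a real keypair)

def make_qr_payload(url: str) -> str:
    """Embed an authentication tag alongside the URL in the QR payload."""
    tag = hmac.new(KEY, url.encode(), hashlib.sha256).digest()
    return url + "|" + base64.urlsafe_b64encode(tag).decode()

def verify_qr_payload(payload: str) -> bool:
    """A scanner app recomputes the tag; counterfeit codes fail."""
    url, _, tag_b64 = payload.rpartition("|")
    expected = base64.urlsafe_b64encode(
        hmac.new(KEY, url.encode(), hashlib.sha256).digest()).decode()
    return hmac.compare_digest(expected, tag_b64)

good = make_qr_payload("https://pay.example.com/order/123")
assert verify_qr_payload(good)

# A sticker pasted over the real code, reusing the old tag, fails:
fake = "https://evil.example.com/phish|" + good.split("|", 1)[1]
assert not verify_qr_payload(fake)
```

Note the trade-off the paragraph mentions: the base64 tag consumes QR capacity that would otherwise carry data.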

These are all great ideas, and perhaps Hara has some more. But he’d better move fast. As QR codes catch on, the widely deployed design will become increasingly difficult to change.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lNHS17cnf14/

Bus pass or bus ass? Hackers peeved about public transport claim to have reverse engineered ticket app for free rides

A hacker collective has said that it found the private keys for a Manchester bus company’s QR code ticketing app embedded in the app itself – and has now released its own ride-buses-for-free code.

In an interview with The Register, the hacker claiming to be behind the breach of First Buses’ ticketing app said he had noticed how it “would let you purchase a ticket and activate it offline later”.

The hacker, who would only identify himself as “Buspiraten”, said he had become “pissed off with how expensive and messed up the public transport was” and “wanted to do something about it”.

He described how he used Titanium Backup to make a copy of the bus ticket app’s data, which eventually led him down the path of reverse engineering the app – where he discovered “the entire thing was client side”.

Buspiraten told El Reg: “The RSA private keys to sign the QR code were right there as PEM files in the APK.”
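Why keys inside the APK are fatal can be shown in a few lines. This is a hypothetical sketch, not the actual First Buses scheme (which reportedly used RSA keys in PEM files; an HMAC stands in here so the example needs only the standard library): whoever unpacks the app holds the same key the scanner trusts.

```python
import hmac
import hashlib
import json

# Hypothetical key -- in the real app this was an RSA private key
# shipped as a PEM file inside the APK, recoverable by anyone.
KEY = b"-----extracted-from-the-apk-----"

def sign_ticket(route: str, valid_from: int) -> str:
    """What the app does client-side: build and sign a ticket payload
    that gets rendered as a QR code."""
    body = json.dumps({"route": route, "valid_from": valid_from}, sort_keys=True)
    sig = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def bus_scanner_accepts(ticket: str) -> bool:
    """The scanner can only check the signature locally -- and the
    attacker holds the same key, so forged tickets verify cleanly."""
    body, _, sig = ticket.rpartition(".")
    expected = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

# The 'purchase offline, activate later' design means forged tickets
# are indistinguishable from paid ones at the point of scanning.
forged = sign_ticket("Manchester day pass", 1567555200)
assert bus_scanner_accepts(forged)
```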

In a public statement posted on a Tor site (here, for the curious), the “Public Transport Pirate Association of the United Kingdom” declared that they had released the whole ticket generation routine in JavaScript. Rather than going down the responsible disclosure route and telling app developers Corethree about it, Buspiraten told The Register: “The code is a political statement for public transport reform.”

Buspiraten said he hoped releasing the ticketing app’s innards to world+dog would “accelerate undoing the harms that private control of public transport has done in UK cities… public transport free at the point of use for everybody.” He told El Reg that he had been using the code for over a year before releasing it publicly earlier this week.

“We might do a larger release for more UK cities in time for EMF next year,” Buspiraten added, referring to the Electromagnetic Field Festival, an outdoors hacker festival.

Duncan Brown, Chief Security Strategist EMEA at Forcepoint, told us:

“Our view is that this is symptomatic of the deprofessionalisation of the development community over the last ten years, and the lack of emphasis on security and testing in today’s appdev world. Last year we saw over 21,000 vulnerabilities registered in the CVE database: the industry is churning out poorly tested and poorly secured code, especially in mobile and IoT platforms. The RSA private key inclusion is no worse than hard-coding passwords into set-top boxes and home routers.”

Corethree, developers of First Buses’ app, told The Register: “We are aware of the story and are working with Transport for Greater Manchester, First Bus Manchester and the police to address the issue. As you will understand with a situation like this, we are unable to comment further at this time.”

Transport for Greater Manchester shrugged off our request for comment by batting it to First Buses.

First Buses told us: “We are aware of this claim and are investigating with our suppliers as a matter of priority.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/04/corethree_baked_private_rsa_key_first_bus_ticket_app/

Red flag: Home Office inks £45m border tech extension with IBM

The Home Office has inked a £45m, 33-month contract extension with IBM for its creaking Semaphore border technology system.

The Semaphore database, which has been running since 2004, collects data from airline carriers about persons of interest and those who are on watch lists before they arrive at the border.

It was created ahead of the troubled e-Borders contract with Raytheon, which was cancelled in 2010. In the process, the Home Office had to settle the dispute over the cancellation of the £750m contract, paying £150m to supplier Raytheon and £35m in legal fees.

However, the latest extension suggests its successors, the Border Systems Programme and Digital Services at the Border, will not provide a replacement for the database for several years.

According to the tender notice, there had been no prior call for competition because, for “technical reasons, only the incumbent can deliver the service for the required term.”

The notice said this is a “short term interim contract with flexible termination options with the incumbent supplier, whilst longer term re-procurement/reorganisation activity is taking place to deliver the replacement systems and services.”

A 2018 report (PDF) from the chief inspector of borders and immigration, David Bolt, found that the Home Office’s processes left it “unable to fully exploit the potential of the data it is receiving”. At that point there still seemed to be plans in motion to replace Semaphore by March 2019.

It found that the details of 600,000 foreign visitors had slipped through the cracks of the Home Office’s database thanks to its “shambolic” exit checks system.

Between April 2014 and April 2015, as part of the Exit Checks Programme, the Home Office developed the Initial Status Analysis database, which matches inbound and outbound travel data received via Semaphore with data recorded on its other immigration-related systems.

The report recommended: “Plans for the replacement of Semaphore are revisited and firmed up and, pending its replacement, maintenance and support for Semaphore is prioritised.”

Georgina O’Toole, analyst at TechMarketView, noted: “The IBM e-Borders programme has faced ongoing difficulties, and that was without bringing Brexit into the equation.

“Adding the fact that the department is faced with uncertainty, and the knowledge that systems will need to cope with new rules post-Brexit, an extension like this was, arguably, inevitable.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/04/home_office_inks_45m_border_tech_extension_with_ibm/