STE WILLIAMS

You get a lawsuit! And you get a lawsuit! And you! Now Apple sued over CPU security flaws

Add Apple to the list of companies facing a legal backlash in the US over the Spectre and Meltdown CPU security fiasco.

A 17-page class-action complaint [PDF] – filed earlier this month in a San Jose district court in California – accuses the Cupertino iGiant of failing to keep the Arm-compatible processors in iPhones, iPads, and Apple TVs secure as advertised.

The claim does not cover the Intel-powered Macs Apple sells, which are also affected by Meltdown and Spectre. Chipzilla is separately fending off class-action claims over those CPU blunders.

The complaint, which cites The Register’s original report on the processor industry’s design cockups, claims Apple withheld information on the flaws from customers for months, selling products it knew to be vulnerable to data-theft attacks. Starting with the A6 chip in 2012’s iPhone 5, Apple has designed and shipped its own custom Arm-compatible processor cores, meaning Cupertino had a hand in introducing insecurities into its silicon.


On January 9, Apple finally confirmed that its Arm and x86-64 based gizmos and Macs running iOS, macOS, and tvOS were affected by Meltdown and Spectre to varying degrees. These security holes in the chip hardware can be exploited by malware and hackers to extract passwords and other sensitive information out of at-risk computers and handhelds.

Apple, and other manufacturers, were privately tipped off about the design weaknesses by Google researchers during summer last year.

“Based upon information and belief, defendant [Apple] has known about the design defect giving rise to the security vulnerabilities since at least June, 2017,” the suit – filed by Anthony Bartling and Jacqueline Olson – claimed.

“Defendant has admitted that it released an update to its iOS operating system software to address the Meltdown technique in December, 2017, but Apple knew or should have known of the design defect much earlier and could have disclosed the design defect more promptly.

“Even after it was aware of the security vulnerabilities, Apple continued to sell and distribute iDevices without a repair or having made a disclosure about the Apple processor security vulnerabilities. The iDevices it sold and distributed were not of the quality represented and were not fit for their ordinary purposes.”

The suit seeks a damage payout for everyone who purchased an iPhone, iPad, or Apple TV since 2007, with two named plaintiffs in New Hampshire and New York also seeking damages under state laws.

The filing alleges two counts of breach of warranty (implied and express), one count of negligence, one count of unjust enrichment, and violations of New Hampshire’s Consumer Protection Act and New York’s General Business Law.

Apple did not return a request for comment on the suit. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/18/apple_spectre_class_action_lawsuit/

Tax Reform, Cybersecurity-Style

How the security industry can be more effective and efficient by recognizing four hidden “taxes” in the buying and selling process.

In the political world, taxes are an incredibly divisive, contested, and complicated issue. In everyday life, they are a staple, the more familiar half of Benjamin Franklin’s adage that “nothing can be said to be certain, except death and taxes.” Regardless of the time or place, if taxes come up in conversation, the tone is likely to be negative. That’s why we hear recurring calls for tax reform.

The cybersecurity world has its own form of taxes, and it too is in need of a reform. What do I mean by that? Let’s dive in.

The Procurement Tax
One would think that having a popular product or addressing a major security gap would result in a quick transaction between buyer and seller. The reality is that it often takes multiple pitches and discussions just to get to the proof-of-concept stage, and even that is only possible if there’s already a project for this type of solution. If not, the cards are stacked in favor of friction, taxing everyone involved, value-added resellers included, just to get into a proper evaluation. In this scenario, we might as well call meetings taxation. If you had to sit through multiple demos, meetings, and rounds of paperwork before you could buy a car or TV, would you still want it?

The Implementation Tax
Let’s assume you successfully procure the product or service. From here, the new capability must be deployed in the environment, taxing internal teams. The implementation phase often requires dedicated resources just to get the new capability anywhere close to what was pitched during the demo.

The coordination of getting assets, like space on the ESX server or a place to drop hardware, involves a procurement and implementation process of its own. Next, companies must determine who owns the product and empower that team to ramp up quickly, which usually means training. That means less time spent defending and more time spent forming new processes. And finally, in the modern security tech stack, if you’re not integrating, automating, and orchestrating your capabilities across existing technologies, you’re playing from behind.

If you’re a vendor, think about how much time it takes to close the sale, and then understand that it is after the purchase order is issued when most of the actual work for your buyer begins. Vendors would do well to think about how to reduce as much of the implementation tax as possible.

The Care-and-Feeding Tax
When the new capability is procured and implemented, are we good? Did we pay the rhetorical sales tax, and are we now in the clear? Sadly, no.

One of the top challenges in cybersecurity today is the shortage of skilled professionals. There simply aren’t enough qualified people in the right seats to maintain the products monitoring their environments. According to a Gartner report from last year, there will be 1.8 million unfilled cybersecurity positions by 2022, leaving far fewer people available for the care and feeding these products require.

The second challenge is what I like to call the deploy-and-decay problem. Deploy and decay indicates that technology and capabilities actually become worse over time rather than improve. Security requires proper, consistent care — like brushing your teeth every day — except that with large teams, cyber hygiene involves changing toothbrushes, more and different teeth, and bureaucracy.

Vendors need to understand that there are almost exclusively two kinds of users of their technology: those who do not live and breathe security, and those who do but have no time. So the actual human expertise applied to the products is often low, simply due to minimal experience or minimal time. And yet products continue to require a tremendous amount of care and feeding: tuning rules, playbooks, and policies. The environment is shifting and dynamic, and so are the attackers; if the landscape and the adversaries are both in motion, the defensive capabilities need to be as well. This taxes the security team tremendously.

The Consulting Service Tax
If you outsource or largely leverage services, you might be thinking that the tax analogy doesn’t apply. But let’s say you use a managed security service provider that rarely talks to you and tries to take as much of the burden as possible. The tax there is a lack of understanding and a lack of context, so how effective is that service really? Or, if there are lots of interactions between the outsourced team and your team, then you’re both paying for the service and paying in time to educate that service. So there’s still a large tax to keep defenses up to par.

Now the Good News
First, as with most challenges, awareness must come before change, and the security industry seems to be waking up. As companies move through the process of acquiring new security capabilities, that awareness will grow. It’s the responsibility of customers and vendors to work together to reform the process and reduce these taxes, particularly in the face of skill shortages and evolving threats.

Secondly, some trends are inherently reducing taxes. Software-as-a-service (SaaS) products provide an easier, faster procurement and implementation process. The taxes around care and feeding go down because with cloud back ends, the vendors gain visibility into how the solutions are performing, which allows for faster feedback loops and further refinement. Maintenance pain points such as patching and performing other system administration on self-hosted solutions also are greatly reduced with a SaaS approach.

Thirdly, with cloud-based back ends and data sets, it’s often easier to share information, either inside a particular vendor across its customer base or between organizations that want to utilize the collective expertise to improve threat intelligence. So there’s more collaboration in less time, which should be a net positive.

Finally, we need to embrace advances in machine intelligence and automation to make a dent in the tuning process. By observing events within a particular solution and understanding how humans interact with them, tools should adapt to optimize the human-machine interaction. Teams can become more effective through self-optimizing technology.

We used to have a saying that each attack should make the entire community stronger — does each interaction with a product make it stronger? We can hope. And we can act. By recognizing the hidden costs of cybersecurity, we can begin the work toward reclaiming time and money. The burden is on all of us to come together to improve, so let’s make 2018 a year where cybersecurity tax reform starts to take hold.


Ben Johnson is CTO and co-founder of Obsidian Security. Prior to founding Obsidian, he co-founded Carbon Black and most recently served as the company’s chief security strategist. As the company’s original CTO, he led efforts to create the powerful capabilities that helped … View Full Bio

Article source: https://www.darkreading.com/cloud/tax-reform-cybersecurity-style-/a/d-id/1330830?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

BEC Attacks to Exceed $9B in 2018: Trend Micro

Business email compromise is projected to skyrocket as attackers adopt sophisticated techniques to dupe their victims.

Business email compromise (BEC) attacks are projected to exceed $9 billion in 2018. To put that number in context, it has been less than a year since the FBI reported BEC attacks had become a $5.3 billion industry. Attacks have become more sophisticated as hackers improve their game.

BEC has grown among threat actors due to “its relative simplicity,” according to a new Trend Micro report “Tracking Trends in Business Email Compromise (BEC) Schemes.” Researchers analyzed BEC as a cybercriminal operation from January through September 2017, dissecting tools and strategies commonly used in these attacks to predict activity for this year.

“This particular type of attack is not going away — it’s only increasing,” says Ed Cabrera, chief cybersecurity officer at Trend Micro.

The Internet Crime Complaint Center (IC3) puts BEC attacks in five categories: Bogus Invoice Schemes, CEO Fraud, Account Compromise, Attorney Impersonation, and Data Theft. In this case, researchers split them into two: credential-grabbing and email-only. Attackers must be proficient in at least one of these methods for the scheme to work, researchers report.

Method 1: Snatching credentials

This tactic leverages keyloggers and phishing kits to steal credentials and access an organization’s email. Researchers noticed an uptick in phishing HTML pages sent as spam attachments, a technique that, while not new, is still effective against unsuspecting users.

Phishing is one of the primary methods used to steal email login data for BEC attacks. Once an attacker compromises a Gmail account, for example, they can impersonate its owner or use personal information or credentials they find in the account.
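The report describes, rather than shows, what these phishing attachments look like. As a rough illustration of how a mail gateway might triage them, here is a minimal sketch using only Python’s standard library; the function name `html_attachment_red_flags` and the markers it greps for are illustrative assumptions, not anything from Trend Micro’s tooling:

```python
import email
from email import policy

def html_attachment_red_flags(raw_message: bytes) -> list:
    """Return filenames of HTML attachments that look like
    credential-harvesting pages (password fields, form handlers)."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    flagged = []
    for part in msg.walk():
        # Only HTML parts delivered as attachments are of interest here.
        if part.get_content_type() != "text/html":
            continue
        if part.get_content_disposition() != "attachment":
            continue
        body = part.get_content().lower()
        if 'type="password"' in body or "onsubmit" in body:
            flagged.append(part.get_filename() or "unnamed.html")
    return flagged
```

A real gateway would render or sandbox the attachment rather than string-match it, but the overall structure (walk the MIME tree, inspect text/html attachment parts) is the same.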

The other credential-grabbing technique uses malware, which continues to be a problem even for targets running antivirus tools, because some attackers use crypter services to evade AV detection. Researchers note BEC actors are increasingly favoring phishing attacks over keyloggers because they’re simpler and cheaper; actors don’t need to shell out for builders and crypters.

Keyloggers and remote access tools (RATs) are the most common types of malware used for BEC because they’re effective and inexpensive. Unlike attacks that rely on phishing to steal a single set of credentials, malware can collect all stored credentials on an infected machine.

Ardamax is one example of a keylogger found in recent BEC attacks. It’s cheap (under $50), can send stolen data via SMTP or FTP, offers webcam and microphone recording, and has an option to export encrypted logs that users can browse in its log viewer. Lokibot, a password stealer and coin-wallet stealer, is another commonly used piece of malware. A new version of Lokibot surfaced in 2017 with features such as the ability to capture screenshots on the target machine.

Method 2: Targeting inboxes

Email-only BEC attacks, which rely on social engineering, are getting more sophisticated as attackers get smarter. This tactic involves sending an email to someone in the target company’s finance department, purportedly from an executive, requesting a money transfer as a payment or personal favor. Usually, a spoofed email from the CEO is sent to the head of finance.

“The CFO has the authority and ability to request last-minute money transfers within the organizations,” says Cabrera. “[Attackers] are trying to capitalize on the relationship between the CEO and CFO.”

Cybercriminals launching BEC attacks carefully research their victims. “It’s usually the advanced groups, but it’s also almost akin to cyberespionage,” he continues. “They have a healthy knowledge of who they’re targeting, and who in the organization they’re going to target.”

This research is what makes them successful. Attackers want to know about the organization and its executives: who’s on vacation, typical work hours, business travel. They want to know the news surrounding the business and its operations, such as M&A activity and corporate events. Oftentimes actors target ADP credentials and payment/benefits information so they can better understand the employees they’re targeting. All of this data, both public and private, leads to success.

“We’re seeing a shift: ‘How do we compromise email infrastructure and dig even deeper?'” Cabrera notes.

Social engineering scams can be tough to spot, though sometimes the subject line gives an attacker away: based on analysis of BEC email samples, more than two-thirds had subjects containing the terms “request,” “payment,” or “urgent.” Many read “wire transfer request” or “wire request.”

In the “Reply-To” line, many attackers add their own email addresses so they can view replies from target recipients. Most email clients don’t display the Reply-To address, so they get away with it. When they don’t do this, they create a legitimate-looking address to impersonate a corporate executive, usually through free webmail services like “accountant.com” and “workmail.com.”
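The two giveaways described above, loaded subject lines and a mismatched Reply-To, are easy to check mechanically. A minimal sketch using Python’s standard library (the function name `bec_indicators` and the keyword list are illustrative, not taken from the Trend Micro report):

```python
import email
from email.utils import parseaddr

# Subject terms the report says dominate BEC samples.
BEC_SUBJECT_TERMS = ("request", "payment", "urgent")

def bec_indicators(raw_message: str) -> list:
    """Return human-readable warning signs found in a raw RFC 5322 message."""
    msg = email.message_from_string(raw_message)
    hits = []

    subject = (msg.get("Subject") or "").lower()
    if any(term in subject for term in BEC_SUBJECT_TERMS):
        hits.append("suspicious subject keyword")

    # Compare the domain a reply would actually go to against the
    # domain the mail claims to come from.
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    if reply_domain and reply_domain != from_domain:
        hits.append("Reply-To domain (%s) differs from From domain (%s)"
                    % (reply_domain, from_domain))
    return hits
```

Keyword matching alone would generate plenty of false positives on legitimate finance mail; in practice the Reply-To mismatch is the stronger of the two signals.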

“From a user side, awareness and training is critical,” says Cabrera. “From the boardroom down to the server room, make sure [employees] know this is actually happening.” He also advises taking a close look at gateway tools, what they’re deploying, and how they can protect email.

“You need to understand the gateway is a critical line of defense and we need to be able to defend it,” he adds.

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/bec-attacks-to-exceed-$9b-in-2018-trend-micro/d/d-id/1330853?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Rogue Chrome, Firefox Extensions Hijack Browsers; Prevent Easy Removal

Malwarebytes describes malicious extensions as ‘one of a kind’

Any malware that hijacks your browser to serve up ads or to redirect you to random websites can be annoying. Even more so are extensions that take control of your browser and prevent you from landing on pages that can help you get rid of them.

Security researchers at Malwarebytes recently discovered extensions for Chrome and Firefox that display precisely that behavior. According to the security vendor, the extensions are designed to hijack browsers and then block users from removing them by closing out pages with information on extensions and add-ons, or by steering users to pages where extensions aren’t listed. Rogue extensions like these are often an overlooked attack vector that can leave organizations exposed to serious threats.

News of the rogue extensions follows a report from the ICEBRG Security Research team just this week about several malicious Chrome extensions in Google’s Chrome store that have impacted some 500,000 users around the world, including many organizations.

“The Chrome extension is a one-of-a-kind so far,” says Pieter Arntz, malware intelligence researcher at Malwarebytes. The code that forces the extension to install on a victim’s browser looks to be reused from another family of forced extensions, he says. “But the code to take users away from the extensions list in Chrome, I’ve never seen before.”

The Firefox extension was a first as well when Malwarebytes initially spotted it, Arntz says. But researchers have already spotted a second version of it since then, he said.

The Chrome extension seems targeted at a specific demographic since it is in Spanish and promises to give users the weather in Colombia. But when installed, it opens a minimized Chrome window to the side of the screen that then accesses dozens of YouTube videos every minute, Arntz says. “So, we assume it was designed to quickly drive up the number of views for those videos.” The extension has been around for several weeks and is available in the Chrome Web Store, he notes.

The Firefox extensions, meanwhile, are being pushed by cryptocurrency faucets and similar websites that reward visitors with free content or other incentives for completing tasks like watching ads or solving captchas.

One of the ways users can be trapped into forced installs of malicious browser extensions is by landing on websites designed solely for that purpose. Users often end up on these sites via redirects from adult, keygen, and software-cracking sites, according to Malwarebytes.

“What we call a forced install is when a website is designed to keep the user there until he decides to install the extension,” Arntz says. Such websites employ JavaScript, login prompts, and various HTML5 tricks to essentially lock down the browser and prevent the user from browsing to another site or even closing the tab until the extension is installed.

Chrome users have an easier time escaping such sites: they can simply open a new tab and then shut down the offending one, while Firefox users can only close them via the Task Manager.

However, compared to Chrome users, Firefox users can disable the rogue extension more easily once it is actually installed, simply by running the browser in Safe Mode. Firefox’s Safe Mode shows users a list of all browser extensions, even when the extensions are not active, making it relatively simple to uninstall unwanted ones. Chrome, in contrast, does not let users see any installed extensions when it is started with extensions disabled.

“In Chrome, you will have to figure out the name of the extension folder and make some significant change there before you can access the list of extensions. Chrome not showing the extensions when you start it with the extensions disabled [is] a big handicap there,” Arntz says.
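Arntz’s point about figuring out the name of the extension folder can be partly automated from outside the browser: each installed Chrome extension lives in a folder named by its ID, containing per-version subfolders with a manifest.json that carries the display name. A hedged sketch; the profile path in the comment is an assumption that varies by OS and installation:

```python
import json
from pathlib import Path

def list_chrome_extensions(extensions_dir) -> dict:
    """Map extension ID -> display name by reading each manifest.json
    under <profile>/Extensions/<id>/<version>/."""
    found = {}
    for manifest in Path(extensions_dir).glob("*/*/manifest.json"):
        ext_id = manifest.parent.parent.name  # the <id> folder
        try:
            name = json.loads(manifest.read_text(encoding="utf-8")).get("name", "?")
        except (OSError, json.JSONDecodeError):
            name = "?"
        found[ext_id] = name
    return found

# Example (Linux default profile; adjust the path for your OS):
# list_chrome_extensions(Path.home() / ".config/google-chrome/Default/Extensions")
```

Localized extensions store a placeholder like `__MSG_appName__` in the name field, so a thorough version would also resolve the _locales directory; this sketch only covers the simple case.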


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/rogue-chrome-firefox-extensions-hijack-browsers-prevent-easy-removal/d/d-id/1330854?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

And Oracle E-biz suite makes 3: Package also vulnerable to exploit used by crypto-currency miner

A third Oracle enterprise package has been patched against a crypto-mining exploit.

Security outfit Onapsis warns that Oracle E-Business Suite (EBS) is vulnerable to the cryptocurrency miner exploit that was recently used to hack Oracle’s PeopleSoft and WebLogic servers. Campaigns based on these security shortcomings have netted crooks $250K in digital currency, according to some estimates.

Onapsis is warning of two highly critical vulnerabilities affecting Oracle EBS, patched in Oracle’s latest quarterly update batch on Tuesday. Both were SQL injection vulnerabilities, one of the most common classes of web application security flaw.
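Onapsis hasn’t published the vulnerable code, but the flaw class itself is easy to illustrate. A generic sketch using Python’s built-in sqlite3 (standing in for any database layer, and unrelated to the Oracle code in question) of why concatenated input is exploitable and a bound parameter is not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: user input concatenated into the statement -- the payload
# rewrites the WHERE clause so it matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

# SAFE: a bound parameter is treated strictly as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # [] -- no user is literally named "' OR '1'='1"
```

The fix is the same in every database API: never build statements by string concatenation; always bind user input as parameters.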

The January patch batch collectively tackles 237 security vulnerabilities.

“While PeopleSoft contains sensitive HR information, Oracle E-Business Suite can potentially host HR, Finance, Purchase and other types of critical information to the business making the risk to these systems even greater,” Onapsis warns. “Enterprises that fail to install Oracle’s critical WebLogic patch from last October could now find their EBS, PeopleSoft and cloud-based servers churning out cryptocurrency – and even worse allowing attackers to gain access into the Oracle ERP system.”

A representative of Oracle responded promptly to El Reg’s query to say the firm had no immediate comment on Onapsis’s findings. We’ll update this story as and when any new information comes to hand.

Attackers abused an Oracle WebLogic vulnerability, patched last October, to mine Monero and other lesser-known crypto-currencies on unpatched servers, the SANS Technology Institute warned earlier this month.

Poor input sanitisation in a WebLogic component created a means for an unauthenticated attacker to run arbitrary commands. The vulnerability also affects Oracle’s PeopleSoft software, which can include WebLogic as a server, as previously reported by El Reg. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/18/oracle_app_crypto_mining_vuln/

Google fuels up Chromecast Wi-Fi flooding fix

Google has confirmed plans to issue a patch for Chromecast and Google Home aimed at resolving a traffic flooding problem that was swamping home networks.


The fix – due later today – follows a series of work-around updates from router manufacturers aimed at containing the problem, which flared up earlier this week. “In certain situations, a bug in the Cast software on Android phones may incorrectly send a large amount of network traffic which can slow down or temporarily impact Wi-Fi networks,” Google admitted in a customer advisory.

The issue stems from a bug in Android software used in conjunction with the affected kit. Google plans to deal with the issue via a Google Play services update this Thursday, January 18.

Cast sends multicast DNS (MDNS) packets as a keep-alive for network connections to products like Google Home and Chromecast. But a programming error meant the feature wasn’t turned off when devices were sleeping. Once devices wake from this sleep mode, they send a large volume of data in a short amount of time, flooding wireless connections with junk traffic.

MDNS uses UDP, a technology that lacks congestion control. That’s bad enough by itself, but the longer a device “slept”, the bigger the data burst it sent once it woke up, a factor that compounded the problem, as previously reported. ®
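Google hasn’t published the fix itself, but the shape of the bug suggests the remedy: don’t flush every keep-alive deferred during sleep, collapse the backlog into a single packet on wake. A purely illustrative sketch (the class and field names are invented, not Cast internals):

```python
class KeepAliveScheduler:
    """Illustrative fix for the Cast wake-up burst: count keep-alives
    missed while asleep, but send a single packet on wake instead of
    flushing one packet per missed interval."""

    def __init__(self, interval_s: float = 20.0):
        self.interval_s = interval_s
        self.pending = 0   # keep-alives deferred during sleep
        self.sent = []     # timestamps of packets actually sent

    def tick(self, now: float, asleep: bool) -> None:
        if asleep:
            # The buggy behaviour effectively queued a packet per
            # missed interval here, to be flushed all at once later.
            self.pending += 1
            return
        if self.pending:
            self.sent.append(now)  # one packet stands in for the backlog
            self.pending = 0       # stale keep-alives are discarded

# A device asleep for 1000 intervals sends 1 packet on wake, not 1000.
sched = KeepAliveScheduler()
for i in range(1000):
    sched.tick(now=i * 20.0, asleep=True)
sched.tick(now=20000.0, asleep=False)
print(len(sched.sent))  # 1
```

This mirrors the article’s point: because MDNS rides on UDP with no congestion control, the only place to prevent the burst is the sender’s own scheduling logic.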


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/18/chromecast_flooding_fix/

F-35 ‘incomparable’ to Harrier jump jet, top test pilot tells El Reg

Interview What’s it like to fly an F-35 fighter jet? We interviewed the chief British test pilot on a uniquely British flying technique – and then had a play with a full cockpit simulator to find out for ourselves.

Squadron Leader Andy Edgell is the Royal Air Force’s top test pilot for the F-35 flight trials programme. A former Harrier pilot with sea time on both of the UK’s previous aircraft carriers, Her Majesty’s Ships Ark Royal and Illustrious, as well as operational deployments to Kandahar, Afghanistan, he is now based at the US Navy’s test base at Patuxent River. He spoke to The Register in London yesterday at an F-35 press event.

In his view the F-35 and the Harrier, despite broadly doing the same thing (landing vertically), are “almost incomparable” in flying terms: “The design principle of the F-35 is ‘low effort’ while the Harrier is a challenge to fly.”

Andy explained: “The human brain has a finite capacity and we don’t want to use that on flying… we want to concentrate on being an operator of sensors.”

The theory behind the F-35’s “sensor fusion” concept is that by putting some of the world’s most advanced radars and other sensors on it, and then networking those with other F-35s, the unparalleled situational awareness this gives the pilots makes them a far more formidable fighting unit than other current frontline fighter jets.

But does the high level of automation leave you “vulnerable” to the aircraft’s whims while the pilot pores over his screens, we wondered? “You are in charge, if you choose to use it. Additional automation is there too – height, speed, heading hold. If you need to be hands-on with the throttle and stick, that’s available. If you had a dynamic flight, you can dial that down.”

SRVL – a thoroughly British bit of innovation

Andy also talked about the “uniquely British” manoeuvre that the UK team at Pax River developed, the shipborne rolling vertical landing (SRVL). For a jet fighter like the Harrier or the F-35, the normal landing technique on an aircraft carrier is to fly over the designated spot, hover and gently set down. But, as Andy explained, this reduces the amount of what he described as “Bernoulli lift” generated by the aircraft’s wings. With less lift available, you reduce the maximum landing weight (too heavy and you break the undercarriage during the thump of touchdown) – and therefore the pilot may have to jettison expensive missiles and fuel to bring the aircraft back within safe vertical landing limits.

[YouTube video: the VAAC Harrier in action on an aircraft carrier]

With the SRVL technique, however, the pilot combines a vertical landing with a traditional horizontal landing like you’d see at an airport. Doing so increases the amount of Bernoulli lift available and, in naval aviation terms, increases the number of unused missiles that can be brought home to fight another day.

“It’s a 35-knot overtaking speed at a seven-degree angle relative to the boat,” Andy said. “You’re literally coming down at the perfect speed and the perfect angle. This is British, utterly British,” he enthused. “Everything we’ve done with the VAAC Harrier at places like Boscombe [Down, home of British military aviation research], stuff with modelling on how aircraft flies, it’s brilliant.”

“The VAAC Harrier developed this years ago, with landings on [French aircraft carrier] Charles de Gaulle and the principles behind it were invented by the British,” said Andy. The VAAC (Vectored thrust Advanced Aircraft Control) system, developed over the 1980s and 1990s by the British aeronautical industry, was eventually incorporated in the production F-35B, as is being flown by the RAF, the Royal Navy, the US Marines and Italy.

That theme of automation also plays into training for operating the F-35. According to both Andy and BAE Systems, the biggest sub-contractor on the F-35 project, around 3,000 hours of test flying have been completed on the full-motion simulator at BAE’s Warton plant. Faith in the fidelity of the simulators is critical for the “flight” trials taking place in the UK, which include both test flying and the training of landing signals officers (LSOs), F-35 pilots tasked with talking their comrades safely down to the deck. The simulators for both are linked, meaning the trainee pilot and trainee LSO can interact.

Andy praised the dedication of the BAE team working on the trials, joking: “Every time I see them I’ll say, how’s the marriage going?”


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/18/f35_uk_test_pilot_interview_sim_flight/

Someone is touting a mobile, PC spyware platform called Dark Caracal to governments

An investigation by the Electronic Frontier Foundation and security biz Lookout has uncovered Dark Caracal, a surveillance-toolkit-for-hire that has been used to suck huge amounts of data from Android mobiles and Windows desktop PCs around the world.

Dark Caracal [PDF] appears to be controlled from the Lebanon General Directorate of General Security in Beirut – an intelligence agency – and has slurped hundreds of gigabytes of information from devices. It shares its backend infrastructure with another state-sponsored surveillance campaign, Operation Manul, which the EFF claims was operated by the Kazakhstan government last year.

Crucially, it appears someone is renting out the Dark Caracal spyware platform to nation-state snoops.

“This is definitely one group using the same infrastructure,” Eva Galperin, the EFF’s director of cybersecurity, told The Register on Wednesday. “We think there’s a third party selling this to governments.”

Dark Caracal has, we’re told, been used to siphon information from thousands of targets in more than 21 countries: private documents, call records, audio recordings, text messages, contact information, and photos, taken from military, government, and business targets, as well as activists and journalists.

Dark Caracal has an impressive geographical reach … Each dot marks the general location of an infected victim

After the EFF published its dossier on the Operation Manul cyber-snooping program in 2016, Lookout went looking through its database of collected malware samples to hunt down the spyware responsible. Lookout found the code nasty, a custom-made piece of Android evilware dubbed Pallas, which appears to be a component of the Dark Caracal toolkit.

In other words, Pallas is used to hijack targets’ smartphones, and is distributed and controlled via the Dark Caracal platform rented out to governments.

Speaking of govt malware…

At the end of last year, infosec outfit Dragos published details of Trisis, aka Triton: a software nasty that infects Schneider Electric’s Triconex industrial safety systems that protect critical processes in factories and similar environments. It has invaded at least one organization, which is based in the Middle East.

FireEye also spotted the code in the wild, and said it appeared to be government-grade malware “preparing for an attack.”

The primary way to pick up Pallas on your gadget is by installing infected applications – such as WhatsApp and Signal ripoffs – from non-official software souks. Pallas doesn’t exploit zero-days to take over a device, but instead relies on users being tricked into installing booby-trapped apps, and granting the malicious software a large variety of permissions. Once in place, it can thus surreptitiously record audio from the phone’s microphone, reveal the gizmo’s location to snoops, and leak all the data the handset contains to its masters.

In addition, the Dark Caracal platform offers another surveillance tool: a previously unseen sample of FinFisher, the spyware package sold to governments to surveil citizens. It’s not known if this was legitimately purchased, or a demo version that was adapted.

On the desktop side, Dark Caracal provides a Delphi-coded Bandook trojan, previously identified in Operation Manul, that commandeers Windows systems. Essentially, marks are tricked into installing and running infected programs signed with a legitimate security certificate. Once up and running, the software nasty downloads more malware from command-and-control servers. The code pest can also be stashed in Microsoft Word documents, and executed using macros – so beware, Office admins.

The EFF and Lookout are trying to find out who exactly is running and using the Dark Caracal network. An update is expected in the summer, once attribution can be made with some certainty. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/18/dark_caracal_malware/

The Startup Challenge: Safe in the Cloud from Day One

How a Seattle travel company built a rock-solid mobile app without sacrificing performance or security.

Some startups see security as a nice-to-have that can be added months or years after launch. The smart ones realize that dependable security from the beginning means solid performance, satisfied customers, and no precious startup dollars wasted on fraud or incidents. F5 Labs decided to peek under the hood of one of these smart startups: Wanderlust Society. This Seattle-based company was created by a team of Amazon veterans looking to reduce the hassle while increasing the enjoyment of travel planning. Wanderlust Society created a Web application that wrangles the long tail of personalized recommendations and the online community for travelers looking to take and share highly curated trips.

Before you begin building your architecture, it’s a good idea to have a well-defined idea of what you want. Wanderlust Society thought this through and set the following as the primary goals for their Web application:

  • Mobile optimized
  • Secure
  • Fast
  • Highly available
  • Easily scalable

Understand Security Risks
To build a security risk model, you use these goals to anticipate potential threats. It’s not enough to just say things like, “we want our site to be secure.” Security can mean different things to different organizations, so risks need to be spelled out in detail. Developers and architects then use the risk model to make tradeoffs. Wanderlust Society did an excellent job of defining these:

  • Unauthenticated users should only be able to read or write data and APIs that are explicitly marked publicly available.

  • Authenticated users should only be able to see and change their own data.
  • Authenticated users should see shared data from other users.
  • Authenticated users should not be able to read or write system data.
  • Attackers should not be able to access the system by stealing an authenticated user’s credentials.
  • Attackers should not be able to steal/scrape Wanderlust Society data.
  • Attackers should not be able to intentionally crash, degrade, or modify site functionality.

This list is by no means carved in stone. It can and should be reviewed periodically and updated as conditions change.
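Rules like these translate naturally into a server-side authorization check. Here is a minimal sketch; all names are hypothetical illustrations, not Wanderlust Society’s actual code:

```python
# Minimal sketch of a server-side authorization check derived from a
# risk model like the one above. All names are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Resource:
    owner_id: Optional[str]              # None for system data
    public: bool = False                 # explicitly marked publicly available
    shared_with: frozenset = frozenset() # users the owner shared with
    system: bool = False                 # internal/system data


def can_read(user_id: Optional[str], res: Resource) -> bool:
    if res.system:       # system data: never readable through the API
        return False
    if res.public:       # public data: anyone, even unauthenticated
        return True
    if user_id is None:  # unauthenticated users get public data only
        return False
    return user_id == res.owner_id or user_id in res.shared_with


def can_write(user_id: Optional[str], res: Resource) -> bool:
    # Writes are stricter: only the owner of non-system data.
    return user_id is not None and not res.system and user_id == res.owner_id
```

Each bullet in the risk model maps onto one branch of these checks, which makes the rules easy to review against the code.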

Architect to Meet Goals and Address Risks
Once Wanderlust Society figured out goals and risks, they worked out architecture and security controls, including the following:

Mobile Optimized
For a powerful mobile experience, the site needed to be super-fast to load (ties to Fast goal), so the core JavaScript is only 90 KB (compressed). This means that the site works great even on a slow 3G mobile connection.

Secure
Wanderlust Society built their application in the cloud and they also correctly realized that application security in the cloud is their responsibility. That means they had to build and configure the proper tools to lock things down to their specific risks.

First, the application was designed to respond only to HTTPS requests, so all communication is encrypted. Second, the application was partitioned with firewalls and rules locking down traffic in both directions (to reduce attack surface and exfiltration). Databases are in a restricted, non-public subnet and firewalled to a single port. This reduces the risk of attackers stealing data from users.

Passwords are a common way to authenticate, but they are also fragile and a burden to manage properly. Wanderlust Society chose an alternate method and went with Federated Identity. This means their Web application pulls from another trusted authentication repository such as a third-party website where a user is already registered. Wanderlust Society chose to federate from Facebook because most people already have a Facebook account. Also, Facebook’s infrastructure and platform are used by billions daily, so they’ve been proven reliable and secure.
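In practice, federating from Facebook means the server must verify each client-supplied login token with Facebook before trusting it. Facebook’s Graph API exposes a `debug_token` endpoint for this; the sketch below only builds the verification URL (the app ID and secret are placeholders, and the actual request and response handling are omitted):

```python
# Sketch of server-side verification of a Facebook login token, one way to
# implement the federation described above. App credentials are placeholders.

from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com"


def debug_token_url(user_token: str, app_id: str, app_secret: str) -> str:
    """Build the Graph API URL that asks Facebook whether a client-supplied
    token is valid and which app/user it was issued for."""
    app_token = f"{app_id}|{app_secret}"  # app access token form
    qs = urlencode({"input_token": user_token, "access_token": app_token})
    return f"{GRAPH}/debug_token?{qs}"

# The server would GET this URL and accept the login only if the response's
# data.is_valid field is true and data.app_id matches its own app ID.
```

The key point is that the client’s token is treated as untrusted input until the identity provider confirms it.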

To securely track users, Wanderlust Society used a request/access token system for all service calls. When a user authenticates, he or she is granted a token tied to the originating client device. Because you never want to trust user input, the token is constantly verified at the server side.
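The article doesn’t specify the token format; one common pattern is an HMAC-signed token that binds the user to the originating device, so the server can verify both on every call without trusting the client. A standard-library sketch, under that assumption:

```python
# Hedged sketch of a device-bound, HMAC-signed access token.
# The key and format are illustrative, not Wanderlust Society's design.

import base64
import hashlib
import hmac
from typing import Optional

SERVER_KEY = b"example-only-secret"  # in real use, a securely stored secret


def issue_token(user_id: str, device_id: str) -> str:
    """Sign (user, device) so later service calls can be verified statelessly."""
    payload = f"{user_id}:{device_id}".encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"." + sig).decode()


def verify_token(token: str, device_id: str) -> Optional[str]:
    """Return the user ID if the token is intact and bound to this device."""
    try:
        raw = base64.urlsafe_b64decode(token.encode())
        payload, sig = raw.rsplit(b".", 1)  # hex signature contains no "."
        user_id, bound_device = payload.decode().split(":", 1)
    except (ValueError, UnicodeDecodeError):
        return None
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest().encode()
    if hmac.compare_digest(sig, expected) and bound_device == device_id:
        return user_id
    return None
```

A tampered token or a token replayed from a different device fails verification, which matches the “never trust user input” rule above.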

Since Wanderlust Society recognized that user input can never be trusted, they also built in server-side data validation checks and parameterized SQL statements to prevent injection attacks.
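Parameterized statements are simple to demonstrate. In this minimal sqlite3 sketch (table and column names are illustrative), the untrusted value is bound as a parameter, so a classic injection string is treated as data rather than SQL:

```python
# Demonstration of parameterized SQL preventing injection (illustrative schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (id INTEGER PRIMARY KEY, owner TEXT, title TEXT)")
conn.execute("INSERT INTO trips (owner, title) VALUES (?, ?)", ("alice", "Kyoto"))

# Untrusted input goes in as a bound parameter, never via string formatting.
user_input = "' OR '1'='1"
rows = conn.execute(
    "SELECT title FROM trips WHERE owner = ?", (user_input,)
).fetchall()
assert rows == []  # the injection attempt matches nothing

rows = conn.execute(
    "SELECT title FROM trips WHERE owner = ?", ("alice",)
).fetchall()
assert rows == [("Kyoto",)]
```

Had the query been built with string concatenation instead, the same input would have returned every row in the table.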

Fast
As described in the mobile optimized goal, the Wanderlust Society application was designed to be fast. In addition to the app design, they also leveraged cached content delivery networks at the edge for all site images as well as frequently used data.

Highly Available
Being Amazon veterans, the app developers are experts at leveraging Amazon Web Services (AWS). The server instances run in Elastic Compute Cloud (EC2) behind load balancers, with CloudWatch handling monitoring and alarms, and the application is deployed across multiple availability zones.

Easily Scalable
Wanderlust Society’s application services follow the microservices architecture model, in which each service is small and focused on a set of closely related tasks. This allows services to be deployed and scaled independently. Code is hosted in Docker containers within the EC2 instances, which scale to meet Wanderlust Society’s requirements.

Tradeoffs
There are always tradeoffs. One big one was using Facebook to federate identity. A minority of people don’t trust Facebook and refuse to use their service, and some people are just not interested in signing up with Facebook. Those potential customers will probably not choose to join for now. Supporting federated identities allowed Wanderlust Society to push the development work of building their own secure account creation and login functionality to a future time when they have more resources. A worthwhile tradeoff, since building an authentication system from scratch requires expertise and thorough testing.

The second tradeoff was using the cloud versus an on-premises solution. Here, Wanderlust Society went back to its core mission: building software that helps people travel, not IT operations. So off to the cloud.

Wanderlust Society is off to a strong start with shrewd practices, including articulating their goals, doing a risk analysis against those goals, and choosing appropriate responses to counter those risks while weighing the tradeoffs.

Get the latest application threat intelligence from F5 Labs.

 

Raymond Pompon is a Principal Threat Researcher Evangelist with F5 Labs. With over 20 years of experience in Internet security, he has worked closely with Federal law enforcement in cyber-crime investigations. He has recently written IT Security Risk Control Management: An … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/f5/the-startup-challenge-safe-in-the-cloud-from-day-one/a/d-id/1330829?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Schneider Electric: TRITON/TRISIS Attack Used 0-Day Flaw in its Safety Controller System, and a RAT

ICS/SCADA vendor discloses in-depth analysis of a recent targeted attack against one of its customers.

[UPDATED 12:50pmET with information from Schneider’s customer advisory issued today]

S4x18 CONFERENCE – Miami – Industrial control systems giant Schneider Electric discovered a zero-day privilege-escalation vulnerability in its Triconex Tricon safety-controller firmware which helped allow sophisticated hackers to wrest control of the emergency shutdown system in a targeted attack on one of its customers.

Researchers at Schneider also found a remote access Trojan (RAT) in the so-called TRITON/TRISIS malware that they say represents the first-ever RAT to infect safety-instrumented systems (SIS) equipment. Industrial sites such as oil and gas and water utilities typically run multiple SISes to independently monitor critical systems to ensure they are operating within acceptable safety thresholds, and when they are not, the SIS automatically shuts them down. 

Schneider here today provided the first details of its investigation of the recently revealed TRITON/TRISIS attack that targeted a specific SIS used by one of its industrial customers. Two of the customer’s SIS controllers entered a failsafe state that shut down the industrial process and ultimately led to the discovery of the malware.

“Once the malware was inside the controller, it injected the RAT into memory by exploiting a zero-day vulnerability in the firmware, and escalating its privileges” to do so, says Paul Forney, global cybersecurity architect for Schneider Electric’s product security office in North America, in an interview.

Schneider plans to release an update for the entire Version 10X firmware family. In the meantime the company has issued advisories for its affected customers as well as tools to detect and mitigate the attack.

Forney, here today at the S4x18 ICS/SCADA conference, publicly shared the first details of just how TRITON/TRISIS operated on the company’s Triconex Tricon safety-instrumented systems (SIS), resulting in the rare shutdown of the live safety systems. SIS systems are not typically under the domain of security teams, and operate under triple redundancy in case one system fails.

The victim organization was running Version 10.3 of the Triconex firmware, according to Schneider, which declined to specify the name, location, or industry sector of the victim. But one security firm that studied the malware, Dragos Inc., says the victim is an industrial customer in the Middle East.

Teams of researchers from Dragos and FireEye’s Mandiant last month each published their own analysis of the malware used in the attack, noting that the smoking gun – a payload that would execute a cyber-physical attack – had not been found.

But it turns out TRITON/TRISIS was literally a fail and didn’t make it to an actual cyber-physical attack phase, according to Schneider’s analysis. “We now know a real attack probably never took place. There was a mistake in the development of the malware that accidentally caused the Triconex to … be tripped and taken to a safe state. As a result, this malware that was in development was uncovered,” says Andrew Kling, director of cyber security and software practices for Schneider Electric.

FireEye also noticed that there was a “bug” in the payload delivery system. “The script was successful, but it backed itself out. We don’t believe that was supposed to happen,” explains Blake Johnson, a consultant with Mandiant, a FireEye company.

Johnson says there was no other payload found that would have led to a full-blown attack. “They [the attackers] either had it or didn’t deploy it because they screwed it up … or they hadn’t created the capability yet.”

The researchers weren’t able to pinpoint the attacker’s ultimate goal, either. “The ultimate intent is speculation. We simply don’t know,” Kling says. “You can go from simple intellectual property theft, all the way up to Hollywood script material,” he says, alluding to a worst-case cyber-physical attack.

TRITON/TRISIS Up Close

Schneider obtained the malware from the victim’s infected controller system, and studied its behavior on the proprietary Triconex Tricon controller. The attackers had specifically targeted the victim’s controller and older firmware configuration, indicating that they were intimately familiar with the system. But still unknown is just how the attackers conducted the reconnaissance phase prior to the malware infection.

Schneider’s controller is based on proprietary hardware that runs on a PowerPC processor. “We run our own proprietary operating system on top of that, and that OS is not known to the public. So the research required to pull this [attack] off was substantial,” including reverse-engineering it, Forney says. “This bears resemblance to a nation-state, someone who was highly financed.”

The attackers also had knowledge of Schneider’s proprietary protocol for Tricon, which also is undocumented publicly, and used it to create their own library for sending commands to interact with Tricon, he says.

Forney points out that the malware technically had infected the safety controller, and the “attack itself would come much later” if it had not been found out.

TRITON/TRISIS is an attack framework made up of two programs, according to Schneider: the first exploits the Triconex zero-day flaw to escalate user privileges, allowing the attacker to manipulate the firmware in RAM and implant the second program, the RAT.

“It’s running in the highest privilege of the machine, and that’s going to allow an attacker to interface with that RAT to do what it wants,” Forney explains.

The RAT basically is there awaiting instructions: to read or write in memory or a control program, for example. “Once it’s set up and ready to go the very moment [the attacker] wants the [safety] controller to not do what it’s intended to do,” Kling says.

For an attacker to leverage TRITON/TRISIS, he or she would need access to the safety network and control of a workstation on the network – such as the Triconex TriStation Terminal – and the Tricon memory-protection mode must be set to “PROGRAM,” he explains.

Schneider’s Forney and Kling say they have no knowledge of any other victims of the malware.

Game-Changer

The attack represents the first such incident to affect the OT engineering department, notes Rob Lee, CEO and founder of Dragos. “It’s not targeting the operational level of HMIs or SCADA devices,” he says; instead it targets engineering systems in order to change the logic on a system dedicated to protecting physical environments and people.

“You’re going to see TRISIS have a longer-term impact than probably anything else for the engineering community,” he says.

The attack was a wakeup call for other ICS/SCADA vendors as well; their safety controller systems, too, can be juicy targets for sophisticated attackers. An attacker with this level of skill is now an industry-wide problem, notes Schneider’s Kling. “It could be any of our competitors” targeted this way as well, he says.

An attack that manipulates the memory of a controller is something “no one saw” coming, adds Forney.

Schneider has shifted gears internally in the wake of the attack, updating its threat model to account for safety-system attacks and memory injection. “We need to adapt our procedures and development processes to adapt to this new reality, and we are actively doing that now,” Kling says.

Defense

While the Triconex Tricon firmware update is being “fast-tracked” by Schneider, the vendor in the meantime is providing defense and mitigation strategies for customers to thwart the attack. Once the firmware is ready, the vendor will send tech support teams onsite to “re-burn and re-flash” the firmware, Forney says.

Schneider has built TRISIS/TRITON detection tools for its support teams, and is providing customers detection and cleanup recommendations in new advisories issued today. Among the recommendations: ensure the physical memory-protection switch is in RUN mode and not PROGRAM mode (except during scheduled programming), which could leave it vulnerable to malicious code. 

In its customer advisory, Schneider recommends:

  • Ensure the cybersecurity features in Triconex solutions are always enabled.
  • Safety systems must always be deployed on isolated networks.
  • Physical controls should be in place so that no unauthorized person would have access to the safety controllers, peripheral safety equipment or the safety network.
  • All controllers should reside in locked cabinets and never be left in the “PROGRAM” mode.
  • All Tristation engineering workstations should be secured and never be connected to any network other than the safety network.
  • All methods of mobile data exchange with the isolated safety network, such as CDs, USB drives, and DVDs, should be scanned before use in the Tristation engineering workstations or any node connected to this network.
  • Laptops and PCs should always be properly verified to be virus and malware free before connection to the safety network or any Triconex controller.
  • Operator stations should be configured to display an alarm whenever the Tricon key switch is in the “PROGRAM” mode.
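The last recommendation, alarming on the key-switch position, can be sketched as a simple polling check on the operator station. Everything here is hypothetical: the real switch state would come from the Tricon communication interface, not a Python callable, and `read_keyswitch` is a stand-in, not a Triconex API:

```python
# Hypothetical sketch of an operator-station alarm for the Tricon key switch.
# read_keyswitch() stands in for whatever interface reports the physical
# switch position; it is not a real Triconex API.

RUN, PROGRAM, REMOTE = "RUN", "PROGRAM", "REMOTE"


def check_keyswitch(read_keyswitch, raise_alarm, maintenance_window=False):
    """Raise an alarm whenever the switch is in PROGRAM outside of
    scheduled programming (the maintenance window). Returns True if fired."""
    state = read_keyswitch()
    if state == PROGRAM and not maintenance_window:
        raise_alarm(f"Tricon key switch is in {state} mode")
        return True
    return False


# Example wiring with stubs: collect alarms in a list instead of an HMI.
alarms = []
fired = check_keyswitch(lambda: PROGRAM, alarms.append)
```

Run on a schedule, a check like this turns a silent misconfiguration into a visible operator event, which is exactly what the advisory asks for.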

But Reid Wightman, a vulnerability analyst at Dragos, warns that if an attacker can upload logic to the controller firmware, he or she can override the behavior of that physical switch. “Even if it’s in RUN mode, it can be tricked into believing it’s in PROGRAM mode and allowed to accept code.”

He says he’s studied multiple vendors’ embedded controllers, and most have security weaknesses in the firmware, including the use of third-party libraries. “You can’t trust a controller anymore,” he notes.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/schneider-electric-triton-trisis-attack-used-0-day-flaw-in-its-safety-controller-system-and-a-rat/d/d-id/1330845?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple