Serious Security: When cryptographic certificates attack

Artificial intelligence, fuzzy logic, neural networks, deep learning…

…any tools that help computers to behave in a way that’s closer to what we could call “thinking” are immensely useful in fighting cybercrime.

That’s because what’s generally known today as machine learning is good at dealing quickly with immense amounts of threat-related data, pruning out the many irrelevancies to leave the interesting and important stuff in clear sight.

But don’t knock human savvy just yet!

Sometimes, a single, informed glance by a human expert is more than enough, like this great tweet from last week by computer security practitioner Paul Melson:

Do you see what I see? -----BEGIN CERTIFICATE----- UEsDBBQ…

Melson didn’t say exactly how or where he came across the file mentioned in his tweet. Given that he describes himself as an “unrepentant blue teamer” – someone whose job is to keep unwanted visitors out of a network, or to find and eject those who have already sneaked in – it’s reasonable to infer that he oughtn’t, and isn’t planning, to tell us. Let’s just assume he spotted the file as part of ruining some malicious hacker’s sneaky experiments.

If you’re a security researcher yourself, you’re probably going, “Hey, that’s cool!” (Or, perhaps more appropriately, “That’s very uncool.”)

But if you aren’t a sysadmin, you might be wondering what the fuss is about – so we figured it would be informative to dig into the story behind the story.

Why does the text -----BEGIN CERTIFICATE----- UEsDBBQ... ring all sorts of alarm bells, and what do those bells tell you?

Here goes.

Why the alarm bells?

If you’ve ever dealt with public key cryptography – for example, setting up web servers to accept HTTPS connections – you’ll know you need a public/private keypair and a cryptographically signed certificate that vouches for your public key.

HTTPS relies on an underlying protocol called TLS, short for Transport Layer Security, and most TLS systems use a file format called X.509 to store their cryptographic material.

X.509 is a product of the 1980s telecommunications world, and follows the fashions of that era – its native representation is based on a rather complicated binary storage format called DER (Distinguished Encoding Rules), which is, in turn, based on the resoundingly named Abstract Syntax Notation One (ASN.1).

Let’s make a self-signed certificate of our own to play with (we used Lua code here, linked with LibreSSL, but you don’t need to reproduce what we did, so don’t worry if you aren’t a programmer):
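The article’s original code isn’t shown here, but if you want to follow along at home, here’s a minimal sketch that does the same job. It assumes Python and the third-party cryptography package rather than the Lua-plus-LibreSSL code the author actually used:

# Minimal sketch: generate a throwaway self-signed certificate
# (assumes the third-party "cryptography" package is installed)
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "test.example")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)            # self-signed, so subject and issuer match
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=30))
    .sign(key, hashes.SHA256())
)

with open("test.der", "wb") as f:  # save the raw binary DER form
    f.write(cert.public_bytes(serialization.Encoding.DER))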

We saved the raw binary data of the certificate as a DER file, giving us a decidedly text-unfriendly certificate that looks like this when dumped:

Even with the careful help of LibreSSL’s built-in ASN.1 parser, we get the still-not-very-readable:

To make X.509 certificates more robust – so you can add them to emails, keep them in text files and so on without risk of corruption – they are usually saved in Privacy-Enhanced Mail format, or PEM for short:

PEM files consist of the raw DER data converted into base64, a text-only format in which four text characters are used to encode every three bytes of binary data, thus sticking to plain ASCII and avoiding control characters, risky punctuation marks and so on.
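You can check the four-characters-for-three-bytes rule for yourself with a couple of lines of Python (a quick illustration of base64, not code from the article):

import base64

base64.b64encode(b'\x01\x02\x03')    # b'AQID': 3 raw bytes come out as 4 text characters
len(base64.b64encode(bytes(30)))     # 40: every 3 bytes of input produce 4 bytes of output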

So, as a security practitioner, you’ll quickly get used to seeing -----BEGIN CERTIFICATE----- in security-related files.

In fact, you sort-of stop noticing certificates after a while – they aren’t supposed to be secret, and any modification, whether by accident or design, automatically renders them useless.

So why would Melson’s rogue certificate stand out?

Certificates are there to share. Every time someone connects to your website, you send them a copy of your certificate, which contains your public key and a digital signature by which a trusted third party vouches for the fact that it really is your public key, issued for your website.

What are certificates supposed to look like?

Let’s dump the first few bytes of the 150 certificates that are built into Mozilla products as officially-trusted certificate authorities – these are the trusted third parties that Mozilla currently accepts as fit to vouch for websites that you visit with Firefox:

Note that every certificate starts with the bytes 30 82 0x.

That’s because the X.509 encoding always kicks off like this:

30 - what follows is an X.509 SEQUENCE of objects
82 - the next 2 bytes encode the length of the rest of the objects
HH - the high 8 bits of the 2-byte length
LL - the low 8 bits of the 2-byte length

The lengths of all these Mozilla “root” certificates range from 442 to at most 2007 bytes, so their encoded lengths are never lower than 0x0100 (256 in decimal), and never bigger than 0x07FF (2047), so their X.509 encodings always start with one of these sequences:

30 82 01 00
30 82 01 01
30 82 01 02
. . .
30 82 07 fd
30 82 07 fe
30 82 07 ff

(We’ve printed the hexadecimal length of each DER file in the chart above – note how it always comes out as 4 plus the length encoded in the SEQUENCE marker, which accounts for the four bytes of the marker itself plus the length of the data that follows.)
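In code, recovering the length from those first four bytes is a one-liner. Here’s a small sketch that assumes the two-byte long-form length shown above (test.der is the hypothetical file from our earlier example):

def der_total_length(header: bytes) -> int:
    # header holds the first four bytes of a DER certificate:
    # 0x30 (SEQUENCE tag), 0x82 (two length bytes follow), then HH and LL
    assert header[0] == 0x30 and header[1] == 0x82
    return 4 + ((header[2] << 8) | header[3])    # 4 marker bytes plus the encoded length

with open("test.der", "rb") as f:
    print(der_total_length(f.read(4)))           # should match the DER file's size on disk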

Now, when you convert three bytes starting 30 82 0x into base64 notation, you end up with four encoded characters like this…

Raw          Base64
------       ------
30 82 00     MIIA
30 82 01     MIIB
30 82 02     MIIC
30 82 03     MIID
30 82 04     MIIE
. . . 

…and so on.

In fact, you get this pattern all the way to 30 82 3f:

Raw          Base64
------       ------
. . . 
30 82 3d     MII9
30 82 3e     MII+
30 82 3f     MII/
30 82 40     MIJA
30 82 41     MIJB

In other words, for any X.509 certificate that is less than 0x3FFF (16,383) bytes long, the first three base64 characters of the corresponding PEM data will always be MII, just as you saw in the example PEM certificate we created earlier.
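A short loop in Python confirms the pattern (again, just an illustration, not code from the article):

import base64

for third_byte in (0x00, 0x01, 0x3e, 0x3f, 0x40):
    print(base64.b64encode(bytes([0x30, 0x82, third_byte])))
# prints b'MIIA', b'MIIB', b'MII+', b'MII/' and then b'MIJA',
# so anything from 30 82 00 up to 30 82 3f starts with the tell-tale MII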

And therefore -----BEGIN CERTIFICATE----- UEsDBBQ smells plain wrong!

Know your base64 magic

Actually, there’s more to it than that.

The first two characters of a base64 sequence decode to the first 12 bits of the original content, and many popular file types always start with the same two or more bytes.

Constant bytes at the start of files are known in the jargon as magic numbers, because they magically tell you the type of the file.

More precisely, magic numbers give you a strong hint that a file will turn out to be of type X, or tell you that it can’t be of type Y or Z.

Here are some common examples:

Here are those sequences converted into base64 strings, shortened to just two or three characters for easy recognition:

Experienced security researchers will readily recognise dozens of these base64 “hints”, but even if you only memorise TV (which stands for MZ) and UE (for PK), you will be able to spot loads of potentially malicious files at a glance.
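If you want to work out the base64 form of a magic number yourself, Python’s standard base64 module will do it. In this sketch, the MZ, PK and 30 82 prefixes are the real magic numbers discussed above; the trailing bytes in each sample are just illustrative padding:

import base64

base64.b64encode(b'MZ\x90\x00')[:2]           # b'TV': Windows EXE/DLL (MZ header)
base64.b64encode(b'PK\x03\x04')[:2]           # b'UE': ZIP archive (PK header)
base64.b64encode(b'\x30\x82\x01\x00')[:3]     # b'MII': DER-encoded X.509 certificate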

The MZ file marker covers a whole range of Windows executable files, including both EXEs and DLLs, which share the same format.

And ZIP archives are used for many purposes, including by Android apps, which come with a .APK extension but are stored in ZIP format, and Microsoft Word files, which use a variety of different internal layouts, including ZIP.

All of this raises the question, “What had Melson spotted?”

From the UE at the start, you can tell at once it was a ZIP file, but what was inside?

Because it’s in ZIP format, we can present it to the unzip program to list what’s inside it, revealing the tell-tale internal structure of an Excel spreadsheet file:

The presence of a component called xl/vbaProject.bin suggests that this spreadsheet contains macros (embedded program code).
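If you’d rather script the check than run unzip by hand, Python’s standard zipfile module gives the same answer. A minimal sketch (the filename suspect.bin is hypothetical):

import zipfile

with zipfile.ZipFile("suspect.bin") as z:
    names = z.namelist()
    for name in names:
        print(name)                                  # list every member of the archive
    if "xl/vbaProject.bin" in names:
        print("This spreadsheet includes a VBA macro project")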

Extracting the VBA (Visual Basic for Applications) macro code from the vbaProject.bin file reveals an Auto_Open function.

As its name suggests, this function is intended to execute automatically when the document is opened:

By default, Office won’t automatically run macros, so a little caution will keep you safe – crooks trying to spread malware this way not only have to persuade you to open the document but also have to trick you into clicking a button to [Enable macros].

Fortunately, it looks as though Melson spotted this one while the crooks were still messing around with the idea, because the embedded Auto_Open macro simply tries to run a program called C:\shell.exe that has already been installed in the root directory of the C: drive.

Given that the root directory is not writable by a regular user, any crook hiding malware there probably already has sysadmin powers, so this attack really doesn’t add much.

But it’s a neat way of hiding in plain sight – and it means that the crooks can use the inconspicuous and official Windows utility CERTUTIL.EXE to decode the file.

CERTUTIL knows to expect the -----BEGIN CERTIFICATE----- marker and therefore automatically strips it off before un-base64ing the enclosed data:

Of course, if you use CERTUTIL to verify the extracted “certificate” afterwards, you will immediately realise that it is bogus, because it isn’t in DER format at all…

…but the crooks already know that and can therefore use CERTUTIL as an attack utility rather than as a security tool.
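As a defender, you can do the same decode step yourself and see what’s really hiding behind the armour. Here’s a minimal sketch that strips the markers, un-base64s the payload much as CERTUTIL -decode does, and then checks the magic number (the filename suspect.pem is hypothetical):

import base64
import re

def decode_pem_blob(text: str) -> bytes:
    # Remove the BEGIN/END CERTIFICATE armour, then decode the base64 payload
    body = re.sub(r"-----(BEGIN|END) CERTIFICATE-----", "", text)
    return base64.b64decode("".join(body.split()))

with open("suspect.pem") as f:
    data = decode_pem_blob(f.read())

if data[:2] == b"PK":
    print("Not a certificate at all: this is ZIP data hiding behind PEM armour")
elif data[:2] == b"\x30\x82":
    print("Starts like genuine DER-encoded certificate data")
else:
    print("Unknown payload: treat with suspicion")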

What to do?

  • Don’t trust files based only on filenames or headers they include. The crooks don’t play by the rules, so they routinely mis-label data in the hope of staying off your radar a bit longer.
  • Take care when you verify downloaded data. Make sure that the scripts or tools you use to validate untrusted files don’t themselves try to “assist” you by choosing a helper app based on what the files seem to be.

The second point sounds obvious, but it is easily overlooked.

Notably, never double-click a file just to see what the operating system makes of it – you will often end up launching it into an application you weren’t expecting, with all the security risks that entails, instead of merely viewing it.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jgeQ59eyULY/

Chill, it’s not WikiLeaks 2: Pile of EU diplomatic cables nicked by hackers

The New York Times has published what it says are excerpts from hacked EU diplomatic cables that a cybersecurity company apparently made available to reporters.

The US newspaper said 1,100 diplomatic cables were handed to it by infosec startup Area 1, which it described as “a firm founded by three former officials of the National Security Agency”.

The last time the NSA was in the news in connection with the hacking of state secrets was when its former sysadmin contractor, Edward Snowden, revealed the American state agency’s ongoing mission to compromise the world’s internet communications.

According to the newspaper, the cables were posted online in plain text by hackers who successfully phished diplomats in Cyprus, discovering passwords that let them into a low-level EU database of diplomatic messages and cables.

Though the NYT quoted Area 1 researcher Blake Darche as attributing the hack to “the Chinese government”, later drilling this down to “the Strategic Support Force of the People’s Liberation Army”, the question of attribution (ie, “who should we blame for this?”) is a thorny one. Usually, the main method of attribution is to study the attack methodology and code used, which can reveal similarities with known previous attacks. State-sponsored hackers, however, have grown adept at borrowing each other’s techniques to deflect blame.

A selection of the cables was released online by the NYT as a carefully sanitised PDF. They did not appear to contain anything of immediate interest from the UK point of view, consisting mostly of summaries of diplomatic meetings that appeared to have been circulated around personnel of the European External Action Service, the EU’s quasi-national diplomatic corps.

In brief:

  • Afghanistan is unstable and produces lots of drugs, which means the US, Russia and the EU broadly agree that peace is needed in the region.
  • Everyone agrees sanctions should remain on North Korea until it drops its nuclear programme.
  • China’s desire to claim the South China Sea as its own territory, in violation of international treaties, is being thwarted by US, British and French warships patrolling the area.
  • Routine visits and trade negotiations between countries and political blocs are largely continuing as they always have done.

As a rather red-faced NYT admitted, the hack “also revealed the huge appetite by hackers to sweep up even the most obscure details of international negotiations”.

The EU Council issued a meaningless statement that failed to answer the question of whether the leak was genuine, merely saying it was “aware” of “allegations” and “does not comment” on them.

Quoting the usual nameless sources, the American newspaper also said that the EU “had been warned, repeatedly”, by Americans, “that its ageing communications system was highly vulnerable to hacking by China, Russia, Iran and other states”. This, it claimed, was usually shrugged off.

The Register is yet to speak to Area 1 about its discoveries and will have a more infosec-focused analysis of the breach online soon. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/19/nyt_eu_diplomatic_cables_hacked_area_1/

US told to appoint a damn Privacy Shield ombudsperson already or EU will take action

The US has been told once again to appoint a permanent ombudsperson to oversee the deal governing transatlantic data flows, but this time has been given a deadline.

The European Commission’s second annual review of the Privacy Shield agreement, published today, made similar noises to last year’s, concluding the deal does the trick but could be better.

It said the US ensures an adequate level of protection for personal data transferred under the deal, and has made some improvements, but progress is slow and there is more work to do.

The Privacy Shield agreement was rushed through in the summer of 2016 after its predecessor Safe Harbor was scrapped, and although widely thought to be better, privacy bods still derive pleasure from poking holes in it.

The commission, though, is in a tough spot: ditching Privacy Shield would damage businesses on both sides of the pond and mean starting negotiations all over again, which makes plugging away at the existing agreement more palatable.

The result is that the review has to strike a broadly positive tone, while making nudges it hopes the US will take notice of – although many of the recommendations in the 2017 review were only implemented in the backend of this year.

The delays have frustrated data protection experts and raised questions about how seriously the US is taking the terms of the agreement.

This argument is exemplified by the fact that the main issue identified in this year’s review is the same as last year’s: the lack of a permanent ombudsperson (the current role-holder is only acting).

In 2017’s review, the commission didn’t set a deadline; this time it said an appointee must be identified by 28 February 2019. If it hasn’t happened by then, the commission will “consider taking appropriate measures”. Stern words.

However, the ombudsperson has yet to receive any requests – something the commission has acknowledged, though it revealed a complaint has been submitted to the Croatian data protection agency and is currently under review.

Beyond the ombudsperson, the review praised the US for the improvements based on previous recommendations, despite the fact these will need further evaluation as they have only been implemented recently.

The improvements include that the US Department of Commerce has “strengthened” the certification process, introduced new oversight procedures and was trying to spot dodgy claims proactively.

This includes requiring that first-time applicants don’t publicise their certification until the review is finalised, and carrying out random spot checks on companies to detect possible compliance issues.

The US government is also praised for “actively using a variety of tools” to seek out companies that are falsely claiming to be certified, such as online text and image searches.

The government is also carrying out a quarterly review of companies identified as more likely to make such claims, which has led to 50 cases being referred to the Federal Trade Commission.

However, the FTC has only recently begun to proactively monitor compliance, and the commission said it “regrets that at this stage it was not possible to provide further information on its recent investigations”.

Issues around legislative changes in the US and developments on surveillance activities also received a broad thumbs-up.

The commission had asked that the Presidential Policy Directive 28 – which states surveillance activities need to safeguard personal information regardless of where the person resides – be incorporated into section 702 of the Foreign Intelligence Surveillance Act when it was reauthorised earlier this year.

This didn’t happen, but the commission’s positive spin was that at least the reauthorised Act didn’t restrict any safeguards that were in place when Privacy Shield was adopted.

It also noted that the directive had been confirmed as being in place across US spy activities by the Privacy and Civil Liberties Oversight Board, which now has enough members to function.

The commission emphasised that recent efforts to implement previous recommendations would need to be “closely monitored” since they relate to elements “that are essential for the continuity of the adequacy finding”.

These are: the effectiveness of the ombudsperson’s handling and resolution of complaints; the effectiveness of the tools used to detect false claims of participation; the progress of FTC investigations into violations of the deal; and, of course, the appointment of a permanent ombudsperson.

“The EU and the US are facing growing common challenges, when it comes to the protection of personal data,” said justice commissioner Věra Jourová.

“The Privacy Shield is also a dialogue that in the long term should contribute to convergence of our systems, based on strong horizontal rights and independent, vigorous enforcement.

“Such convergence would ultimately strengthen the foundation on which the Privacy Shield is based. In the meantime, all elements of the Shield must be working at full speed, including the Ombudsperson.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/19/eu_pleads_with_us_to_name_a_permanent_privacy_shield_ombudsperson/

Cybersecurity in 2019: From IoT & Struts to Gray Hats & Honeypots

While you prepare your defenses against the next big thing, also pay attention to the longstanding threats that the industry still hasn’t put to rest.

Will 2019 be the year we see a nation-state take down a large-scale industrial installation? How much of the world’s cryptocurrency will be mined by hackers using unsuspecting endpoints? What kind of damage can we expect from well-intentioned but misguided vigilantes? And what does it all mean to you?

As you read this season’s crop of forward-looking cybersecurity articles, it’s worth keeping in mind that many of the biggest challenges that companies will face next year are the same usual suspects we’ve been dealing with for years: phishing, social engineering, credential reuse, and web app attacks — all in the midst of the perennial head-count shortage in most security teams. Leading-edge exploits make for exciting reading, but while you’re preparing your defenses against the next big thing, make sure you’re also paying attention to the kind of longstanding threats the industry still hasn’t put to rest effectively.

That being said, I do think these developments will be worth watching in the year ahead.

Targets:  Industrial IT in the Crosshairs, Cryptominers Lurking in the Network
While familiar targets like major retailers, small businesses, healthcare organizations, financial institutions, and consumers will remain popular, the rapid proliferation of the Internet of Things (IoT) means we’ll also see increasing attacks on industrial control systems and supervisory control and data acquisition systems. Relatively overlooked in the past, these systems now will see more attacks that cause more material damage to people, facilities, and businesses. In some cases, these attacks will be financially motivated, with attackers demanding a ransom to release control of their target’s systems. In others, nation-states will become bolder and more  ambitious in extending cyberwarfare to the physical landscape.

Cryptojacking, a relatively nondisruptive form of cybercrime that we’ll see more of in 2019, can be especially insidious because it so easily evades detection. While it’s less flashy than a ransomware attack or data breach, cryptojacking can be highly damaging to its targets. As hackers use hijacked machines to mine cryptocurrency, they slow and prematurely age businesses’ computers while degrading network performance. We’ve already seen the rise of cryptomining malware designed specifically to target corporate networks and exploit vulnerabilities in enterprise software. As organizations continue to shift their legacy apps to the cloud, hackers will follow, making both migrated and web-native apps prime attack vectors. Security teams must be especially vigilant to ensure that surreptitious cryptominers aren’t quietly undermining their IT assets.

Tactics: Open Source Opens Doors, Containers Contain More Than They Should
Hackers thrive by staying one step ahead of security teams — both by discovering new vulnerabilities in established technologies and by targeting new technologies that have yet to be effectively secured.

Apache Struts vulnerabilities illustrate this durable work ethic. In 2017, an unpatched Struts vulnerability played a central role in the Equifax breach — but hackers didn’t stop there. As companies worked to close the kind of gaps that Equifax had left open, a new generation of attacks took aim at a deeper level within the core Struts framework. By the end of last summer, a proof-of-concept exploit of a new, much more challenging vulnerability — CVE-2018-11776 — was posted on GitHub. The Apache Foundation had by then released an update for the vulnerability, but a patch can only protect you if it’s deployed. And there’s never a shortage of appealing targets that haven’t got around to it yet.

In other areas of innovation, the ongoing shift to cloud will see containers, Kubernetes, and other channels and frameworks come under attack as well. It’s safe to assume that each new infrastructure technology will be closely followed by new ways to exploit it as this endless game of cyber cat and mouse goes on.

Actors: Vigilantes Wreak Havoc
The vulnerability of electronic election systems has been obvious for many years. Now, as nation-states seek new ways to gain advantage over their rivals, we can expect more meddling. 

And bad actors aren’t the only ones we have to worry about. Vigilante hackers have begun patching vulnerable IoT systems without their owners’ knowledge. One such “gray hat” recently claimed to already have broken into 100,000 MikroTik routers to protect their owners from attack. It’s a nice thought, and it can be much faster than waiting for a vendor to issue a patch of its own, but it’s far from reassuring to think that the security and stability of our connected devices are in the hands of unknown people with unknown motives — and unknown competence. What if a well-intentioned fix ends up disabling a device instead? What if that was the device responsible for keeping a medical patient alive — and the botched fix is now spreading to disable other similar devices?

Defensive Measures: Deception Solutions on the Rise
Hackers aren’t the only ones thinking creatively. I’ve been seeing more organizations incorporate deceptive solutions into their cyber-defense strategy. By placing honeypots and other traps in their apps and networks, they can expose attackers that have already obtained a foothold in their network. Some organizations also use deception to lure attackers to a honeypot in a safely controlled manner in order to gain information about the attacker’s targets and tactics — though this obviously calls for a careful approach to avoid unintended consequences.

While organizations will need to determine what their goals are with deception — high-fidelity alerts to immediately invoke a response or counterintelligence to track an attacker’s objectives and tactics — it will be an effective part of a multilayered defense strategy.

At the end of the day, though, the core of cyber defense will remain the same in 2019 as it has been for years: security fundamentals like timely patching, decreasing your attack surface, and monitoring your environment for any signs of illicit activity. Measures like these, implemented in the context of an organizationwide culture of security, can go a long way to ensure a safe and prosperous new year.

Phillip Maddux is Principal Application Security Researcher and Advisor at Signal Sciences. Previously, Phillip was a Vice President at Goldman Sachs, where he ran application security programs and was a driver for implementing DevSecOps within the firm. In his spare moments, … View Full Bio

Article source: https://www.darkreading.com/endpoint/cybersecurity-in-2019-from-iot-and-struts-to-gray-hats-and-honeypots/a/d-id/1333490?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DOJ Announces Indictment in Nigerian Banking Scam

International investment scam laundered funds through US bank accounts before being sent to Nigeria.

It turns out that the fake Nigerian prince/government official/businessman who wants you to invest in a sure thing has a real name. Osondu Victor Igwilo, 49, of Lagos, Nigeria, was charged in a federal complaint filed in the Southern District of Texas in December 2016 and unsealed yesterday.

Igwilo is charged with one count of wire fraud conspiracy, one count of money laundering conspiracy, and one count of aggravated identity theft. According to the complaint, Igwilo recruited teams of individuals to send scam emails claiming to be from BB&T Corporation.

He then sent US citizens he had recruited to visit the victims in person to finalize the scam and retrieve funds, which were then laundered through US bank accounts before being sent to Nigeria. The total amount of the victims’ losses was not disclosed by the Department of Justice in announcing the indictment.

Eight other individuals have been charged in connection with the scheme. Four await sentencing, two are pending trial, one is pending extradition, and one remains a fugitive.

Igwilo also remains a fugitive.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/doj-announces-indictment-in-nigerian-banking-scam/d/d-id/1333523?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook Data Deals Extend to Microsoft, Amazon, Netflix

An explosive new report sheds light on data-sharing deals that benefited 150 companies as Facebook handed over unknowing users’ information.

If you shared data with Facebook over the past few years, there’s a high chance Facebook handed it to Microsoft, Amazon, Spotify, or any of the other 150 companies that benefited from extensive data-sharing deals with the social media giant, The New York Times reports.

Internal Facebook records provide a more detailed look at data-sharing practices intended to help Facebook and its partners at the expense of users’ privacy. For example, Facebook let Microsoft’s Bing search engine view the names of “virtually all Facebook users’ friends without consent,” the report states. Netflix and Spotify could read account holders’ private messages.

Documents show the partnerships primarily benefited tech businesses but were also done with online retailers, entertainment sites, automakers, and media outlets, all of which had applications seeking data of hundreds of millions of people a month. The oldest deals were done in 2010; all were still active in 2017, and some continue to be in effect this year.

Facebook says it’s winding down many of these partnerships and that there is no evidence of data abuse by partner companies. It did admit to managing some deals poorly and letting companies continue accessing users’ data after they had disabled the application features that needed it.

The findings have prompted inquiries about an agreement Facebook made with the Federal Trade Commission in 2011. As part of the deal, Facebook was prohibited from sharing user data without permission. Steve Satterfield, director of privacy and public policy at Facebook, said to the Times that none of the company’s deals dishonored the agreement or users’ privacy.

Facebook holds that it was not required to obtain user consent as part of these data-sharing deals because it considers partner organizations “extensions of itself.” Data privacy experts argue against this, and FTC employees say Facebook’s partnerships broke their 2011 deal.

You can read more details in the full NYT report here.

Facebook has since responded to the article. In a blog post published Dec. 18, Konstantinos Papamiltiadis, director of developer platforms and programs, explains how there were two purposes to granting major tech companies access to user data: to help people access Facebook accounts and features on outside devices and platforms, and to build “more social experiences” – for example, to view recommendations from Facebook friends on Pandora and Spotify.

People want to use Facebook features on devices and products the company doesn’t support, he says. Integration partnerships with Amazon, Apple, Microsoft, and Yahoo aim to enable use of Facebook features across services. However, as former Facebook CISO Alex Stamos points out, there’s a big difference between integration partnerships and sending secret data.

The former can be good: allowing for third-party clients, he says, is a positive move among dominant tech platforms. As an example, he points to Gmail: limiting usage of Gmail to Android would be wrong. However, integration that permits the illicit transfer of data to other companies’ servers “really is wrong.” Stamos calls for Facebook to build a table listing partner companies, the type of integration used, which data was accessible, the steps needed to activate the integration, and if/when the integration was shut down.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/facebook-data-deals-extend-to-microsoft-amazon-netflix/d/d-id/1333524?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How not to secure US missile defences

What sort of organisation might suffer the following list of security failures?

Three out of five physical locations visited for an audit failed to implement multi-factor authentication (MFA) on networks used to secure sensitive technical data.

Two weren’t securing their equipment racks.

Three weren’t routinely encrypting highly-sensitive data held on USB sticks.

At all five locations, admins could access and maintain systems without having to justify that level of privilege.

Most extraordinary of all, as of 2018, one site’s patching was so deficient that it failed to address a critical vulnerability that first came to light nearly three decades earlier, in 1990.

This might have been an extreme one-off except that another site had sat on another serious flaw dating from 2013 despite being reminded of that fact in early 2018.

The organisation in question is the US Department of Defense’s Ballistic Missile Defense System (BMDS), five of whose 104 sites were chosen at random in early 2016 for a security audit by the DOD’s Inspector General.

It’s hard to know what to make of the number of weaknesses uncovered in computer security across so few sites, but if these findings (published in redacted form in April but only recently noticed) are typical of the other 99, the BMDS has a problem on its hands.

As the DOD Inspector General spells it out in its report:

The disclosure of technical details could allow US adversaries to circumvent BMDS capabilities, leaving the United States vulnerable to deadly missile attacks.

The BMDS is a group of systems used to intercept incoming missiles, which is important to US defence for two reasons.

It intercepts incoming missiles using missiles of its own, hopefully sparing targets from destruction; and the fact that it can do this at least some of the time strengthens deterrence against a first strike.

Securing it should be a priority.

It’s a story that will remind readers with longer memories of the 2013 claim that for two decades during the 1960s and 1970s, the launch code for the US Minuteman nuclear missiles was eight zeros (00000000).

Whether that sounds like something from Dr Strangelove depends on how we interpret the point of that code – was it to secure the missiles with a secret code or, as appears to be the case, a way of making it easy for personnel to launch them in a hurry?

The thing about the past is they do things differently there in ways that don’t always make sense in hindsight.

A system as large and complex as the BMDS wasn’t designed but evolved over many decades, with networked computing systems added more recently. Some of the failures might simply reflect out-of-date processes that were well-intended when they were first implemented.

It’s also true that many of the above failings – administrators who don’t patch their systems, lock their racks, or encrypt removable drives – might apply to almost any organisation, although possibly not ones responsible for defending a world superpower.

What matters is the willingness to say what needs to be said to put things right – on that point at least, this report might do its job.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WsOsZWRgt2Q/

SQLite creator fires back at Tencent’s bug hunters

The creator of SQLite, an open source database management system used in thousands of applications, has downplayed reports of a bug that could lead to remote code execution.

The Tencent Blade security research team reported the bug, called Magellan, in both SQLite and the open-source Chromium browser, which uses a version of the database. They said:

This vulnerability can be triggered remotely, such as accessing a particular web page in a browser, or any scenario that can execute SQL statements.

Developed in 2000, SQLite has become one of the most commonly-used open source programs and is a part of many other applications, including the Chrome, Safari and Firefox browsers and back-end web application frameworks. Skype uses it, and so do the Python and PHP programming environments. You’ll find it on all Android and iOS devices, and every Mac and Windows 10 machine. It also powers many Internet of Things devices, which SQLite’s developers call out specifically as an application. Those devices can be especially difficult to update in the field.

Tencent warned that the bug could be serious given the product’s “wide range of influence”. However, SQLite’s creator Dr Richard Hipp told Naked Security:

[Tencent] are highly motivated to spin this as a huge finding. A huge bug that’s going to affect a lot of people and I believe that they have exaggerated things for that purpose. Some news organizations have picked it up and said that millions of applications are affected by this and that’s just not true.

He argued in a tweet that the bug isn’t nearly as bad as reports make it out to be:

Whereas many SQL databases operate as servers separate to the main application, SQLite is a serverless embedded database. Developers link its software library into their application code and it hosts everything, including the database schemas and data, in a single disk file.

While application software using the database would normally shield it from direct user access, security could be an issue if the application allows an attacker to mount the exploit. This seems to be the case with Google Home, the voice-activated speaker that relies on SQLite internally. The researchers said that they successfully exploited that product.

The flaw lies not in the core SQLite engine itself, but in FTS3, a full-text search module that developers can use with the system. Sending SQL commands to FTS3 can trigger the flaw. An attacker might do that by directing an application using SQLite to visit a malicious website, which could then send the SQL commands using JavaScript.

A successful exploit could enable an attacker to leak program memory (a possible security danger), crash a program, or in the worst case execute code remotely on the system, Tencent Blade’s researchers said.

That’s unlikely in most cases, responded Hipp:

You need a combination of things. You have to be able to execute arbitrary SQL and you have to have FTS3 enabled, and in those cases you can get a remote code execution.

Hipp added that Google Chrome, which is built on Chromium, was vulnerable to this because it allowed SQL queries to FTS3 via Web SQL Database, a now-deprecated mechanism based on SQLite that allowed websites to directly query embedded databases via SQL. Hipp continued:

For the vast majority of applications that do not have an SQL injection problem, or do not enable full text search 3, there’s no impact to them at all.

The SQLite development team has patched the code and added a new feature that Hipp says will add more protection against any similar issues in the future. The latest update describes it thus:

Added the SQLITE_DBCONFIG_DEFENSIVE option which disables the ability to create corrupt database files using ordinary SQL.

For people who do have applications like WebSQL and Chrome, that are allowing anonymous passers-by to run arbitrary SQL statements, the defensive option adds additional defensive mechanisms in an attempt to avoid future zero day attacks like this.

Hipp admitted that the bug slipped by because the testing standard for FTS3 was inferior to the standard that the development team used for the core database engine. Historically, SQLite’s core engine is exposed to Google’s OSSFuzz tool for automated testing, but FTS3 has not been. The open source team will be exposing the library to Google’s tool in the future, he said.

Tencent’s researchers are following responsible disclosure rules: they informed Google, which has fixed the Chromium vulnerability, and for now they are not releasing a proof-of-concept exploit. Instead, they are contacting other vendors privately with details of the vulnerability so that those vendors can update their products.

That hasn’t stopped others reportedly crashing Chrome with their own PoC code:

The Tencent Blade team has warned companies using Chromium in their products to update it to the official stable version 71.0.3578.80. SQLite users should update to 3.26.0, they added.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/G5et_j-7veU/

Instagram became the preferred tool in Russia’s propaganda war

Facebook, Twitter or Google’s YouTube: those are the social media platforms that garnered most of the focus of lawmakers, researchers and journalists when the Russian disinformation campaign around the 2016 US presidential election first came into focus.

But according to two new, comprehensive reports prepared for the Senate Intelligence Committee, one of which was released on Monday and the other leaked over the weekend, Instagram was where the real action was.

Disinformation and meddling may have reached more people on Facebook, YouTube or Twitter, but the posts got far more play on Instagram. In a years-long propaganda campaign that preceded the election and which didn’t stop after, Facebook’s photo-sharing subsidiary generated responses that dwarfed those of other platforms: researchers counted 187 million Instagram comments, likes and other user reactions, which was more than Twitter and Facebook combined.

The Washington Post [paywall] quoted Philip N. Howard, head of the Oxford research group that participated in one of the reports:

Instagram’s appeal is that’s where the kids are, and that seems to be where the Russians went.

A massive, multi-year campaign to manipulate Americans

One of the reports was commissioned by the Senate Intelligence Committee and written by the social media research firm New Knowledge, Columbia University and Canfield Research. According to that report, Russia’s propaganda war was broadened to reach the US starting in 2014 and would eventually spread to reach a “massive” scale.

According to the New Knowledge report, the Russian campaign reached 126 million people on Facebook, generated 10.4 million tweets on Twitter, uploaded 1,000+ videos to YouTube, and reached over 20 million users on Instagram.

The second report, written by Oxford University’s Computational Propaganda Project and network analysis firm Graphika, became public when the Post obtained it and published its highlights on Sunday.

Some of those highlights:

  • They’ve targeted Robert S. Mueller III. Russians “fluent in American trolling culture” went after special counsel Mueller following the election, when he was appointed to investigate alleged Russian election interference. The researchers said that Russian operatives used fake accounts on Facebook, Twitter and beyond, falsely claiming that the former FBI director was corrupt and that the allegations of Russian interference in the 2016 election were “crackpot conspiracies.” For example, one Instagram post falsely claimed that Mueller had worked in the past with “radical Islamic groups.”
  • Humor. The researchers found that activity coming from the Russian government-linked propaganda factory known as the Internet Research Agency (IRA) included making fun of the Mueller investigation and of Hillary Clinton. One widely shared image showed Hillary Clinton with the caption: “Everyone I don’t like is A Russian Hacker.” Another showed a woman in a car talking to a police officer, with the caption, “IT’S NOT MY FAULT OFFICER, THE RUSSIANS HACKED MY SPEEDOMETER.”
  • Instagram. The researchers found that “Instagram was perhaps the most effective platform” for the IRA, though they were active on many others. The group created a mere 133 Instagram accounts, but a dozen of them attracted more than 100,000 followers, making them what’s commonly viewed as “influencer” accounts, Wired reports. The researchers say the Russians doubled their internet use in the months after Trump’s election.
  • A multipronged social media jack-knife. The operatives hopped from platform to platform, enabling them to milk each for its particular tools and focus and making it impossible for any one company to find and stamp out all the misleading posts. They could get at “political and journalistic elites” on Twitter, the Post reports, while Facebook is handy for slicing and dicing the electorate via demographics and ideological leanings. They particularly focused on energizing conservatives and suppressing African Americans, who tend to vote Democratic. From the Post:

    YouTube provided a free online library of more than 1,100 disinformation videos. PayPal helped raise money and move politically themed merchandise designed by the Russian teams, such as “I SUPPORT AMERICAN LAW ENFORCEMENT” T-shirts. Tumblr, Medium, Vine, Reddit and various other websites also played roles.

Reactions

On Tuesday, the National Association for the Advancement of Colored People (NAACP) kicked off a week-long boycott of Facebook and Instagram over the findings about how African Americans have been targeted on social media and the platform’s history of “data hacks which unfairly target its users of color.” The group also returned a Facebook donation.

In a statement, Derrick Johnson, NAACP President and CEO, called Facebook’s part in the Russian campaign “reprehensible”:

Facebook’s engagement with partisan firms, its targeting of political opponents, the spread of misinformation and the utilization of Facebook for propaganda promoting disingenuous portrayals of the African American community is reprehensible.

The Post reports that top Democrats are reading the reports as indicative of the need for further study of social media and new regulations in order to prevent further electoral manipulations from foreign actors. Rep. Adam B. Schiff (D-Calif.), the incoming chairman of the House Intelligence Committee:

I think all the platforms remain keenly vulnerable, and I don’t have the confidence yet companies have invested the resources and people power necessary to deal with the scope of the problem.

Republican lawmakers on the Senate Intelligence Committee either declined to comment or hadn’t responded to inquiries from the Post as of yesterday.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qU1gBAQwV5Q/

Snack-happy parrot shows insider threats come in all shapes and sizes

A new form of insider threat has been discovered, with evidence of a threat actor attempting to burgle a homeowner via illicit snack delivery orders placed on her Amazon Alexa smart speaker.

According to the UK’s National Animal Welfare Trust, the culprit goes by the handle of “Rocco,” a fugitive African Grey Parrot that had already displayed antisocial tendencies – namely, foul language and tossing his water bowl around – while in the care of the trust.

Staff member Marion Wischnewski, who lives in Oxfordshire, had rehomed Rocco, in spite of his propensity to fling and his swearing, which had led the trust to fear that visitors would flee from his verbal floggings.

Once ensconced in his new workplace, Rocco set about endearing himself to his human overlords. Wischnewski told news outlets that he’s got a “sweet personality” and loves to dance to romantic music… music that, apparently, he’s learned how to request from Alexa. He has, after all, been exposed to the overlords’ conversations with Alexa, and, as members of his species are wont to do, has learned how to ask for what he wants.

What he wants, besides sappy songs to bounce to, are tasty snacks, various inanimate objects, and homeware. He has reportedly attempted to place orders for lightbulbs, a kite, watermelon, ice cream, raisins, strawberries, broccoli, and a tea kettle.

Thanks to security controls put in place by his overlord, Rocco has not managed to successfully defraud her smart speaker. The Sunday Times quoted Wischnewski:

I have to check the shopping list when I come in from work and cancel all the items he’s ordered.

This is a timely reminder that…

  • Insider threats are real. They come in more forms than you might imagine and aren’t always malicious: they can be caused by negligence, lack of training, or, say, bored, chatty parrots. Unfortunately, Alexa can’t tell the difference between legitimate and not-so-legitimate requests, showing that…
  • The internet of “smart” things (IoT) isn’t all that smart. Not if IoT devices can be used by snack-happy parrots or little kids who get a hankering for a big old pricey dollhouse.
  • There are controls you can put in place to stop Rocco-esque shopping madness. Wischnewski says she’s put a parental lock on her Alexa’s buying ability. She has to check and cancel any orders that Rocco may have made in her absence: what must be a prodigious task, given that Rocco seems to have fallen in love with Alexa and interacts with her about 40 times a day.

At least her Alexa comes with a parental lock that can stop the transactions… Unless that’s just what Rocco wants us to think?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/-kHFvT_Q5EA/