STE WILLIAMS

German cybersecurity chief: Anyone have evidence of Huawei naughtiness?

Germany’s top cybersecurity official has said he hasn’t seen any evidence for the espionage allegations against Huawei.

Arne Schönbohm, president of the German Federal Office for Information Security (BSI), the nation’s cyber-risk assessment agency in Bonn, told Der Spiegel that there is “currently no reliable evidence” of a risk from Huawei.

“For such serious decisions such as a ban, you need evidence,” Schönbohm said. Should that change, the BSI will “actively approach German industry” he assured the paper.

Huawei has opened a facility in Bonn, in west Germany, where it shares code and allows Schönbohm’s risk assessors to inspect Huawei kit. This is along the same lines as the UK’s Huawei Cyber Security Evaluation Centre (HCSEC) in Banbury, informally known as “The Cell”, which addresses GCHQ’s concerns about backdoors in Huawei products.

This has been running for seven years and the Oversight Board has now produced four annual reports. The most recent, in July, warned that “the Oversight Board can provide only limited assurance that all risks to UK national security from Huawei’s involvement in the UK’s critical networks have been sufficiently mitigated”.

HCSEC attempts to replicate Huawei binaries from source code provided by the company to ensure end-to-end scrutiny. It hasn’t fully completed this, the Oversight Board said, and also expressed concerns about third-party software (PDF).

“There are no concerns about individual companies,” Peter Altmaier, German Federal Minister for Economic Affairs and Energy, confirmed to Reuters on Monday. “But each product, each device must be secure if it is going to be used in Germany.”

The Five Eyes states have led the push against Huawei without citing specific evidence. Australia confirmed in 2013 that it had blocked Huawei from its NBN fibre programme, and in August excluded it from selling 5G gear. A report last month suggested New Zealand companies were being advised to avoid doing deals with Huawei.

Twelve days ago, Canada arrested the founder’s daughter, Meng Wanzhou, on a US warrant over an unrelated issue: circumventing sanctions against Iran.

Huawei privately bridles at comparisons with the state-owned telco ZTE and can point out that it has been the victim of hacking. In 2014, the New York Times and Der Spiegel reported on “Operation Shotgiant”, a multiyear operation by America’s National Security Agency (NSA) that infiltrated Huawei’s network at its Shenzhen HQ and yielded confidential source code.

“Many of our targets communicate over Huawei produced products, we want to make sure that we know how to exploit these products,” one NSA document explained.

“The Huawei revelations are devastating rebuttals to hypocritical US complaints about Chinese penetration of US networks,” wrote former DoD counsel Jack Goldsmith.

Deutsche Telekom has a close strategic relationship with Huawei but said it was reviewing matters this week. Orange pledged to continue its relationships with Huawei’s European 5G rivals, Nokia and Ericsson.

That will come as a relief to the latter: the UK’s O2 is reportedly seeking up to £100m in damages from Ericsson over a bungle that deprived more than 30 million customers of data access for 24 hours. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/18/german_cybersecurity_chief_show_me_the_huawei_evidence/

8 Security Tips to Gift Your Loved Ones For the Holidays

Before the wrapping paper starts flying, here’s some welcome cybersecurity advice to share with friends and family.

Image Source: Pixabay

OK, so you’re sitting around the fireplace, eggnog in hand, the family’s about to open presents, and your brother-in-law wants to know whether he really needs to do the security updates for his Microsoft apps.

Don’t get angry – just ask him a few simple questions: Do you brush your teeth in the morning? Does your car need an oil change every 3,000 miles? Are there 24 hours in a day? With any luck he’ll get your point.

Kelvin Coleman, the new executive director at the National Cyber Security Alliance, says society needs a change in security mindset.

“We need to change the culture so we get to the point that being smarter with passwords and doing the updates on computer applications are just things that people do naturally,” Coleman says. “Everyone agrees that seat belts save lives and keep people safe. There’s no difference with computers. It’s like making sure the door on your house is locked before you leave for the day.”

That’s advice no one at your holiday gathering can dispute. And there’s more for you to helpfully pass along, courtesy of Coleman, along with Patrick Sullivan, director of security strategy for Akamai, and John Pironti, president of IP Architects and an ISACA member. Their practical tips will provide a safe cyber experience for everyone over the holidays – and all year, too.  

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md. View Full Bio

Article source: https://www.darkreading.com/perimeter/8-security-tips-to-gift-your-loved-ones-for-the-holidays/d/d-id/1333498?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Engage Your Cyber Enemies

Having the right mix of tools, automation, and intelligence is key to staying ahead of new threats and protecting your organization.

There’s a lot of talk about “cyber threat intelligence” these days, but very few organizations have fully implemented and operationalized a program. Most companies will ingest technical intelligence, which consists of indicators of compromise, malware signatures, malicious IPs, and other tactical intel. These are relatively easy to understand and act on but they don’t do much to protect your organization long term.

At the end of the day, all attacks are perpetrated by humans. Understanding your attackers’ motives and tendencies can help you make strategic decisions to protect your company long term. There’s good news and bad news here.

The bad news: This type of intelligence is the most difficult (and most risky) to collect.

The good news: Your adversaries might be anonymous, but they’re not invisible.

Here is how organizations can use human intelligence — known as HUMINT — to engage their cyber adversaries and enhance their existing intelligence program.

What Is HUMINT?
HUMINT can be defined as the process of gathering intelligence through interpersonal contact and engagement rather than by technical processes, feed ingestion, or automated monitoring. It’s the cyber equivalent of an FBI or CIA agent going undercover: it involves creating avatars that act like fellow hackers to blend in on Dark Web and anonymous forums.

Whether it’s done by a threat actor or threat hunter, HUMINT gathering requires highly specialized skills and knowledge to avoid suspicion and detection.

So, why is it worth the risk?

Here are some of the ways companies can use HUMINT in their cybersecurity operations:

  • New Threat Discovery: Engaging with threat actors can help you uncover new tools, tactics, and/or attacks that may affect your organization. It’s a great way to supplement your existing intelligence feeds to provide more context and a deeper understanding of threats.
  • Threat or Attack Investigation: If you discover a new threat, you may want to engage your established threat actor sources to learn more about it and how it may impact you.
  • Damage Assessment: If you are breached, you need to understand the extent of that breach, what data has been exposed, and how the attacker got in. We’ve seen an increase in extortion attacks, where threat actors will claim to have stolen sensitive data and demand a ransom to not publish that data. HUMINT can help you uncover the source of a leak and/or if the attacker’s claim is legitimate.

Best Practices
There are a number of best practices organizations should keep in mind when conducting HUMINT gathering.

1. Take Personal Security Measures: Hackers are like white blood cells. If they detect a foreign object, they attack. If you are discovered as a threat hunter, you immediately become a target, so you need to make sure nothing leads back to you or your company. When engaging with cyber enemies, make sure you use a virtual machine with nothing saved on it. If your cover is blown, you don’t want them turning their attention to you or your company.

2. Tell a Good Story: When FBI or CIA agents go undercover, they spend months or even years developing their backstory. Your story has to be believable, so spend time developing a good backstory and stick to it. If you’re pretending to be a college student, make sure you know what classes you take, details of the university you’re attending, and why you’re spending your time on dark web forums.

3. Engage at All Hours: Hackers don’t work 9 to 5. Your avatar shouldn’t either. If you want to be believed as a threat actor, you need to spend time logging in to forums late at night and on weekends so others don’t get suspicious.

4. Use the Right Lingo: Again, HUMINT gathering is all about blending in. Many threat actors and communities have a distinct way of communicating and use lots of slang. Make sure you do the same to blend in.

5. Don’t Wait Until You Need It to Start: Avatars and sources take months or even years to develop. You can’t simply create an avatar and boom! … you have HUMINT. You must establish these sources early and continuously work at them, so when the need arises, you have the credibility and established sources to gather intelligence.

Automation, machine learning, and advanced cybersecurity solutions have enabled organizations to respond to threats faster and significantly reduce mitigation times. These technologies are critical to any effective cybersecurity program; however, as long as attacks are human-driven, humans will be part of the threat-hunting process. Having the right mix of tools, automation, and intelligence is key to staying ahead of new threats and protecting your organization. Collecting HUMINT through threat actor engagement can be a great way to supplement your existing intelligence program and help inform strategic decisions that make a long-term impact.

For more about HUMINT and its best practices, you can download our white paper.

Related Content:

Guy Nizan is the CEO and Co-Founder of Intsights Cyber Intelligence. As CEO, Guy leverages his entrepreneurial experience, extensive military leadership training, and technology acumen in the areas of offensive security, cyber threat reconnaissance, and artificial intelligence … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/how-to-engage-your-cyber-enemies/a/d-id/1333477?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Sneaky phishing campaign beats two-factor authentication

Protecting an account with multi-factor authentication (MFA) is a no-brainer, but that doesn’t mean every method for doing this is equally secure.

Take SMS authentication, for example, which in recent times has been undermined by various man-in-the-middle and man-in-the-browser attacks as well as SIM swap frauds carried out by tricking mobile providers.

This week, researchers at Certfa Lab said they’d detected a recent campaign by the Iranian ‘Charming Kitten’ group (previously blamed for the 2017 HBO hack) that offers the latest warning that SMS authentication is not the defence it once was.

The targets in this campaign were high-value individuals such as US Government officials, nuclear scientists, journalists, human rights campaigners, and think tank employees.

Certfa’s evidence comes from servers used by the attackers which contained a list of 77 Gmail and Yahoo email addresses, some of which were apparently successfully compromised despite having SMS verification turned on.

We don’t normally get a chance to peer inside attacks that are as targeted as this one, let alone ones prodding 2FA for weaknesses.

The campaign was built around the old idea of sending a fake alert from a plausible-looking address such as [email protected].

Google sends out alerts from time to time, so a few people might be tricked by this, but there were other tweaks to boost its chances even further, such as:

  • Hosting phishing pages and files on sites.google.com, a Google sub-domain.
  • Sending the email alert as a clickable image hosted on Firefox Screenshot rather than URL text which might trip Google’s anti-phishing system.
  • Tracking who has opened emails by embedding a tiny 1×1 “beacon” pixel that is hosted and monitored from an external website (marketers have used this technique for years, which is why it’s a good idea to turn automatic image loading off in programs like Gmail).
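The beacon-pixel trick in the last bullet is simple to illustrate. Below is a minimal sketch (the URL scheme and names are hypothetical, not the attackers’ actual code): the sender embeds a tiny image whose URL carries a per-recipient token, and the server logs whoever fetches it.

```python
import base64

# Smallest transparent GIF: what a tracking server would return for the pixel request.
TRANSPARENT_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

def beacon_tag(base_url, recipient_token):
    """Build the 1x1 <img> tag a tracking email embeds.

    When the recipient's mail client fetches the image, the server logs the
    request and learns that this specific recipient opened the message."""
    return (f'<img src="{base_url}/open.gif?r={recipient_token}" '
            'width="1" height="1" alt="">')
```

Turning off automatic image loading defeats the technique because the fetch, and therefore the logged "open", never happens.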

SMS bypass

But how to beat authentication?

It’s possible the attackers were able to check phished passwords and usernames on-the-fly to see whether authentication was turned on. If it was – and presumably that would have been the case for most targets – a page mimicking the 2FA sign-in was thrown up.

This sounds simple, but the devil is in the detail. For example, it seems the attackers were also able to find out the last two digits of the target’s phone number, which was needed to generate a facsimile of the Google or Yahoo SMS verification pages.

While SMS OTP authentication was the primary target, Time-based One-time Password (TOTP) codes from an authentication app were also targeted.

According to Twitter comments by Certfa, the attacks against SMS authentication were successful, which is not a surprise given that all the attacker has to do is phish the code.

As for TOTP and HMAC-based One-time Password algorithm (HOTP)-based authenticator apps (i.e. Google Authenticator), the researchers are less sure – as with SMS, it would depend on how quickly the attackers could capture and enter the code within the allowed time window.
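The time window the researchers mention is inherent to the TOTP algorithm. A minimal stdlib sketch of RFC 6238 (HMAC-SHA1, 30-second steps) shows why a phished code is only useful for the remainder of the current step:

```python
import hashlib
import hmac
import struct

def totp(secret, at, step=30, digits=6):
    """RFC 6238 TOTP: derive a short code from a shared secret and the clock.

    The counter is the number of `step`-second intervals since the epoch, so
    every code expires when the interval rolls over -- the window an attacker
    must beat when relaying a phished code."""
    counter = int(at // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

Two timestamps in the same 30-second interval yield the same code; one second past the boundary yields a different one, which is why an automated relay can still succeed but a slow manual attack cannot.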

Where does this leave 2FA?

Using 2FA in any form is better than nothing, but SMS is no longer the best option if users have a choice – Google, for one, no longer offers this option unless it was set up on an account a while ago.

Naked Security has published numerous articles on the vulnerability of older 2FA technologies such as SMS as well as the pros and cons of app-based authentication (Google Authenticator). In 2016, the US National Institute of Standards and Technology (NIST) recommended that users plan to move from SMS to more secure methods of authentication.

The most secure option by far is to use a FIDO U2F (or the more recent FIDO2) hardware token such as the YubiKey because bypassing it requires physical access to the key.

Google even offers a specially-hardened version of Gmail, the Advanced Protection Program (APP), built around this kind of security with some additional features added on top.

Password managers are another option because these will only auto-fill password fields when they detect the correct domain (see caveats regarding mobile versions). If that doesn’t happen as expected this could be a sign that something is wrong.
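That domain check is the heart of a password manager’s phishing resistance. A simplified sketch (real managers compare registrable domains and handle more edge cases than this):

```python
from urllib.parse import urlparse

def should_autofill(saved_url, current_url):
    """Fill credentials only when the current page's hostname exactly
    matches the hostname the credentials were saved for.

    A lookalike domain fails the comparison, so the phishing page gets
    nothing -- unlike a human, the manager isn't fooled by a
    convincing-looking address."""
    return urlparse(saved_url).hostname == urlparse(current_url).hostname
```

Note how a deceptive subdomain trick like `accounts.google.com.evil.example` fails the comparison even though it visually starts with the real hostname.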

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Tn-DAPR4hNE/

Twitter fixes bug that lets unauthorized apps get access to DMs

Back in 2013, the OAuth keys and secrets that official Twitter apps use to access users’ Twitter accounts were disclosed in a post to GitHub… a leak that meant that authors didn’t need to get their app approved by Twitter to access the Twitter API.

Years later, the chickens are still coming home to roost: on Friday, researcher Terence Eden posted about finding a bug in the OAuth screen that stems from a fix that Twitter used to limit the security risks of the exposed keys and secrets. The bug involved the OAuth screen saying that some apps didn’t have access to users’ Direct Messages… which was a lie. In fact, they did.

Imagine the airing of dirty laundry that could ensue, Eden said:

You’re trying out some cool new Twitter app. It asks you to sign in via OAuth as per usual. You look through the permissions – phew – it doesn’t want to access your Direct Messages.

You authorise it – whereupon it promptly leaks to the world all your sexts, inappropriate jokes, and dank memes. Tragic!

Eden explained that Twitter put some safeguards in place following the publishing of its OAuth keys and secrets, the most important being a restriction on so-called callback addresses: after an app successfully logs in, the flow returns only to a predefined URL. In other words, a third-party developer can’t reuse the leaked API keys with their own app.
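That callback restriction amounts to an allow-list check. A hedged sketch (names are illustrative, not Twitter’s actual implementation):

```python
# The redirect URI registered with the API provider when the app was created.
# This value is hypothetical -- it stands in for whatever the real app registered.
REGISTERED_CALLBACK = "https://example-client.com/oauth/callback"

def callback_allowed(requested_url):
    """Permit the OAuth flow to redirect only to the registered URL.

    Exact-match pinning is why leaked consumer keys are of limited use to
    outsiders: even with valid keys, the flow can only ever send the user
    back to the legitimate app's own endpoint."""
    return requested_url == REGISTERED_CALLBACK
```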

The problem is, not all apps have a URL, or support callbacks, or are, in fact, actual apps. For those situations, Twitter provides a secondary, PIN-based authorization method. “You log in, it provides a PIN, you type the PIN into your app,” and the app is authorized to read your Twitter content, Eden explained.

That’s the spot where the bogus OAuth information was being fed to the user, Eden said. The dialog was erroneously telling the user that the app couldn’t access direct messages, though it could. Eden:

For some reason, Twitter’s OAuth screen says that these apps do not have access to Direct Messages. But they do!

Eden submitted his findings via HackerOne on 6 November. After Eden clarified some points for Twitter, it accepted the issue on that same day.

Twitter fixed the bug on 6 December, announced that it was paying Eden a bounty of $2,940 and gave him the go-ahead to publish the details of his report.

Eden told media outlets that by using his proof of concept, he was able to read his own direct messages, along with those of a dummy account he had created.

It would have been a difficult attack to pull off, he said:

An attacker would have had to convince you to click on a link, sign in, then type a PIN back into the original app. Given that most apps request DM access – and that most people don’t read warning screens – it is unlikely that anyone was mislead by it.

Twitter agrees and said that users don’t have to lift a finger: there’s no danger of our DMs being intercepted. From its summary on the HackerOne report:

We do not believe anyone was mislead [sic] by the permissions that these applications had or that their data was unintentionally accessed by the Twitter for iPhone or Twitter for Google TV applications as those applications use other authentication flows. To our knowledge, there was not a breach of anyone’s information due to this issue. There are no actions people need to take at this time.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dFzFtMviuyc/

Logitech flaw fixed after Project Zero disclosure

When it comes to fixing security vulnerabilities, it should be clear by now that words only count when they’re swiftly followed by actions.

Ask peripherals maker Logitech, which last week became the latest company to find itself on the receiving end of an embarrassing public flaw disclosure by Google’s Project Zero team.

In September, Project Zero researcher Tavis Ormandy installed Logitech’s Options application for Windows (available separately for Mac), used to customise buttons on the company’s keyboards, mice, and touchpads.

Pretty quickly, he noticed some problems with the application’s design, starting with the fact that it…

opens a websocket server on port 10134 that any website can connect to, and has no origin checking at all.

Websockets simplify the communication between a client and a server and, unlike HTTP, make it possible for servers to send data to clients without first being asked to, which creates additional security risks.

The only “authentication” is that you have to provide a pid [process ID] of a process owned by your user, but you get unlimited guesses so you can bruteforce it in microseconds.

Ormandy claimed this might offer attackers a way of executing keystroke injection to take control of a Windows PC running the software.
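The missing safeguard Ormandy describes is origin checking. Browsers attach the requesting page’s `Origin` header to every WebSocket handshake, so a local service can refuse connections initiated by arbitrary websites. A minimal sketch (the allow-list value is hypothetical, not Logitech’s actual code):

```python
# Hypothetical allow-list of web origins permitted to talk to the local service.
ALLOWED_ORIGINS = {"https://options-ui.example.com"}

def handshake_allowed(headers):
    """Accept a WebSocket handshake only from an expected web origin.

    Any web page that opens a WebSocket to localhost carries its own
    Origin header, which it cannot forge from JavaScript -- so rejecting
    unknown origins blocks drive-by connections from malicious sites."""
    return headers.get("Origin") in ALLOWED_ORIGINS
```

With no such check, as Ormandy found, any site a user visits can connect to the local port and start issuing commands.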

What to do? Tell the company, of course.  

Within days of contacting Logitech, Ormandy says he had a meeting to discuss the vulnerability with its engineers on 18 September, who assured him they understood the problem.

A new version of Options appeared on 1 October without a fix, although in fairness to Logitech that was probably too soon for any patch for Ormandy’s vulnerability to be included. As anyone who’s followed Google’s Project Zero will know, it operates a strict 90-day deadline for a company to fix vulnerabilities disclosed to it, after which they are made public.

Counting from Ormandy’s first email contact, that deadline passed on 11 December – the day he made the issue public and published the timeline described above.

I would recommend disabling Logitech Options until an update is available.

Clearly, the disclosure got things moving – on 13 December, Logitech suddenly updated Options to version 7.00.564 (7.00.554 for Mac). The company also tweeted that the flaws had been fixed, confirmed by Ormandy on the same day.

Logitech aren’t the first to feel Project Zero’s guillotine on their neck. Earlier in 2018, Microsoft ran into a similar issue over a vulnerability found by Project Zero in the Edge browser.

Times have changed – vendors have to move from learning about a bug to releasing a fix much more rapidly than they used to.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/j2tOkYMK4vg/

Facebook photo API bug exposed users’ unpublished photos

A bug in Facebook’s photo API may have exposed up to 6.8 million users’ photos to app developers, the company announced on Friday.

Facebook said that normally, when a user gives permission for an app to get at their Facebook photos, the developers are only supposed to get access to photos that are posted onto their timeline.

In this case, the bug allowed up to 1,500 apps – built by 876 developers – to access photos outside of the timeline. Specifically, photos from Marketplace or Facebook Stories were exposed. The most worrisome category of exposed photos, however, was those that users hadn’t even posted.

It’s not that the apps were sniffing at your photo rolls, Facebook said. Rather, the API bug was letting those apps access photos that people had uploaded to the platform but hadn’t chosen to post.

They might have uploaded a photo to Facebook but hadn’t finished posting it because they lost reception, Facebook suggested.

Then again, maybe a user had second thoughts about posting a particularly sensitive, personal or intimate photo, and that’s where the fear factor kicks in: they might have had second thoughts for very good reasons, but a bug like this one makes reticence completely irrelevant.

Why is this even an issue, you might ask? One would imagine that photos that were never posted to Facebook were nothing more than a glimmer in the photographers’ eye, but no: Facebook says that it stores a copy of photos that are postus-interruptus for three days “so the person has it when they come back to the app to complete their post.”

Note the “when”: that’s marketing-positive speak that ignores the existence of the subjunctive “if,” as if second thoughts about posting just don’t happen in social media.

If only.

The only apps that were affected by the bug were so-called trusted ones: the apps that Facebook has approved to access the photo API and to which people had granted permission to access their photos.

You found this out WHEN?

Facebook says that its internal team discovered the bug, which may have affected people who used Facebook Login and who had granted permission to third-party apps to access their photos. It’s now fixed, but the third-party apps could have had inappropriate access to photos for nearly two weeks: specifically, the 12 days between 13 September and 25 September.

In its announcement, Facebook stayed quiet on the question of why we’re only hearing about this now. But when TechCrunch asked Facebook about what seems like an excessive notification lag, the platform said that its team discovered the breach on 25 September, went ahead and informed the European Union’s privacy watchdog (the IDPC, or Office Of The Data Protection Commissioner) on 22 November, and the IDPC then began a statutory inquiry into the breach.

Facebook must be suffering from serious apology fatigue at this point because all it managed to cough up was:

We’re sorry this happened.

Facebook told TechCrunch that it took time to investigate which apps and users were affected by the bug, and to then build and translate the warning notification it’s planning to send to those affected users.

Early this week, Facebook will release tools for app developers that will allow them to determine which people using their app might be affected by the bug. Facebook says it will be working with those developers to delete the photos that the apps should not have been able to access in the first place.

Facebook will be notifying users who’ve potentially been affected by the bug via an alert that will direct them to a Help Center link where they’ll be able to see if they’ve used any apps that were affected by the bug.

What are the GDPR implications?

The delay might put Facebook at risk of stiff General Data Protection Regulation (GDPR) fines from the European Union for not disclosing the issue within the required 72 hours. That can get painful: those fines can go up to €20 million (USD $22.68 million) or 4% of annual global revenue.

European regulators confirmed on Friday that they are, indeed, investigating Facebook for violating the GDPR – the first major test of the new regulations, as ABC News reports.

Here’s what Graham Doyle, the Irish Data Protection Commission’s head of communications, told ABC News:

The Irish DPC has received a number of breach notifications from Facebook since the introduction of the GDPR on May 25, 2018. With reference to these data breaches, including the breach in question, we have this week commenced a statutory inquiry examining Facebook’s compliance with the relevant provisions of the GDPR.

Yet another day that will live in privacy infamy

The photo API bug was discovered on 25 September: the same day that Facebook discovered that crooks had figured out how to exploit a bug (actually, a combination of three different bugs) so that when they logged in as user X and did View As user Y, they essentially became user Y. In other words, the crooks exploited a bug so as to recover Facebook access tokens – the keys that allow you to stay logged into Facebook so you don’t need to re-enter your password every time you use the app – for user Y, potentially giving them access to lots of data about that user.

The access token bug affected what would turn out to be an estimated 30 million Facebook users.

As far as the newly disclosed photo API bug goes, we don’t know yet which apps got at photos they weren’t supposed to access. ABC News reached out to dating apps Tinder, Grindr and Bumble, but they hadn’t responded as of Monday.

Privacy advocates expressed a combination of concern and shell-shocked shrugging at the latest in Facebook’s privacy fumbles.

ABC News quoted Christine Bannan, counsel for the Electronic Privacy Information Center (EPIC), who said that the latest incident shows just how Facebook’s lack of concern for user privacy results in incidents like this:

It’s another example of FB not taking privacy seriously enough. Facebook just wants as much data as possible and just isn’t careful with it. This is happening because they are having developers have access to their platform without having standards and safeguards to what developers have access to.

Gennie Gebhart, a researcher with the Electronic Frontier Foundation (EFF), told the news outlet that as far as users are concerned, it doesn’t matter whether their data gets abused by design or by flubbery. It all amounts to the same in the end:

2018 has been the year of Facebook and other tech companies violating these privacy expectations, with nothing resembling informed consent. It is important to differentiate this from Cambridge Analytica, which wasn’t a bug. That was a platform behaving as it was intended. This is a different breed of privacy violation. This was an engineering mistake in the code. Of course, on the user end, those technicalities aren’t important. This is just another huge Facebook privacy scandal.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jPXoPqfusHM/

Memes, messengers, and missiles: From Twitter to chat apps and weapons, security is ho-ho-hosed this Xmas

Roundup We are now firmly into the holiday season, the Christmas parties are kicking off, and folks are swapping their Excel files for eggnog, or something cliched like that.

So, let’s have a quick look around the world of security this week before everyone puts on the “out of office”.

On the first day of Christmas my true love gave to me: a nuke that didn’t have security

Quick, think of the one place you really don’t want to see failing security.

Did you answer “intercontinental ballistic missiles”? Bad news…

A report from the US Department of Defense Inspector General’s office has found that America’s missile command is falling way behind when it comes to the security of its Ballistic Missile Defense System (BMDS). The summary of their findings is brief and to the point:

“We determined that officials did not consistently implement security controls and processes to protect BMDS technical information.”

Among the failings spotted in the report were a failure to install multifactor authentication software, server racks left unlocked, intrusion detection tools missing from one classified network, and data transmitted without encryption.

“In addition, facility security officers did not consistently implement physical security controls to limit unauthorized access to facilities that managed BMDS technical information,” the December dossier noted.

The report recommends, not surprisingly, that the DoD first install these basic protections on the network and then ensure that access to both the data and the physical facilities housing it is locked down, with access carefully logged and monitored.

We three memes controlling your bots

Researchers at Trend Micro have uncovered a truly remarkable scheme that malware-infected PCs are using to communicate with their central command-and-control servers.

The software nasty, given the catchy name “TROJAN.MSIL.BERBOMTHUM.AA”, instructs infected Windows machines to look for a specific (since disabled) Twitter account. The account itself wasn’t remarkable, containing only a few meme images. Within those images, however, was hidden the code that controlled the infected PCs.

The malware would download and open the images, then look for instructions hidden within. In this case, the memes tell the bots to capture screencaps of their host machines and send the images to a server, though the malware can also be ordered to list running processes, copy clipboard contents, and list filenames from the infected PC.

“We found that once the malware has been executed on an infected machine, it will be able to download the malicious memes from the Twitter account to the victim’s machine. It will then extract the given command,” Trend explained.

“In the case of the ‘print’ command hidden in the memes, the malware takes a screenshot of the infected machine. It then obtains the control server information from Pastebin. Afterwards, the malware sends out the collected information or the command output to the attacker by uploading it to a specific URL address.”
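Trend’s write-up describes commands hidden steganographically inside the image data itself. As a toy illustration of the general idea only (not the actual encoding this malware used), a command can ride along inside an otherwise valid image file, since most decoders ignore trailing bytes:

```python
def embed_command(image_bytes, command):
    """Toy scheme: append a marked command after the image data.

    Image viewers stop at the format's own terminator, so the file still
    renders as an ordinary meme while carrying a payload."""
    return image_bytes + b"CMD:" + command.encode()

def extract_command(blob):
    """What a bot would do after downloading the meme: scan for the marker
    and recover the command, or return None if the image is clean."""
    marker = blob.rfind(b"CMD:")
    if marker == -1:
        return None
    return blob[marker + 4:].decode()
```

Real steganography is far subtler (for example, spreading bits across pixel values), but the operational point is the same: the C2 channel hides inside traffic to a legitimate, widely trusted service.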

Fortunately, it looks like this specific operation has been broken up. The meme-spaffing Twitter account has been disabled.

Up in the Outback, Signal’s pause; out with the Aussie backdoor clause

Secure chat company Signal is less than happy with the recently passed Australian law targeting encrypted communications. The new Oz rules allow Aussie snoops to demand surveillance backdoors in communications software and websites, allowing the government to read and monitor encrypted messages.

Signal dev Josh Lund said his project simply can’t comply with any government demand to decrypt secure end-to-end chatter. No, really, Lund said: there is no technical means by which Signal could remotely decrypt the contents of conversations.

“By design, Signal does not have a record of your contacts, social graph, conversation list, location, user avatar, user profile name, group memberships, group titles, or group avatars. The end-to-end encrypted contents of every message and voice/video call are protected by keys that are entirely inaccessible to us,” Lund explained.

“In most cases now we don’t even have access to who is messaging whom.”
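To see why a relay operator can't comply with a decryption demand, consider a toy end-to-end scheme (emphatically not Signal's actual Double Ratchet protocol): the two endpoints share a key the server never receives, so the server stores and forwards only ciphertext it cannot read.

```python
import secrets
import hashlib

# Toy illustration only - NOT Signal's real protocol. The point is
# structural: the relay never holds the key, so it cannot decrypt.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream of the given length from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

shared_key = secrets.token_bytes(32)   # known only to the two endpoints
ciphertext = encrypt(shared_key, b"meet at noon")
# The relay server sees only `ciphertext`; without shared_key there is
# nothing it can hand over in response to a decryption order.
```

Signal's real design adds forward secrecy via ratcheting keys, but the structural argument Lund makes is the same: a server that never possesses the keys has nothing to surrender.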

This means that Signal faces the very real possibility of being banned in Australia for running afoul of the data access law. Even in that case, however, Lund cautioned the gov-a-roos that they probably wouldn’t be able to rid their continent of Signal.

“Historically, this strategy hasn’t worked very well. Whenever services get blocked, users quickly adopt VPNs or other network obfuscation techniques to route around the restrictions,” he explained. “If a country decided to apply pressure on Apple or Google to remove certain apps from their stores, switching to a different region is extremely trivial on both Android and iOS. Popular apps are widely mirrored across the internet.”

In other words, the Australian government would be playing whack-a-mole with banned apps, all while the likes of Google, Microsoft, Apple, and other US tech giants are thoroughly cheesed off with the incoming spy law.


Google CEO tells US Congress Chocolate Factory will unleash Dragonfly in China

READ MORE

Simply having a fight over Dragonfly

Google’s Dragonfly campaign just got Choc-blocked, allegedly.

A report from The Intercept today indicates that the controversial project to build a Chinese search engine meeting Beijing’s censorship requirements has been “effectively ended” following an employee revolt and probing by the US Congress.

Dragonfly, for those not familiar, was Google’s rumored partnership with the Chinese government to create a version of its web search engine that would automatically exclude any results banned by the government, and would give officials the ability to track people’s search queries.

Concern over the privacy and human rights implications of such a project prompted staffers, including Google’s precious engineer caste, to speak out in public, something rarely seen from the highly insular world of Google.

When asked for comment, a Google spokesperson referred El Reg to the comments CEO Sundar Pichai made last week to the Congress.

Jingle Bells, Twitter smells, surveillance by bad eggs

And because creepy government surveillance is all the rage these days, we have Twitter warning that one of its web applications was used to slurp up location data on some twits.

In its alert on Monday, Twitter warns one of its support forum APIs had an issue that would have allowed miscreants to look up things like fellow tweeters’ telephone country codes, and whether an account was locked out by Twitter. The bug was fixed on November 16.

This by itself isn’t too much of a problem. However, Twitter also said that prior to the November fix, it spotted “a large number of inquiries coming from individual IP addresses located in China and Saudi Arabia,” and that it can’t rule out that the collection of this location-based info was the work of state-backed hackers or spies.

In short, Twitter had a flaw that would betray your phone number’s country code, and two of the most oppressive regimes on the planet may have abused it to collect user information en masse. “Falalalala, la la la laaaa!” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/18/early_security_roundup/

You better watch out, you better not cry. Better not pout, I’m telling you why: SQLite vuln fixes are coming to town

Google and other software developers have patched the SQLite component of their code after it was discovered that the library could potentially be exploited to inject malware into vulnerable systems.

The security flaw was spotted and reported by researchers at Tencent’s Blade Team, and is believed to be present in the SQLite library used by Google’s browser engine as well as other apps.

For Google, at least, the library backs Chrome’s WebSQL database API, which is why it’s being treated as a remote-code execution flaw. Fortunately, users can shield themselves from attacks by updating to the latest version of Chrome (stable version 71.0.3578.80).

For other applications, their developers should update their products to use SQLite 3.26.0 or newer, and then push out new builds to their users to install. In other words, if you’re using a program that includes a vulnerable instance of SQLite, wait for a security update to show up. If you’re a programmer who is using SQLite as a database built into your code, then update, and roll that update out.
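For developers embedding SQLite, one quick sanity check is to compare the linked library's version against 3.26.0. A minimal sketch in Python, whose standard-library sqlite3 module bundles its own SQLite build:

```python
import sqlite3

# Versions before 3.26.0 predate the Magellan fix. Python's stdlib
# sqlite3 module exposes the bundled library's version as a string.
def sqlite_is_patched(version: str = sqlite3.sqlite_version) -> bool:
    """True if the given SQLite version string is 3.26.0 or newer."""
    major, minor, patch = (int(x) for x in version.split(".")[:3])
    return (major, minor, patch) >= (3, 26, 0)
```

Note that a version check only tells you whether the bundled library is old; actually fixing the problem still means rebuilding and shipping an update to users.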

There’s always a codename

Known as Magellan for marketing purposes, the Tencent-reported bug has no CVE entry as of yet, and essentially involves corrupting memory to gain arbitrary code execution. To do this, an attacker would have to be able to inject malicious SQL commands that trigger the memory corruption, leading to execution of code included in the injection. The WebSQL API matters here because it allows webpages to run SQL queries on a database stored within the browser, handing a malicious page a direct route to the bug.

As SQLite creator D. Richard Hipp noted, an application would have to either accept arbitrary SQL commands from users, or suffer from an SQL injection flaw, for this memory handling bug to be exploitable. Generally, programs don’t allow users to write their own SQL commands, and they shouldn’t have SQL injection holes in the first place.
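Hipp's point can be demonstrated with SQLite itself: queries built by string concatenation let attacker input become SQL, while parameterized placeholders keep that same input inert. A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable pattern: concatenation lets the payload rewrite the query,
# so the OR clause matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe pattern: a placeholder passes the payload as plain data, and no
# user literally named "alice' OR '1'='1" exists.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

An application that sticks to the placeholder pattern gives an attacker no way to smuggle in the UPDATE statement that this bug requires.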

Thus, this bug turns what would already be a bad situation – miscreants able to run their own dodgy arbitrary SQL commands within an application – into a worse situation: arbitrary malicious code execution. We also understand the bug involves an SQL UPDATE command, so using SQLite in read-only mode, or with other settings switched on, thwarts exploitation.
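Where it fits the application, the read-only mitigation is straightforward to apply: open the database with SQLite's URI syntax and mode=ro, and any UPDATE fails outright. A sketch using Python's standard-library bindings:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database with one table.
path = os.path.join(tempfile.mkdtemp(), "app.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (x INTEGER)")
rw.commit()
rw.close()

# Reopen read-only via SQLite's URI syntax. Any write statement,
# including the UPDATE this bug reportedly needs, is rejected.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
try:
    ro.execute("UPDATE t SET x = 1")
    blocked = False
except sqlite3.OperationalError:
    blocked = True
ro.close()
```

This obviously only helps workloads that never need to write; for anything else, upgrading to 3.26.0 or newer remains the fix.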

Blade Team says it is deliberately holding off on publishing detailed info about the flaw until more vendors can get their patches out. Team member Wenxiang Qian is credited with the discovery. What is very interesting is that SQLite has been considered a gold standard of secure coding: it has been studied and audited at length, and was thought to be safe and relatively bug-free.


SQLite creator crucified after code of conduct warns devs to love God, and not kill, commit adultery, steal, curse…

READ MORE

“As a well-known database, SQLite is widely used in all modern mainstream operating systems and software, so this vulnerability has a wide range of influence,” the Tencent researchers say in their brief disclosure earlier this month.

“After testing Chromium was also affected by this vulnerability, Google has confirmed and fixed this vulnerability.”

What we do know is that the bug can be exploited much like other browser and scripting engine flaws: a specially crafted webpage or email could be viewed in a vulnerable application, triggering the smuggled-in exploit code. From there, the attacker would have the ability to execute and install spyware and ransomware on the victim’s machine.

The flaw could also be exploited to pull data stored in memory or simply crash the application, in theory.

While potentially serious, this sort of bug is hardly uncommon. For some perspective, Microsoft patched 16 such remote code execution flaws in IE, Edge, and Office less than a week ago.

In its own tests, Blade Team says it has been able to use its exploit to successfully pwn a Google Home box (Home is based on Chromium). Thus far, there have been no reports of the bug being targeted in the wild. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/18/sqlite_vulnerability/

Chinese Hackers Stole Classified US Navy Info

Cyberattacks reportedly targeted US Defense contractor.

Hackers out of China breached the network of a US Navy contractor and stole classified military information that included plans for a supersonic anti-ship missile for submarines, according to a report in The Wall Street Journal.

Navy Secretary Richard Spencer has ordered a cybersecurity review in the wake of the attack on a contractor for the US Naval Undersea Warfare Center in Rhode Island, the Journal reported. The attackers siphoned 63 gigabytes of data from the contractor’s unclassified network.

While the Navy has not officially blamed the Chinese government for the attack, the intrusion used malware staged on a computer in Hainan, China, as well as tools known to be used by Chinese nation-state hacking teams.

Read more details here

 


Article source: https://www.darkreading.com/threat-intelligence/chinese-hackers-stole-classified-us-navy-info/d/d-id/1333506?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple