
Real-Time Payment Platforms Offer Fast Cash & Fast Fraud

Real-time payment services like The Clearing House and Zelle will completely clear transactions in an instant…but account takeover attackers love that speed as much as you do.

ACH transfers and credit cards have offered ways for people to pay without cash or check for years. Yet those kinds of transactions often take time – even several days – to officially clear, thereby delaying consumer and business account-holders’ access to funds. Not so with real-time payment (RTP) systems, which allow the immediate or near-immediate transfer of funds through a secured payment gateway and are answering the call for faster payments and quicker access to funds.

Yet the very benefit of RTP – speed – is also what makes it less secure, experts say.


“What makes [RTP transactions] vulnerable, and attractive to hackers, are the same features that make them popular with the public — which is fast, simple, and easy-to-use transactions,” says Atif Mushtaq, CEO of SlashNext. “The most popular avenue for cyber criminals is data breaches for credential stealing that enable them to quickly perform account takeovers and drain bank accounts.”

“The instant or near-instant nature of RTP means that in many cases, when money is removed from an account, it’s going to be very difficult to get it back,” says Richard Henderson, head of global threat intelligence at Lastline. “The rapid clearing of payments means that banks are really going to have to shoulder the risk burden when it comes to protecting customers when the worst happens and a kind retired lady gets hoodwinked out of tens of thousands of dollars.”

 

What RTP Services Are – and Are Not

Most consumers have heard of mobile payment services like Zelle and Venmo. But there is some confusion on what services actually offer payments in real-time.

Many popular payment services require a period of time before the funds are released. Known as wallet-based systems, some services – Venmo is one — are run by financial services technology firms, not banks, and users need to open an account on the payment network in order to use it.  In Venmo’s case, payments made within the network – in person-to-person transactions or to purchase services from participating merchants — are unrestricted, but cannot officially be moved to out-of-network accounts, such as bank accounts, until the funds have cleared – which could take up to several days. (Venmo now does, however, offer real-time transfer of funds from a user’s Venmo wallet to their connected banking account.)

True real-time payment services are operated by banks and financial institutions. The Clearing House’s Real Time Payments network – accessible only to FDIC-insured financial institutions — is one example. And the well-known Zelle – a strong competitor to Venmo in the person-to-person mobile pay app market – also provides true real-time payments, because it uses The Clearing House’s network.

Other existing examples of RTP systems include the UK’s Faster Payments Service (FPS) and Real-Time Gross Settlement (RTGS) systems. The US Federal Reserve said earlier this year that Federal Reserve Banks are planning to develop a new real-time payment and settlement service, called the FedNow Service.

The money transferred through a true RTP service moves from member-to-member bank accounts. The sending bank guarantees funds will be available, that all fund transfers will be properly debited or credited, and that asset transfers between account-holding institutions will occur to support the transfers.

How RTP Platforms Are Skimping on Security

However, in a recent interview with American Banker, Stephen Lange Ranzini, CEO of University Bank in Ann Arbor, Mich., outlined the many ways that established RTP platforms, including The Clearing House’s RTP and Zelle, fail to meet basic requirements laid out by the Federal Reserve’s Faster Payments Task Force and Secure Payments Task Force.

The three overlooked criteria that most concern Lange Ranzini are:

1. All data containing personally identifiable information (PII) needs to be encrypted;

2. Systems need a robust enrollment process;

3. Systems need a robust authentication process each time a user tries to initiate a transaction.

 

Current RTP systems do not fully meet any of these criteria, he said. And there are times during the lifecycle of the payment when the data involved in the transaction is “in the clear,” he notes – meaning it is unencrypted.
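To make the third criterion concrete, per-transaction authentication means the account holder proves their identity again before each payment is released, not just at login. The sketch below is a minimal, hypothetical illustration of such a step-up check using time-based one-time passwords via the pyotp library; it does not describe how any real RTP platform implements authentication.

```python
# Minimal sketch of per-transaction step-up authentication using TOTP (pyotp).
# Hypothetical example only: real RTP platforms use their own mechanisms.
import pyotp

# In practice the shared secret is provisioned once, at enrollment,
# and stored server-side for the account holder.
enrollment_secret = pyotp.random_base32()
totp = pyotp.TOTP(enrollment_secret)

def authorize_transfer(amount_cents: int, destination: str, submitted_code: str) -> bool:
    """Release a payment only if the account holder supplies a fresh TOTP code."""
    if not totp.verify(submitted_code, valid_window=1):  # small clock-drift allowance
        return False
    # Additional checks (limits, velocity rules, device fingerprint) would go here.
    print(f"Transfer of {amount_cents} cents to {destination} authorized")
    return True

# Example: the code shown in the user's authenticator app at this moment.
authorize_transfer(25_000, "acct-1234", totp.now())
```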

Account Takeover a Common Criminal Strategy

Because RTPs reduce the amount of time that might customarily be spent preventing fraud, cyber criminals can take advantage by committing more efficient account takeover (ATO) attacks. With unfettered banking account access, attackers may move the victim’s money at will; and account-holders who are not checking their account regularly may have no idea the funds are gone.  

In some ways these ATOs are precisely the same as without RTP: attackers compromise accounts by using the same social engineering and hacking tricks security pros have been dealing with for years.

“There are multiple ways through which these attacks can occur for RTP users – including through email, SMS text message or even over the phone,” says Mushtaq. “The purpose is the same, which is trying to get the users to hand over their information.”

Once fraudsters have access to account details, they can push funds to attacker-controlled accounts, and the financial institutions will officially clear the transaction in real time. And (as Henderson noted earlier), once money is removed from an account, it will be very difficult to get it back, because the victim’s legitimate account authorized the payment and the financial institution cleared it. That leaves both consumers and financial institutions exposed.

“Attackers will target accounting staff at businesses and attempt to rob them. This isn’t new,” says Henderson. “It is going to be essential for companies to start building out very strong procedures for how they send and receive payments. Using a dedicated computer for nothing but payments in accounting that has been hardened by your security staff will be very important.

“Don’t pay invoices from suppliers overseas if there is a change in how they have asked you to send funds until you can verify using alternative channels that it is legitimate. Multiple sign offs over a set amount should be the norm.”


Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.

Article source: https://www.darkreading.com/theedge/real-time-payment-platforms-offer-fast-cash-and-fast-fraud--/b/d-id/1336677?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Instagram hides ‘false’ content, unless it’s from a politician

Facebook is expanding its efforts to muffle misinformation on Instagram, which it owns.

In 2018, two reports prepared for the Senate Intelligence Committee found that when it came to Russia’s misinformation campaign in the 2016 US presidential election, Instagram was where the real action was: researchers counted more Instagram reactions to fake news than on Facebook and Twitter combined.

Instagram launched a fact-checking program a few months after the reports came out…

…then said Yikes, this is a sticky wicket a month later. Some places don’t have fact-checkers, different places have different standards of what constitutes journalism, it takes a long time to verify content, and it’s hard to figure out how to treat opinion and satire, Facebook explained, saying that it’s working on improving machine learning to help.

Be that as it may, on Monday, Facebook announced that it’s taking Instagram’s limited, experimental fact-checking program worldwide, hooking up with fact-checking organizations around the globe so they can assess and rate misinformation on Instagram.

Instagram’s fact-checking program relies on 45 third-party organizations that review and label false information on the photo/video-sharing platform. From Monday’s post:

We want you to trust what you see on Instagram.

What does THAT mean?

Just like on Facebook, Instagram won’t be removing content flagged by a fact-checker. Rather, it will reduce its distribution by hiding it from the Explore and hashtag pages. Photos and videos marked as false will also be covered with an interstitial warning blocking the content in the feed or Stories until users tap again to see the post.

The labels enable people to “decide for themselves what to read, trust, and share,” Facebook says. Those labels will show up everywhere on the Instagram platform – in a user’s feed, profile, stories, and direct messages, regardless of the country in which users are located. Instagram’s initial fact-checking program was limited to the US.

In October, Facebook had announced that it was rolling out the same kind of interstitial labels on its main platform.

Facebook says it will also stitch together fact-checking across its main platform and Instagram: starting on Monday, 16 December, anything that’s labeled as false (or partly false) on Facebook will automatically be labeled as such on Instagram, and vice versa. In order to find and label misinformation across its platforms, Facebook says it uses image-matching technology.
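Facebook has not published details of that image-matching technology, but perceptual hashing is one common way to recognise near-duplicate images across platforms. The sketch below uses the open-source Pillow and imagehash libraries purely to illustrate the general technique; it is not a description of Facebook’s system, and the file names are placeholders.

```python
# Illustrative only: perceptual hashing to spot near-duplicate images,
# a generic technique for matching previously labeled content.
from PIL import Image
import imagehash

def is_near_duplicate(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Compare perceptual hashes; a small Hamming distance suggests the same image."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold  # imagehash overloads '-' as Hamming distance

# Example usage with hypothetical file names:
# is_near_duplicate("labeled_false_post.jpg", "newly_uploaded_post.jpg")
```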

The label will link to a fact-checker’s rating and provide links to articles from credible sources that debunk the claim(s) made in the post. Accounts that repeatedly get false-content flags will be removed from Explore and hashtag pages, making them harder to find.

Oh, you’re a politician? Never you mind.

Instagram, like parent company Facebook, says it isn’t going to do any of this to politicians’ original speech. In the world of political discourse, the past few years have made it crystal clear that one person’s gobbledygook is another person’s common sense. Thus, fact-checkers aren’t going to try to suss out which is which, since Instagram won’t be sending politicians’ posts to them.

For over a year, Facebook has been practicing this hands-off approach to political speech. In September, Facebook’s VP of Global Affairs and Communications Nick Clegg explained that it’s all part of the company’s “YOU decide” stance:

We don’t believe …that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny.

Instagram’s newly expanded misinformation program is only one of its efforts to make the platform safer. Also on Monday, it started telling users when their photo or video captions may be considered offensive: an expansion of an anti-bullying feature launched in July, when the platform started asking users whether they really wanted to post comments that looked offensive or bullying.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zAu2kI-UzaU/

Proposed standard would make warrant canaries machine-readable

For years, organisations have been using a common tactic called the warrant canary to warn people that the government has secretly demanded access to their private information. Now, a proposed standard could make this tool easier to use.

When passed in 2001, the US Patriot Act enabled authorities to access personal information stored by a service provider about US citizens. It also let them issue gag orders that would prevent the organisation from telling anyone about it. It meant that the government could access an individual’s private information without that person knowing.

Companies like ISPs and cloud service providers want their users to know whether the government is asking for this information. This is where the warrant canary comes in. First conceived by Steve Schear in 2002, shortly after the Patriot Act came into effect, a warrant canary is a way of warning people that the organisation holding their data has received a subpoena.

Instead of telling people that it has been served with a subpoena, the organisation stops telling them that it hasn’t. It displays a public statement online that it only changes if the authorities serve it with a warrant. As long as the statement stays unchanged, individuals know that their information is safe. When the statement changes or disappears, they can infer that all is not well without the organisation explicitly saying so. Here’s an example of one.

A warrant canary can be as simple as a statement that the service provider has never received a warrant. The problem is that those statements aren’t standardised, which makes it difficult for people to interpret them. How can you be sure that a warrant canary means what you think it means? If it disappears, does that mean that the service provider received a warrant, or did someone just forget to include it somewhere? Does the canary’s death indicate a sinister problem, or did it just die of natural causes? This isn’t idle speculation – warrant canary changes like SpiderOak’s have confused users in the past.

The other problem is that these statements are designed to be read by people, which makes them difficult to track and monitor at scale. That’s the problem the proposed warrant canary standard would solve.

The proposed standard surfaced on Github on Tuesday. It was created by GitHub user carrotcypher, inspired by the work of organisations like the Calyx Institute (a technology non-profit that develops free privacy software) and the now-defunct Canary Watch, a project from the Electronic Frontier Foundation (EFF), Freedom of the Press Foundation, NYU Law, Calyx and the Berkman Center. Canary Watch listed and tracked warrant canaries. When it shut down Canary Watch, the EFF explained:

In our time working with Canary Watch we have seen many canaries go away and come back, fail to be updated, or disappear altogether along with the website that was hosting it. Until the gag orders accompanying national security requests are struck down as unconstitutional, there is no way to know for certain whether a canary change is a true indicator. Instead the reader is forced to rely on speculation and circumstantial evidence to decide what the meaning of a missing or changed canary is.

Canarytail seeks to change that. As it explains on its Github readme.md page:

We seek to resolve those issues through a proper standardized model of generation, administration, and distribution, with variance allowed only inside the boundaries of a defined protocol.

Instead of some arbitrary language on a website, the warrant canary standard would be a file in JSON format, which represents data as key:value pairs readable by both people and machines. The file would include 11 codes with a value of zero (false) or one (true). These codes include WAR for warrants, GAG for gag orders, and TRAP for trap and trace orders, along with another code for subpoenas, all of which have specific legal implications for an organisation and its users. If the value next to any of these keys is zero, the person or software reading the file can infer that none of the warnings have been triggered. If a code changes to one, it’s cause for concern.
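To make the idea concrete, here is a rough sketch of what machine-reading such a file might look like. The field names below are assumptions chosen for illustration (the draft standard on GitHub defines the authoritative set), but the zero/one logic is the same.

```python
# Sketch of machine-reading a JSON warrant canary.
# Field names here are illustrative assumptions, not the authoritative spec.
import json

EXAMPLE_CANARY = """
{
  "codes": {"WAR": 0, "GAG": 0, "TRAP": 0, "SUBPOENA": 0, "DURESS": 0}
}
"""

def triggered_warnings(canary_json: str) -> list:
    """Return the names of any codes whose value has flipped from 0 to 1."""
    canary = json.loads(canary_json)
    return [name for name, value in canary["codes"].items() if value == 1]

warnings = triggered_warnings(EXAMPLE_CANARY)
if warnings:
    print("Canary tripped:", ", ".join(warnings))
else:
    print("All clear: no warrants, gag orders, or other legal process indicated")
```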

The file also contains some other interesting codes, including DURESS, which indicates that the organisation is being coerced somehow, along with codes indicating that it has been raided. There is also a special code indicating a seppuku pledge, which is a promise that an organisation will shut down and destroy all its data if a malicious entity takes control of it.

In a smart bit of cryptographic manoeuvring, the proposed canary file must be cryptographically signed (with the signature verifiable against the organisation’s public key), and it includes information about the expiry date. It uses a block hash from the bitcoin blockchain to verify the freshness of the digital signature. As another safeguard, it includes a PANICKEY field with another public key. If the file is signed with the key corresponding to PANICKEY, people can interpret it as a kill switch, causing the warrant canary to fail immediately. That’s useful if an organisation suddenly gets raided and can’t afford to wait until the current warrant canary file expires.
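A monitoring tool would also need to check the signature and the expiry date before trusting the flags. The fragment below shows roughly what that check could look like, using an Ed25519 signature via the cryptography library; the actual signature scheme, field names, and the bitcoin block-hash freshness check are defined by the draft standard, so treat this as an assumption-laden sketch rather than an implementation of it.

```python
# Hypothetical verification step for a signed canary file.
# The real standard specifies the exact scheme; this only illustrates the shape of the check.
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def canary_is_trustworthy(payload: bytes, signature: bytes,
                          public_key: Ed25519PublicKey, expiry_iso: str) -> bool:
    """Accept the canary only if the signature verifies and it has not expired."""
    try:
        public_key.verify(signature, payload)
    except InvalidSignature:
        # Does not verify against the normal key; a fuller tool would also test the PANICKEY.
        return False
    # expiry_iso is assumed to carry a timezone offset, e.g. "2020-01-31T00:00:00+00:00".
    expires = datetime.fromisoformat(expiry_iso)
    return datetime.now(timezone.utc) < expires
```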

A standard like this could help revive warrant canaries by making them easier to track and more deterministic. In the meantime, plenty of non-standard warrant canaries have disappeared, including Reddit’s and Apple’s.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CmOKFRN2Bk4/

Get in line! 38,000 students and staff forced to queue for new passwords

No, these students aren’t lining up to see Santa.

They’re lining up for new passwords as the IT staff at their university – Justus Liebig University (JLU) in Gießen, a town north of Frankfurt, Germany – continue to mop up after a malware attack hit the school on Sunday, 8 December.

In what has to be the most analog password-reset operation of modern times, 38,000 students and staff were told to grab their identity cards and join a queue so they can get a new password for their university email accounts. They have to pick up the passwords in person, JLU said on Wednesday, due to unspecified security reasons as well as the legal requirements of the German National Research and Education Network (DFN).

There is no alternative to this procedure. Collecting the password in person is a prerequisite for the ability of every JLU member to use e-mail at JLU in the near future. All previous e-mail passwords are thus invalid!

Following the attack, JLU staff took down the email server, the internet and internal networks, fearing that they’d been infected. Then, they reset all email account passwords, as a precautionary step – a move that affected all students and staff.

At this point, JLU’s IT Service center is still scanning devices. For days, the university has been using more than 1,200 USB flash drives loaded with antivirus scanners to scan each and every JLU computer for traces of the malware. None of the devices are getting back onto the network until they show up as virus-free and get slapped with an all-clear green sticker.

Actually, some of those computers have gone through the scan twice. IT Service first scanned computers last week. Then, the department repeated the scans over the weekend, after the antivirus scanner received updated virus signatures to make sure that it would detect the malware that infected JLU’s network.

How do you pass out 38,000 passwords in an orderly fashion? In a highly organized way: JLU is doing it with a schedule based on students’ and staff’s birth month.

The priority: get this all done by Christmas.

The priority [is] to restore the ability of all JLU members and affiliates to communicate by email before the Christmas break.

Godspeed, JLU IT Service!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Iubgv-x9nTA/

S2 Ep21: Plundervolt, domain name gunfight, Facebook snubs Congress – Naked Security Podcast

In this episode, we look into a gunfight over a domain name [6’56”], explain the Plundervolt attack [17’50”], and explore the encryption drama that’s unfolding between Facebook and Congress [25’24”].

Host Anna Brading is joined by Sophos experts Mark Stockley, Paul Ducklin and Greg Iddon.

Listen and share!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ginNM4Lr290/

Chrome 79 patched after Android WebView app chaos

Google has rushed out a fix for a bug in the Android version of Chrome that left some app users unable to access accounts or retrieve stored data.

The problem happened when users upgraded from version 78 to 79 last week, after which apps using a stripped-down browser component called WebView started throwing up issues.

For affected apps, this quickly turned into a big problem, with the Chromium bugs forum filling up with comments from numerous disgruntled developers.

Here’s a flavour. On December 13, one commenter wrote:

This is a major issue. We can see the old data is left in the filesystem, but it’s not “found” by Chrome 79 – which I consider even worse – for one, it breaks the apps as it’s not available.

And another on the same day:

We have verified that all our clients with Chrome/WebView updated to v79 have lost all their app data.

As its name suggests, WebView provides a way for app developers to integrate web pages or even applications inside Android apps using a cut-down browser that’s part of Chrome.

Used to display everything from login pages to terms and conditions documents, it’s useful because it avoids the need to visit the original web pages by launching a separate browser app.

Google has shifted WebView functionality into and out of Chrome more than once – Android 7, 8 and 9 bundle it with Chrome, but from Android 10, it once again becomes a separate app.

What went wrong?

The short answer is that the Android Chrome 79 update of December 10 (79.0.3945.79) changed the path location used by different APIs to store local profile data so that apps could no longer ‘see’ it. The data was still there but the apps couldn’t access it.

The update didn’t hit all users running WebView-based apps – updating is done in tranches of users – but when it did, the problems were often severe.

One developer of a financial app noted that its users had lost access to invoices and credits. The developers reinstated the old data, but this caused a new issue:

Our users have assumed their unprocessed data has been lost and have already re-keyed in transactions that they collected while offline. There’s now been 5 days’ worth of new transactions.

Google, of course, has since apologised for the screw-up:

The M79 update to Chrome and WebView on Android devices was suspended after detecting an issue in WebView where some users’ app data was not visible within those apps. This app data was not lost and will be made visible in apps when we deliver an update this week. We apologize for any inconvenience.

It also released a blog explaining how it has fixed the bug in a new update, v79.0.3945.93.

App developers and users should then find themselves back to where they were before the flawed update, although some might lose access to data created during the hiatus.

From a Google representative:

We’re deeply sorry that this happened and that there is no realistic way to proceed from this point without additional data loss in some cases, but this hopefully represents the best compromise.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gdacxbn3fBw/

Hiding malware downloads in Taylor Swift pics! New SophosLabs report

SophosLabs just published a new report on an intriguing but lesser-known part of the malware scene known as MyKings.

You probably haven’t heard of MyKings, mainly because it’s not ransomware and the gang isn’t currently slamming businesses up against the wall by demanding money, so it hasn’t made big enough waves to make the headlines.

In simple terms, MyKings is all about illicit Monero cryptomining, and at the current low price of Monero, our researchers estimate that on some days the crooks are only making about $300.

For all we know, MyKings might be little more than a sideline hobby for the people running it (albeit a hobby pulling in a quiet and untaxed $100,000 a year, of course).

Compared to the multimillion dollar extortions that some cybercrime gangs are demanding for ransomware recovery, it’s easy to write off malware like MyKings as unimportant and therefore not worth trying to learn from.

But that couldn’t be further from the truth, because the MyKings story gives a fascinating insight into a type of cybercrime that involves a huge amount of complexity, and has a surprising reach.

According to SophosLabs research, the MyKings crew:

  • Currently have about 45,000 infected computers in their Monero-mining botnet, up from about 35,000 a year ago.
  • Can upgrade their malware code on infected computers at will.
  • Are using surprisingly sophisticated ‘rootkit’ tricks to get kernel access and to avoid detection.
  • Also go after your local cryptocoin wallets.
  • Employ a ‘fileless’ password stealing tool to crack passwords and spread on your network.
  • Use the ETERNALBLUE exploit to spread.
  • Kill off numerous security products or stop them loading at all.
  • Get rid of rival cryptomining software and other programs of their choice.
  • Rewrite your firewall rules to keep rival crooks out.
  • Hide malware downloads inside innocent-looking images to complicate detection.

In other words, if you measure the threat of MyKings only in terms of the financial cost of its most obvious side effect, namely the electricity it steals to mine Monero…

…you’re missing a lot of vital lessons.

If the MyKings gang suddenly decided to give up on the cryptomining, for example, the flexible way in which their malware can reconfigure itself via the internet means that they could switch over to almost any other sort of malware-based cybercrime they liked.

Or they could just sell on their whole botnet, complete with auto-upgrade functionality, and who knows where your cybersecurity might go next?

Get the report

The MyKings report was produced by experienced and well-respected researcher Gabor Szappanos, also known as Szapi, whom many of our readers will already know from his previous research papers.

Szapi doesn’t just give you a detailed and informative review of how real-world multi-component malware operates and evolves.

He also tells the story in a way that helps you plan your defences well, and reminds you not to judge a malware book by its cover.

Read the paper now!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Axkezd1BhWc/

Email blackmail brouhaha tears UKIP apart as High Court refuses computer seizure attempt

The UK Independence Party (UKIP) has suffered a data breach after allegedly having 143 party email accounts accessed amid demands made by blackmailers, the High Court in London has been told (PDF).

UKIP is suing former party leader Richard Braine, former general secretary Tony Sharp and one-time party returning officer Jeff Armstrong, and, in Mr Justice Warby’s words, “a former member who has IT skills” called Mark Dent.

Although the lawsuit is ongoing, an interim judgment from the Queen’s Bench Division of the High Court reveals claims of illicit email access and blackmail. It also reveals the chaos tearing apart the party that put Nigel Farage onto the political map.

Amid “internal political strife” in mid-October this year, Armstrong was accused by party comrades of trying to block a group of candidates, the so-called “Batten Brigade”, from standing in internal elections to UKIP’s National Executive Committee.

From a website whose address was mentioned in the judgment, it appears one of those candidates was former party leader Gerard Batten himself. He publicly defended Youtube star and party candidate Carl Benjamin, aka Sargon of Akkad, when the latter used Twitter to say he “wouldn’t even rape” a female Labour MP. Batten stood down from the leadership in June.

As senior UKIPpers argued over the NEC elections, party discipline began to break down. The NEC voted to suspend Armstrong. Party leader Braine then suspended the entire NEC – which was subsequently unsuspended by NEC member and chairwoman Kirstan Herriot, who in turn declared that Braine himself was now suspended. Party functionaries argued over who should access what IT systems at party HQ. Various people called the police to claim crimes had been committed. Email accounts were suspended and accessed.

While all this was going on, a mysterious message was sent. At least four people received this email from the address reply[at]munge.cockington.com on the night of 16 and 17 October:

Subject: You’re [sic] ukip emails

On Wednesday we legally got all your ukip emails for years, ones from or to you or which you sent from outside of ukip to any one with a ukip email. If any one says we do not have them or did not get them legally they are lying, that is why we removed the Party Secretary. After two days our B.B. team will be reviewing the emails for evidence. Then the useful parts can find their way any where, even your neighbours, we know where you are. Think how much you will lose. We give you a chance. By Midnight on Friday 18, you must resign from ukip and all your positions you claim in ukip, sending the resignation to both [email protected] and [email protected], who do not have any connection but can verify for us. Then we won’t do any thing. Once you betrayed the Party Leaders you don’t deserve pity but we give you’re [sic] choice. B.B.

Describing this as a “blackmail threat”, High Court judge Mr Justice Warby observed: “The demand for prompt resignation addressed to several senior UKIP figures would appear to be unwarranted, and the threat to disclose email correspondence appears on the face of things to be plainly illegitimate.”

The threat was not carried out. Herriot was made aware of it by Neil Hamilton, who forwarded it to her from his private email address while adding “it may be a spoof”.

UKIP immediately asked the High Court for an injunction to stop Braine, Sharp, Armstrong and Dent from revealing anything they may have obtained from 143 named UKIP email addresses – and, in early November, added a request for an order forcing Dent’s computer to be “seized and searched”. It was claimed that an audit of UKIP’s servers identified that Dent had something to do with the email data breach mentioned by the blackmailer.

Dent had, on Braine’s instructions, visited party HQ to lock Herriot out of her [email protected] account, open up the party’s Mailchimp account to somebody evidently favourable to Braine’s faction and to “do a Microsoft Office 365 Evidence scan of the chairman’s account and other UKIP.org account to gain evidence, for use later.”

UKIP argued that because Braine had been suspended, he had no power to send Dent to party HQ and therefore Dent’s access to its systems was unauthorised.

Unfortunately for the party, its contracted IT expert, Zain Ul-Haq, gave it a report which concluded: “I have insufficient information to determine whether data was exfiltrated during the security event.”

He did, however, recommend a forensic examination of a computer used to access its mail server.

Dent, noted Mr Justice Warby, “denies responsibility for, or even knowledge of, the blackmailing email.”

Dismissing both UKIP’s application to seize Dent’s computer and to impose a non-disclosure injunction on him, Braine, Armstrong and Sharp, Mr Justice Warby ruled: “UKIP has not pleaded as a fact, because it lacks an evidential basis to assert, that Mr Dent acquired any of the allegedly confidential information. It follows that he cannot be accused of passing that information to the other named defendants, or to Persons Unknown, and the Particulars of Claim contain no such averment.”

He added: “The prospects of UKIP establishing at a trial that any of the defendants to this claim obtained, and then threatened to disclose, confidential information derived from UKIP’s email database are slender in the extreme, or worse.”

The judge also ruled that Ul-Haq’s report “contains no indication that a download [from UKIP systems] occurred or might have occurred.” The full judgment is available from the judiciary website. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/19/ukip_email_blackmail_system_access_kerfuffle/

How a Password-Free World Could Have Prevented the Biggest Breaches of 2019

If history has taught us anything, it’s that hackers can (and will) compromise passwords. Innovation in authentication technology is poised to change that in the coming year.

When it comes to the frequency of cybersecurity incidents, we may be heading into uncharted territory: So far, 2019 is on track to be the “worst year on record,” according to the most recent research from Risk Based Security. There have been more than 5,180 breaches within the first nine months of this year (up from 3,886 within the same time period in 2018), with nearly 8 billion records lost (up from about 3.8 billion within the same time period last year).

Not surprisingly, three of five breaches involve the compromise of password credentials, which provide a constant entry point for cybercriminals. According to research from HSB, personal data — addresses, phone numbers, credit card account information — remains a prime target, with 39% of consumers surveyed reporting that they’ve fallen victim to these attacks.

Five incidents from the past year made for some of the biggest headlines in cybersecurity news. With each one involving the exposure of password credentials and/or personal data, we see that no company or individual is immune — not the CEO of a social media giant or an entertainment industry icon or a beloved donut shop brand:

• Disney+: In November, less than a week after the world debut of Disney+, thousands of hijacked accounts were offered for sale in cybercriminal marketplaces. Password reuse (account owners using passwords they use for other services) led to the bulk of the compromises. Via credential stuffing, hackers take a set of usernames/passwords that were leaked in prior breaches, then apply them to another service to try and gain access. The practice has grown to epidemic proportions, as Akamai recorded more than 61 billion credential-stuffing attempts from January 2018 through June 2019.

• AT&T: A Torrance, Calif., man filed a lawsuit in October against AT&T, accusing the carrier’s employees of helping hackers rob him of $1.8 million worth of cryptocurrency. The incident was linked to a “SIM-swapping” scheme, in which attackers take control of a victim’s phone number by convincing a target’s carrier to switch their subscriber identity module (SIM) to a SIM card in a device under the attackers’ control. In some cases, they pay phone company employees to make the switches. With this, they gain access to an abundance of personal data and applications associated with the victim’s phone.

• Twitter CEO Jack Dorsey: An anonymous hacker took over Dorsey’s Twitter account briefly in August to tweet bomb threats and racist posts. This breach was also linked to SIM swapping. In many SIM-swapping cases, hackers will get the phone numbers by calling a customer help line for a phone carrier while pretending to be the victim. Once hackers have control of a phone number, they will often call a service like Twitter and ask for a temporary login code, commonly sent to their device via text.

• Microsoft Office 365: Barracuda Networks reported in May that hackers compromised the Office 365 accounts of three of 10 organizations in March alone, and then sent 1.5 million malicious and spam emails. Again, many of the incidents were linked to the stealing of login credentials from the databases of prior breaches, which were published in criminal forums — that is, credential stuffing.

• Dunkin’ Donuts: The company announced in February — for the second time in three months — that hackers used credential stuffing to access customer “DD Perks” awards accounts, selling the accounts on Dark Web forums.

These five incidents illustrate the emergence of credential stuffing and SIM swapping as two increasingly formidable attack methods. They also demonstrate that our continued reliance on passwords leaves us more vulnerable than ever. For as long as they have existed, cybercriminals have successfully used stolen passwords to “unlock the door” of an enterprise network or personal account and do pretty much whatever they want once inside. With new innovations in authentication technologies, there really is no need for traditional login processes anymore. [Editor’s note: Trusona is one of several vendors that markets passwordless authentication technology.] In fact, here’s how a password-free world would prevent both of these two common attack methods entirely:

Credential stuffing: This is essentially a volume play, as cyber crooks grab a big bag of stolen or leaked usernames/passwords and throw them at numerous websites and applications to see which ones stick. Obviously, without passwords, there would be no credential stuffing. Enterprises would benefit from strongly considering the use of websites, online services, and mobile apps for their users/employees that do not have a login field, embracing alternatives such as biometrics, QR codes, and certificate-based authentication.
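Short of removing passwords entirely, one widely used interim mitigation is to refuse passwords that already appear in breach corpora, which blunts credential stuffing at enrollment and password change. The sketch below queries the public Have I Been Pwned “Pwned Passwords” range API using its k-anonymity scheme, so only the first five characters of the SHA-1 hash ever leave your system; it illustrates the defensive idea and is not a feature of any particular vendor’s product.

```python
# Check whether a candidate password appears in known breach data,
# using the Have I Been Pwned range API (k-anonymity: only a 5-char hash prefix is sent).
import hashlib
import requests

def password_is_breached(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Response lines look like "SUFFIX:COUNT"; a matching suffix means the password has leaked.
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

# Example: reject previously leaked passwords at enrollment or password change.
if password_is_breached("Password123"):
    print("Choose a different password: this one appears in breach data")
```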

SIM swapping: After gaining access to a victim’s phone number, hackers will request a one-time passcode via SMS for logins associated with that number. Because the SMS travels through the telephone company network, it goes to the device that controls the stolen phone number.

But the use of push notification authentication eliminates this scenario, by sending a push notification directly to a user to alert them that someone is trying to gain authentication to their device. The user then approves or denies access. Push notifications do not travel via the telephone network; they go through the device operating system network (such as iOS or Android). Thus, the notification appears on the user’s device, and not the hacker’s.
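As an illustration of that delivery path, the sketch below sends an approve/deny prompt to a registered device token through Firebase Cloud Messaging using the firebase-admin SDK. The token, project credentials, and message fields are placeholders, and real push-based authentication products layer signed challenges and server-side response verification on top of a simple notification like this.

```python
# Sketch: delivering a login-approval prompt over the OS push channel (FCM),
# not over SMS, so a SIM swap does not redirect it. Placeholder identifiers throughout.
import firebase_admin
from firebase_admin import credentials, messaging

# Assumes a service-account key file for your Firebase project (placeholder path).
firebase_admin.initialize_app(credentials.Certificate("service-account.json"))

def send_login_approval(device_token: str, attempt_location: str) -> str:
    """Push an approve/deny notification to the user's enrolled device."""
    message = messaging.Message(
        token=device_token,  # bound to the device OS, not to the phone number
        notification=messaging.Notification(
            title="New sign-in attempt",
            body=f"Approve or deny the sign-in from {attempt_location}",
        ),
        data={"action": "login_approval", "challenge_id": "placeholder-123"},
    )
    return messaging.send(message)  # returns the FCM message ID

# send_login_approval("enrolled-device-token", "Phoenix, AZ")
```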

It’s time we recognize that today’s era of breaches requires different strategies to thwart them. If past is prologue, then hackers can (and will) continue to compromise passwords. When we cut the cord and get rid of them for good, we will finally take the next, pivotal steps toward a truly protected environment for the enterprise and the individual. This sounds like a pretty good New Year’s resolution to make in 2020.


Ori Eisen has spent the last two decades fighting online crime and holds more than two dozen cybersecurity patents. Prior to founding Trusona, he established online financial institution and e-commerce fraud prevention and detection solution 41st Parameter, acquired by …

Article source: https://www.darkreading.com/endpoint/how-a-password-free-world-could-have-prevented-the-biggest-breaches-of-2019/a/d-id/1336629?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Cloud External Key Manager Now in Beta

Cloud EKM is designed to separate data at rest from encryption keys stored in a third-party management system.

Google this week announced the beta availability of its Google Cloud External Key Manager (Cloud EKM), a new tool intended to create separation between data and encryption keys.

Cloud EKM, which debuted at Google Cloud Next UK last month, lets users protect data at rest in BigQuery and Compute Engine with encryption keys stored and managed in a third-party key management system outside Google’s infrastructure. Google Cloud calls the Cloud EKM service a “bridge” between its cloud Key Management Service (KMS) and third-party key managers.

This approach gives users stricter control over the creation, location, and distribution of keys, Google explains, as well as full control over who accesses them. Because the keys are stored outside Google Cloud, users can gate access to their data by requiring use of the external key. It also lets users employ a single key manager for both on-premises and cloud-based keys.
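The underlying pattern is envelope encryption: data is encrypted locally with a data encryption key (DEK), and only the DEK is wrapped by the externally held key, so the cloud provider never stores a usable key alongside the data. The sketch below is a generic, hypothetical illustration of that pattern using the cryptography library; `wrap_with_external_kms` stands in for whatever call your external key manager exposes and is not a real Cloud EKM or vendor API.

```python
# Generic envelope-encryption sketch: the data key (DEK) is wrapped by a key
# that lives in an external key manager. 'wrap_with_external_kms' is a placeholder,
# not an actual Google Cloud EKM or vendor API call.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_with_external_kms(dek: bytes) -> bytes:
    """Placeholder for the external key manager's wrap/encrypt operation."""
    raise NotImplementedError("call your external key manager here")

def encrypt_record(plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)   # fresh data encryption key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, None)
    wrapped_dek = wrap_with_external_kms(dek)   # only the wrapped DEK is stored
    return {"ciphertext": ciphertext, "nonce": nonce, "wrapped_dek": wrapped_dek}
```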

To facilitate the implementation process, Google Cloud is teaming up with key management vendors Equinix, Fortanix, Ionic, Thales, and Unbound. Both the Ionic and Fortanix integrations are ready now; those for Equinix, Thales, and Unbound are coming soon, officials say in a blog post.




Article source: https://www.darkreading.com/cloud/google-cloud-external-key-manager-now-in-beta/d/d-id/1336669?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple