STE WILLIAMS

Brit airport pulls flight info system offline after attack by ‘online crims’

Bristol Airport deliberately yanked its flight screens offline for two days over the weekend in response to a cyberattack.

Techies took down computer-based flight information systems at the airport in provincial England between Friday morning and the wee hours of Sunday morning.

The electronic screens were replaced by whiteboards and extra staff were drafted in to handle the resulting confusion, partly alleviated by an increase in announcements over the speaker system. Flights remained unaffected throughout but passengers were advised to check in earlier than normal to accommodate any delays.

By Sunday, Bristol Airport had restored flight information screens in the arrival and departure areas while techies worked to bring back site-wide access.

Flight info screens were reportedly taken offline in order to contain a “ransomware-style” attack. Rather than paying crooks to restore data, the airport rebuilt affected systems before service was restored. Officials described the hack as opportunistic rather than targeted, and in a statement the airport stressed that key systems were not affected.

Part of Bristol Airport’s administrative systems were subjected to an on-line criminal attempt. A number of processes, including the application providing data for flight information screens in the terminal were taken off line purely as a precautionary measure, while the problem was contained and to avoid any further impact. Established contingency plans were implemented to keep passengers informed about flight information. Flight operations remained unaffected.

Bristol Airport always remains vigilant against all types of hostile on-line activity. As with every event of any type we will monitor and keep under review how to avoid it re-occurring. However, it is important to recognise that security measures already in place ensured minimum disruption to passenger journeys.

Bristol is the UK’s ninth largest airport*, handling more than 8.23 million passengers a year. The airport specialises in budget carriers (Ryanair, easyJet) and charter flights, which between them run direct flights to scores of destinations in 34 countries.

Ransomware has been a problem for both businesses and consumers, particularly over the last three or four years. Several hospitals and municipal authorities have fallen victim to attacks over that time. The transportation sector has not been immune: both Odessa airport and the Kiev metro in Ukraine were blighted by the BadRabbit ransomware last October. ®

* To be fair, Bristol, as the ninth busiest airport in the UK, deals with a little more than 8 million passengers (PDF, Civil Aviation Authority 2017 figures) a year as opposed to Heathrow’s 75 million.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/17/bristol_airport_cyber_attack/

Check out this link! It’s not like it’ll crash your iPhone or anything (Hint: Of course it will)

Apple iPhones, iPads, and Mac computers that stray onto websites with malicious CSS code, while using Safari, can crash or fall over – due to a flaw in the web browser.

The WebKit rendering engine vulnerability can be triggered by just a few lines of code in a cascading style sheet (CSS). On iOS devices, at least, it all starts to go wrong when the browser tries to parse a processor-intensive CSS feature called backdrop-filter on nested page elements.
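The actual proof-of-concept lives on Haddouche’s GitHub; purely as a hypothetical sketch of the shape of page being described (the element count and styling below are illustrative, not taken from his code), a generator of deeply nested elements each carrying the expensive `backdrop-filter` property might look like this:

```python
# Hypothetical sketch only: build an HTML page of deeply nested <div>s,
# each styled with CSS backdrop-filter. Deep nesting of this expensive
# effect is the pattern reported to exhaust WebKit's resources.
DEPTH = 3000  # illustrative; the real PoC reportedly nests thousands deep

def build_page(depth: int) -> str:
    style = "<style>div { backdrop-filter: blur(2px); }</style>"
    body = "<div>" * depth + "crash?" + "</div>" * depth
    return f"<html><head>{style}</head><body>{body}</body></html>"

page = build_page(DEPTH)
print(len(page))  # a few tens of KB of markup is enough to wedge a vulnerable build
```

Needless to say, don’t open the result in Safari on a device you care about.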

The so-called Safari Reaper attack – developed by Berlin-based security researcher Sabri Haddouche and uploaded to GitHub this week – effectively crashes iOS devices, from iPads to iPhones running iOS 7 to 12, and even Apple smartwatches. The CSS causes the rendering engine to exhaust the system’s resources, and force the gadgets to reboot to recover.

Macs can be similarly frozen by the same exploit, forcing them to restart, so don’t try this at home.

Other browsers that make use of WebKit may also be vulnerable, though this has not yet been tested. On systems that don’t crash, the HTML renders a picture of a “triggered” Thomas the Tank Engine.

Haddouche, who works for secure messaging outfit Wire, told El Reg that the same trick crashes tabs on IE and Edge. The researcher came across the vulnerability while researching browser-crashing attacks more generally last week.

He suggested restricting nested elements and expensive CSS calls to defend against the attack.
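A sanitisation layer that accepts user-supplied HTML could, in principle, enforce the first of those suggestions by rejecting fragments that nest too deeply. A toy sketch using Python’s standard-library parser (the depth limit is an invented illustration, not a figure from Haddouche):

```python
from html.parser import HTMLParser

class DepthChecker(HTMLParser):
    """Track the maximum element nesting depth seen in an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.max_depth = 0

    def handle_starttag(self, tag, attrs):
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)

    def handle_endtag(self, tag):
        self.depth = max(0, self.depth - 1)

def too_deep(html: str, limit: int = 32) -> bool:
    """Reject fragments whose nesting exceeds `limit` levels."""
    checker = DepthChecker()
    checker.feed(html)
    return checker.max_depth > limit

print(too_deep("<div>" * 100 + "</div>" * 100))  # True: 100 levels deep
print(too_deep("<p>hi</p>"))                     # False: ordinary markup
```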

The method is reminiscent of the “evil text” iPhone bug of 2015, except in this case we’re talking about CSS. Neither vulnerability can push malware onto crashed devices, so it’s more of a nuisance than anything else, at least by itself – it’s a way to crash your pal’s iThing or Mac by tricking them into opening a dodgy page in Safari. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/17/safari_reaper/

The 7 Habits of Highly Effective Security Teams

Security requires smart people, processes, and technology. Too often, the “people” portion of the PPT equation is neglected.

Worldwide spending on digital transformation technologies is expected to reach nearly $1.3 trillion this year, according to a forecast from IDC. But securing against today’s threats requires more than just technology solutions — it demands a strong security team.

What constitutes a strong security team? If you’ve had a malware infection or some other security breach, you might think yours missed the mark. However, the analysis of a team goes beyond a single event.

Based on the experience of working with hundreds of security professionals and some of the most security-conscious organizations in the world, here are seven habits of the most effective security teams.

1. They Invest in Intelligence, Not Security Silver Bullets
Security technologies are a means to an end. Despite heavy investment, companies often find out about security incidents months after they happen and then scramble to close the hole after data has been exfiltrated.

What’s worse is that post-breach analysis typically uncovers warning signs that were overlooked. The best security teams use technology to become more proactive in making risk management decisions. They use technology to combine data from across the enterprise, so analysts can make more intelligent decisions.

2. They Understand What Needs Protecting
Attackers have an end goal in mind when aiming at a company. Successful teams adopt an attacker mindset to understand how each and every device, server, and piece of technology relates to this end goal — and how each puts their organizations at risk if compromised.

Attackers spend an inordinate amount of time studying their targets and infrastructure, looking for weaknesses and reassessing the environment every step of the way. Understanding these patterns is critical to protect against the attacks. You can’t protect everything all the time — prioritizing assets that are most critical to the organization and the likely avenues of attacks on those is a sign of a great team.

3. They Recognize That Alerts Don’t Tell the Whole Story
The most effective security teams almost never “respond to security alerts.” Instead, they use them as another data point in the risk assessment that defines their priorities.

Chasing after every alert provides a direct line of failure for security teams, creating chaos and work without improving enterprise security. The best security teams consider alert severity in context, with factors such as what’s being targeted and the likelihood of impact to the organization caused by the activity. Effective security teams prioritize the incidents that could cause the most harm.
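The context-weighting described above can be made concrete with a toy risk score. The fields and weights here are invented for illustration; real teams would tune them to their own asset inventory:

```python
# Toy alert triage: rank alerts by severity weighted by asset criticality
# and estimated likelihood of impact, rather than by raw severity alone.
alerts = [
    {"name": "port scan",       "severity": 3, "asset_criticality": 1, "likelihood": 0.9},
    {"name": "ransomware exec", "severity": 9, "asset_criticality": 5, "likelihood": 0.4},
    {"name": "odd admin login", "severity": 5, "asset_criticality": 5, "likelihood": 0.7},
]

def risk(alert):
    # simple multiplicative score: any low factor drags the whole score down
    return alert["severity"] * alert["asset_criticality"] * alert["likelihood"]

for a in sorted(alerts, key=risk, reverse=True):
    print(f'{a["name"]}: {risk(a):.1f}')
```

Note how the noisy port scan, despite being a constant presence, ranks last once asset criticality is factored in.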

4. They Understand No Amount of AI Replaces Human Intuition
Replacing security teams with artificial intelligence and machine learning may be one of the most overhyped — and dangerous — trends in our industry.

Human decision-making is indispensable in creating and enforcing strong enterprise security because human insight compensates for the intrinsic limitations of mathematical models. Technology investment should focus on supporting the security teams and automating cumbersome tasks such as forensic investigations that require a high degree of process-oriented expertise. The best teams democratize this capability and empower humans to make important risk management decisions.

5. They Learn from Yesterday to Protect Against Tomorrow
The best teams learn from past attacks to better protect themselves in the future. Although attackers will improve their malware and tools, their strategies remain largely the same. The most mature security teams don’t just look for malware — they look for behaviors that are anomalous and don’t belong.

6. They View Security as a Team Sport
According to Cybersecurity Ventures, there will be a global shortfall of 3.5 million cybersecurity jobs by 2021. Security teams need to create the next generation of professionals. The most successful teams do this by creating processes that guarantee repeatable results. The best teams have repeatable playbooks that can be used by anyone on the team — and have mechanisms for preserving, sharing, and applying institutional knowledge through their technology stack.

7. They Continually Sharpen the Saw
The best teams continually improve the security apparatus by testing for vulnerabilities and documenting the knowledge they generate about their organization. This information is fed to the security teams so they can identify and secure the vulnerabilities in the network infrastructure. This culture of collective responsibility keeps the entire team focused on the broader goal.

The job of defending the enterprise is continually evolving. It can be tempting to think that buying the latest security technology is the best and only pathway to a secure organization. However, even companies that have spent hundreds of millions of dollars on security investments get breached.

Security requires a combination of investment in people, processes, and technology. Too often, the “people” portion of the PPT equation is neglected.

Related Content:

 

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Gary Golomb has nearly two decades of experience in threat analysis and has led investigations and containment efforts in a number of notable cases. With this experience — and a track record of researching and teaching state-of-the art detection and response … View Full Bio

Article source: https://www.darkreading.com/endpoint/the-7-habits-of-highly-effective-security-teams/a/d-id/1332809?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ransomware Takes Down Airport’s Flight Information Screens

The attack left airport staff to post flight times and gates on whiteboards at Bristol Airport in Britain.

The screens showing airline passengers information on flight schedules and boarding gates were dark for two days at one British airport this month due to a ransomware attack.

Officials at Bristol Airport said they did not pay a ransom, and they were able to restore service to the screens following the outage.

The airport information screens first went dark on Friday, Sept. 14, and airport personnel rapidly resorted to a combination of increased PA announcements and status updates hand-written on whiteboards to provide flight information. Airport officials said that no flight operations were affected and there were minimal disruptions in the terminals as passengers heeded requests to arrive early and pack patience in their carry-on kit.

“A number of processes, including the application providing data for flight information screens in the terminal were taken off line purely as a precautionary measure, while the problem was contained and to avoid any further impact,” the airport said in a statement.  

For more, read here and here

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/ransomware-takes-down-airports-flight-information-screens-/d/d-id/1332826?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Yahoo Class-Action Suits Set for Settlement

Altaba tells SEC it will incur $47 million to settle consumer litigation for massive Yahoo data breaches.

The consumer class-action litigation cases against Yahoo in the wake of its epic data breaches are expected to be settled for $47 million.

Altaba, the holding company that retains what’s left of the company in the wake of Verizon’s purchase of Yahoo, this week said in a filing with the US Securities and Exchange Commission that it has reached an agreement in principle to settle the lawsuits.

“We have also received final court approval of the securities class action settlement, and we have negotiated an agreement to settle the shareholder derivative litigation (subject to court approval). We estimate that the Company will incur an incremental net $47 million in litigation settlement expenses to resolve all three cases,” Altaba said in its filing. “Together, these developments mark a significant milestone in cleaning up our contingent liabilities related to the Yahoo data breach.”

Read more here and here.

 


Article source: https://www.darkreading.com/endpoint/yahoo-class-action-suits-set-for-settlement/d/d-id/1332828?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook’s robot coders step into the future of programming

In one of those landmark moments that will doubtless pass most of us by, but ought to have coders sitting up and taking notice, Facebook’s Android app recently became one of the first in the world to run software debugged by Artificial Intelligence (AI).

Called SapFix, the tool is described by the company as an “AI hybrid tool” that works in conjunction with Sapienz, an automated Android testing tool originally developed by university researchers but taken in-house by Facebook some time ago.

Sapienz finds the bugs in the code that might cause something like a crash or perhaps even a simple security vulnerability – and this is the new bit – SapFix fixes them. Beams Facebook:

To our knowledge, this marks the first time that a machine-generated fix – with automated end-to-end testing and repair – has been deployed into a codebase of Facebook’s scale.

How does AI do this?

From Facebook’s description, the workflow begins by trying to revert the code back to the state it was in before the bug that caused the problem was introduced.

If it’s a more complex issue, SapFix looks at a collection of “templated fixes” built up from those made by human developers over time.

If even this won’t work, SapFix sets about what Facebook calls a “mutation-based fix” whereby it starts making small code modifications to the problem statement until it thinks the bug has been mitigated.

Finally, it creates several versions of the fix to see whether each solves the issue by running them through the separate Sapienz testing tool. Then, and only then, the system sends its solutions to a human being for review.
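Facebook hasn’t published SapFix’s internals, but the escalation described above (revert, then templated fixes, then mutation, then automated validation and human review) can be sketched loosely. Every function below is a stand-in, not Facebook’s actual code:

```python
# Hypothetical sketch of the fix-escalation pipeline described in the article.
def try_revert(bug):             # 1. try to undo the offending change
    return None                  # pretend no clean revert exists

def try_templates(bug):          # 2. apply fixes learned from human developers
    return None                  # pretend no template matches

def try_mutation(bug):           # 3. make small code mutations until tests pass
    return f"patched:{bug}"      # pretend a mutation works

def passes_tests(candidate):     # Sapienz-style automated validation
    return candidate is not None

def propose_fix(bug):
    for strategy in (try_revert, try_templates, try_mutation):
        candidate = strategy(bug)
        if candidate and passes_tests(candidate):
            return candidate     # only now is it sent to a human reviewer
    return None

print(propose_fix("null-deref in feed"))
```

The key design point survives even at this toy scale: cheaper, more conservative strategies are exhausted before riskier generative ones, and nothing ships without validation.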

So far, SapFix is only in the proof-of-concept phase, which is why no fixes are implemented without a human making that decision. But it seems to work:

Since we started testing SapFix in August, the tool has successfully generated patches that have been accepted by human reviewers and pushed to production.

This sounds more like automated problem-solving for humans than true AI, which would be autonomous – presumably why Facebook describes it as a “hybrid” of both worlds.

The question is how far the AI decision making could be pushed. Logically, the next step would be to allow whatever SapFix turns into to make bigger decisions – the first fateful step on the road to the machines-programming-machines of science fiction paranoia.

This won’t happen soon because it might change the nature of the human accountability that is still important in software management. And if programmers aren’t doing simple grunt work like this, will they stop understanding the software they are designing?

It’s a future that might see programmers becoming the people who simply build the AI systems that do the real work. Or perhaps even those will be built by AI too.

But let’s not get carried away. The company hopes to offer SapFix to other developers on an open source basis, which could give the underlying tech a big bump. For now, this is not the beginning of Skynet, just a faster way to get Facebook’s Android app out the door.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nqeA0CUBdbs/

On the hook! Phishing trip nets “Barbara” 5 years and whopping fine

A Nigerian man is facing the prospect of up to five years in the decidedly unprincely confines of a US jail after pleading guilty to operating an email phishing scam targeting businesses around the world. To add a little spice to the mix, the fraudster also set up romance scams as an attractive young woman named “Barbara.”

In Manhattan Federal Court on Tuesday, Onyekachi Emmanuel Opara, 30, originally from Lagos, Nigeria, was also ordered to pay $2.5m in restitution. In April, he pled guilty to charges of wire fraud and conspiracy to commit wire fraud amounting to $25m.

Opara was arrested in South Africa in 2016 and extradited to the US to face charges in January 2018. One of his co-conspirators, David Chukweneke Adindu, pleaded guilty to charges of conspiracy to commit wire fraud and conspiracy to commit identity theft. Adindu was sentenced to 41 months last year.

The Department of Justice (DOJ) said that between 2014 and 2016, the pair participated in multiple business email compromise (BEC) scams that targeted thousands of victims around the world, including in the US, the UK, Australia, Switzerland, Sweden, New Zealand and Singapore.

The spear-phishers would send bogus emails to employees, directing them to transfer funds to bank accounts that they controlled. The emails were made to look like they came from supervisors at the targeted companies or from third-party vendors that they did business with.

To make the emails that bit more convincing, the crooks set up domain names similar to those of the companies and vendors they were posing as: just one of the more nefarious purposes for which typosquatters register domains that at a quick glance look like a legitimate business, save for one stray keystroke.
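Defensively, mail gateways often flag sender domains that sit a keystroke or two away from domains you actually do business with. A minimal sketch using edit distance (the trusted list and threshold are invented for illustration):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = ["example-vendor.com", "mycompany.com"]

def looks_spoofed(domain: str) -> bool:
    # within two keystrokes of a trusted domain, but not the domain itself
    return any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED)

print(looks_spoofed("examp1e-vendor.com"))  # True: a '1' swapped for an 'l'
print(looks_spoofed("mycompany.com"))       # False: exact match, not a spoof
```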

Besides screwing with the domain names, Opara and Adindu would sometimes spoof the email metadata to make it look like the messages came from legitimate email addresses.

This is a good example of why we should be cautious with email, even if it looks like it’s coming from a friend or colleague. Besides spoofed email addresses, there’s always the chance that somebody’s hijacked the account of your trusted correspondent, as happened to Sophos’s Peter Mackenzie: you can read Peter’s tale of his solicitor’s email account being taken over, and of the malicious file the attacker set up to grab account credentials, here.

After the victims transferred money to the scammers’ bank account as directed in the bogus emails, the crooks quickly withdrew it or transferred it into other bank accounts they also controlled. They went after more than $25 million from victims around the world.

BEC scams are, indeed, worth big bucks: between 2013 and 2015, losses reported to the FBI’s Internet Crime Complaint Center (IC3) totaled $1.2 billion.

According to the indictment (PDF), one of the victimized companies was a New York investment firm. Posing as an investment adviser at another company, the duo instructed an employee at the New York firm to wire $25,200 into a fake account that they said was an “annuity fund.” They then asked for another $75,100 transfer, but by then the jig was up: the employee had already figured out that he’d been scammed.

They also went after a metal forging business in Illinois, sending an email pretending to be from the CEO and instructing a staffer to wire over $85,250.50. The next day they requested another $325,500.50, but the business didn’t repeat its mistake.

When he wasn’t busy ripping off businesses with Adindu and other, unnamed gang members, Opara cooked up accounts on dating websites and struck up romance scams with US individuals. Posing as the young, hot “Barbara,” he’d instruct his marks to send money overseas and/or to receive money from BEC scams and forward the proceeds to his cronies, who were also located overseas.

One of his victims sent over $600,000 of their own money to bank accounts controlled by the crooks. Opara went after another 14 individuals, at least, on dating websites, setting them up to receive funds from his BEC scams into their bank accounts and to then transfer the proceeds to overseas bank accounts.

We have to sympathize with the lovelorn who fall for these cruddy come-ons. If you’ve got friends or family stuck in these spider-web fantasies, please do try to convince them that online, all too often, people aren’t who they claim to be.

If your company has been hit by BEC, the first thing on your mind might be your job, or your shareholders, or your employees. But after you get over the shock and triage the damage, please do make sure to report it to the authorities. An important part of battling these kinds of scams is making sure that law enforcement knows about them.

To that end, in the US, you can report the scam by filing a complaint with the IC3. In the UK, such instances should be reported to Action Fraud.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/1fOj-JWDJSo/

Deepfake pics and videos set off Facebook’s fake news detector

Facebook will begin officially checking videos and photos for authenticity as part of an expanding effort to stamp out fake news, the company said last week.

Facebook has already responded to the fake news epidemic by checking articles that people post to its social media service for authenticity. To do this, it works with a range of third-party fact checking companies to review and rate content accuracy.

A picture’s worth a thousand words, though, and it was going to have to tackle fake news images eventually. In a post to its newsroom site on Thursday, it said:

To date, most of our fact-checking partners have focused on reviewing articles. However, we have also been actively working to build new technology and partnerships so that we can tackle other forms of misinformation. Today, we’re expanding fact-checking for photos and videos to all of our 27 partners in 17 countries around the world (and are regularly on-boarding new fact-checking partners). This will help us identify and take action against more types of misinformation, faster.

Facebook, which has been rolling out photo- and video-based fact checking since March, said that there are three main types of fake visual news. The first is fabrication, where someone forges an image with Photoshop or produces a deepfake video. One example is a photo from September 2017, which depicted a Seattle Seahawks player burning a US flag. The image, of a post-game celebration, had been doctored to insert the flag.

The next category is images that are taken out of context. For example, in 2013, a popular photograph on Facebook purportedly showed Raoni Metuktire, chief of the Brazilian Kayapó tribe, in tears after the government announced a license to build a hydroelectric dam. In fact, he was sobbing because he had been reunited with a family member.

The third category superimposes false text or audio claims over photographs. In Facebook’s example, a fake news outlet called ‘BBC News Hub’ superimposed unsubstantiated comments on a photo of Indian Prime Minister Narendra Modi, claiming that he has been lining his own pockets with public resources and is “The 7th Most Corrupted Prime Minister in the World 2018” (sic).

Facebook’s image checking system uses machine learning algorithms to consume various data points around an image. These can include feedback from Facebook users. Flagged images go to the specialist fact checkers, who then use tools such as reverse image searching and image metadata. The latter can tell them when and where the photo or video was taken. They will also use their own research chops to verify the image against other information from academics and government agencies.
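Reverse image search, one of the fact-checkers’ tools mentioned above, rests on perceptual fingerprints that survive re-encoding and small edits. As a toy illustration (this is a generic average-hash, not Facebook’s actual system), images reduced to a small grid of grayscale values hash to nearly identical bit strings when they are near-duplicates:

```python
# Toy average-hash: one bit per cell, set if the cell is brighter than the
# image mean. Near-duplicate images yield hashes with a small Hamming distance.
def average_hash(pixels):  # pixels: 2-D list of grayscale values
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

original  = [[10, 200], [220, 15]]
tweaked   = [[12, 198], [225, 15]]   # slightly re-encoded copy
unrelated = [[200, 10], [15, 220]]

print(hamming(average_hash(original), average_hash(tweaked)))    # 0: a match
print(hamming(average_hash(original), average_hash(unrelated)))  # 4: different
```

Real systems use far larger grids and more robust transforms, but the principle is the same: match structure, not bytes.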

The system also uses optical character recognition (OCR) to ‘read’ text from photos and compare it to text in other headlines. The company is also testing new techniques to detect if a photo or video has been tampered with, it said.

Facebook’s announcement came just one day after CEO Mark Zuckerberg posted a lengthy update outlining the company’s progress on stopping election tampering, and how the company has been working to stamp out fake accounts and misinformation. He said:

When a post is flagged as potentially false or is going viral, we pass it to independent fact-checkers to review. All of the fact-checkers we use are certified by the non-partisan International Fact-Checking Network. Posts that are rated as false are demoted and lose on average 80% of their future views.

Facebook may have the best of intentions, but it has tangled with photo analysis in the past and failed. In September 2016, it was forced to back down after censoring a famous historical image of a nine-year-old naked Vietnamese girl running away from a napalm attack. In that case, the company initially vetoed the photograph for violating its community standards.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/s-zYqavBaJQ/

Equifax IT staff had to rerun hackers’ database queries to work out what was nicked – audit

Equifax was so unsure how much data had been stolen during its 2017 mega-hack that its IT staff spent weeks rerunning the hackers’ database queries on a test system to find out.

That’s just one intriguing info-nugget from the US Government Accountability Office’s (GAO) report, Actions Taken by Equifax and Federal Agencies in Response to the 2017 Breach, dated August but publicly released this month.

During that attack, hackers broke into the credit check agency’s systems, getting sight of the personal data of roughly 150 million people in America, plus 15 million Brits and others.

Computer security breaches are rarely examined in this much detail. However, several departments of the US government are Equifax customers, which meant the Feds wanted the GAO to convince them it’s not going to happen again.

The cyber-break-in happened on May 13 when criminals started exploiting a vulnerability in the Apache Struts 2 framework running on Equifax’s online portal. The company didn’t clock it until July 29. However, the report confirmed that failing to patch this flaw earlier was not the only screw-up.

Ironically, the security breach was only picked up when someone updated an expired certificate on a piece of kit that was supposed to be monitoring outbound encrypted traffic, and immediately noticed something was wrong. With that device effectively switched off for 10 months due to the expired certificate, “during that period, the attacker was able to run commands and remove stolen data over an encrypted connection without detection,” noted the auditors.

Had that been operational, history might have been different. As the auditors put it:

Specifically, while Equifax had installed a device to inspect network traffic for evidence of malicious activity, a misconfiguration allowed encrypted traffic to pass through the network without being inspected.

According to Equifax officials, the misconfiguration was due to an expired digital certificate. The certificate had expired about 10 months before the breach occurred, meaning that encrypted traffic was not being inspected throughout that period. As a result, during that period, the attacker was able to run commands and remove stolen data over an encrypted connection without detection.

Equifax officials stated that, after the misconfiguration was corrected by updating the expired digital certificate and the inspection of network traffic had restarted, the administrator recognized signs of an intrusion, such as system commands being executed in ways that were not part of normal operations. Equifax then blocked several Internet addresses from which the requests were being executed to try to stop the attack.

We’ll call that the “holy crap” moment but there were other failings, including a lack of segmentation, a technique that could have isolated the databases from one another, or at least triggered an alarm when the intruders tried to move sideways through the network.
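The certificate lesson, at least, is cheap to automate: alert well before inspection devices silently go blind. A minimal sketch that parses the OpenSSL-style `notAfter` string (the format Python’s `ssl.getpeercert()` reports) and computes days remaining; the 30-day threshold is an invented example:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Parse an OpenSSL-style notAfter string, e.g. 'Jun  1 12:00:00 2025 GMT',
    and return the number of days until the certificate expires (negative if
    it already has)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# In production the string would come from ssl.getpeercert()["notAfter"]
# for each monitoring device; here, a fixed example against a fixed date:
ref = datetime(2018, 9, 17, tzinfo=timezone.utc)
remaining = days_until_expiry("Jan 31 23:59:59 2018 GMT", now=ref)
if remaining < 30:
    print(f"ALERT: certificate expired or expiring ({remaining} days)")
```

Ten months of blindness, as in Equifax’s case, would have generated roughly three hundred of these alerts.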


It was a similar story for data governance – jargon for making it harder for an attacker to access certain fields within the databases. Even simple query rate limiting might have helped, “specifically, the lack of restrictions on the frequency of database queries allowed the attackers to execute approximately 9,000 such queries – many more than would be needed for normal operations.”
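Even the rate limiting the auditors mention is cheap to prototype: a sliding-window counter would have made 9,000 queries stand out immediately. A sketch with invented window and threshold values:

```python
from collections import deque

class QueryRateMonitor:
    """Flag a client that exceeds `limit` queries within `window` seconds."""
    def __init__(self, limit=100, window=60.0):
        self.limit, self.window = limit, window
        self.times = deque()

    def record(self, timestamp):
        """Record one query; return True if the rate limit is exceeded."""
        self.times.append(timestamp)
        # drop queries that have aged out of the sliding window
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.limit

mon = QueryRateMonitor(limit=100, window=60.0)
# simulate a scripted exfiltration: 200 queries in two seconds
alerts = sum(mon.record(t * 0.01) for t in range(200))
print(f"{alerts} queries tripped the limit")  # the last 100 of the 200
```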

Equifax did get lucky on one score: had the attackers erased some of the logs, reconstructing what they’d been up to during all those weeks of easy access may have been much harder. Even getting that far required Equifax’s IT team to rerun as much of the attack as they could, using a test copy of the database against which the thousands of known queries were run.

While this is an excellent idea straight out of “we’ve been breached 101,” it was still a time-consuming way to have zero per cent fun. It might at least start to explain why Equifax took until September 7 to reveal the network breach despite knowing about it for weeks.

The GAO makes no recommendations on future security, which is not its remit. What’s striking from its report, however, is how small individual errors and oversights in a company with plenty of resources can lead to the data of nearly 150 million people ending up in the hands of bad people. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/17/gao_report_equifax_mega_breach/

Tick-tock, tick-tock. Oh, that’s just the sound of compromised logins waiting to ruin your day

Comment It has never been easier to conduct a cyber attack. There now exists a range of off-the-shelf tools and services that do all the heavy lifting – you just need to pick an approach and tool you like best.

There’s ransomware-as-a-service with its “here’s one I made earlier” code, search engines that show connected interfaces with known vulnerabilities, and downloadable and easy-to-use scanning tools for the discerning script kiddie.

Heck, why bother with tools that need time and effort to find vulnerable systems? Why not just steal credentials and log in via the front door?

Using Troy Hunt’s site Have I Been Pwned?, you can check your user ID against almost 5.4 billion sets of credentials that have been stolen over the last few years. 5.4 billion. One would hope that the majority don’t work any more due to breached sites shutting down, people deleting their logins for those sites, and many more changing their passwords. But of 5.4 billion sets of credentials, plenty will still be valid.
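Hunt’s companion Pwned Passwords service exposes a k-anonymity range API: you send only the first five hex characters of a password’s SHA-1 digest and match the remaining suffix locally, so the password itself never leaves your machine. The hashing half is a few lines; the network call is left as a comment, pointing at the documented endpoint:

```python
import hashlib

def hibp_prefix_suffix(password):
    """Split a password's uppercase SHA-1 hex digest for the range API:
    the 5-char prefix is sent to the service, the suffix stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)  # "5BAA6" - only this leaves your machine

# A real check would then fetch:
#   https://api.pwnedpasswords.com/range/<prefix>
# and look for `suffix` among the returned "SUFFIX:COUNT" lines; a match
# means the password appears in the breach corpus that many times.
```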

Alongside Hunt’s project is the flood of credentials that continue to be stolen. One security vendor reckoned eight million per day in 2016. Even with a pinch of “they would say that, wouldn’t they” salt, that’s rather a lot.

Why does it take so long to detect threats?

You can’t detect something you don’t see. Imagine, for example, that one of your staff falls for a phishing campaign. How would you know?

With good training, many of them will tell you. They’ll realise they gave away their credentials and will call the security or IT team, who will help them change their passwords to avoid compromise. Unless, of course, dishing up the credentials kicked off an automated attack and the exploit has happened already.

Could the threat have been detected? These days, yes, as phishing is a by-numbers game that employs suspicious domains which are easy to spot. That said, there will still be instances where phishing emails do succeed and you’re left on clean-up duty.
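One of the easiest of those suspicious-domain checks to automate is spotting lookalike sender domains – a domain that is close to, but not exactly, one you trust. A rough sketch using plain edit distance (the trusted list and the distance threshold are illustrative assumptions, not anyone’s production rule):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def suspicious_sender(domain: str, trusted: list, max_dist: int = 2) -> bool:
    """Flag a domain that is *near* a trusted one but not identical --
    the classic paypa1.com trick. Exact matches pass untouched."""
    return any(0 < edit_distance(domain.lower(), t) <= max_dist
               for t in trusted)
```

Run against inbound mail headers, this kind of rule catches a surprising share of commodity phishing before a human ever sees it.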


But the industry is turning its attention to “behaviour”. This is where AI-based monitoring tools come in – systems that watch the network, user PCs and servers to see what people and applications are doing, looking out for abnormal behaviour.

Now, it is possible to identify abnormal behaviour without such tools. It’s straightforward to log the sources of your cloud service logins and run scripts that will smell a virtual rat. In many cases this is an out-of-the-box service that’s turned on by default. Yes, AI tools are much cooler and more effective, but you can do the basics with free features and simple scripts.
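As a taste of how little script is actually needed: keep a per-user baseline of login sources (IPs, subnets or GeoIP country codes – whatever your logs give you) and shout when a new one appears. A hypothetical sketch; field names and event shape are assumptions, not any particular product’s log format:

```python
from collections import defaultdict


def flag_new_sources(events):
    """events: iterable of (user, source) pairs in time order, where
    'source' might be an IP, a /24, or a GeoIP country code.
    Yields events whose source hasn't been seen for that user before
    (the user's very first login only seeds the baseline)."""
    seen = defaultdict(set)
    for user, source in events:
        if seen[user] and source not in seen[user]:
            yield (user, source)
        seen[user].add(source)
```

A flagged event isn’t proof of compromise – people travel – but it is exactly the sort of virtual rat worth a follow-up question.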

This approach can and does fail, however, when – as often happens – people either don’t turn on logging or they do turn it on but don’t monitor the results.

R3alC0mplexP4ssword44

So all kinds of systems are open to attack, because they share the same weak links: the means of authentication – the password – and the presence of humans.

Let’s say the user gave away their password, it wasn’t detected but you were lucky: it simply got squirrelled away in a database rather than deployed in an automatic attack. The user has changed the password.

What have they changed their password to? Something completely different, I hope. In many cases, the new password isn’t a million miles away from the old: if they had R3alC0mplexP4ssword43, the new one is likely to be R3alC0mplexP4ssword44.

You’ll have configured the system not to let them use something too short, or something that’s not complex, or something they’ve used before: checking to see if they’ve used something sufficiently different from their old ones is harder.

And that’s because systems are secure. They don’t store passwords in plain text – they hash them first. Which means it’s non-trivial to check new passwords for similarities with old ones. All you can do is take a set of variations of the new one, hash them, and compare them with the database – which is far from exhaustive, and attackers will always be able to try more alternatives at their leisure than your system can in the few seconds you have in a password change function.
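That variation-hashing idea can be sketched in a few lines: derive a small set of plausible “predecessors” from the candidate password (trailing counter plus or minus one, stripped digits, flipped case), run each through the same salted KDF the system stores, and compare. PBKDF2 stands in here for whatever slow hash a real system uses; the variation rules are illustrative guesses, and a real attacker’s dictionary would be far larger – which is exactly the article’s point:

```python
import hashlib
import re


def kdf(password: str, salt: bytes) -> bytes:
    # Stand-in for the system's stored salted, slow hash.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


def variations(candidate: str):
    """A deliberately tiny set of guesses at what the *old* password
    might have been, derived from the new one."""
    yield candidate
    m = re.search(r"^(.*?)(\d+)$", candidate)
    if m:
        stem, num = m.group(1), int(m.group(2))
        yield f"{stem}{num - 1}"   # counter decremented
        yield f"{stem}{num + 1}"   # counter incremented
        yield stem                 # digits stripped entirely
    if candidate:
        yield candidate[0].swapcase() + candidate[1:]  # first letter flipped


def too_similar(candidate: str, old_hash: bytes, salt: bytes) -> bool:
    """True if any cheap variation of the new password hashes to the old one."""
    return any(kdf(v, salt) == old_hash for v in variations(candidate))
```

Even this toy version catches the R3alC0mplexP4ssword43-to-44 shuffle; the asymmetry is that the attacker can afford millions of variations offline while you get a handful per password change.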

The point is that compromised credentials have longevity. Even if you changed them instantly, they could be used months or years later with some cunning heuristic algorithm to help guess the passwords that succeeded them. A ticking time bomb, as it were, except you can’t hear the ticking.

Are we configuring systems correctly?

Have a cloud-based email system without multi-factor authentication? You and thousands of others, yet there’s no way you can configure a single-factor authentication mechanism securely. Make passwords as complex as you like, force changes as often as you like, but someone will eventually give up their credentials and the hackers are in.
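The second factor most cloud email systems offer is a standard TOTP code – the six digits your authenticator app shows. Under the hood it’s nothing exotic: HMAC-SHA1 over a counter derived from the clock, per RFC 4226/6238. A minimal stdlib sketch (real deployments add secret provisioning, clock-drift windows and rate limiting):

```python
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 the counter, dynamically truncate."""
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP over the current 30-second time window."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step))
```

A stolen password alone is useless against this: the six digits change every 30 seconds and derive from a secret that never crosses the login form.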

Do we know how to configure our systems properly anyway? Not so long ago I port-scanned a client’s LAN and found a SAN controller. Google told me the default “admin” and “root” passwords. The “admin” password didn’t work (they’d clearly changed it) but the “root” one let me straight in. Why? The client didn’t know the “root” login existed. True, I had to be in their office to get on their LAN, but that’s not always the case. And they had MAC address whitelisting, but I just configured my Mac’s LAN card to pretend to be the meeting room PC.

How bad is this situation? Search engines such as Shodan.io will serve up screen after screen of vulnerable systems with default credentials set. So it’s pretty bad.

Are some systems more susceptible than others?

Top of the list: anything in the cloud, especially email. By default the average cloud-based mail application is more accessible than something hidden in a corporate LAN behind a NAT firewall.

Next is anything web-facing by necessity: if you have to make it open to the world, the world can probe it for weaknesses. Most attacks in this context probe software vulnerabilities rather than use compromised credentials, but there are so many of them that it has to be mentioned.


Then anything “old”. This kind of attack might not come from compromised credentials but, again, from vulnerabilities, as systems go unpatched through oversight or fall off their vendor’s support roadmap by dint of their age.

Finally, anything on a network. Even if something isn’t directly vulnerable, it may be possible to reach it via something else in your infrastructure and then hop off over the LAN using trust relationships or – yes – compromised credentials from internal application or database logins.

What can we do?

First we can make our passwords as complex as we can, and change them regularly. That’s a pain for users, but we find compromises that work. Most importantly we need to stop thinking that user IDs and passwords are enough. Multi-factor authentication is absolutely mandatory for anything that can be accessed from the internet, and since our people are getting so used to it – and because it’s so simple to put tools such as face recognition or fingerprint scanners on our devices – why not use it internally too?

Next, provide rigorous, regular training to help our users stop falling for scams. I’ve done face-to-face awareness training programmes whose outcome has demonstrably been to double user reports of suspicious activity and halve the number of accidental breaches.

Finally, stop making stupid mistakes. If your connected device has any default credentials, or has any services running that you don’t need, you need a kicking. And a good slap if you’re not updating the software and firmware, but that’s a separate issue. We need to train our techies properly: if they don’t know what a system is running – like that forgotten “root” login – they can’t secure it properly.

Rethinking passwords

If we can’t prevent passwords being stolen and systems compromised, we have two options. One is to search for ways to prevent passwords from becoming such a weak link.

The best way to do this, ironically, is to share information about our security with others in the same situation. Remember I talked about tools that flag activity that isn’t “normal”? The best way to teach AI what constitutes “normal” would be to pour data into the right machine models. In this case, the sources for this data should be companies and organisations like us. Not only would our contribution help others, but their contributions would help to alert us when someone tries to re-use or abuse our users’ credentials.

The other option is greater monitoring: rules to spot the signs of a breach – a user logging in from new and exotic locations, say – backed by automatic alerts, plus smarter tooling that can learn to differentiate more subtle and hidden forms of rogue behaviour.
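One such rule worth writing down is “impossible travel”: two logins by the same user whose locations imply a ground speed no airliner could manage. A hedged sketch – the 900 km/h threshold and the event shape are illustrative assumptions:

```python
from math import radians, sin, cos, asin, sqrt


def km_between(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))


def impossible_travel(events, max_kmh=900):
    """events: one user's list of (timestamp_seconds, (lat, lon)) pairs
    in time order. Yields consecutive login pairs whose implied speed
    beats a passenger jet -- strong evidence of a shared credential."""
    for prev, cur in zip(events, events[1:]):
        hours = max((cur[0] - prev[0]) / 3600, 1e-6)  # avoid divide-by-zero
        if km_between(prev[1], cur[1]) / hours > max_kmh:
            yield prev, cur
```

A hit here is more than an anomaly score: it names the exact account whose credentials are being used from two places at once.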

Monitoring means you don’t just find the bad guys but also identify whose IDs have been nicked.

Compromised credentials are an existential risk that manifests itself as a practical threat. We can protect ourselves because the tools are getting better – we just need to recognise that that ticking sound inside our IT systems is a wake-up call. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/17/compromised_credentials/