STE WILLIAMS

Vodafone hounds Czech customers for bills after they were brute-forced with Voda-issued PINs

Two crooks scammed Vodafone customers in the Czech Republic out of $26,000 thanks to weak telco-issued PIN codes.

Vodafone preset its customers’ online passwords as numerical codes of four to six digits. A pair of chancers with no technical skills launched a brute-force attack that reportedly involved trying random phone numbers with the passcode 1234 to crack accounts registered with the mobile network’s customer portal.
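The arithmetic behind the attack is stark: a four-digit numeric PIN has only 10,000 possible values, and spraying one common default across many phone numbers sidesteps any per-account lockout entirely. A rough sketch (the attempt rate is an assumption for illustration, not a reported figure):

```python
# Keyspace of telco-issued numeric PINs: tiny by modern standards.
pin_keyspace_4 = 10 ** 4      # 10,000 possible 4-digit PINs
pin_keyspace_6 = 10 ** 6      # 1,000,000 possible 6-digit PINs

# Exhausting one account's 4-digit keyspace at a modest, unthrottled
# web-portal rate (assumed figure):
attempts_per_second = 10
seconds_to_exhaust = pin_keyspace_4 / attempts_per_second  # 1,000 s, under 17 minutes

# The reported attack was even simpler: try ONE default PIN ("1234")
# against many phone numbers, so no single account ever accumulates
# enough failures to trip a lockout.
def spray(accounts: dict, default_pin: str = "1234") -> list:
    """Return the numbers whose PIN is still the issued default."""
    return [number for number, pin in accounts.items() if pin == default_pin]
```

Even generous per-account throttling does nothing against this spraying pattern, which is why the defences discussed later in the piece (lockouts, rate limits, stronger credentials) have to work together.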

It gets worse. The fraudsters were able to obtain new SIM cards simply by knowing a phone number and the PIN code linked to that account. No photo ID or even email confirmation was apparently required. The duplicate SIMs were ordered through an online service and picked up in person.

At this point the duo funded online betting accounts through premium-rate SMS messages sent through a payment gateway and debited against a compromised mobile account. Money was then withdrawn from these betting accounts, leaving Voda customers with a nasty debt and the fraudsters laughing all the way to the bank.

The attack wasn’t terribly sophisticated so perhaps it isn’t surprising that the crooks were soon pinched.

The 60 affected customers’ bills were padded with fraudulent transactions. Rather than writing these off, Vodafone is aggressively chasing payment, even resorting to debt collectors.

The telco reportedly claimed that its clients are liable for the fraudulent transactions because they had weak passwords – ignoring that these easily guessed codes were issued as a result of security shortcomings in its own system.

These codes may have been handed out as temporary credentials, but Vodafone didn’t let customers know that these details needed to be changed. In some cases users apparently didn’t even know they had a web account.

El Reg learnt of the whole sorry business from Prague-based software developer Michal Špaček, who made a series of Twitter posts about the matter.

“Vodafone says your password is your responsibility and points to [its] ToS [terms of service],” he wrote. “[The] bad guys were even able to get new SIM cards because they knew a phone number and a password. No additional checks.”

A local newspaper’s report of the scam can be found here (in Czech).

Petr Bužo and Nikola Horváthová from the Czech city of Teplice were jailed for three and two years respectively over the scam.

The compromised accounts were all reportedly set up before 2012. For the last six years customers have selected their own six-digit passwords when setting up an account at mobile phone shops. Security experts like Špaček are not impressed with the new system’s robustness either.

Some years ago, a friend of Špaček, Michal Illich, was assigned the code “1234” when setting up an account; it arrived printed out in an envelope as if it had been machine-generated.

Through the Voda web portal, the defendants reportedly had access to victims’ date of birth, residence, bank details and call records. Thankfully they never abused this information to mount ID theft scams or follow-up phishing attacks.

El Reg invited Vodafone in the Czech Republic to comment on the case and criticism of its security policies. Vodafone said:

We were sorry to hear that some of our customers fell victim to targeted fraudulent activity by criminals. We make it very clear to all our customers that they need strong, unique passwords in order to protect themselves from this kind of criminal behaviour. We have been working with law enforcement to ensure that those responsible were brought to justice and compensate our customers.

Authentication expert Per Thorsheim told El Reg that Vodafone’s Czech Republic arm had made a litany of security gaffes. One basic measure, using email addresses as usernames instead of phone numbers, would have been enough to frustrate the simple brute-force attack the fraudsters used.

“If Vodafone used email as username *and* said passwords [should be a minimum length of 8 characters], ‘password’ would probably get you access, but you would need a long list of valid user email addresses. Definitely a harder attack to do.

“The crazy part of this story is that Vodafone has a shitty authentication setup, a good probability they have set ‘1234’ for users themselves, and then they blame their customers for bad security and getting hacked.”

Any rate-limiting, account lockout, geofencing or time-based controls on logins would have improved security without inconveniencing legitimate users, Thorsheim further noted.
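A minimal per-account lockout of the kind Thorsheim describes can be sketched in a few lines; the thresholds and lockout window below are illustrative choices, not anything Vodafone actually deploys:

```python
import time

class LoginRateLimiter:
    """Lock an account after max_failures failed logins for lockout_seconds."""

    def __init__(self, max_failures: int = 5, lockout_seconds: int = 900):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}       # account -> failure count
        self.locked_until = {}   # account -> unlock timestamp

    def allow_attempt(self, account: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        return self.locked_until.get(account, 0) <= now

    def record_failure(self, account: str, now: float = None) -> None:
        now = time.time() if now is None else now
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.max_failures:
            # Lock the account and reset the counter.
            self.locked_until[account] = now + self.lockout_seconds
            self.failures.pop(account, None)
```

Note that per-account lockouts alone don't stop the spraying attack described earlier, which spreads failures thinly across many accounts; they would need to be paired with per-source rate limits and anomaly detection.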

The Oslo-based security expert concluded: “The Information Commissioner’s Office in the Czech Republic should look into this, based on what might be bad protection of personal data, and ask for risk analysis and DPIA [data protection impact assessments].” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/07/vodafone_czech_republic_fraud/

Feel the shame: Email-scammed staffers aren’t telling bosses about it

The number of UK companies on the receiving end of business scams involving email has risen by well over half – 58 per cent – in the last year, new data from Lloyds Bank has revealed.

Stats from the bank showed the average loss from so-called “business email compromise” (BEC) frauds has reached £27,000.

IT workers are among the most susceptible to falling victim, along with those working in legal firms, HR and finance.

So-called “tech savvy” – the survey’s words, not ours – millennials face the highest risk of being targeted, with more than one in 10 falling victim or knowing someone who’d been a victim.

The most popular impersonation tactic by such con artists is to try to persuade staff inside an organisation to change bank account details by posing as a supplier, or by spoofing emails that appear to be from a contact or manager to achieve the same.
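One cheap, partial defence against the impersonation pattern described above is to flag messages whose Reply-To domain differs from the From domain. A hedged sketch (the helper below is illustrative; a real control would sit alongside SPF, DKIM and DMARC verification):

```python
from email import message_from_string

def _domain(addr: str):
    # Crude parsing of "Name <user@host>" or "user@host", for illustration only.
    return addr.rsplit("@", 1)[-1].strip(">").lower() if "@" in addr else None

def reply_to_mismatch(raw_message: str) -> bool:
    """True when the From and Reply-To headers carry different domains."""
    msg = message_from_string(raw_message)
    from_dom = _domain(msg.get("From", ""))
    reply_dom = _domain(msg.get("Reply-To", ""))
    return from_dom is not None and reply_dom is not None and from_dom != reply_dom
```

A mismatch is only a signal, not proof of fraud (newsletters and ticketing systems set Reply-To legitimately), so in practice this would feed a scoring system rather than block mail outright.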

Chillingly, Lloyds – which polled 1,500 SME employees – found that more than a third are not even sure how to identify these fraudulent emails.

Half of those asked said they’d encountered scammers who had posed as their boss, with about the same number experiencing the same ruse for a supplier.

Shame, shame…

Shame is making things worse: a quarter of victims of impersonation fraud were apparently so ashamed they decided to hide their mistake from their team for fear of being fired.

Clearly a dodgy move as a fraud that goes unreported can grow, with fraudsters able to continue to access systems and data or make rogue payments using the compromised details.

The survey is part of the government, National Crime Agency, Ofcom and private sector’s Get Safe Online campaign. The report said up to 8 per cent of SMEs – nearly half a million businesses – might have fallen victim to BEC fraud at some point.

No reason was given for the surge in attacks on SMEs, but with hundreds of business transactions and interactions happening every day, and with SMEs typically understaffed and under-resourced, it would be difficult to verify every email; individuals are left to make decisions based on assumptions. It could also be argued that BEC is difficult to solve because it is a form of social engineering.

Email, meanwhile, remains weakly authenticated and easy to game – even when companies move to cloud-hosted services such as Office 365 and G Suite. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/07/scam_business_emails_on_the_rise/

Revealed: British Airways was in talks with IBM on outsourcing security just before hack

Exclusive Just weeks before being hacked in late August, British Airways’ parent IAG was planning to outsource its cybersecurity to IBM, admitting it needed a “group-wide strategic and proactive approach” to counter threats.

The memo in full

Subject: Group IT Cyber Security Update

From: John Hamilton

Sent: 01 August 2018 13:56

All,

Organisations across the world are facing a significant rise in more sophisticated and persistent cyber threat activity, and increasing regulatory requirements.

Group IT has been looking at a group solution to strengthen our capability to continue to protect IAG (International Airlines Group) and its operating companies (OpCos). Internal and external reports undertaken highlight that further investment is required in cyber security across IAG to provide a group-wide strategic and proactive approach.

We have therefore outlined proposals to set up a Cyber Security Office and transfer the services of Cyber Security to a third-party partner, IBM, as a managed service to cover all cyber security services required to support IAG and its OpCos. Security Operations services will remain in Service Operations, Tower 3, in Service and Infrastructure.

This proposal has been approved by the British Airways Management Committee (MC) for the start of a collective consultation process with BA colleagues and their representatives. We will of course, listen to and evaluate any alternative proposals put forward and are committed to consulting with affected colleagues within the applicable local and legal frameworks.

We recognise and appreciate this proposal will mean a period of uncertainty and concern for colleagues working in the Cyber Security function. Should you have any questions or concerns, please speak to your line manager.

Regards,

John Hamilton and Laurie Diffey

John Hamilton | Group IT Service Effectiveness Manager

WTS

The Register has learned, from a leaked internal memo, that BA was consulting its staffers about the move. According to the missive, the airline expected to transfer the majority of its cybersecurity functions to IBM, with the exception of its security operations services, which would remain in-house.

BA’s management committee approved the outsourcing scheme prior to putting it out to consultation with workers and unions at the beginning of August. “We recognise and appreciate this proposal will mean a period of uncertainty and concern for colleagues working in the Cyber Security function,” John Hamilton, group IT service effectiveness manager, wrote in the memo.

An infosec expert with experience in the aviation industry told El Reg: “You don’t outsource something that is working well.” The airline may have proposed outsourcing either because it was “struggling to get enough high-quality staff or because the board wanted to cut costs,” we were told.

BA has a reputation for cost-cutting at the moment, he added.

In any case, British Airways, at the start of August, felt it needed outside help to secure its computer systems.

The security breach

Fast-forward five weeks to Thursday, 6 September, and BA was obliged to open an investigation into the theft of customer information from its website and mobile app servers by hackers, as we reported yesterday.

The personal and financial info of travellers booking flights or other services through British Airways was potentially in the hands of cyber-crooks for 15 days between August 21 and September 5. Around 380,000 credit and debit cards were potentially compromised as a result of the intrusion, making it one of the biggest single payment card security blunders in UK history. Compromised info includes card numbers and CVV codes, BA added on Friday.

Neither travel nor passport details are thought to have been exposed by the hack, which has been reported to both the police and UK data protection regulators.

Our aviation-experienced infosec source also offered some informed speculation on how the cyber-break-in unfolded.

“This will probably come down to either not having an update tested before it goes live, cost-cutting resulting in the site not being tested as often as it should have been or lower quality support (aka not patching the servers),” he said.

“Given the specific time window of stolen data, I suspect a third party web server component compromise. It would be hard for the security team to spot the change in the user’s web experience especially if they have limited influence in the organisation as developers and web admins will not follow security processes,” he added.

Our expert concluded: “Given the rumours of outsourcing security, the team are probably not as effective as they could be (or they are swamped) with other problems.”

What happened this week?

How exactly the crooks broke into BA’s network remains unclear, publicly at least. BA chief exec Alex Cruz appeared on BBC Radio 4’s flagship Today programme on Friday morning to say that the airline’s partners alerted it to the intrusion on Wednesday night.

“There was a very sophisticated and malicious criminal attack on our website,” Cruz said. “We have a network of partners that are monitoring continuously what happens to websites across the world. We got a signal from one of those partners.”

Asked to clarify, Cruz said it was BA’s own systems that alerted it to problems rather than those of an external security researcher, bank or financial service provider. Cruz sidestepped several questions on how the criminals broke in.

You can listen to the interview here (start at the 1:50 mark).


BA is offering to reimburse customers for any financial losses attributable to the security breach. Most will likely be covered by credit card indemnification against fraud anyway. The bigger problem, for the airline, is what financial sanctions it might receive from data privacy watchdogs at the ICO under the tougher regime introduced when the EU’s General Data Protection Regulation was brought into effect in May this year.

El Reg asked BA to comment on the rationale for its planned outsourcing and how many jobs will be involved. We also flagged up sections of BA’s internal memo that warned that the outsourcing “proposal will mean a period of uncertainty and concern for colleagues working in the cyber security function.”

We’re yet to hear back but we’ll update this story as and when we hear more. El Reg also contacted IBM, so far without success, and industry experts with a request to comment on the security breach and the proposed outsourcing plan.

A spokeswoman for the UK’s information privacy watchdog, the ICO, told us: “British Airways has made us aware of an incident and we are making enquiries.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/07/ba_security_outsurcing_consultation_memo/

The Role of Incident Response in ICS Security Compliance

The data-driven nature of IR can provide many of the reporting requirements governing industrial control system safety, finance, consumer privacy, and notifications.

Regulatory compliance in industrial environments poses unique challenges not found in traditional IT settings. A leading source of this complexity stems from the pre-Internet, largely proprietary nature of industrial control system (ICS) networks, specifically their lack of open computing standards, which are taken for granted in IT networks. These closed ICS networks are extremely hard to update, and even harder to maintain in compliance with state, federal, and industry regulations.

In addition, most ICS networks lack built-in security components, notably automated asset management, proactive security monitoring, and real-time threat analysis and prevention. Plus, most of the applicable regulations and guidelines apply specifically to verticals such as healthcare and energy, and cover ICS only either indirectly or at a very high level. Consequently, the responsibility for security and incident response (IR) falls primarily on those who implement and utilize ICS, namely operational technology personnel, not the security team. 

5 Core Elements of ICS Compliance
Although specific regulations and standards vary, there are five key elements to consider when developing an ICS compliance program:

Asset management: Identifying and classifying ICS assets and the data they contain.

Identity and access management: Using role-based access control (RBAC) and authentication, authorization, and accounting (AAA) to manage ICS assets.

Risk assessments, vulnerability management, and change management: All of these functions involve identifying risks and vulnerabilities, and patching ICS assets, which can be challenging because different vendors provide varying levels of support and maintenance. 

Security controls: Isolating the ICS network from the rest of the organization’s networks. The key tool is encryption — of data at rest and in transit — to ensure the integrity of applications as well as data. Other important tools are monitoring and logging network activity.

Physical security: Mostly, this means restricting physical access to the ICS devices. Because internal security capabilities of most ICS devices are often very limited, organizations must ensure that proper external controls are in place to fill gaps.
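The identity-and-access element above lends itself to a concrete sketch; the roles and asset actions below are illustrative, not drawn from any particular standard:

```python
# Minimal role-based access control (RBAC) for ICS assets.
ROLE_PERMISSIONS = {
    "ot_engineer": {"plc:read", "plc:write", "hmi:read"},
    "auditor":     {"plc:read", "hmi:read", "log:read"},
}

def authorized(role: str, action: str) -> bool:
    """Allow an action only if the role's permission set contains it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Pairing a table like this with AAA logging gives the audit trail that the incident response section below depends on: every allowed and denied action is attributable to a role.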

ICS Compliance Frameworks
US ICS-CERT has some of the most detailed recommendations for security and compliance specific to ICS, specifically, Recommended Practice: Creating Cyber Forensics Plans for Control Systems (2008) and Recommended Practice: Developing an Industrial Control Systems Cybersecurity Incident Response Capability (2009). 

Another good source of information for all organizations is the National Cybersecurity and Communications Integration Center (NCCIC) Industrial Control Systems. It provides recommendations and best practices.

Most verticals have specific guidelines for what organizations should do in incident response. Generally, organizations should familiarize themselves with all existing frameworks, laws, and regulatory and compliance standards so they can use them to create effective plans, policies, and procedures.

Incident Response ICS Compliance
Because meeting ICS regulatory compliance requirements involves documenting processes and procedures, the data-driven nature of IR provides many of the reporting elements to comply with the strictest regulations regarding finance, safety, consumer privacy, customer notifications, and so on.

For example, the foundation of ICS compliance is built on auditing of assets. Without proper auditing, an organization is forced to assume the worst when a breach or attack occurs — that everything has been infected.

Detection, also a central element of IR, is tightly aligned with compliance. Being able to detect and respond to a breach when it occurs, instead of weeks or months later, enables organizations to limit or avoid regulatory sanctions, as well as public relations nightmares.

IR investigation and threat hunting, meanwhile, provide the audit trail for satisfying compliance mandates. If an organization suffers a breach, it must be able to determine quickly when it happened, what damage was caused, and whether it has been remediated.

Finally, IR’s ability to document workflows and findings can play a central role in complying with disclosure requirements and help meet the short deadlines for notifying all internal and external stakeholders.


Black Hat Europe returns to London, Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

John Moran is a security operations and incident response expert. He has served as a senior incident response analyst for NTT Security, computer forensic analyst for the Maine State Police Computer Crimes Unit and computer forensics task force officer for the US Department of … View Full Bio

Article source: https://www.darkreading.com/risk/the-role-of-incident-response-in-ics-security-compliance/a/d-id/1332747?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

British Airways Issues Apology for Severe Data Breach

The airline “is deeply sorry” for its worst-ever cyberattack, which has affected 380,000 customers.

It’s been a bumpy week for British Airways, which has apologized to 380,000 customers whose credit card information and other personal data was compromised in the worst cyberattack to hit the airline’s website and app in the 20-plus years it has been online.

The breach was first detected on Wednesday, Sept. 5, when British Airways learned bookings made during the two weeks prior had been affected by cybercriminals. Between Aug. 21 and Sept. 5, attackers compromised 380,000 card payments and stole customers’ names, physical and email addresses, and credit card numbers, expiration dates, and security codes.

BA chairman and chief executive Alex Cruz said the airline is “deeply sorry” for the attack, which he described as “very sophisticated” and “malicious,” Reuters reports. Cruz did not describe how attackers gained access to the data, but said the carrier’s encryption was not broken.

Read more details here.

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/risk/british-airways-issues-apology-for-severe-data-breach/d/d-id/1332759?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Dark web sites could be exposed by routine slip-up

Operators of sites on the dark web might not be as anonymous as they think. A simple misconfiguration could expose their server’s IP address, warned a security researcher this week.

Security researcher Yonathan Klijnsma explained that a simple slip could enable anyone online to map the internet locations of dark web sites using Tor‘s onion service protocol to cloak themselves. His company has already built a searchable database that maps many hidden services to their IP addresses, according to Bleeping Computer.

On the public web, people identify websites by domain names (like nakedsecurity.sophos.com) that are easy to read and remember. The internet’s Domain Name System (DNS) – effectively a directory for websites – maps these human-readable domain names to the IP addresses that computers use to communicate.

Information about IP addresses is public, and knowing a website’s IP address can unlock lots of information about a website associated with it. It can be used to find the online hosting company that hosts a website, and it provides a target for attack, both of which might be useful if you want to unmask a site operator trying to stay anonymous.

Dark web sites are hidden services, computer services that are only accessible via the anonymous Tor network where their public IP address information is cloaked. This enables website owners to publish information without anyone knowing who they are.

Misconfigured servers

Anonymity relies on the hidden service owner configuring their web server properly, and it is here that Klijnsma discovered what turned out to be a common mistake. The problem is that a website operating as a hidden service is still at heart a web server with an IP address.

Misconfiguring the server can reveal that address.

A hidden service should be configured to only listen for connections via its local IP address (127.0.0.1), known as localhost, where it talks to the Tor daemon. In turn, the Tor daemon binds to the computer’s external IP address and ensures that the website is accessible via the anonymising Tor network.
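A correct setup binds the web server only to localhost and lets the Tor daemon do the public-facing work. The paths and ports below are illustrative:

```
# /etc/tor/torrc: the Tor daemon forwards onion traffic to a local-only port
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080

# nginx site config: bind ONLY to localhost. A bare "listen 8080;" would
# accept connections on every interface, including the public one, and is
# precisely the kind of misconfiguration that leaks the server's identity.
server {
    listen 127.0.0.1:8080;
    server_name _;
}
```

The one-line difference between `listen 127.0.0.1:8080;` and `listen 8080;` is the whole game: with the latter, anyone scanning the server's public IP talks directly to the hidden service's web server, bypassing Tor entirely.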

However, some hidden service operators misconfigure their web servers to listen for connections on external hostnames or IP addresses, which can cause the IP information Tor tries to hide to leak out.

What Klijnsma found was a leak via a very common web server asset: a digital certificate.

Most web servers use SSL certificates when communicating with visitors. These serve two purposes. Firstly, they encrypt traffic so that snoopers can’t intercept and read it. Secondly, they enable the website to prove its identity to the visiting web browser. Imagine an SSL certificate as a notarised envelope from a trusted third party with your name and (web) address on it. If you give it to someone, then they know it’s from you, and that the message inside it is legit.

Many hidden Tor services use SSL certificates, and those certificates list the sites’ .onion addresses in their Common Name fields. This means that any hidden service misconfigured to listen for communication from the internet will send that certificate, along with its anonymous dark web .onion address, to visitors from the public internet.
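This is essentially what a scanner like Klijnsma’s does at scale: connect to public IPv4 addresses, collect certificates, and pull `.onion` names out of the subject fields. A toy version of the extraction step (the sample subject line and onion hostname below are invented):

```python
import re

# Example subject line as printed by `openssl x509 -noout -subject`;
# the .onion hostname here is made up for illustration.
subject = "subject=CN = abcdefghij234567.onion"

def onion_names(cert_text: str) -> list:
    """Find v2 (16-char) or v3 (56-char) base32 .onion hostnames in cert text."""
    return re.findall(r"\b[a-z2-7]{16,56}\.onion\b", cert_text)
```

Joining each hit against the IP address the certificate was served from yields exactly the onion-to-IP mapping the article describes.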

That gives visitors two pieces of data: a dark web .onion address and the IP address it’s trying to cloak.

That’s enough information to approach a hosting company and find the site operator’s name and address or to get the site taken down. A malicious actor could also target the IP address with a denial of service attack (DoS), or attempt a hack.

This isn’t the first time that dark websites have given themselves away with misconfigured servers. A feature in the Apache web server that provides detailed information about itself to a localhost query can also give up valuable information about a hidden service – including public IP addresses.

Security researchers and law enforcement officials alike use open source intelligence (OSINT) all the time to track down malicious parties online. IP addresses are a prime piece of data in that process. Klijnsma’s technique just gave us all a look at how it can be used on the dark web just as easily as on the public one, and also proved once again that just because you’re using Tor doesn’t necessarily mean you’re safe. There is more than one way to get busted on the dark web.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/y4fqFQcvqto/

Firefox finally casts Windows XP users adrift

If you’re one of the millions of holdouts still unwisely clinging to Windows XP, Mozilla’s Firefox browser just waved you goodbye.

Firefox’s connection to XP started with the browser’s public launch in 2004 and ended, for mainstream releases, with version 52 in March 2017, after which support for the obsolete OS continued in the Extended Support Release (ESR) channel, which staggered on until version 52.9.0.

Mozilla has used this week’s launch of Firefox 62 as the moment to cast XP adrift for good, justified by its estimate that the OS now makes up only 2% of Firefox’s user base, down from 8% in 2017.

Firefox has lasted longer than Chrome, which ended its support with version 50 in 2016, and Microsoft itself, which stopped supporting XP’s native browser, Internet Explorer 8, two years earlier (although security updates were said to be possible via server versions).

Mozilla is pleased that it soldiered on alone:

That’s millions of users we kept safe on the internet despite running a nearly-17-year-old operating system whose last patch was over 4 years ago. And now we’re wishing these users the very best of luck.… and that they please oh please upgrade so we can go on protecting them into the future.

In short, there will be no more updates, including most importantly of all, no more security updates, leaving XP users with only one place to go – Opera – which ceased development with version 36 in 2016 but said it would continue to provide security fixes.

(Anyone running Windows Vista should assume that the same browser support timelines outlined above apply to them too.)

Version 62 fixes

The new version fixes nine CVEs, including one rated ‘critical’, three rated ‘high’, and two rated ‘moderate’.

So far, little detail is available on these, but the important one – identified as CVE-2018-12376 – patches a memory corruption flaw which “with enough effort… could be exploited to run arbitrary code.”

More interesting for security will be the next release due later this month, version 63, which will turn tracking protection on by default.  Version 65 coming in January 2019 will add the same default setting to the blocking of cross-site trackers which are used to ‘follow’ users from site-to-site to build up a broader picture of their habits.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PiWzFR8uINU/

Supermicro wraps crypto-blanket around server firmware to hide it from malware injectors

Researchers claim to have discovered an exploitable flaw in the baseboard management controller (BMC) hardware used by Supermicro servers.

Security biz Eclypsium today said a weakness in the mechanism for updating a BMC’s firmware could be abused by an attacker to install and run malicious code that would be extremely difficult to remove.

A BMC is typically installed directly onto the motherboard of a server where it is able to directly control and manage the various hardware components of the server independent of the host and guest operating systems. It can also repair, alter, or reinstall the system software, and is remotely controlled over a network or dedicated channel by an administrator. It allows IT staff to manage, configure, and power cycle boxes from afar, which is handy for people looking after warehouses of machines.

Because BMCs operate at such a low level, they are also valuable targets for hackers.

In this case, Eclypsium says the firmware update code in Supermicro’s BMCs doesn’t bother to cryptographically verify whether or not the downloaded upgrade was issued by the manufacturer, leaving the controllers vulnerable to tampering. The bug could be exploited to execute code that would withstand OS-level antivirus tools and reinstalls.

To do this, an attacker already on the data center network, or otherwise able to access the controllers, would need to intercept the firmware download, meddle with it, and pass it on to the hardware that will then blindly install it. Alternatively, a miscreant able to eavesdrop on and fiddle with internet traffic feeding into an organization could tamper with the IT team’s BMC firmware downloads, which again would be accepted by the controller.

“We found that the BMC code responsible for processing and applying firmware updates does not perform cryptographic signature verification on the provided firmware image before accepting the update and committing it to non-volatile storage,” says Eclypsium.

“This effectively allows the attacker to load modified code onto the BMC.”
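The fix amounts to checking a signature before committing the image to flash. A heavily simplified sketch follows: real BMC firmware signing uses asymmetric signatures (e.g. RSA or ECDSA) verified against a vendor public key baked into the update path; an HMAC with a shared key stands in here purely so the example runs with the standard library alone.

```python
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # stand-in; real schemes use a public key

def sign_image(image: bytes) -> bytes:
    """Produce a detached 'signature' over the firmware image (simplified)."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def apply_update_vulnerable(image: bytes, signature: bytes) -> bool:
    # The flaw Eclypsium describes: commit to non-volatile storage
    # with no verification at all.
    return True

def apply_update_fixed(image: bytes, signature: bytes) -> bool:
    # Verify before committing; constant-time compare avoids timing leaks.
    return hmac.compare_digest(sign_image(image), signature)
```

The vulnerable path accepts any image an attacker substitutes in transit; the fixed path rejects anything whose signature doesn't verify.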


In addition to running malware beneath the OS level, the researchers said the flaw could be used to permanently brick the BMC or even the entire server. Worse, a potential attack wouldn’t necessarily require physical access to the server itself.

“Because IPMI communications can be performed over the BMC LAN interface, this update mechanism could also be exploited remotely if the attacker has been able to capture the admin password for the BMC,” Eclypsium warned.

“This requires access to the systems management network, which should be isolated and protected from the production network. However, the implicit trust of management networks and interfaces may generate a false sense of security, leading to otherwise-diligent administrators practicing password reuse for convenience.”

Fortunately, Eclypsium says it has already reported the bug to Supermicro, which responded by adding signature verification to the firmware update tool, effectively plugging this vulnerability. Admins are being advised to get in touch with their Supermicro security contacts to get the fix in place. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/07/supermicro_bmcs_hole/

Could you hack your bosses without hesitation, repetition or deviation? AI says: No

Comment Businesses find themselves in a world where the threat to their networks often comes not simply from a compromise of their computers, servers, or infrastructure, but from legitimate, sanctioned users.

There is nothing new about the notion of cyber-attackers seeing human beings as their biggest target. For years, real-world attacks have repeatedly exploited ordinary user and powerful admin accounts to gain a foothold in a network. Usually, they have done this by tricking humans into handing over their credentials or running malware on their work PCs, by brute-forcing accounts, by exploiting vulnerabilities, and through similar techniques.

But attackers who are already on the inside of a network, abusing their credentials for nefarious ends without anyone being the wiser, are rapidly gaining notoriety.

In principle, it’s possible to secure, patch, and lock down devices against external attack. Crucially, it is much harder to guard against internal network users going rogue. Barriers and compartments can and should be put in place to limit access and any damage done; however, these may be circumvented via vulnerabilities, determined insiders, or managers demanding special access for their staff.

The default coping mechanism is assumption-based security — or hoping for the best. In other words, if a user authenticates using legitimate credentials, then the balance of probability is that they are who they say they are and should be trusted.

A web of technologies has grown up to mitigate this issue. Privilege management and enhanced authentication technologies are prime examples, but implementing these solutions creates a world of complexity for admins attempting to manage many different systems, each designed to close one aspect of the user problem.

Bad behavior

One alternative has been to monitor users for bad actions using conventional application and network logs, though attackers found blind spots in these systems and exploited them to evade detection. The biggest weakness was the idea of defining what a user was and wasn’t allowed to do in terms of a set of static rules. The key word here is static. Some IT admins swear by their comprehensive and finely tuned static rules, proud that they detect all sorts of weird and wonderful malicious activity, and some are rather good at writing them. However, as external and internal miscreants grow more sophisticated, a more sophisticated means of detecting bad behavior is needed.

Judge for yourself: in 2014, Gartner articulated a new concept of so-called user behavior analytics (UBA), later refined into user and entity behavior analytics (UEBA). Conceptually, UEBA is the ultimate expression of the collapse of the perimeter security model. It became obvious that the perimeter could be in thousands of places at once, around everything and anything – especially users and their accounts.

In a network where nothing is inherently trustworthy, a new yardstick is needed; in UEBA, that role is filled by the idea of the anomaly.

But what is an anomaly? In a perimeter network, a user who is breaking, or attempting to break static rules, certainly looks like an anomaly, but UEBA takes a slightly more sophisticated approach. It tries to understand users, accounts, devices, and applications based on their intention – a measure that is based in turn on building up a kind of profile of acceptable or standard behavior. This profile is part of what’s known as baselining.

Baselining: how deep can you go?

Baselining is the heart of this model of security. It forms an automatically adjusting definition of what is normal, and what is not normal and therefore unwanted, in terms of internal user activity. UEBA vendors implement this principle in different ways, but the essence is to turn network security into a problem of big data analytics where the raw material is sifted using automated machine learning.

Indicators from multiple monitoring systems are aggregated into databases that translate a mountain of data into something machines can process by applying a set of algorithms. Because the data has a huge variety of characteristics, this is no mean task, but the center of gravity is always the user context – what is the user’s state and how might this affect security? Within this, there’s a spectrum of options from behavior to scenario-based analytics.

Baselining is not a new idea for security, but the addition of machine learning harnessed to big data has supercharged what seems to be possible by expanding the complexity of data input through which a baseline and any deviation from it can be understood.

Rather than imposing a set of rules or norms on networks, baselining analyses behaviour to define this normal state. This varies depending on the context. For example, the range of contacts a user will interact with through an email system and the nature of that communication will almost always be within certain limits.

Deviants detected

Baselining can be used to model this for each user or for groups of users. Once baselines are established, UEBA identifies sudden deviations from the pattern. Similarly, the applications and internal resources a user accesses will also fall within certain limits, which should mean that deviations such as the time of day or the IP address of the machine from which a resource is being accessed will stand out.
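In its simplest form, a per-user baseline of this kind can be sketched in a few lines of Python. The event log, users, and indicators below are hypothetical, and a real UEBA product would learn across many more indicators, continuously, rather than from a fixed window:

```python
from collections import defaultdict

# Hypothetical event log: (user, hour_of_day, source_ip)
HISTORY = [
    ("alice", 9, "10.0.0.5"), ("alice", 10, "10.0.0.5"),
    ("alice", 14, "10.0.0.5"), ("alice", 11, "10.0.0.6"),
]

def build_baseline(events):
    """Record each user's usual login hours and source IPs."""
    baseline = defaultdict(lambda: {"hours": set(), "ips": set()})
    for user, hour, ip in events:
        baseline[user]["hours"].add(hour)
        baseline[user]["ips"].add(ip)
    return baseline

def deviations(event, baseline):
    """Return which indicators of a new event fall outside the user's baseline."""
    user, hour, ip = event
    profile = baseline.get(user)
    if profile is None:
        return ["unknown user"]
    flags = []
    if hour not in profile["hours"]:
        flags.append("unusual hour")
    if ip not in profile["ips"]:
        flags.append("unusual source IP")
    return flags
```

A 3am login from an unfamiliar IP would trip both indicators for "alice", while her routine daytime logins produce no flags at all – which is the whole point of baselining over static rules.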

Using this approach, the job of the security admin is no longer defining what can and can’t be done but, rather, setting the point where a deviation from the baseline breaches an acceptable threshold and should turn into an alert.

You have to be careful, though. Set the threshold too low, and the chance of a false positive rises, but set it too high and an attack might be missed. In theory, UEBA baselining should make the chances of either less likely because the baseline covers a range of indicators, and not simply a single application or type of access.
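The threshold trade-off can be illustrated with a toy z-score model. The numbers below are invented for illustration, and real UEBA systems use far richer statistical models than a single-variable standard deviation:

```python
import statistics

# Hypothetical baseline: megabytes a user downloads per day
history = [100, 110, 95, 105, 90, 108, 102]

def zscore(value, history):
    """How many standard deviations an observation sits from the baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return 0.0 if value == mean else float("inf")
    return abs(value - mean) / stdev

def alerts(observations, threshold):
    """Flag only observations whose deviation breaches the threshold."""
    return [obs for obs in observations if zscore(obs, history) > threshold]
```

With this baseline, a mildly heavy day (112 MB) and a suspicious bulk transfer (500 MB) behave differently as the threshold moves: set it too low and the benign 112 MB day raises an alert alongside the transfer, set it absurdly high and even the 500 MB transfer slips through, while a moderate threshold flags only the genuine outlier.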

What sets a good UEBA system apart? The depth and sophistication of baselining — something only UEBA-specific systems are capable of achieving. The lesson here is to beware of vendors that have simply applied the name to an older set of technologies, because you won’t get such depth. A second characteristic of a good UEBA system is the type of statistical modeling that informs the machine-learning algorithms, and the ability to evolve and cope with natural changes in the way networks and their users behave.

Of course, there’s a catch, and an obvious challenge with this model is devising thresholds that reflect different users in different contexts, particularly when trying to minimize insider attacks by privileged accounts. Frankly, there is no easy answer to this, although UEBA advocates argue that all malicious activity will offer giveaways, such as accessing valuable data in an unusual way.

Another challenge is the existence of temporary accounts and users who need to be given access to a network – such as, say, external contractors. UEBA offers a structure for applying machine learning and baselining to security, though the real world remains a complex place.

Paul Simmonds, chief exec of the Global Identity Foundation, whose career includes stints as CISO of AstraZeneca and ICI, explained the complexities of these challenges. “The problem we all face is that it’s very difficult to apply UEBA to anything other than entities within our locus-of-control,” he told The Reg.

“Thus, you can make it work for your banking customers, or your employees, but have real problems expanding beyond that due to a lack of ability to understand people, devices, organizations, code and agents from outside.”

Simmonds also questioned whether the term UEBA is always helpful: “What you actually are trying to understand is simply context – do we understand what the entity is trying to do? Contextual-based analytics would be more accurate.”

There is a final challenge that no UEBA system can solve on its own, and the answer to which really is down to you: the response. An alert is one thing, but what next? You need to process and respond to what you’re being told as rapidly as possible, which probably means acting in minutes to mitigate sophisticated attacks.

Some have proposed automated response as the next frontier for AI-driven security, but in most security operations centres, this will still come down to difficult choices made by men and women using their own judgment and a handbook of response tools.

The network perimeter has been compromised by attackers, threats, and risks on both sides of the firewall. Anticipating activity and actions considered out of the ordinary is a powerful new security model in this world of zero trust. UEBA isn’t a silver bullet – it comes with calibration requirements – but the use of machine learning to build a baseline is a far more intelligent approach compared to the old method of brittle rules for staying safe in this new and complex world. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/07/ueba_security_ai_anomaly_detection/

It looks like tech-savvy drivers will have to lead connected car data purge

The privacy issues thrown up by connected cars don’t seem to be going anywhere soon.

Drivers of cars from BMW, Jaguar Land Rover and Mercedes-Benz have reported that previous owners retain unfettered access to the data and controls of connected cars after resale. The problem is international and extends to hire cars due to drivers connecting their smartphones to rented rides.

If smartphones are spies in your pocket, connected cars are spies on wheels.

Matt Watts, the IT worker whose travails with his Land Rover first encouraged us to look into the issue, said connected cars also pose a tracking risk in incidents of relationship breakdowns.

“You go somewhere they’d never expect you to be and yet a short time later they track the car and turn up! This whole topic has so many more implications than any of us realise and people simply aren’t aware.”

Watts told El Reg that he has relayed his concerns to relevant charities but has yet to hear back from them.

Used connected cars need disconnecting, as the UK’s National Cyber Security Centre pointed out after our initial report.

Consumers have got used to the idea of factory resetting their smartphone before selling it on. Cleaning out a car before resale or after a rental is a well-understood practice but this applies only to the contents of a glove box and not to the data a connected car holds, which can include sensitive travel movements, contact details, call records from tethered smartphones and more.

Drivers normally get a warning when they hook up to their car through Bluetooth, but this is omitted when a USB connection is made, so motorists can unwittingly transfer their smartphone contacts and call logs onto the systems of leased or rented cars.

Privacy4cars

US car industry executive turned privacy advocate Andrea Amico said that almost every rental car is returned without the removal of private data, a problem that is replicated in the case of second-hand car sales. He blamed the complexity of the process of deleting data from connected cars – a procedure often only explained in the small print of long car manuals.

Amico told El Reg: “Infotainment systems, even from the same manufacturer, come with a variety of both hardware and firmware. Even within the same manufacturer and year of production, variances between models can go from small to huge. If it was truly easy and intuitive to delete information we would not see the statistics we see.”

Amico is marketing a free mobile app called Privacy4Cars, which provides step-by-step tutorials to help users quickly erase personal information such as phone numbers, call logs, location history and garage door codes from vehicle infotainment systems.

Users are able to select from hundreds of vehicle makes, models and years. The same tech is being sold to the car industry (fleet management companies, car rental or car sharing operators, dealer groups etc) commercially and as a software development kit that can be embedded into existing apps.

Amico, who heads up privacy efforts at the International Automotive Remarketers Alliance, said the app is as useful for European drivers as for US car owners.

The Society of Motor Manufacturers and Traders, a UK trade body, said that although car makers have a responsibility for data processing, consumers also have to get into the habit of removing their data and dissociating their smartphones when they sell on their connected cars.

Is it realistic to expect buyers of second-hand cars to know if the car has been connected? The car industry’s response has been to put the onus on the previous owner to delete data while minimising manufacturers’ responsibility to come up with a thought-through process for dealers to enforce.

ICO connection

Car makers typically run the apps and manage the servers through which connected car services are delivered, making them “data controllers” under the General Data Protection Regulation. Privacy watchdogs are actively examining the issues created by connected cars.

The UK’s Information Commissioner’s Office was a co-sponsor of the International Conference of Data Protection and Privacy Commissioners’ resolution on connected vehicles (PDF) that was put together last September. The ICO advocates a privacy-by-design approach, which would appear to require bringing manufacturers on board and may be difficult to apply to cars already on the road.

An ICO spokesperson told El Reg: “Data protection laws require the collection and use of personal data to be fair and transparent. Being clear with individuals about the use of their data, and providing options to control that data, are important matters for organisations to get right – particularly where new technologies and new ways of processing data are being introduced into existing products or services. This applies just as much to connected vehicles as it does to any other device or product.

“A key way this can be done is by considering privacy issues at the design stage, and by taking appropriate actions to address them. However, it isn’t just about data protection compliance – it’s about building trust among consumers, giving them good customer service and treating them with respect.”

DVLA

In response to our previous stories on connected cars, Reg readers have suggested the Driver and Vehicle Licensing Agency could have a role in easing the privacy headaches posed by connected cars.

When a car is sold, scrapped, or disposed of to a dealer in the UK the DVLA must be informed. Dealers have access to the DVLA database.

“Some way of linking the DVLA owner change event to a scrub it clean event ought not to be beyond the bounds of possibility,” suggested Reg reader Neil Barnes.

Government IT projects have a dire reputation but the DVLA’s driving licence verification tool protects privacy and is seen as something of a success. Whether the DVLA would be willing to accept a privacy regulating role that’s outside its remit is questionable, as other readers have pointed out. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/07/connected_cars_privacy/