Threat Awareness: A Critical First Step in Detecting Adversaries

One thing seems certain: Attackers are only getting more devious and lethal. Expect to see more advanced attacks.

We’re in an arms race with cybercriminals. Adversaries are becoming stealthier and more dangerous, constantly improving their attack techniques to evade the best security defenses. From malicious mobile apps to exploited cloud misconfigurations and devastating Remote Desktop Protocol (RDP) exploits, there’s no shortage of threats we need to protect against. 

As security practitioners, it’s our mission not only to build the new tools needed to detect and stop threats effectively, but also to help make sense of the wide-ranging nature of what constitutes security. That starts with threat awareness. After all, you can’t defend against what you don’t understand.

Here are some of the major threat landscape changes we’ve seen in the last year and will continue to see this year and beyond.

Evading Security Controls with Automated, Active Attacks
Automated, active attacks are on the rise. These types of attacks involve a mix of automation and human-directed hacking to evade security controls.

Attackers in the recent Snatch ransomware attacks, for example, gained access by abusing remote access services, like RDP, and then used hand-to-keyboard hacking to complete the attack. Recently, attackers have upped the ante by exfiltrating data before beginning encryption and rebooting machines into Safe Mode during the attack in order to circumvent endpoint protection. These changing attack methods are part of the growing trend of defense evasion and highlight the need for protection at every layer of the environment.

On the detection side, the problem is that it’s challenging to distinguish malicious from legitimate use of those tools. This method has been used successfully by the criminals behind the SamSam and MegaCortex ransomware attacks.
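One defensive answer is behavioral triage: score each remote-access event against context an attacker can’t easily fake. Here is a minimal sketch in Python with invented thresholds and field names; real detection pipelines weigh many more signals than this:

```python
from datetime import datetime

# Hypothetical triage heuristic: flag RDP logons that happen outside
# business hours or come from an address not on a known-good list.
# The inputs and thresholds are illustrative, not any product's schema.
def flag_rdp_logon(timestamp, source_ip, allowlist,
                   business_hours=range(8, 18)):
    reasons = []
    if timestamp.hour not in business_hours:
        reasons.append("off-hours logon")
    if source_ip not in allowlist:
        reasons.append("unrecognized source IP")
    return reasons

# A 3 a.m. logon from an unknown address trips both checks.
alerts = flag_rdp_logon(datetime(2020, 3, 6, 3, 0), "203.0.113.9",
                        allowlist={"198.51.100.4"})
```

Neither signal alone proves malice, and that is precisely the difficulty: legitimate and malicious RDP sessions look identical at the protocol level, so context is the only lever defenders have.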

Raising the Stakes with Ransomware
Ransomware creators know that if they can’t get past detection systems, their operation has little chance of success. Therefore, they’re putting a lot of effort into figuring out ways to evade detection systems altogether. One of the most effective methods is changing their appearance — often by obfuscating their code — to disguise their true intent.

For example, attackers may digitally code-sign ransomware with an Authenticode certificate. Anti-ransomware defenses give code with signatures a less thorough examination, and some endpoint security products may even choose to trust it.

At the same time, attackers exploit vulnerabilities to elevate their privileges to an administrative credential. This way, their privileges will meet or exceed the access permissions necessary to ensure that encrypting files will be successful.

Scamming Through Stealthy, Malicious Apps
Smartphones, tablets, and other mobile devices are rich environments for attacks. Not only can attackers steal user information and cash or cryptocurrency, but they can also use mobile devices to gain access to corporate resources.

Fraudulent banking apps, referred to as bankers, continue to plague users by stealing credentials for financial institutions. Downloaders — apps that appear to be finance-related but are really downloading banker payloads in the background — are increasingly common. Some bankers even steal credentials by abusing accessibility features to virtually monitor keystrokes when a user logs in to legitimate banking apps.

Unscrupulous developers are also finding success exploiting the legitimate in-app advertising model found on mobile devices, creating apps whose sole purpose is to maximize ad revenue. The most nefarious types of adware, known as Hiddad, hide themselves from the app tray and launcher, so they’re impossible to find and remove. Hiddad malware often takes the form of a legitimate app, like a QR code reader, but makes money through aggressive advertising.

Fleeceware is another example of how developers take advantage of legal models to scam unwitting consumers. These apps vastly overcharge users for functionality that’s already available for free or at low cost, often relying on free trials that are nearly impossible to cancel to lure users into paying as much as $275 for a simple calculator app. It’s an ongoing, widespread issue, with researchers recently uncovering 20 new fleeceware applications that may have nearly 600 million downloads.

Exploiting Misconfiguration in the Cloud
The strengths that make the cloud such a valuable platform for computing and business operations — flexibility, speed, and ease of use — are also what make it challenging to secure. With changes happening at the rapid pace of cloud, operator error is a growing risk. All it takes is one misstep in configuration to expose an entire customer database to attack.

Attackers are taking note. Most cloud-based security incidents are a result of misconfiguration of some kind. Attackers know that companies struggle with a general lack of visibility into their cloud environments, so they can sneak in and carry out an attack before anyone notices. That’s why they’ve seen success with Magecart malware, which infected retailers’ “shopping cart” pages without their knowledge to steal customer information from businesses like Ticketmaster and Cathay Pacific.
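Many of these misconfigurations are mechanically detectable. As a sketch, here is what an audit for world-readable S3 bucket ACLs might look like; the dict shape mirrors what AWS’s GetBucketAcl API returns, but the network fetch is omitted so the logic stays self-contained:

```python
# URIs of the "everyone" grantee groups defined by AWS. A grant to
# either of these makes the bucket readable/writable beyond the account.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the permissions this ACL grants to public groups."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            findings.append(grant["Permission"])
    return findings

# Example: a bucket readable by the entire internet.
acl = {"Grants": [
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
     "Permission": "FULL_CONTROL"},
]}
```

Cloud providers have added guardrails for exactly this failure mode (S3’s Block Public Access settings, for instance), but checks like this still catch drift in accounts created before such defaults existed.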

As more and more companies turn to the cloud for backups, these attacks are becoming increasingly common. Businesses need visibility into the consequences of configuration changes, as well as the ability to monitor for malicious and suspicious activity in the cloud.

Abusing Machine Learning
Attacks against machine learning security systems are moving from an academic possibility into the toolkits of attackers. Machine learning systems have their own weaknesses, and it’s only a matter of time before attackers figure out how to evade them. Research shows how attackers could trick models, highlighting the need for multiple layers of protection.
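To make the evasion idea concrete, here is a toy, self-contained example against a linear “malware score” classifier with invented weights. Real models and attacks are far more complex, but the mechanism, nudging input features against the model’s gradient, is the same one the research demonstrates:

```python
import math

def score(weights, bias, x):
    """Probability the sample is malicious under a logistic model."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

def evade(weights, x, epsilon):
    # FGSM-style step: move each feature a small amount epsilon in the
    # direction that lowers the score (opposite the weight's sign).
    return [xi - epsilon * (1 if w > 0 else -1)
            for w, xi in zip(weights, x)]

# Made-up model and sample: the original input scores as malicious,
# and a small perturbation pushes it across the decision boundary.
weights, bias = [1.0, -1.0, 1.0], -1.0
sample = [0.9, 0.2, 0.6]
adversarial = evade(weights, sample, epsilon=0.2)
```

The takeaway is not that machine learning is useless, but that a single model is a single point of failure, which is why layered protection matters.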

With machine learning becoming a regular part of defense, we’re also seeing the first signs of machine learning models being used on offense. Imagine using text-to-speech machine learning models to evade security measures like voice authentication. Such technology, like the technology underlying “deepfakes,” has already allegedly been used in a CEO vishing (voice phishing) scam. In the future, attackers might also use machine learning to optimize attacks, for example to maximize phishing email click-through rates.

The Road Ahead
As we look ahead, one thing seems certain: Attackers are only getting more devious and lethal. We expect to see more advanced attacks, like the weaponization of machine learning. In the meantime, awareness of existing threats gives companies the information they need to design effective protection.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “The Perfect Travel Security Policy for a Globe-Trotting Laptop.”

Dan Schiappa is Executive Vice President and Chief Product Officer with Sophos. In this role, Dan is responsible for the overall strategy, product management, architecture, research and development, and product quality for the network security, enduser security, and Sophos … View Full Bio

Article source: https://www.darkreading.com/cloud/threat-awareness-a-critical-first-step-in-detecting-adversaries/a/d-id/1337155?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

IWD: biometrics, machine learning, privacy and being a woman in tech – Naked Security Podcast

To celebrate International Women’s Day we invite you to this all-female splinter episode. We discuss privacy, biometrics, machine learning, social media, getting into cybersecurity and, of course, what it’s like to be a woman in tech.

Host Anna Brading is joined by Sophos experts Hillary Sanders, Michelle Farenci and Alice Duckett.

Listen now!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Cny0sSBkvVI/

FYI: When Virgin Media said it leaked ‘limited contact info’, it meant p0rno filter requests, IP addresses, IMEIs as well as names, addresses and more

A Virgin Media server left facing the public internet contained more than just 900,000 people’s “limited contact information” as the Brit cable giant’s CEO put it yesterday.

In fact, the marketing database also contained some subscribers’ requests to block or unblock access to X-rated and gambling websites, unique ID numbers of stolen cellphones, and records of whichever site they were visiting before arriving at the Virgin Media website.

This is according to British infosec shop Turgensec, which discovered the poorly secured Virgin Media info silo and privately reported it to the broadband-and-TV-and-phone provider. The research team today said the data spill was more extensive, and more personal, than Virgin Media’s official disclosure suggested.

Here, in full, is what Turgensec said it found in the data cache that was exposed from mid-April to this month:

* Full names, addresses, date of birth, phone numbers, alternative contact phone numbers and IP addresses – corresponding to both customers and “friends” referred to the service by customers.

* Requests to block or unblock various pornographic, gore-related and gambling websites, corresponding to full names and addresses.

* IMEI numbers associated with stolen phones.

* Subscriptions to the different aspects of their services, including premium components.

* The device type owned by the user, where relevant.

* The “Referrer” header, taken seemingly from a user’s browser, containing what would appear to be the previous website that the user visited before accessing Virgin Media.

* Form submissions by users from their website.

Those website block and unblock requests were a result of Britain’s ruling class pressuring ISPs to implement filters to prevent kids viewing adult-only material via their parents’ home internet connections. The filters were also supposed to stop Brits from seeing any particularly nasty unlawful content.

Virgin Media today stressed the database held about a thousand subscribers’ filter request inquiries.

The leaky server has since been hidden from view. Virgin Media’s CEO Lutz Schüler said last night: “Based upon our investigation, Virgin Media does believe that the database was accessed on at least one occasion but we do not know the extent of the access or if any information was actually used.”

He added: “The database did not include any passwords or financial details, such as credit card information or bank account numbers, but did contain limited contact information such as names, home and email addresses and phone numbers.”

Double meanings and fluff

In a separate email to its subscribers this week, Virgin Media tried to reassure its punters that the only records accessible from the marketing database were “contact details (such as name, home and email address and phone numbers), technical and product information, including any requests you may have made to us using forms on our website.”

As it turns out, the words “technical and product information” were doing an awful lot of heavy lifting. Turgensec’s strategically worded statement stops short of accusing Virgin Media of outright lying, but it is still rather damning.

“We cannot speak for the intentions of [Virgin Media’s] communications team but stating to their customers that there was only a breach of ‘limited contact information’ is from our perspective understating the matter potentially to the point of being disingenuous,” the infosec house said on Friday.

Turgensec also quibbled with the ISP’s attempt to blame the security blunder on IT workers “incorrectly configuring” an internet-facing database. Rather, the database – which was filled with unencrypted plain-text records – was a sign of “systematic assurance process failure,” Turgensec said.


The security biz is also peeved with the way Virgin Media disclosed the gaffe. Turgensec didn’t ask for any financial reward for finding the database but, as is traditional, it did expect a public hat tip for its efforts so as to get some industry recognition. Instead, Virgin Media went straight to the press without thanking the people who saved its bacon.

Turgensec urged all Virgin Media customers who received a notice from the broadband provider to file a GDPR request for a full breakdown of what data of theirs was spilled. With 900,000 people affected, that could tie up the ISP’s legal team for a while.

“Companies like to downplay the impacts whilst upselling their supposed care and due diligence in an attempt to place shareholder value over their customer’s rights. Their customers have a right to ensure their data is protected ‘by design’ which in many cases it isn’t,” Turgensec lamented.

“It would seem highly unlikely to us that in this case, after being left open for 10 months, the data has not been obtained by multiple actors, some potentially malicious.”

Virgin Media, meanwhile, rejected the allegation it held back on important details.

“Out of the approximate 900,000 people affected by this database incident, 1,100, or 0.1 per cent, had information included relating to our ‘Report a Site’ form,” a spokesperson told The Register.

“This form is used by customers to request a particular website to be blocked or unblocked – it does not provide information as to what, if anything, was viewed and does not relate to any browsing history information.

“We strongly refute any claim that we have acted in a disingenuous way. In our initial notification to all affected people about this incident we made it clear that any information provided to us via a webform was potentially included in the database.”

Virgin Media added it is developing a tool to allow customers to search exactly what of their account information was exposed. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/06/virgin_more_leak_details/

Don’t be fooled, experts warn, America’s anti-child-abuse EARN IT Act could burn encryption to the ground

On Thursday, a bipartisan group of US senators introduced legislation with the ostensible purpose of combating child sexual abuse material (CSAM) online – at the apparent cost of encryption.

The law bill is called the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act, which folds up into the indignant acronym EARN IT. (See also the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act, aka the USA PATRIOT Act.)

Backed by senators Lindsey Graham (R-SC), Richard Blumenthal (D-CT), Josh Hawley (R-MO) and Dianne Feinstein (D-CA), the proposed law intends to make technology companies “earn” their exemption from liability allowed under Section 230 of the US Communications Decency Act by requiring internet companies to follow a set of best practices to keep CSAM off their networks.

For the uninitiated, Section 230 gives internet platforms blanket legal protections: simply put, websites can’t be held liable for any bad stuff shared by users, plus or minus some minor caveats. Critics say today’s rules are too broad, and let technology giants off the hook too easily.

“Companies that fail to comport with basic standards that protect children from exploitation have betrayed the public trust granted them by this special exemption,” said Blumenthal in a statement. “Online platforms’ near complete immunity from legal responsibility is a privilege – they have to earn it – and that’s what our bipartisan bill requires.”

The best practices contemplated by the lawmakers have yet to be spelled out; they’re to be determined by a 19-member government commission that includes four non-government experts or “survivors of online child sexual exploitation.” Input from these four can be ignored, however, since the best practices require approval from only 14 commissioners, a threshold the 15 government members can clear on their own. After that, the US Attorney General (AG), who is on the commission, can accept the guidelines, if the heads of the FTC and DHS agree, or send them back to be reformulated.

And therein lies the issue: based on the US government’s ongoing efforts to demonize encryption for leaving law enforcement in the dark and AG William Barr’s public opposition to encryption, technical experts expect the guidelines will force technology platforms to avoid encryption they can’t undo on-demand in order to check for the presence of CSAM.


“Because the AG continually lambastes end-to-end encrypted messaging for cloaking pedophiles’ exchanges of CSAM and grooming of child victims, this is code for ‘encryption is not a viable alternative best practice,'” explained Riana Pfefferkorn, associate director of surveillance and cybersecurity at the Stanford Center for Internet and Society, in a blog post. “This will be used to discourage any ‘product design’ that includes encryption that isn’t backdoored for law enforcement.”

Matthew Green, associate professor of computer science at Johns Hopkins University, offered similar analysis in a blog post on Friday.

“The new bill would make it financially impossible for providers like WhatsApp and Apple to operate services unless they conduct ‘best practices’ for scanning their systems for CSAM,” he wrote.

“Since there are no ‘best practices’ in existence, and the techniques for doing this while preserving privacy are completely unknown, the bill creates a government-appointed committee that will tell technology providers what technology they have to use.”

In effect, the position advanced by the bill’s authors is that because CSAM is bad, all internet content and communication must be subject to scrutiny upon demand. That’s a viewpoint that doesn’t leave much room for encryption.

“It’s extremely difficult to believe that this bill stems from an honest consideration of the rights of child victims, and that this legislation is anything other than a direct attack on the use of end-to-end encryption,” Green concludes.

Other advocacy groups like the ACLU, the Center for Democracy and Technology, and Free Press, among others, have issued similar statements in opposition to the bill.

US Senator Ron Wyden (D-OR) on Thursday called the bill a disaster, suggesting it’s a cynical attempt to use concern about children to gain control over online speech and harm internet security.

“This terrible legislation is a Trojan horse to give Attorney General Barr and Donald Trump the power to control online speech and require government access to every aspect of Americans’ lives,” Wyden said.

“It is a desperate attempt to distract from the Justice Department’s failure to request the manpower, funding and resources to combat this scourge, despite clear direction from Congress more than a decade ago.”

The EARN IT Act arrived as AG Barr announced that other members of the Five Eyes intelligence alliance – Australia, Canada, New Zealand, and the United Kingdom – have agreed to a set of principles to guide internet companies in their efforts to combat CSAM. Representatives for six online companies – Facebook, Google, Microsoft, Roblox, Snap and Twitter – were there to endorse the initiative.

Pfefferkorn argues that widespread agreement about the need to discourage CSAM shouldn’t dissolve the right to privacy and security.

“[W]hile it’s certainly a necessary, urgent, and desirable goal to combat the scourge of online child exploitation, there are still limits on what tech companies should do,” Pfefferkorn said. “Stepping up to fight CSAM should not mean wholesale converting their services into even more powerful surveillance tools for law enforcement than they already are.” ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/06/earn_it_bill_encryption/

7 Cloud Attack Techniques You Should Worry About

Security pros detail the common and concerning ways attackers target enterprise cloud environments.

(Image: Adam121 - stock.adobe.com)


As organizations transition to cloud environments, so too do the cybercriminals targeting them. Learning the latest attack techniques can help businesses better prepare for future threats.

“Any time you see technological change, I think you certainly see attackers flood to either attack that technological change or ride the wave of change,” said Anthony Bettini, CTO of WhiteHat Security, in a panel at last week’s RSA Conference. It can be overwhelming for security teams when organizations rush headfirst into the cloud without consulting them, putting data and processes at risk.

Attackers are always looking for new ways to leverage the cloud. Consider the recently discovered “Cloud Snooper” attack, which uses a rootkit to bring malicious traffic through a victim’s Amazon Web Services environment and on-prem firewalls before dropping a remote access Trojan onto cloud-based servers. As these continue to pop up, many criminals rely on tried-and-true methods, like brute-forcing credentials or accessing data stored in a misconfigured S3 bucket. There’s a lot to keep up with, security pros say.

“When you’re taking your existing security skills and you’re moving into an entirely different environment, then it’s an incredible challenge to figure out what you really need to focus on, as well as what’s going on out there in the real world,” said Rich Mogull, analyst with Securosis and CISO of DisruptOps, in an RSA Conference talk about attack kill chains in the cloud.

Here we discuss some of these common kill chains, as well as other cloud attack techniques that are top-of-mind for security pros and cybercriminals alike. Anything you’re worried about that we didn’t list here? Feel free to share your thoughts in the Comments section, below.

 

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/7-cloud-attack-techniques-you-should-worry-about/d/d-id/1337259?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Ransomware Variant Developed Entirely as Shellcode

PwndLocker is harder to detect than other crypto-malware, Crypsis Group says.

Researchers have discovered a new ransomware variant that they say has significantly different behavior and characteristics than most other ransomware types.

The ransomware, called PwndLocker, was found by The Crypsis Group in February during a client engagement. Subsequent analysis showed it was developed entirely as shellcode—something that malware authors have traditionally reserved for more specialized purposes.

The malware also implemented a custom encryption algorithm that the researchers discovered was potentially breakable, and in fact has already been broken. However, according to Crypsis, the malware authors can easily swap out the existing encryption algorithm with a stronger one at any time.

Matt Thaxton, senior consultant with The Crypsis Group, says PwndLocker’s use of shellcode – or location-independent code – makes it a more complex and harder-to-spot ransomware variant than others. “The reason these types of code are harder for automated tools to spot is because they usually don’t reside on disk and because they are often injected into other legitimate processes such as native, signed Windows-processes,” he says.

Shellcode can sometimes be classified as fileless malware. But in the case of PwndLocker, it wouldn’t be classified as fileless because it loads from a fake avi file, Thaxton noted.
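One heuristic that does work on such payloads at rest is byte entropy: encrypted or packed shellcode looks statistically random, while genuine media headers and text do not. A simplified illustration follows; real scanners combine this with many other signals, and the threshold here is a made-up example:

```python
import math
import os
from collections import Counter

def shannon_entropy(data):
    """Bits of entropy per byte; random data approaches 8.0."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_packed(data, threshold=7.5):
    # Encrypted/compressed payloads sit near the 8-bit ceiling;
    # structured file content sits well below it.
    return shannon_entropy(data) > threshold

payload = os.urandom(4096)          # stand-in for an encrypted blob
plain = b"RIFF....AVI LIST" * 256   # stand-in for real media structure
```

High entropy alone is not proof of malice (legitimate archives and video streams are high-entropy too), which is why it is only one signal among many in a detection stack.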

Many exploits use shellcode to force vulnerable legitimate processes to use or to run illegitimate code. But typically malware authors have used shellcode only in secondary malware downloaders and sophisticated implants because of how complex and time-consuming it can be to create and implement such code. This is the first time, however, that ransomware has been developed using shellcode, Thaxton says.

“I’m not sure why this threat actor decided to write their ransomware in this way,” he says. “My only guess would be that they wanted it to be very unique so that it is harder to spot through the usual [methods].” Also, it is possible that the malware authors wanted to be distinctive simply for the sake of differentiating from other variants.

Another noteworthy feature with PwndLocker is its use of a relatively weak custom-developed encryption algorithm rather than the more robust Windows crypto API, Thaxton notes. There’s no real reason why they couldn’t have just used the API like almost every other ransomware in the wild currently does, he says. “There may have been a reason for creating it this way that is yet to be determined. But at this point, it’s not clear,” Thaxton says.
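To see why a home-grown cipher is a liability for its author, consider repeating-key XOR, a classic “custom” scheme (used here purely as an illustration; PwndLocker’s actual algorithm isn’t described in this article). A single block of known plaintext, such as the standard “MZ” header that every Windows executable begins with, recovers the whole key:

```python
def xor_cipher(data, key):
    """Repeating-key XOR: the same function encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"pwnd"                # attacker's key, unknown to defenders
plaintext = b"MZ\x90\x00important document text"
ciphertext = xor_cipher(plaintext, secret_key)

# Defenders know the first bytes of the plaintext (the PE header),
# so XORing them against the ciphertext yields the key directly.
known_prefix = b"MZ\x90\x00"
recovered_key = bytes(c ^ p for c, p in zip(ciphertext, known_prefix))
recovered = xor_cipher(ciphertext, recovered_key)
```

This sketch assumes the key length is already known; in practice it can be guessed statistically. Vetted crypto APIs exist precisely so authors don’t ship schemes this fragile, and in PwndLocker’s case the fragility worked in the defenders’ favor.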

In an alert Friday, security vendor Emsisoft said it has developed a way to decrypt files that PwndLocker might have encrypted. However, each decryptor requires customization before use. That means that victims of PwndLocker who want their files decrypted will need to send the ransomware executable that was used in the particular attack, Emsisoft said. “While the ransomware automatically deletes the executable, it is often possible to recover it using file recovery tools,” the vendor said.

According to Emsisoft, PwndLocker has been observed mainly targeting business and government organizations and demanding ransom of more than $500,000. The malware has numerous variants, all of which are designed to delete shadow copies of data, which makes recovery harder.

“There is always going to be a mix of old and new ransomware variants as threat actors work to gain fast cash or put their unique stamp on an evolving threat landscape,” Thaxton says. “Enterprises can’t ‘guess’ what is going to come next.”

The best approach is to adhere to best practices across the enterprise, and pay attention to end-user training within the security program, he says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/new-ransomware-variant-developed-entirely-as-shellcode/d/d-id/1337260?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Remote working due to coronavirus? Here’s how to do it securely…

Many if not most organisations have already crossed the “working from home”, or at least the “working while on the road” bridge.

If you’re on the IT team, you’re probably used to preparing laptops for staff to use remotely, and setting up mobile phones with access to company data.

But global concerns over the current coronavirus (Covid-19) outbreak, and the need to keep at-risk staff away from the office, means that lots of companies may soon and suddenly end up with lots more staff working from home…

…and it’s vital not to let the precautions intended to protect the physical health of your staff turn into a threat to their cybersecurity health at the same time.

Importantly, if you have a colleague who needs to work from home specifically to stay away from the office, then you can no longer use the tried-and-tested approach of getting them to come in once to collect their new laptop and phone, and to receive the on-site training that you hope will make them a safer teleworker.

You may end up needing to set remote users up from scratch, entirely remotely, and that might be something you’ve not done a lot of in the past.

So here are our five tips for working from home safely.

1. Make sure it’s easy for your users to get started

Look for security products that offer what’s called an SSP, short for Self-Service Portal.

What you are looking for is a service to which a remote user can connect, perhaps with a brand new laptop they ordered themselves, and set it up safely and easily without needing to hand it over to the IT department first.

Many SSPs also allow the user to choose between different levels of access, so they can safely connect up either a personal device (albeit with less access to fewer company systems than they’d get with a dedicated device), or a device that will be used only for company work.

The three key things you want to be able to set up easily and correctly are: encryption, protection and patching.

Encryption means making sure that full-device encryption is turned on and activated, which protects any data on the device if it gets stolen; protection means that you start off with known security software, such as anti-virus, configured in the way you want; and patching means making sure that the user gets as many security updates as possible automatically, so they don’t get forgotten.
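As a small sketch of how the first of those checks might be automated: this assumes a Linux endpoint using dm-crypt/LUKS and parses `lsblk` text output (in real use you would capture that output with `subprocess`; it is inlined here to keep the example self-contained). FileVault on macOS and BitLocker on Windows need their own probes.

```python
def has_encrypted_volume(lsblk_output):
    """True if any block device row reports TYPE 'crypt' (dm-crypt/LUKS)."""
    rows = lsblk_output.strip().splitlines()[1:]  # skip the header row
    return any(line.split()[-1] == "crypt"
               for line in rows if line.split())

# Example output of `lsblk -o NAME,TYPE` on a machine with an
# encrypted root volume:
sample = """NAME            TYPE
sda             disk
sda1            part
luks-root       crypt
"""
```

A check like this belongs in the self-service enrollment flow itself, so a device that fails it never gets access in the first place.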

Remember that if you do suffer a data breach, such as a lost laptop, you may well need to disclose the fact to the data protection regulator in your country.

If you want to be able to claim that you took the right precautions, and thus that the breach can be disregarded, you’ll need to produce evidence – the regulator won’t just take your word for it!

2. Make sure your users can do what they need

If users genuinely can’t do their job without access to server X or to system Y, then there’s no point in sending them off to work from home without access to X and Y.

Make sure you have got your chosen remote access solution working reliably first – force it on yourself! – before expecting your users to adopt it.

If there are any differences between what they might be used to and what they are going to get, explain the difference clearly – for example, if the emails they receive on their phone will be stripped of attachments, don’t leave them to find that out on their own.

They’ll not only be annoyed, but will probably also try to make up their own tricks for bypassing the problem, such as asking colleagues to upload the files to private accounts instead.

If you’re the user, try to be understanding if there are things you used to be able to do in the office that you have to manage without at home.

3. Make sure you can see what your users are doing

Don’t just leave your users to their own devices (literally or figuratively).

If you’ve set up automatic updating for them, make sure you also have a way to check that it’s working, and be prepared to spend time online helping them fix things if they go wrong.
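Checking that updating is actually working can be as simple as alerting on a stale “last successful update” timestamp, assuming your endpoint agent reports one (the field and threshold here are hypothetical):

```python
from datetime import datetime, timedelta

def update_is_stale(last_update, now, max_age_days=7):
    """Flag a device whose last successful update is older than the limit."""
    return now - last_update > timedelta(days=max_age_days)

# A device last patched a month ago should raise an alert.
stale = update_is_stale(datetime(2020, 2, 1), datetime(2020, 3, 1))
```

The point is less the code than the habit: trust the automation, but verify it on a schedule.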

If their security software produces warnings that you know they will have seen, make sure you review those warnings too, and let your users know what they mean and what you expect them to do about any issues that may arise.

Don’t patronise your users, because no one likes that; but don’t leave them to fend for themselves, either – show them a bit of cybersecurity love and you are very likely to find that they repay it.

4. Make sure they have somewhere to report security issues

If you haven’t already, set up an easily remembered email address, such as security911 @ yourcompany DOT example, where users can report security issues quickly and easily.

Remember that a lot of cyberattacks succeed because the crooks try over and over again until one user makes an innocent mistake – so if the first person to see a new threat has somewhere to report it where they know they won’t be judged or criticised (or, worse still, ignored), they’ll end up helping everyone else.

Teach your users – in fact, this goes for office-based staff as well as teleworkers – only to reach out to you for cybersecurity assistance by using the email address or phone number you gave them. (Consider snail-mailing them a card or a sticker with the details printed on it.)

If they never make contact using links or phone numbers supplied by email, they are very much less likely to get scammed or phished.

5. Make sure you know about “shadow IT” solutions

Shadow IT is where non-IT staff find their own ways of solving technical problems, for convenience or speed.

If you have a bunch of colleagues who are used to working together in the office, but who end up flung apart and unable to meet up, it’s quite likely that they might come up with their own ways of collaborating online – using tools they’ve never tried before.

Sometimes, you might even be happy for them to do this, if it’s a cheap and cheerful way of boosting team dynamics.

For example, they might open an account with an online whiteboarding service – perhaps even one you trust perfectly well – on their own credit card and plan to claim it back later.

The first risk everyone thinks about in cases like this is, “What if they make a security blunder or leak data they shouldn’t?”

But there’s another problem that lots of companies forget about, namely: what if, instead of being a security disaster, it’s a conspicuous success?

A temporary solution put in place to deal with a public health issue might turn into a vibrant and important part of the company’s online presence.

So, make sure you know whose credit card it’s charged to, and make sure you can get access to the account if the person who originally created it forgets the password, or cancels their card.

So-called “shadow IT” isn’t just a risk if it goes wrong – it can turn into a complicated liability if it goes right!

Most of all…

Most of all, if you and your users suddenly need to get into teleworking, be prepared to meet each other half way.

For example, if you’re the user, and your IT team suddenly insists that you start using a password manager and 2FA (those second-factor login codes you have to type in every time)…

…then just say “Sure,” even if you hate 2FA and have avoided it in your personal life because you find it inconvenient.

And if you’re the sysadmin, don’t ignore your users, even if they ask questions you think they should know the answer to by now, or if they ask for something you’ve already said “No” to…

…because it might very well be that they’re asking because you didn’t explain clearly the first time, or because the feature they need really is important to doing their job properly.

We’re living in tricky times, so try not to let matters of public health cause the sort of friction that gets in the way of doing cybersecurity properly!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VPeZsnKHuhw/

UK spy auditor gives state snoops a big pat on the back for job well done – except MI5

The UK’s spy agency auditor has given public sector snoopers a clean bill of health – except for domestic surveillance specialists MI5, whose cloud data storage blunder is still under investigation.

In its annual report for 2018, published this week, the Investigatory Powers Commissioner’s Office (IPCO) concluded once again that all is broadly well in the murky world of British state surveillance, where everyone from eavesdropping agency GCHQ to council binmen is legally allowed to spy on you.

Laying the report before Parliament on Thursday, Prime Minister Boris Johnson said in a written statement: “Overall, this report demonstrates that the security and intelligence agencies, law enforcement agencies and other relevant public authorities show extremely high levels of operational competence combined with respect for the law.”

Security minister James Brokenshire chipped in to add: “I welcome the independent scrutiny from the Commissioner and am pleased that he recognises the exceptional dedication and professionalism demonstrated by our law enforcement and security agencies.”

MI5, however, came in for pointed criticism from Lord Justice Fulford, the previous Investigatory Powers Commissioner, who wrote the agency’s latest report. As reported last year, MI5 was being careless with the storage of data it had hoovered up. This week’s IPCO report said:

The information initially supplied to IPCO suggested there were serious deficiencies in the way the relevant environment implemented important IPA safeguards, particularly the requirements that MI5 must limit to the minimum necessary the extent to which warranted data is copied and disclosed, and that warranted data must be destroyed as soon as there are no longer any relevant grounds for retaining it.

Moreover, MI5 hadn’t locked down access to what appears to have been a cloud server; as IPCO put it, the domestic spy agency had an “inconsistent approach to controls around the extent to which users were able to copy data and place it into storage areas within the environment”. The spies were warned they were subject to “ongoing, detailed scrutiny”.


Otherwise, despite the introduction of the so-called “double lock,” where a former judge signs off on spying warrants that were first rubber-stamped by a cabinet minister, IPCO broadly ruled all was well and most public sector organisations were abiding by the Snooper’s Charter (aka the Investigatory Powers Act), the law that allows them to rifle through your digital dustbins more or less at will.

IPCO did publish how it carries out its audits, which include snap inspections; targeted, in-depth audits of specific spying operations; close looks at the public sector body’s justification for spying; and reviews of internal documents. Judging by the report’s detailed description, the process is a thorough one, and it leaves the impression that the auditor’s staff are dedicated to their task.

Under its customary “serious errors” section, the report also detailed how many police (and they were all police) blunders had led to innocents being arrested, their homes raided and missing people not found as a result of typos, time zone confusion and other human but inexcusable errors.

The Register has asked for clarification on one case where a suicidal person died before police found them, following a data oversight. IPCO said after investigating it had “notified the affected person of the fact of the serious error,” which on the face of it could not have happened.

Hundreds of lawyers, journalists, doctors and MPs were targeted by the public sector for covert snooping, something that is perfectly legal in the UK. IPCO said that in some of these cases the spying was carried out for witness protection-style reasons.

Sir Brian Leveson, the retired senior judge who currently fills the post of Investigatory Powers Commissioner, said in a canned quote: “Overall, investigatory powers are used responsibly within the UK. However, our investigations have highlighted key areas where our oversight needs to keep pace with those we oversee, including how to approach the impact of new and evolving technology.”

The full report can be read from the IPCO website [PDF]. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/06/ipco_annual_report_2018_mi5_naughty/

Securing Our Elections Requires Change in Technology, People & Attitudes

Increasing security around our election process and systems will take a big effort from many different parties. Here’s how.

The security of our elections is top of mind for practically every voter in the US. With the state primaries underway, all eyes are on our electronic (and in some cases mobile) voting systems to understand if malicious attacks are happening — and if our systems are able to defend against them. Most experts agree that we are unprepared and underfunded when it comes to securing our elections — which should concern us all.

A big problem is that when we look at the entire ecosystem of the national election process, we don’t treat it the same way we treat business systems. This is a mistake. Voting is a business of our state governments. And the most valuable asset for states is voter information — similar to the customer information and data assets of a for-profit business (which are increasingly safeguarded by data privacy regulations). To modernize our current model of election management, trust, and security, it’s important to examine three interrelated pillars for state governments: technology, people, and attitudes.

1. Technology: Making Cybersecurity More Proactive
To address the growing security threats that many players in the broader election system ecosystem face, proactive cybersecurity technology and policy must take center stage in three important ways:

Cybersecurity hygiene of individual companies and agencies
Greater transparency and data-driven assessments of election system hardware and software providers should be mandated in order to measure each company’s cybersecurity hygiene against an established baseline. In addition, there needs to be increased monitoring of the deployment and implementation of technology in state and local election systems to ensure that misconfigurations aren’t creating additional vulnerabilities.

A “layered defense” approach to cybersecurity
Given the complicated, interdependent nature of government systems and databases, security measures should be established to minimize the likelihood of an attack — particularly from internal staff. For example, an ill-intentioned employee could access and hack a state voter registration database through a vulnerability in the Department of Motor Vehicles network. Implementing a layered defense and incorporating a “least privilege” principle that limits an individual’s access to only very specific parts of a network or election system makes internal access more difficult and successful hacking less likely.
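The core of least privilege is deny-by-default: a role gets only the specific grants it needs, and everything else is refused. A minimal sketch, using hypothetical roles and systems (not a real state-government schema):

```python
# Hypothetical grants: each role may perform only the (system, action)
# pairs listed here; anything not listed is denied by default.
PERMISSIONS = {
    "dmv-clerk": {("dmv-records", "read"), ("dmv-records", "update")},
    "election-admin": {("voter-registration", "read")},
}

def is_allowed(role, system, action):
    """Allow only explicitly granted (system, action) pairs; deny everything else."""
    return (system, action) in PERMISSIONS.get(role, set())
```

Under this model, a DMV clerk’s compromised account cannot touch the voter registration database at all, because no such grant exists for that role.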

Ongoing validation of effectiveness of security controls
As is true in the business world, any government agency or organization playing a role in the election ecosystem cannot afford to assume that established security technology and protocols always work as they’re supposed to. With such a complex array of interrelated software elements from multiple vendors, each with different settings and procedures, and with continually changing network and access protocols, ongoing changes in the IT environment — what I call “environmental drift” — can negatively affect security performance. Left unchecked, there is tremendous risk that security controls will not provide the necessary defenses when an attack occurs. Frequent, regular evaluations to validate the effectiveness of security controls should be a key component of the overall process.
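Detecting environmental drift can start as something very simple: compare each host’s current security settings against an approved baseline and report every divergence. A sketch, with hypothetical setting names:

```python
# Hypothetical approved baseline for a host's security controls.
BASELINE = {"firewall": "on", "disk_encryption": "on", "rdp": "off"}

def drift(current, baseline=BASELINE):
    """Return {setting: (expected, actual)} for every value that has drifted."""
    report = {}
    for key, expected in baseline.items():
        actual = current.get(key, "missing")
        if actual != expected:
            report[key] = (expected, actual)
    return report
```

Run against the real control state exported by your security tooling, a non-empty report is the trigger for investigation; an empty one simply means no drift was detected on the settings you chose to baseline.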

2. People: New Roles and Relationships for the state CIO and CISO
Typically, the role of chief elections officer is filled by the secretary of state, who oversees testing and certifying all voting equipment for security, accuracy, reliability, and accessibility. States also have a CIO and a CISO, but they don’t currently have a formal direct working relationship with the secretary of state or state elections commissions. I believe that they should — especially now, with the prominence of e-voting. State CIOs and CISOs can be of tremendous value to the secretary of state and election commissions in helping them understand the evolving cybersecurity threat landscape, while tracking its potential threat impact on a daily basis.

Governors should also have cyber-protection teams that know how to scan the environment for the bad guys and look for flaws, before an attack occurs. The right place for this cybersecurity resource to exist and collaborate with the state CIO and CISO would be in “Fusion Centers” set up to deal with any kind of emergency, regardless of origin. I have seen this work already underway in Michigan, Virginia, Rhode Island, and Louisiana, and believe other states should follow their lead.

3. Attitudes: Moving from Naivete to Thoughtful Experimentation
There are several attitude challenges that we face today. While most state and local governments understand that threats are out there and vulnerabilities exist, they don’t understand their nature or magnitude, or how best to address them. At the local level, there is often a perception that individual precincts are too small to be viable targets. In a democracy where every vote must count, a broader mindset is required. And when security technology is brought in as the solution, there is too often an overreliance placed on it and a false assumption that it’s working as it’s supposed to in order to protect election integrity. When cyber hygiene is one of the top priorities in business organizations today, why should state/local election systems be different?

There are forward-looking states experimenting with electronic and mobile voting to reflect current technology and cultural change — with a dual purpose of deterring voter fraud and boosting voter turnout. Initiatives and experiments to enable people to vote by mobile phone — anywhere, anytime — require deep attention to proactive cybersecurity and digital identity. In the 2018 midterm elections, West Virginia became the first state to introduce purely online voting for overseas military voters with a mobile app that used blockchain technology, with identity authentication through a fingerprint or facial recognition. With the security concerns inherent to this kind of experiment, more research should be done and trials conducted to make mobile voting a more viable way for people to vote. 

From Ideas to Action
To increase trust, accountability, and security around our election process and systems, it will take a combined and concerted effort from many different parties — on both the state and local government side as well as from the technology community. State governments and election officials should take the lead, but others involved in the process share an equal responsibility — from the federal government and technology companies in both the election systems and cybersecurity spaces, all the way down to individual citizens. Only when we all come together can we ensure that every vote counts.


Major General Earl Matthews, USAF (Ret.), is an award-winning retired Major General of the U.S. Air Force with a successful career influencing the development and application of cybersecurity and information management technology. His strengths include his ability to lead …

Article source: https://www.darkreading.com/attacks-breaches/securing-our-elections-requires-change-in-technology-people-and-attitudes/a/d-id/1337200?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Physical Flaws: Intel’s Root-of-Trust Issue Mostly Mitigated

An insider, or security expert with physical access, can compromise the hardware protections of Intel chips sold in the past five years.

Most Intel processors produced in the past five years have a vulnerability in the component of the chip responsible for securely executing security services, threatening the system’s so-called root of trust, warns Positive Technologies.

Intel fixed the vulnerability in the Intel Converged Security and Management Engine (CSME) in May 2019 and updated its guidance in February, but it has not released enough details for companies to assess their risk, says Mark Ermolov, lead specialist of operating system and hardware security at Positive Technologies. The security firm has produced a proof-of-concept attack for the flaw but will not release it yet, he says.

“The worst-case scenario is that a hacker gains access to the chipset key, which would enable him to access all information stored on the computer, encrypted or otherwise, and even run a keylogger on Intel CSME to track everything a victim types into the affected computer,” Ermolov says. “Exploitation is difficult but possible. Most importantly, now, in order to get the root key, you do not need multimillion-dollar equipment or much time. Using this vulnerability, a qualified specialist can get the master key in just a few hours using only software tools and then do whatever he wants with this system.”

The severity of the vulnerability remains to be seen. Because the CSME is the root of critical security functions on the system — handling encryption and secure boot — a compromised system, and the information on that system, can no longer be trusted.

On unpatched systems, an attacker who has already compromised the operating system could exploit the issue, assigned CVE-2019-0090, in the Intel CSME to undermine the system’s fundamental security. For patched systems, only physical access allows such a compromise. Yet the vulnerability itself cannot be patched because it’s in the hardware and part of the chip’s architecture, says Ermolov, who considers the issue worse than the speculative execution flaws Spectre and Meltdown.

“Since this is a hardware vulnerability, the situation cannot be fixed with updates,” he says. “Intel has issued a mitigation, which greatly complicates the attack but does not make it impossible.”

Intel considered the flaw to be critical but, with patching, not a risk for companies. Intel acknowledged the issue, but highlighted the fact that patched systems that are not run in Intel Manufacturing Mode — an undocumented execution mode meant for manufacturers to test their systems — can only be attacked at the keyboard. 

“Intel recommends that end users adopt best security practices by installing updates as soon as they become available and being continually vigilant to detect and prevent intrusions and exploitations,” the company said in a statement sent to Dark Reading. “End users should maintain physical possession of their platform.”

Security researchers have heavily scrutinized Intel’s CSME because compromising the hardware allows the security of a system to be undermined at a fundamental level. Two years ago, a collection of researchers published two attacks, Meltdown and Spectre, that took advantage of the speculative execution of Intel processors to allow attackers to gain access to any information flowing through the hardware. Since then, at least six other similar flaws have been found.

In reaction to the vulnerabilities, Intel has committed to putting security first and recently published an analysis of all 236 vulnerabilities reported in 2019. Two Intel security experts also discussed the company’s approach to securing the CSME during a talk at the 2019 Black Hat Security Conference.

Positive Technologies has focused on Intel’s Management Engine and, now, Intel’s Converged Security and Management Engine. In 2017, the company found a stack overflow bug in the Intel ME that could be used by insiders and supply chain attackers to gain and retain total control of a system.

With the latest vulnerability, Positive Technologies researchers focused on the input-output memory management unit (IOMMU), finding that the boot order of the devices allowed external drivers to gain control of execution too early to ensure secure booting. 

“Researchers found that there is a very big bug: The IOMMU is activated too late, after the x86 paging structures were created and initialized,” Ermolov says. “Only Intel can mitigate all known exploitation vectors, by blocking all integrated devices that are known to have DMA [direct memory access] capabilities [to the CSME].”

Positive Technologies found it could exploit the issue through the Integrated Sensors Hub. Intel’s patch has closed that vector, eliminating any current way of exploiting the bug using local code execution — that is, after an attacker has already compromised the system.

“Intel blocked ISH [Integrated Sensors Hub], so now it can’t issue DMA transactions to the CSME,” Ermolov says. “But we’re convinced there are other exploitation vectors, and they will be found soon.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/vulnerabilities---threats/physical-flaws-intels-root-of-trust-issue-mostly-mitigated/d/d-id/1337254?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple