STE WILLIAMS

WhatsApp image showing drug dealer’s fingerprints leads to arrest

A dealer had some Class A drugs to sell.

So he sent out an advertisement for ecstasy on WhatsApp. White, blue, yellow, red: they looked like candy in the photo, sealed in plastic, held out for display on his palm.

Smart drug dealer, right? Much to the chagrin of law enforcement, WhatsApp encrypts messages end-to-end. That means all messages: calls, photos, videos, file transfers and voice messages.

But the pill pusher didn’t consider that his message might end up on a seized phone in the hands of the police. Nor did he likely consider a certain piece of evidence captured in that photo: his fingers.

In what the BBC calls a first for police in Wales, the image of a fingerprint helped to identify the man and to bring down an extensive drug-selling ring that could turn out to be larger still as the investigation continues.

Dave Thomas, of South Wales Police’s scientific support unit, called the work “groundbreaking.” He said that the WhatsApp photo helped to secure 11 convictions and to bring down the drug ring’s supply chain.

The middle and bottom part of a couple of fingers were just about visible under the bag of tablets in the image. In a video interview filmed by the BBC, Thomas pointed to the photo to describe how the imaging work was done:

Through some work done by our imaging unit, we enhanced what we could see on here. We did some inverting of the marks, [and] we then looked at the scale, which was another problem for us. We didn’t have a scale. Eventually we came from that with a suspect – main file fingerprints – and we compared them directly against this part of the mark which we could then search and identify the individual, which resulted in a number of arrests and a number of jail terms.

Thomas told the BBC that police are now looking more closely at the photos found on seized phones, in case they too might lead to evidence.

Of course, there’s nothing new about using fingerprints to identify criminals, Thomas said. The only new wrinkle is doing it through social media-shared photos:

[Fingerprinting] is an old-fashioned technique, not new. Ultimately, beyond everything else, we took a phone and looked at everything on it – we knew it had a hand with drugs on it.

[Drug dealers] are using the technology not to get caught, and we need to keep up with advancements.

As it is, he said, 80% of people now own mobile phones. Besides telltale social media postings, many of those phones are used to purposefully record incidents the police may wind up investigating, be they fights or accidents. Then too, phones offer police evidence regarding their owners’ whereabouts via location data: evidence coming from cell towers that crooks often overlook.

Thomas:

We want to be in a position where there is a burglary at 20:30, we can scan evidence and by 20:45 be waiting at the offender’s front door and arrest them arriving home with the swag.

That will work through remote transmission – scanning evidence at the scene and sending it back quickly for a match.

It’s the future. We are not there yet but it could significantly enhance the ability of the local bobbies to arrest people very quickly.

The WhatsApp drug dealer turned out to be one member of a family business. The prints led to Elliott Morris, 28, of Redditch, Worcestershire, who’s been sentenced to 8.5 years for conspiracy to supply cannabis. His father, Darren, was sentenced to 27 months, and his mother, Dominique, received 12 months.

Police were initially tipped off when neighbors complained about a large number of visitors going to a separate address from that of the Morrises. In August 2017, police raided the address and found a large amount of cannabis, cash, and a “debt” list of people.

Five men were arrested, including one whose phone held WhatsApp and other social media messages about drug sales being sent out from the area. That led officers to Elliott Morris’s parents – Darren, 51, and Dominique, 44. They found that the family was running a “cannabis factory” at their home.

Also on the seized phone, they found the photo with the fingers.

Detective Inspector Dean Taylor told the BBC that even after staff at the scientific support unit increased the size and improved the clarity of the image, they still couldn’t find a match in a national fingerprint database. However, the BBC notes, when police fingerprint offenders, they take prints of the top part of their fingers, only occasionally getting the middle and bottom parts.

The elder Morrises’ prints didn’t match what officers found in the ecstasy photo. But they did match those of their son, Elliott, whom officers found in what Taylor described as a “rural log cabin hide-out.” That’s where they also found a second cannabis factory, as well as some very familiar looking ecstasy tablets.

Elliott was the ringleader of the operation, so he received a stiffer sentence. Police are continuing the investigation, which has also identified £20,000 ($28,673) worth of Bitcoin as drug-related profits.

Six other men who admitted taking part in the conspiracy, one of whom was the owner of the phone that held the WhatsApp message, received sentences of between 8 and 30 months each.

Elliott Morris’ girlfriend, Rosaleen Abdel-Saleem, was found not guilty of taking part in the conspiracy but was fined £350 ($501) for possessing ecstasy. A Birmingham man, Chazino Suban, was also found not guilty of the conspiracy but was fined £700 ($1003) for possessing cannabis and cocaine.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jq2lBFjGfVw/

5 simple tips for better computer security

Protecting your privacy and securing your home computers is easier than you might imagine. Better security isn’t just for big organizations or the uber-nerds – everyone, regardless of their computer literacy, can take simple steps to better secure their data and their personal devices. Small steps really can make a big impact.

If you’re not sure where to start, here are five tips that will go a long way to keeping you and your information safe.

1. Use unique passwords for every service you use

As tempting as it might be to reuse the same password across various websites (less to remember, less to type, you might be thinking), this is akin to using the same key for your front door, back door, car, garage, and everything else you want to keep a lock on.

As easy as it might make things for you, it makes things even easier for an attacker to break into all of your accounts. If a hacker manages to grab your password through breaching one site, they get the keys to your entire digital life. That’s why you really want to have a unique password on each and every one of the websites you log in to.

This might sound like a lot to wrangle – “I thought you said these would be easy!” – but this is where technology can really come to your aid. There are many tools available to you, for free, that will generate unique passwords for the websites you use and store those passwords for you so you don’t have to remember them. They’re called password managers, and we’ve written about several of them before.

Many of the password managers on the market will integrate with your browser, so you don’t even need to look up or copy and paste the password – they’ll automatically fill in the correct password for you.

Examples of password managers include 1Password and LastPass, or if you’re an Apple or Google device user you could also try the Apple iCloud Keychain or Google’s Password Vault. Whichever one you choose, the key thing is that it’s easy for you to use. A password manager that works for you is one that takes away the burden of creating (and remembering) unique passwords, so using those passwords becomes a piece of cake. Just make sure you have a super strong, super long password on your password manager!
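The core job a password manager does – minting a long random password for each site – can be illustrated in a few lines of Python. This is only a toy sketch using the standard `secrets` module (the site names are made up for the example), not a substitute for a real password manager:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password, the way a password manager would."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site, so one breached site can't unlock the rest.
# The site names here are hypothetical placeholders.
vault = {site: generate_password()
         for site in ("bank.example", "mail.example", "shop.example")}

for site, password in vault.items():
    print(f"{site}: {password}")
```

Because `secrets` draws on the operating system’s cryptographic random source, each site gets a password that can’t be guessed from any of the others – which is exactly why a breach at one site doesn’t cascade into the rest.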

2. Keep your software up to date

One of the main ways that bad guys can do damage to computers is by taking advantage of flaws in software. These flaws allow the criminals to make the software do things it normally wouldn’t, and often they’ll give an attacker a way into gaining control over the computer and the files on it. The people who make software know that attackers take advantage of these flaws though, so they often make updates and fixes to patch those flaws and keep the bad guys out.

That’s why it’s so important to update the software or apps that you use as soon as the updates are available: It gives you the best, most updated defenses against people who might want to break into your device or computer. You wouldn’t let a leaky roof keep dripping, would you?

3. Make backups of your files

So much of our lives are on our computers and phones now, from precious photos and videos of loved ones to crucial files and finances for work. For almost all of us, it would be devastating if suddenly we couldn’t access these files, or if these files were lost completely.

The easy solution here is to make sure you keep backups of your files, whether via a dedicated cloud backup service (like Carbonite), a cloud storage service (like iCloud or Dropbox), an external hard drive paired with backup software (like Time Machine) – or a mixture of all three!

The key thing is that you back up your files somewhere off the device where those files normally live, so if something happens to that device – you lose it, it breaks, or it gets infected with ransomware – copies of your files are still safe and sound elsewhere.

Getting a file backup service may take a few minutes to set up, but it gives you so much peace of mind should the worst happen.
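To make the “off the device” idea concrete, here’s a minimal Python sketch of a timestamped snapshot backup. The paths are hypothetical – in practice the destination would sit on an external drive or a synced cloud folder – and real backup tools add versioning, encryption and scheduling on top:

```python
import shutil
import time
from pathlib import Path

def backup(source: Path, destination_root: Path) -> Path:
    """Copy a folder into a new timestamped snapshot under destination_root."""
    snapshot = destination_root / time.strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(source, snapshot)  # refuses to overwrite an existing snapshot
    return snapshot

# Hypothetical usage - point destination_root at a drive that lives
# somewhere other than the machine being backed up:
# backup(Path.home() / "Documents", Path("/Volumes/ExternalDrive/Backups"))
```

Keeping each snapshot separate, rather than overwriting a single copy, also helps against ransomware: an older, clean snapshot survives even if the latest files are encrypted.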

4. Be mindful of what you share

A quiz on Facebook might seem innocent enough – or perhaps that’s what you might have thought before all the news about Cambridge Analytica came to light. Those seemingly fun quizzes usually require granting the quiz app broad access to your social profile – which houses data that could come in handy to someone who needs birthday or location details to impersonate you or break into your accounts.

Even if you’re not the type to do a quiz, public posts on your social profiles can give away a lot about you. One post on its own might not say much, but over time these posts can accumulate to paint a complete picture of you, your habits, your frequently-accessed locations, and other details that would be unsettling or possibly dangerous in the hands of someone with bad intentions.

The best way to avoid your information getting into the wrong hands is to be vigilant about what you share, and to remember that what you post online stays online forever (yes, even if you delete it). Err on the side of privacy and protect yourself first and foremost.

5. Use protective software to fight the nasty stuff

Sometimes legitimate websites get hijacked by malicious advertisements. Sometimes legitimate online services are attacked and their customers are affected. No matter how vigilant you are, a little extra defensive assistance can help keep nasty programs and ransomware at bay.

We’re a little biased here, but we think Sophos Home is pretty great. If any malicious program tries to install ransomware or spyware on your machine, it’ll stop it dead in its tracks. Sophos Home will also keep an eye out and stop anything that might try to disrupt your privacy – like programs that try to spy on you through your webcam, or steal your banking credentials as you type them in.

Sophos is offering 20% off Sophos Home Premium for all Naked Security readers. If you’d like to buy it, you can sign up here.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vvG2-Kq9llg/

Traditional firewalls fall short in protecting organizations, says survey

Even with a firewall in place, nearly a quarter of IT managers don’t know what’s going on with 70% of their network traffic.

That’s one of several key takeaways from a new survey, sponsored by Sophos, that asked IT managers in mid-sized organizations across the globe about how their firewall technology is working for them.

The survey covered IT managers from countries including the US, Canada, France, Germany, UK, Japan, India, South Africa and Australia. Respondents were from organizations ranging in size from 100 to 5,000 employees, in industries spanning several verticals, including technology, retail, manufacturing, professional services, utilities, education, and healthcare.

The survey responses reveal several “dirty secrets” of how traditional firewalls aren’t living up to their old promises, and how they fail to deliver the kind of visibility or responsiveness that organizations need to defend against modern threats.

Of course, visibility is a key component of security: you can’t control what you can’t monitor. So if a protective measure such as a firewall isn’t providing that network traffic visibility, IT managers find themselves hindered in monitoring and controlling threats, and lagging in mitigation and remediation response times.

When there’s an active threat on the network, lost time means more time for malicious actors or rogue apps to cause damage. Survey respondents said on average each infected computer on their network takes 3.3 hours to identify, isolate, and remediate, so that real cost in time and resources adds up very quickly.
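As a back-of-the-envelope illustration of how that 3.3-hour figure compounds, here’s a tiny Python calculation. Only the hours-per-infection average comes from the survey; the infection count and hourly staff rate below are hypothetical:

```python
HOURS_PER_INFECTION = 3.3  # survey average to identify, isolate and remediate

def remediation_cost(infections_per_month: int, hourly_rate: float) -> float:
    """Estimate the monthly staff cost of cleaning up infected machines."""
    return infections_per_month * HOURS_PER_INFECTION * hourly_rate

# e.g. 16 infections a month at a hypothetical $50/hour works out to
# roughly $2,640 in remediation labour alone
print(round(remediation_cost(16, 50.0), 2))
```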

More key findings: IT managers report that, on average, 45% of their network traffic is unidentifiable and uncontrollable. And some industries have more challenges gaining visibility into their network traffic than others – healthcare industry respondents cite 67% of their traffic on average is unidentifiable, for example.

This lack of visibility is understandably a concern for anyone responsible for keeping an organization and its data secure, as you can’t stop unauthorized apps that you don’t know are running. You also can’t confidently answer questions about regulatory compliance or even productivity if illegal or inappropriate applications or content exists quietly on your network, undetected.

No doubt that’s why 85% of survey respondents cited a lack of application visibility as a serious security concern for their organization.

Does this sound familiar? Is your firewall just a checked box in your network inventory? Does it give you real visibility into, and control over, what’s really happening on your network?

See how you compare – read the full results of the survey online: The Dirty Secrets of Network Firewalls.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/96JLPiguI0M/

We ‘could’ send troubled Watchkeeper drones to war, insists UK minister

Comment The British Army’s troubled Watchkeeper drones “could still be deployed on operations”, a defence minister has insisted.

Labour MP Kevan Jones, a member of Parliament’s Intelligence and Security Committee, asked the Ministry of Defence “what assessed capability gaps have been created as a result of the Army’s Watchkeeper programme failing to achieve its Full Operating Capability 1 milestone?” following coverage by The Register and other news outlets of the drone missing a key certification.

The Thales-built Watchkeeper is supposed to be a battlefield drone used for intelligence, surveillance, target acquisition and reconnaissance, or ISTAR in the military lexicon. Instead the project has been delayed by years, is £400m over its initial planned budget of £800m and still hasn’t done any actual operational (warzone) flying, other than a token deployment for a few days in Afghanistan in the early part of this decade.

Among other problems, four of the remotely piloted aerial vehicles have crashed in the past few years, including one that was directly traced by an internal MoD inquiry to “flawed Vehicle Management System Computer software logic”.

Responding to Jones’s Parliamentary question, defence procurement minister Guto Bebb said: “Watchkeeper could still be deployed on operations should the operational imperative warrant it. As such, no capability gaps have been created as a result of the Army’s Watchkeeper programme failing to achieve its Full Operating Capability 1 milestone.”

This is the Parliamentary equivalent of Monty Python’s Black Knight proclaiming “‘Tis but a scratch!”

According to other responses from Bebb to related questions asked by Jones: “The Equipment Standard 2 modification, which is currently being released, will update this [VMSC software flaw]. However, procedural mitigations have already been put in place to reduce the likelihood of re-occurrence.”

The Watchkeeper fleet was initially supposed to be ready for operational tasks by April 2013. Having missed its full operating capability milestone early this year, the programme is now in its fifth year of delays.

Jones also asked other questions of the MoD, which confirmed that only four Watchkeepers have crashed – though the ministry’s confirmation does leave a question hanging over five “missing” aircraft. The MoD ordered 54 Watchkeepers, of which 45 have been delivered. Four have crashed, as the minister told Parliament.

The formal Service Inquiries into last year’s two Watchkeeper crashes are ongoing. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/17/watchkeeper_drone_could_go_to_war/

New Malware Adds RAT to a Persistent Loader

A newly discovered variant of a long-known malware loader adds the ability to control the victim from afar.

VBScript has long been an attack vector for bringing malicious software to an infected machine. But what if it could do more? What if VBScript could open a door for a PHP application to take control of a computer, making it part of a botnet? That’s precisely the scenario in a newly described campaign called ARS VBS Loader, a variant of a popular downloader called SafeLoader VBS.

The new ARS VBS Loader, described by researchers at Flashpoint, downloads malware and provides remote-control access to a botnet controller, making it both a malware loader and a RAT, or remote access trojan. Paul Burbage, senior malware researcher at Flashpoint, says that he first noticed the new loader variant being sold on Russian malware sites in December 2017. It was, he says, being sold as a FUD ASPC (VBScript) loader — with “FUD” in this case meaning “fully undetectable.”

Burbage says that there are two characteristics of ARS VBS that make it highly unusual. The first is persistence; the second is the remote access capability.

“The persistence mechanism for this loader is pretty unique,” Burbage says. “It reports the statistics on its success back to the command and control server and is able to download additional malware from the server.” As a result, he says that the threat actors can switch things up, changing attacks and profiles on the fly once the infection is in place.

One of the things that the persistent loader can do is receive additional commands. That’s unusual for a loader because, Burbage says, “They tend not to have any command and control within the script.” He says ARS VBS was authored with the intent that it act as the RAT, which combines with the persistence mechanism to make it especially dangerous.

Asked whether the botnet to which ARS VBS seems to be recruiting systems is dangerous, Burbage says that it’s far from the worst botnet he’s seen. “I’m not sure how effective that would be in the wild because it utilizes a PHP POST Flood,” he says, adding, “Most web sites easily defeat those.”

So far, this new loader variant is being spread by relatively unsophisticated means. “Most of the initial infection records we see are massive shotgun spam campaigns that aren’t carefully targeted,” Burbage says, noting that they succeed because users are still clicking on attachments coming from unknown senders and VBScript payloads are still getting past anti-malware security systems. “It’s really hard to tell the difference between legitimate VBScript files that network admins might use for legitimate admin duties, and malware,” Burbage says.

“VBScript is baked in, or supported out of the box, with every Windows system,” he explains. “There might be a way to turn it off within an organization, but you’d lose the ability to perform authorized tasks.”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/attacks-breaches/new-malware-adds-rat-to-a-persistent-loader/d/d-id/1331559?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why We Need Privacy Solutions That Scale Across Borders

New privacy solutions are becoming more scalable, smarter, and better able to address compliance across industries and geographies.

With data the lifeblood of virtually every company in every industry, ensuring privacy has evolved from the responsibility of the legal department to a fundamental corporate issue. But adopting a framework for how we think about privacy and achieve compliance as an organization — including every interaction with customers, partners, and employees — is a continuous and ongoing process that requires businesses to repeat and extend their efforts.

In a world where tasks are increasingly becoming automated — performed more efficiently and without the intervention of humans — the idea of throwing more bodies at the “privacy problem” seems old-fashioned and expensive. Rather than taking this outdated approach, the market is looking more closely at ways to achieve scale in privacy and develop optimal processes for achieving compliance. But why do we really need privacy solutions that address compliance across borders?

Scaling Privacy at All Levels
Companies increasingly are harnessing data and putting it to use to drive business value at all levels of the organization. This ranges from marketers slicing and dicing customer data for greater insights and more-tailored campaigns, to developers moving data between different IT environments when building new products, to sales teams working with customers across continents. The move to data-intensive and data-centric companies introduces new privacy issues that must be considered at all levels of the organization, starting with business application owners.

When rolling out a new product or service, application owners need to first assess what kind of data they will collect. Is the data personally identifiable? Is it considered high-risk by any of the regulations to which the organization is subject? Will you need consent if you decide to use the data to better inform your next campaign or product build-out? Where do you plan on safely storing the data and who else in your organization will have access to it — a colleague in another continent who falls under a different set of regulations?

With the dynamic nature of data, these privacy-related questions are never-ending and the privacy architecture is only as strong as its weakest link. To achieve economies of scale and business processes that don’t become bogged down by new government regulations, scalable privacy compliance solutions are emerging for easier deployment across borders.

Smarter Compliance
While scaling privacy is a matter of establishing processes and deploying internal solutions to achieve compliance, it’s also a matter of extending those processes in order to demonstrate compliance with the multitude of international regulatory rules. Nation-states adhere to their own sets of privacy regulations, with varying definitions of citizen data, how it should be protected, and the manner in which data can flow through and be accessed via domestic servers. Understandably, this makes business operations for global companies an intricate and complex process.

Regulators today, however, ranging from those in the US to Europe to Asia, increasingly recognize that multinational organizations doing business on a global basis can’t realistically meet data protection requirements on a siloed basis, but rather require scalable, interoperable solutions. We are already seeing moves made in the cloud industry with the EU Cloud Code of Conduct — with initial participants including Alibaba, Google, and IBM — and this year, we’re likely to see an increase in codes of conduct developing in specific industries or regions that recognize companies for their cross-border compliance efforts.

Whether as employees or consumers, we all stand to win with better and smarter processes to ensure data privacy compliance. Solutions are emerging that can help businesses map and monitor the flow of sensitive information through networks, data centers, and Web-based software, and provide response platforms that help respond to data breaches. Just as the security industry evolved from a white-hat, hacker-based practice 15 years ago to a multibillion-dollar market brimming with hyper-advanced technology, the privacy industry is evolving along the same trajectory with increasingly sophisticated technology solutions and processes. In time, those processes will become as commonplace as a security firewall.


As CEO of TrustArc, formerly known as TRUSTe, Chris has led the company through significant growth and transformation into a leading global privacy compliance and risk management company. Before joining TrustArc, Chris spent over a decade building online trust, most recently …

Article source: https://www.darkreading.com/endpoint/privacy/why-we-need-privacy-solutions-that-scale-across-borders-/a/d-id/1331505?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

8 Ways Hackers Monetize Stolen Data

Hackers are craftier than ever, pilfering PII piecemeal so bad actors can combine data to set up schemes to defraud medical practices, steal military secrets and hijack R&D product information.

Image Source: Ginger_Cat via Shutterstock


We are long past the era of the 14-year-old hacker trying to spoof a corporate or defense network for the fun of it, just because they can. While that still happens, it’s clear that hacking has become big business.

From China allegedly stealing billions of dollars annually in intellectual property to ransomware attacks estimated to top $5 billion in 2017, data breaches and the resulting cybercrime are keeping CISOs and rank-and-file security managers on their toes.

Security teams need to be aware of the full range of what hackers do with this stolen data. The crimes range from IP theft to filing fraudulent tax refund claims with the IRS to setting up a phony medical practice to steal money from Medicare and Medicaid patients and providers.

“Hackers will often start by selling data on military or government accounts,” says Mark Laliberte, an information security analyst at WatchGuard Technologies. “People are also bad at choosing passwords for individual services and often reuse passwords, which lets hackers try those passwords on the other websites their victims use.”

Paul Calatayud, chief security officer, Americas, at Palo Alto Networks, says medical data has become especially vulnerable because many hospitals and medical practices use the same cloud-based ERP or human resources systems and hackers can piece together information and eventually enter a billing or patient information system.

For this slideshow, we explain how hackers monetize the stolen data. The following list is based on phone interviews with Laliberte and Calatayud.

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/attacks-breaches/8-ways-hackers-monetize-stolen-data-----------/d/d-id/1331560?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Could an Intel chip flaw put your whole computer at risk?

Remember the Chernobyl virus, also known as “CIH” after the initials of its author, a certain Mr Chen Ing Hau of Taiwan?

CIH was the first virus that succeeded in directly and deliberately damaging your computer hardware by purposefully reprogramming your BIOS chip with garbage machine instructions.

The BIOS is the chip that contains the low-level software that is the very first thing to run when your computer fires up, so trashing it stopped your PC from loading up at all.

Ironically, the CIH virus didn’t have to find and exploit any security holes – there was generally no formal protection against writing to the BIOS back in those days.

You didn’t need to hold down a special hardware switch, enter a user-selectable password, or update with a cryptographically signed blob of firmware code.

The only protection was a sort of “security through obscurity” system that required a specific but publicly documented sequence of memory accesses and timings to activate BIOS writes.

This was a precaution intended to prevent programming accidents, but not to keep out crooks.

Well, the spectre of CIH is back in the news following a recent security advisory, numbered INTEL-SA-00087, from chip maker Intel.

In the sort of awful jargon-splattered non-English that characterises so many technical documents these days, Intel writes:

Configuration of SPI Flash in platforms based on multiple Intel CPUs allows a local attacker to alter the behavior of the SPI Flash, potentially leading to a Denial of Service. This issue has been root-caused, and the mitigation has been validated and is available.

In plain English, we think this means the following:

Due to a low-level programming bug in your computer’s CPU, the memory chips relied upon during startup could be sneakily and unexpectedly filled with garbage.

This would almost certainly stop your computer working properly, and perhaps even stop it booting up at all.

Intel claims it has figured out what actually caused the bug in the first place, which means that it has not only come up with a fix, but is also confident that the fix deals with the problem properly, rather than just being a bodge that happens to work for now.

Good news and bad news

The good news is that Intel itself found and researched this problem, and there is no evidence that any crooks have yet figured it out.

In other words, it’s not a so-called zero-day, where crooks are already exploiting the bug in advance of anyone else knowing about it, and therefore in advance of any fixes being available.

More good news is that, according to Intel, this vulnerability – which involves messing with the startup flash memory in your computer – is classed as a DoS, short for denial of service.

Generally speaking, DoSes don’t allow crooks to break into your network, implant malware, snoop on your activities or modify data.

So, DoSes usually aren’t as risky as RCEs (remote code execution holes) or EoPs (elevation of privilege bugs), where crooks may be able to wander in and then poke around at will.

But there’s bad news here, too.

Most worrying is that Intel doesn’t make it clear how serious the DoS might be if this bug is ever exploited by cybercrooks.

If your SPI flash is unexpectedly modified and your computer won’t boot up normally, what then?

Back in 1998 and 1999, many motherboards damaged by the CIH virus couldn’t be fixed at all, because they had no emergency provision for restoring a minimal-but-working BIOS to permit a patch to be installed.

On a significant proportion of motherboards, the affected chips couldn’t be removed for reprogramming, couldn’t be reprogrammed in place, and couldn’t be forced to revert to a “last known good” configuration to make them updatable again – in short, many afflicted motherboards were toast.

Intel isn’t saying what proportion of modern devices might be in the same boat.

Also worrying is the fact that Lenovo, which sells a vast array of different computers equipped with vulnerable Intel chips, has gone one step further than calling this a DoS, using more worrying words than Intel’s:

An attacker could manipulate the vulnerability to prevent a system from booting, to cause it to operate in an unusual way, or execute arbitrary code during the system boot sequence.

In other words, Lenovo isn’t ruling out the possibility of a crook taking over the bootup process in a systematic way, rather than just trashing the flash to stop your computer working properly.

(There’s a big difference between a computer “that doesn’t work properly” and one that “definitely works improperly”!)

What to do?

Unfortunately, updating against bugs like this is a bit like fixing holes in Android – the owner of the technology not only has to identify the problem and figure out how to patch it, but also to convince a sprawling ecosystem of manufacturers, integrators, suppliers, vendors and so on to push out the actual fixes in their chosen ways.

So watch for updates from your device vendor or supplier and apply any patches or system updates as soon as you can.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mG5TPFepiBY/

“Privacy is not for sale,” says Telegram founder

Following the April 2017 suicide bombing on the St. Petersburg metro that killed 16 people, Russia threatened to block Telegram: the encrypted messaging app used to carry out the attack.

The FSB, the successor to the KGB, said in June that the app gave terrorists “the opportunity to create secret chat rooms with a high degree of encryption.”

At the time, Telegram’s founder, Pavel Durov, had resisted handing over the information the government had requested in order to put the app on its official list of information distributors. Durov said at the time that Russian authorities had also asked for the ability to decrypt user messages.

Durov’s argument: What would that achieve, besides prompting Telegram users to move to another app?

If you want to defeat terrorism by blocking stuff, you’ll have to block the internet.

Now, Russia’s made good on its threats. On Friday, the New York Times reported that Roskomnadzor – the Russian communications and technology watchdog – asked the court for the authority to block the app, effective immediately.

It took the court only 18 minutes to approve the request.

Also on Friday, Durov continued to defy the Russian government’s demands for an encryption key. In an online statement, he said that Telegram can’t be bullied by a ban. “Privacy is not for sale,” he wrote.

The full statement:

The power that local governments have over IT corporations is based on money. At any given moment, a government can crash their stocks by threatening to block revenue streams from its markets and thus force these companies to do strange things (remember how last year Apple moved iCloud servers to China).

At Telegram, we have the luxury of not caring about revenue streams or ad sales. Privacy is not for sale, and human rights should not be compromised out of fear or greed.

Durov admitted as far back as 2015 that the company was aware of terrorists using Telegram. TechCrunch quoted his remarks from its Disrupt San Francisco conference:

Yes, there is a war going on in the Middle East. It’s a series of tragic events but ultimately the ISIS will always find a way to communicate within themselves. And if any means of communication turns out to be not secure for them, then they switch to another one. So I don’t think we’re actually taking part in these activities. I don’t think we should feel guilty about this. I still think we’re doing the right thing – protecting our users’ privacy.

Telegram says the app has 200 million users and is widely used by lawyers, reporters, government officials, and others.

Durov is himself a Russian. He fled the country in 2014 after losing control of the Russian version of Facebook – the social network Vkontakte – which he also created.

The New York Times reports that this ban “puts the Kremlin in a slightly awkward position,” given that Telegram is “widely used by government agencies,” including by staffers in President Vladimir Putin’s press office.

The block began to take effect on Monday. According to Reuters, Telegram was still functioning as normal as of mid-afternoon, but the company’s website had been blocked by two of Russia’s biggest service providers, MTS and Megafon.

Radio Free Europe/Radio Liberty reports that Durov condemned the ban on Vkontakte, posting that “the quality of life of 15 million Russians will worsen” as a result. With a clenched-fist emoji, he also wrote:

We consider the decision to block [Telegram] unconstitutional and will continue to stand up for the right of Russians to confidential correspondence.

In Moscow, Telegram supporters threw paper airplanes at the headquarters of the FSB on Lubyanka Square. Twelve protesters were reportedly detained.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JcfNec5tCZw/

Gmail’s new ‘Confidential Mode’ won’t be completely private

Have you ever wished it were possible to delete an email from a recipient’s inbox days, weeks or months after it was sent?

If so and you’re a Gmail or G Suite user, it looks as if Google might be about to enable this kind of ‘self-destructing’ email feature on its platform.

We only have screenshots from an email sent to G Suite admins last week to go on, but what seems to be in the offing is the ability to set an expiration date for an email in a similar fashion to that already offered by specialist rivals such as ProtonMail.

“Confidential mode” time limits will be one week, one month or a chosen number of years from the moment it is sent, after which the email will disappear from both the recipient’s inbox and the sender’s outbox.

In addition, “options to forward, download or copy this email’s contents and attachments will be disabled” during the message’s lifetime, as will the ability to print it.

Senders will also be able to make recipients authenticate themselves by entering a one-time code sent from Google to a phone number.

Instead of sending a copy of the message itself from one user to another, Confidential Mode will most likely host it on Google’s own servers, simply sending the recipient a link through which to view it.

That way, Google controls access to it and can delete it after the period set by the sender (ditto controlling access through authentication).

This design also makes it possible for a user on any email system to view the message without having to use Gmail (though it’s possible a Gmail account will be necessary at both ends for authenticated access to work).
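Google hasn’t published how Confidential Mode works internally, but the link-plus-expiry design described above can be sketched as a toy message store. All the names and the flow here are assumptions for illustration only: the server keeps the message body, hands back an unguessable link token, and enforces the sender’s chosen lifetime centrally.

```python
import secrets
import time


class ConfidentialStore:
    """Toy model of a server-hosted expiring-message design.

    Purely illustrative: Google has not documented Confidential
    Mode's actual internals, so these names are hypothetical.
    """

    def __init__(self):
        self._messages = {}  # token -> (body, expiry timestamp)

    def send(self, body, ttl_seconds):
        # The server stores the body and returns an unguessable
        # token; only the token (as a link) travels in the email.
        token = secrets.token_urlsafe(16)
        self._messages[token] = (body, time.time() + ttl_seconds)
        return token

    def read(self, token):
        # Expiry is enforced server-side, so the sender's chosen
        # lifetime applies no matter who holds the link.
        record = self._messages.get(token)
        if record is None:
            return None
        body, expires_at = record
        if time.time() >= expires_at:
            del self._messages[token]  # message has self-destructed
            return None
        return body


store = ConfidentialStore()
link = store.send("quarterly figures attached", ttl_seconds=60)
print(store.read(link))  # readable while the message is still live
```

Because the recipient only ever fetches the message through the server, revoking access (or expiring it) is just a matter of the server refusing to serve that token again, which is also why the recipient doesn’t need to be a Gmail user.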

The concept of self-destructing email sounds like something out of Mission: Impossible, but it’s worth mentioning its limitations.

The most obvious is that the sender has to decide in advance that the email is to be confidential. This can’t be applied retrospectively to any email.

A second is that there is nothing to stop the recipient from taking a screengrab of the email’s contents before it expires.

Moreover, while recipients won’t see the contents of a destroyed email, they might still be able to see that one was received and later deleted by the sender.

Confidential Mode sounds like a non-starter in industries required to keep emails for regulatory reasons but presumably G Suite will offer a mechanism to archive self-destructing emails sent this way.

This hints at what might be Confidential Mode’s biggest weakness for some people: just because the emails are deleted by Google from inboxes and outboxes doesn’t mean they don’t hypothetically exist somewhere.

Remember, from what we’ve seen so far, emails sent this way are not secured using end-to-end encryption in which keys are known only to the sender and receiver. That’s why Google calls it “confidential” rather than private.
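The distinction matters because of who holds the keys. A toy illustration (a one-time-pad XOR, not a production cipher, and nothing to do with Google’s actual implementation): if only the endpoints know the key, the server can store, expire, or delete the ciphertext, but it can never read the contents. With Confidential Mode, by contrast, Google hosts the message itself.

```python
import secrets


def xor_bytes(data, key):
    # One-time-pad XOR, used here only to illustrate key
    # possession; real end-to-end systems use proper ciphers.
    return bytes(a ^ b for a, b in zip(data, key))


# End-to-end: sender and recipient share a key the server never sees.
message = b"meet at noon"
shared_key = secrets.token_bytes(len(message))  # endpoints only

ciphertext = xor_bytes(message, shared_key)  # what the server stores

# The server can delete or expire its copy, but can't read it...
server_view = ciphertext

# ...while the recipient, holding the key, recovers the plaintext.
recovered = xor_bytes(ciphertext, shared_key)
print(recovered == message)
```

In the Confidential Mode design described so far, Google plays the role of a server that holds the readable message rather than an opaque ciphertext, which is exactly why “confidential” is the accurate word and “private” is not.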

All the same, its arrival could still be a big moment for an idea that has been lurking on the fringes for some years.

As already mentioned, ProtonMail (which Cambridge Analytica’s former CEO Alexander Nix claimed his company used to keep emails secret) offers self-destructing email complete with end-to-end encryption when emails are sent between account holders.

In the mobile space, a self-destruction app called Confide reportedly became popular among Washington politicos keen to cover their tracks after the election of Donald Trump in 2016. And Gmail users can already install Dmail as a Chrome extension to do a job very similar to what is being proposed for Gmail and G Suite.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3YrV1-R-HeI/