
Why Privacy Is Hard Work

For Data Privacy Day, let’s commit to a culture of privacy by design, nurtured by a knowledgeable team that can execute an effective operational compliance program.

This past May, you heard an awful lot about the GDPR, which is short for the EU’s General Data Protection Regulation. For one brief, shining moment, the GDPR was searched more often than Beyoncé! But now that Data Privacy Day (January 28) has rolled around, Google Trends assures me, Beyoncé is back to being about 15 times more interesting.

And with Cardi B, it’s 30 to one. That’s so 2018.

Inside organizations around the world, however, interest is still very much piqued. The GDPR has moved from a looming deadline to a hovering reality, and it’s not going away anytime soon.

This is a good thing for the people who’ve been hired to focus on compliance. Along with the GDPR and the flood of privacy-notice emails it prompted, we have seen an explosion in the privacy profession, with the ranks here at the International Association of Privacy Professionals swelling to 47,000 members strong. Not only are they data protection officers, a role the GDPR mandates for many companies, but they are also privacy analysts, engineers, auditors, and much more.

Many organizations have stood up teams to assure compliance with not only the GDPR but also the many privacy laws in the US that are popping up, including California’s CCPA (California Consumer Privacy Act) of 2018. In January 2020, you can expect a little period where CCPA beats out Jay-Z, who isn’t quite as popular as either his wife or Cardi, I’d wager.

These privacy laws, and those bubbling up around the globe as we speak, are no trifling matter.

Our annual research with EY tells us that just 44% of organizations considered themselves “fully compliant” with the GDPR when it went live in May, while 20% admitted they’d basically never be fully compliant. And that’s despite the average organization hiring three full-time employees and demanding time from another 2.5 staffers just to handle the GDPR. Add to this the $3 million in average spending to adjust products and services and to buy legal services and technology.

In 2018, the Global 500 spent an estimated $2.75 billion on GDPR compliance.

What, then, will be the impact of a U.S. federal privacy law that encompasses Internet firms and everyone else handling personal data?

In some ways, it may not be as impactful. The United States, despite some misconceptions, actually has a longer and deeper history of operationalizing privacy. It’s something academics have labeled “Privacy on the Ground,” and it relates to the compliance culture that popped up around HIPAA, the Fair Credit Reporting Act, the Children’s Online Privacy Protection Act, data breach response laws, and others that were relatively early privacy laws with regulatory teeth.

In other ways, however, a US omnibus law could be very disruptive, potentially making industries like data brokers and ad-tech retool or rethink their business models. Data-driven companies in tech, finance and healthcare may find that there is a new normal and increased risk when it comes to gathering, using, and sharing data. Even small businesses will likely find that they need to start thinking carefully about privacy.

It’s difficult to say exactly what the repercussions will be.

One thing we can say with relative certainty, however, is that managing privacy well is impossible without the right people. There is no “privacy tech” that can make an organization compliant overnight, nor any “policy” that can be put in place to solve the problem. Rather, organizations need widespread awareness of how personal data should be handled; a culture of privacy by design; and a team of privacy professionals who know both the law and how to execute an operational compliance program.

These things aren’t easy. But, for Data Privacy Day, perhaps organizations can commit to getting the right people in place and getting started on the tough job of privacy. 


As president and CEO of the International Association of Privacy Professionals (IAPP), J. Trevor Hughes leads the world’s largest association of privacy professionals, which promotes, defines and supports the privacy profession globally.  Trevor is widely recognized as a … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/why-privacy-is-hard-work/a/d-id/1333741?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Law Enforcement Shuts Down Massive Marketplace for Compromised Servers

At its peak, xDedic listed over 70,000 owned servers that buyers could purchase for prices starting as low as $6 each.

US law enforcement authorities, in collaboration with their counterparts in Belgium and Ukraine and with Europol, have taken down xDedic, a Russian-language website notorious for selling stolen identity data and access to tens of thousands of compromised servers.

In a statement, the Justice Department described the site as facilitating more than $68 million in fraud over the past several years. Its victims have spanned the globe and include organizations across numerous sectors, among them accounting and law firms; pension funds; local, state and federal government entities; hospitals; and emergency services providers.

“The xDedic Marketplace operated across a widely distributed network and utilized bitcoin in order to hide the locations of its underlying servers and the identities of its administrators, buyers, and sellers,” the statement read. The site allowed buyers to search for stolen data and compromised servers by geography, price, operating system, and a variety of other criteria.

Orders to seize xDedic’s domain were executed last week, effectively shutting down the site. Its Web page has been replaced with a splash screen announcing the FBI seizure pursuant to a civil forfeiture warrant from the US District Court for the Middle District of Florida.

The Justice Department statement Monday described the FBI and the criminal enforcement unit of the IRS as leading the US investigation with help from other federal agencies, including the Department of Homeland Security. A joint investigative team established in January 2018 led the European side of the investigation; it comprises members of the offices of the Federal Prosecutor and the Investigating Judge of Belgium, as well as the Prosecutor General of Ukraine.

The xDedic takedown is significant because of the scope of the operation. Security researchers who have been tracking the website for years have previously described it as one of the largest underground marketplaces, especially for hacked servers.

Massive Operation
In a June 2016 report, researchers from Kaspersky Lab estimated that buyers could purchase access to as many as 70,000 hacked servers from 173 countries around the world on xDedic for extremely low prices. The servers were available from a total of 416 unique sellers who were using xDedic as a sales platform.

At the time, prices for access to some servers — like those belonging to government entities in the European Union — started at just $6 per server. For that price, a buyer would get access to all data on the compromised server and the ability to use the server to launch further attacks against the victim organization, Kaspersky Lab had noted.

“Interestingly, the developers of xDedic are not selling anything themselves – instead, they have created a marketplace where a network of affiliates can sell access to compromised servers,” the security vendor said in its report. As part of its services, xDedic offered sellers and buyers live technical support, tools for patching hacked servers so the systems would allow multiple remote sessions, and tools for gathering system information.

The final xDedic lesson is that network owners must understand their Web-facing properties, including whether RDP services are enabled, says Kurt Baumgartner, principal security researcher at Kaspersky Lab’s Global Research and Analysis Team. They need to understand and maintain more complex but practical authentication schemes for these services, he says. “As we move further into the mesh of IoT and its accompanying default passwords, this lesson must be reinforced,” Baumgartner added.
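Baumgartner’s advice about knowing your web-facing properties is straightforward to act on. As a rough illustration (not something from the Kaspersky Lab report), the Python sketch below checks whether hosts answer on the default RDP port; the addresses are placeholders, and you should only scan systems you own or are authorized to test.

```python
# Minimal sketch: check whether hosts expose the default RDP port (3389).
# The host list is a placeholder; scan only systems you are authorized to test.
import socket

HOSTS = ["203.0.113.10", "203.0.113.11"]  # example/documentation addresses
RDP_PORT = 3389

def rdp_port_open(host, port=RDP_PORT, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in HOSTS:
        status = "EXPOSED" if rdp_port_open(host) else "closed/filtered"
        print(f"{host}: RDP port {RDP_PORT} {status}")
```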

US law enforcement authorities have led or participated in numerous similar takedowns of Dark Web marketplaces over the years. The two most significant were the efforts that resulted in the 2017 shutdown of AlphaBay and Hansa Market. Both sold a massive array of stolen and counterfeit goods that included not just hacking tools but other illegal products, including guns, toxic chemicals, and heroin.

The law enforcement actions have not significantly slowed down cybercrime activity, but they are reshaping the manner in which illicit hacking tools and other products are being sold and purchased online.

Last year, Digital Shadows noted an increase in the use by cybercriminals of smaller, decentralized markets and messaging services like Telegram to conduct transactions following the AlphaBay and Hansa Market takedown. With fear and suspicion rampant, cybercriminals are increasingly eschewing larger marketplaces for smaller invite-only groups and services, Digital Shadows noted in its report.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/us-law-enforcement-shuts-down-massive-marketplace-for-compromised-servers/d/d-id/1333744?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Japan Authorizes IoT Hacking

A new campaign will see government employees hacking into personal IoT devices to identify those at highest security risk.

A new law in Japan allows the National Institute of Information and Communications Technology (NICT) to hack into citizens’ personal IoT equipment as part of a survey of vulnerable devices. The survey is part of an effort to strengthen Japan’s network of Internet of Things devices ahead of the 2020 Tokyo Olympic Games.

The survey will begin in February with a trial run of 200 million Web cams and modems. NICT employees will attempt to log into the devices using default account names and passwords, and when they find a vulnerable device, they will alert the ISP and local authorities so the device owner can be contacted and given security recommendations.
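The article doesn’t describe NICT’s tooling, but the same kind of check can be run against your own equipment. Here is a minimal, heavily hedged sketch: it tries a short list of factory-default credentials against a device admin page that is assumed to use HTTP Basic authentication. The address and credential list are illustrative, many devices use form-based logins instead, and you should only probe devices you own.

```python
# Minimal sketch: test your OWN device's admin page against a short list of
# factory-default credentials. The device URL and credential pairs are
# illustrative assumptions; many devices use form-based logins instead of
# HTTP Basic auth.
import requests

DEVICE_URL = "http://192.168.1.1/"  # assumed admin interface of your own router
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
]

def accepts_default_credentials(url, credentials, timeout=5):
    """Return the first default (user, password) pair the device accepts, else None."""
    for user, password in credentials:
        try:
            response = requests.get(url, auth=(user, password), timeout=timeout)
        except requests.RequestException:
            return None  # device unreachable; stop probing
        if response.status_code == 200:
            return (user, password)
    return None

if __name__ == "__main__":
    hit = accepts_default_credentials(DEVICE_URL, DEFAULT_CREDENTIALS)
    if hit:
        print(f"Device accepted default credentials {hit}; change them now.")
    else:
        print("No default credentials from the list were accepted.")
```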

While authorities have logged into IoT devices found to have been recruited into botnets and involved in offensive activities, this is the first time a national government has authorized such tactics in a prophylactic effort.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/japan-authorizes-iot-hacking/d/d-id/1333745?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

YouTube subscribers getting spammed by celebrity imposters

Fans of high-profile YouTubers are getting bombarded with scams made to look like the messages were sent by the video stars themselves, the BBC reports.

US YouTube personality Philip DeFranco addressed the issue in a video posted to his channel on Wednesday.

If you’ve gotten a message from me, or any other creator on YouTube, that looks something like this…

…he said, posting a sample of one of the scams, which thanked his fans for commenting on his videos and told them that he’s selecting…

…random subscriber from my subscriber list for gift and you have just won it!

DeFranco continued, putting up visuals of similar messages about a “susprise gift” that had appeared on subscriber lists for YouTubers James Charles, Lewis Hilsenteger of Unbox Therapy, Jeffree Star, and Bhad Bhabie, among what the BBC says were many others.

Each of the messages contained a link.

DeFranco said most people would recognize that it was an “obvious scam.” Neither he nor the other creators the messages claim to come from had in fact sent the messages, he said.

Following DeFranco’s video warning about the issue, YouTube responded, thanking DeFranco by Tweet and saying that the company is working on a solution.

While it figures out how to stamp out scammers who impersonate celebs, @TeamYouTube said that subscribers can protect themselves by blocking any accounts that are spamming them.

The latest spin on salvation-via-skincare

There’s nothing new about social media-delivered, fake-celebrity-encrusted flimflam. In fact, Facebook last week got lawsuited into creating a scam ads reporting tool, and donating £3m to a consumer advocate group, by UK financial celeb Martin Lewis.

Lewis’s name and face had been slathered on all sorts of financial scams that he’d never endorse. He wound up dropping the lawsuit he brought against Facebook over the frauds: to Facebook’s credit, it responded without a court order.

Back in 2017, it was all about skincare. We were being assured that former First Daughter Chelsea Clinton had become the richest Clinton, all because of her skincare line (nope). Star Pauley Perrette purportedly left the crime show “NCIS” to pursue a skin care line (no, she did not). Princess Kate Middleton planned to take a break from the royal family to campaign for – what else? – her “breakthrough” skincare line (a royal “nosiree” to that).

Skin, skin, skin, skin, skin: you’d think that the A-list stars could have slid their way around town like eels with all the gloop purportedly being glopped on by Jennifer Aniston, Angelina Jolie, Kim Kardashian, Melania Trump, Christie Brinkley, and Mark Zuckerberg’s wife Priscilla Chan, not to mention all the non-celebrities who say they never used this stuff and never wrote the endorsements credited to them.

It’s like playing whack-a-mole

The problem with skinning these skincare weasels and/or their “Susprise gift” kin is that they breed like bunnies. The Federal Trade Commission (FTC) tried to crack down on the skincare fraudsters in 2015, getting a court to issue a temporary restraining order against a total of seven individuals and 15 companies that were selling Auravie, Dellure, LéOR Skincare, and Miracle Face Kit branded products by tricking consumers into handing over their payment card details, charging them the full price of the product, and then surreptitiously enrolling them in a monthly program that charged about $85 per month.

Our advice at the time of the skintastic scams still holds: don’t blindly trust a company with your credit card details – if in doubt, don’t give it out. Consider using a prepaid credit card if they’re permitted in your country (and, lo and behold, the fake skincare sellers refused to accept them) so that the cash on your card is capped. Keep an eye on your statements or sign up for automated transaction reports, for example via SMS, and dispute any fraudulent transactions as soon as you can.

With regards to that link in the fake gift giveaways from YouTube stars, nobody should go clicking on it, not even if it utilizes sonic vibration massage to refresh and restore a youthful look to the skin.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_O-i1_kXpBA/

Even Microsoft can’t escape ‘reply all’ email storms

Of all the calamities that befall email users, few are more dreaded than the ‘reply all’ storm.

Ask the 11,543 Microsoft employees who reportedly found themselves experiencing the full force of a phenomenon known to science as the ‘cascade effect’.

It seems to have started innocently enough when someone made an unspecified change to Microsoft’s GitHub account, causing an email to be sent to the company’s entire base of registered users of a service it bought last summer for $7.5 billion.

But then, inevitably, a small number of recipients attempted to remove themselves from the thread by hitting reply all.

Doing this has two main effects. First, everyone on the list receives a copy, and if others unwisely respond with a reply all of their own, the message volume grows exponentially, the sort of cascade that can bring an email system to its knees as a few thousand emails multiply into millions.

The second is that everyone on the list receives a copy of every message, complete with the sort of sarcastic comments that might embarrass the sender when they realise how many people just read it.
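A back-of-the-envelope calculation shows how quickly the first effect escalates. The list size below matches the Bedlam DL3 figures mentioned later in this piece; the number of people assumed to hit reply all is an illustrative guess.

```python
# Back-of-the-envelope sketch: how a few reply-alls become millions of emails.
# `recipients` matches the Bedlam DL3 anecdote below; the number of people who
# hit "reply all" at least once is an illustrative assumption.
recipients = 13_000        # people on the distribution list
reply_all_senders = 1_150  # assumed number who hit "reply all"

# Every reply-all is delivered to every recipient on the list.
emails_delivered = reply_all_senders * recipients
print(f"{reply_all_senders} reply-alls x {recipients} recipients "
      f"= {emails_delivered:,} emails")   # roughly 15 million messages
```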

As Microsoft found out, being the world’s largest software company doesn’t make the reply all storm any easier to stop once the cascade has started tumbling.

For reasons that aren’t clear, every time someone hit reply all it re-subscribed everyone to the email thread, comically overriding users’ attempts to mute notifications.

Bedlam DL3 remembered

Eventually, a GitHub admin deleted the discussion, halting the flood but not before older heads dragged up the memory of an even larger Microsoft reply all screw-up from October 1997.

According to accounts, it started with the following employee email addressed to the mailing list that gave the incident its name:

To: Bedlam DL3
From:
Subject: Why am I on this mailing list? Please remove me from it.

At some point in the midst of 197GB of data generated by 15 million emails sent to 13,000 recipients, Bedlam DL3 hit rock bottom when someone sent the following email (using reply all):

Stop using REPLY ALL. You’re just making it worse.

There have been plenty of repeats since then, some even larger.

For example, the 2017 bombarding of 33,000 employees at Thomson Reuters that was honoured with its own hashtag #ReutersReplyAllGate.

Then there was an exasperated article in the New York Times inspired by a similar incident at the newspaper in 2016.

Headline: “When I’m Mistakenly Put on an Email Chain, Should I Hit ‘Reply All’ Asking to Be Removed?”

The body of the article contained just one word: “No.”

For anyone needing more advice, read this previous Naked Security story for guidance.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/T9_XugXD4Ho/

Twitter scammers jump in on real-time complaints to companies

Last week, a not-particularly-detail-oriented scammer inserted themselves into a complaint against an ISP that was publicly posted to Twitter. The scammer pretended to be the ISP – Virgin Media – and direct-messaged a reply, trying to weasel a credit card number out of the complainer…

…without noticing that the complaint was coming from an infosec company… that then tried to trick the scammer into clicking on a link that would snare the fraudster’s IP address…

…resulting in a round-robin “I think you need to click that AmEx link!” vs. “No, really, you need to send a different credit card number – this one’s not working!” back-and-forth.

The UK-based penetration testing and cybersecurity company, Fidus Information Security, posted this account from director Andrew Mabbitt after he attempted to turn the tables on the scammers.

It all started with Mabbitt’s publicly posted complaint directed at Virgin Media on Twitter, he writes:

Yesterday whilst complaining to Virgin Media on Twitter about my broken internet I encountered a very interesting scam attempt. Within minutes of posting a complaint I got two replies; one from Virgin Media themselves in a public message and another from somebody purporting to be from Virgin Media in my DM’s. [sic]

”Hi there,” said the polite (and fake) help desk

Here’s the prompt, seemingly helpful, seemingly “yes you’re really talking to Virgin Media” reply from the scammer:

Hi there. What’s your full name and address linked to your account so we can help you further with this please? ^BP

Nice try, Mabbitt thought, suggesting that the scammer must be watching for keywords in real-time in order to get fake help responses out fast – fast enough so that the person behind the complaint tweet is still hot under the collar.

The account that sent the scam message – @virginscmedia – was “obviously a huge give-away” that the message wasn’t legitimate, Mabbitt said. It’s since been suspended, but before it went bye-bye, its profile showed that it was created in January 2019. Rather a tardy timeline for a major media provider to create a presence on a major social media site like Twitter, eh?

Nor did the account have any followers. Nor was the account following anybody. Hmmm.

Mabbitt:

It’s… fairly obvious the people behind the account target everybody and anybody and are not very selective. After all, it’s fairly obvious from my Twitter that I work in Cyber Security.

He tried to test how gullible the scammer was. Specifically, Mabbitt responded saying the account was in his brother’s name: Wade Wilson (also known as superhero movie character Deadpool). Mabbitt also got meta and gave the imposters an address for London’s Metropolitan Police Service.

The scammer replied within 20 seconds, in full-on help-desk-speak:

Thanks for the information, Andrew. Please allow me a minute to locate the account so I can help. ^BP

So polite! Next, the phisher set the hook:

Before we proceed, for security purposes of your account please provide the card number, expiry date, csc card holder name linked to the Virgin Media account. If you don’t have access to this card it can be any card registered to the address. ^BP

Pick a card – any card!

So that’s what Mabbitt did: he DM’ed what was purportedly an American Express card’s details. Mabbitt noted that it was an “odd attack” to launch against Virgin Media customers, given that most pay by direct debit rather than attaching a credit card to their accounts. The AmEx card details were actually a set of test details provided by PayPal, he said.

The lying scammer: That card’s registered under the same address, right?

The lying security analyst: It is indeed.

A bit of a lag ensued, likely while the scammer tried to authorize a payment on the fake card. When it didn’t go through, the scammer tried to get another card out of Mabbitt.

While the scammers pressed for another card, Mabbitt had rigged a DM with a link that he was trying to get the fraudsters to click on. The link led to one of Fidus’s sites, about penetration testing, which would have captured their IP address.
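Capturing a visitor’s IP address this way needs nothing more exotic than a link to a page you control plus a look at the request log. Here is a minimal sketch; it is not Fidus’s actual tooling, and the port is an arbitrary choice.

```python
# Minimal sketch: a tiny web server that records the IP address of anyone who
# follows a link to it. This is an illustration of the general technique, not
# Fidus's actual setup.
from http.server import BaseHTTPRequestHandler, HTTPServer

class IPLoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # client_address is the (ip, port) of the connecting client
        print(f"Visit from {self.client_address[0]} requesting {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Nothing to see here.</body></html>")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), IPLoggingHandler).serve_forever()
```

As noted later in the piece, the address captured this way may belong to a VPN or proxy rather than to the scammer’s own machine.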

The scammer wouldn’t budge. Mabbitt kept telling them that they had to click the link, and that what they were seeing was probably an authentication step that the scammer needed to click on. No, how about you click on it and send a screenshot, the scammer proposed. Nope, busy with a client, no can do, Mabbitt said.

They were adamant they needed another card, we were adamant we were going to get their IP address. It became a back and forward exchange.

The fictional “Error 522”

After it became clear that there would be no clicking and therefore no captured IP address, Mabbitt says Fidus faked a Cloudflare error message, hoping the scammer would click on it.

Never did I think we’d be faking both Cloudflare error messages and SMS’ to gain an IP address but we had come too far at this point to back out now.

Finally, the combination of the fake SMS and the “Error 522” message worked. The scammer swallowed the bait. And after the link took them to a site for a security firm, the scammer must have finally realized that they’d been had:

After sending a fake SMS message we received a click on our web server. At this point the game was up as the IP linked back to our website and we never received a reply back.

Fidus reported all this to Twitter, which suspended the account. It also informed UK police, “in the hope some action can be taken against those responsible.”

Please don’t feed the phish

We’ve written about numerous people who’ve scammed the scammers in myriad ways.

In 2016, there was Florian Lukavsky, director of application security services firm SEC Consult and an expert at these things. He scammed a group of whalers – those are phishers who go after the biggest fish of all: company execs with access to cash – by playing them at their own game. He played along with their scam, then sent them an infected PDF that he claimed was a transaction confirmation but which harvested personal information including Twitter handles and Windows credentials from the attacker’s machine. Inflicting malware like that would be illegal for most of us, but Lukavsky was working alongside police and passed the information on to them.

We’ve seen people do things like draw out conversations to waste the crooks’ time. One guy even cooked up an autobot to do the work for him: he’d forward calls to it, thereby automatically (and hilariously) wasting the fraudsters’ time.

As far as Ivan Kwiatkowski goes, his modus operandi was to infect a tech support scam caller with Locky ransomware.

There are a few big problems with surreptitiously manipulating other people’s computers, whether it’s to seek revenge or to pretend to fix them. For one, it’s illegal in most places. But that hasn’t stopped nations from adopting their own versions of (government-sanctioned) attacks on attackers in what’s known as hacking back.

In the US, the National Security Council recently sanctioned the practice for the military, in spite of issues raised by the infosec community.

As for Mabbitt’s attempt to get an IP address out of his would-be attacker: as far as I know, it doesn’t stray into the legally risky realm of inflicting malware on a scammer. We hope the police, armed with an IP address, find the perpetrator of the fraud. IP addresses can be spoofed, though, so that’s not a given. In fact, the potential for spoofing the origin of an attack is one argument against hacking back.

At any rate, it’s not surprising that a phisher jumped on Mabbitt’s publicly tweeted complaint. It might feel good to complain about a company in a public tweet, since you know that they’re likely to respond. But it also sets you up: scammers will jump at the chance to pretend they’re helping you when they know you’re frustrated because you’re venting publicly, for everyone to see.

Instead, go the DM route for your customer care needs, or use another similarly private channel. We don’t need scammers to know what technology we’re using, when it’s not working, and when we’re frustrated with it. The less they know, the safer we are.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/68iF6ivx1cM/

BGP secure routing experiment ends in online row

An experiment to make the internet safer ended up breaking parts of it last week.

Researchers were testing a way to make the Border Gateway Protocol (BGP) more secure. BGP is the language that routes traffic between autonomous system networks (ASNs), which are the large networks that make up the internet. However, BGP is vulnerable to multiple attacks including route hijacking, in which someone corrupts BGP routing tables to change the way that traffic travels between autonomous systems.

The researchers were testing a concept called Decentralized Infrastructure for Securing and Certifying Origins (DISCO). This anti-route-hijacking system is supposed to solve the problems associated with the existing approach, which manually assigns digital certificates to IP address blocks. The problem with the manual method, according to the researchers, is that it takes work, meaning that few people do it; and when they do, the DISCO research paper adds, the records are often wrong. This can cause routing problems of its own.

DISCO takes an alternative approach by watching traffic over time to verify that it’s going to the right destination. Its inventors say that this eliminates the need to change BGP routers, and tested it out on the public internet to see how it worked.
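DISCO’s actual design is set out in the researchers’ paper. Purely as an illustration of the general “watch announcements over time” idea (not DISCO itself), a monitor might flag any prefix whose origin AS suddenly changes:

```python
# Loose sketch of monitoring-based origin checking: flag prefixes whose origin
# AS changes from what has been observed historically. This illustrates the
# general idea only, not the DISCO design.
expected_origins = {}   # prefix -> origin AS seen so far

def check_announcement(prefix, origin_asn):
    """Return a warning string if the origin AS for a prefix changes."""
    known = expected_origins.get(prefix)
    if known is None:
        expected_origins[prefix] = origin_asn
        return None
    if known != origin_asn:
        return f"Possible hijack: {prefix} announced by AS{origin_asn}, expected AS{known}"
    return None

# Example feed of (prefix, origin AS) announcements, purely illustrative.
feed = [("192.0.2.0/24", 64500), ("192.0.2.0/24", 64500), ("192.0.2.0/24", 64666)]
for prefix, asn in feed:
    alert = check_announcement(prefix, asn)
    if alert:
        print(alert)
```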

Crashing routers

Not all routers handled the experiment well. It crashed routers running Free Range Routing (FRR), an IP routing protocol suite whose development began in March 2017. That project, forked from an existing routing suite called Quagga, is now part of the Linux Foundation and is gaining significant traction.

DISCO researcher Italo Cunha explained what happened in a post to the North American Network Operators Group (NANOG):

Despite the announcement being compliant with BGP standards, FRR routers reset their sessions upon receiving it. Upon notice of the problem, we halted the experiments. The FRR developers confirmed that this issue is specific to an unintended consequence of how FRR handles the attribute 0xFF (reserved for development) we used. The FRR devs already merged a fix and notified users.
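For readers unfamiliar with the wire format: a BGP path attribute is simply a flags byte, a type byte, a length, and a value (RFC 4271). The sketch below encodes an optional transitive attribute with the reserved type code 0xFF mentioned above; the flags and payload are illustrative, not the researchers’ actual announcement.

```python
# Sketch: encode a BGP path attribute with type code 0xFF (reserved for
# development), per the RFC 4271 wire format: flags, type, length, value.
# The flags and payload are illustrative only.
import struct

def encode_path_attribute(flags: int, type_code: int, value: bytes) -> bytes:
    if flags & 0x10:  # Extended Length bit set: two-byte length field
        header = struct.pack("!BBH", flags, type_code, len(value))
    else:
        header = struct.pack("!BBB", flags, type_code, len(value))
    return header + value

# Optional (0x80) + transitive (0x40) attribute of type 0xFF with a dummy payload.
attr = encode_path_attribute(0x80 | 0x40, 0xFF, b"\x01\x02\x03\x04")
print(attr.hex())  # c0ff0401020304
```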

The FRR project updated its software to solve the issue and the DISCO team ran the experiment again. This time, another problem emerged.

An angry Ben Cooper, CEO of Australian colocation data centre provider PacketGG, fired back a message on the NANOG list:

Can you stop this?

You caused again a massive prefix spike/flap, and as the internet is not centered around NA (shock horror!) a number of operators in Asia and Australia go effected by your “expirment” and had no idea what was happening or why.

Get a sandbox like every other researcher, as of now we have black holed and filtered your whole ASN, and have reccomended others do the same.

Others were more sympathetic, arguing that the problem appeared to be PacketGG’s BGP routing software. It hadn’t been updated to support the latest version of the BGP routing protocol, they implied:

“Get a sandbox like every other researcher” is not a fair statement, one can also posit “Get a compliant BGP-4 implementation like every other network operator”.

The disagreement was all for nothing, though, as the DISCO team had already announced that the project was to be permanently cancelled.

The failure leaves the internet still vulnerable to BGP hijacking attacks. In one of the most recent reported attacks, China allegedly routed traffic from Western countries through its own infrastructure to spy on their communications.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZF_DbDPzfJo/

How to protect yourself this Data Privacy Day

Today is Data Privacy Day. And, to celebrate, we asked our security experts to share their top tips for protecting your privacy online.

Enable multi-factor authentication (MFA) – Benedict Jones

Given that cybercrime now makes up the majority of fraud-related incidents, I recommend enabling multi-factor authentication wherever possible. This adds an additional layer of protection against someone trying to access your personal accounts.
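For readers curious how app-based MFA codes are produced and checked, most authenticator apps implement TOTP (RFC 6238). Here is a minimal sketch using the pyotp library; the shared secret is generated on the spot purely for illustration.

```python
# Minimal sketch of app-based MFA (TOTP, RFC 6238) using the pyotp library.
# The shared secret is a throwaway placeholder for illustration only.
import pyotp

secret = pyotp.random_base32()   # normally generated once, at enrolment
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code the user's app would show
print("Current code:", code)

# The server verifies the submitted code against the same shared secret.
print("Verified:", totp.verify(code))
```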

Use a webcam cover – Victoria Townsley

It’s not impossible for hackers to access your webcam. Keep yourself protected and have peace of mind by using a webcam cover.

Use complex passcodes for your devices – Alice Duckett

It’s not just the passwords for your email addresses and social media accounts that need to be secure; make sure the logins for your laptop and mobile phone also have complex passcodes that you change often. I recommend phone passcodes of at least six digits.

Be aware of what apps you use – Matt Boddy

Always check the permissions an app is asking for before you download it to your personal device. It’s also important to delete any apps that you don’t use anymore.

Know what you’re sharing on social media – Anna Brading

Check your privacy settings on social media. Make sure you are aware of who can see your posts, and lock down your accounts as much as possible.

Check your digital tattoo – James Burchell

It’s not all about what you’re posting online; it’s also important to be aware of what you’re using online. Do you have old social media or email accounts that you don’t use anymore? Delete them.

Don’t send sensitive data in an email – Rajeev Kapur

Be careful about what information you send via email. Ask yourself: could there be repercussions if someone were to access this information?

Don’t reuse your passwords – Mark Stockley

The simplest upgrade you can make to your personal security is to have unique passwords for everything you use.

Be careful what you share on social media – Herb Weaver

Information such as your date of birth or address gives cybercriminals usable information about you. Equally, sharing when you’re going on a trip can alert local criminals that your home will be empty.

Keep your software up to date – Rawan Missouri

Keep your software up to date on all your devices. Updates patch flaws that cybercriminals can take advantage of. If you don’t update, you risk leaving yourself vulnerable to attacks.

And finally…

Every day is Data Privacy Day – Paul Ducklin

Today might be the official Data Privacy Day, but remember it’s Data Privacy Day tomorrow, and the day after, and the day after that. It’s like Quit Smoking Day – you take it on for the rest of your digital life.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fuuTutb7KhA/

How my Instagram account got hacked

Every so often I receive an unsolicited friend request on social media from an attractive woman doing a suggestive pose in her profile picture.

I’m not just showing off that I get the occasional friend request from an attractive lady. The person in the profile picture of these accounts probably looks nothing like the person requesting to follow or befriend me.

Quite often these are hijacked accounts used by a cybercriminal to exploit your sexual desires.

I’m going to share a deep dark secret with you

Today it’s Data Privacy Day, and to celebrate I’m going to tell you the story of how hackers used my leaked data against me to log in to my Instagram account.

In April 2012, Instagram launched on Android devices. As the popularity of the Android app grew, I signed up for an account and uploaded a single picture to see what the fuss was about. I then removed the app and didn’t sign in again until 2015.

When I signed in, I could see that my account had been following thousands of people unknown to me.

Yes, that’s right, ladies and gentlemen: I may have once been an attractive woman doing a suggestive pose to lure people into following me back or clicking on a link. Well, perhaps my hacked Instagram account could have been.

I had a million and one questions running through my head as to how this could happen.  In 2015 my career in IT Security was budding, and to save myself the embarrassment of having a hacked account, I immediately changed my password and unfollowed all unknown accounts.

Four years on, I think I know what happened

In the news every so often, we see a company suffering a data breach. These data breaches may include things like passwords and email addresses.  Between 2012 when Instagram was launched and 2015 when I logged back into my account there were a number of breaches of note, including Yahoo, Adobe, eBay, JP Morgan, LinkedIn and Target.

It’s very likely that whoever logged into my dormant Instagram account was using a method that is referred to as credential stuffing.

Credential stuffing is when a hacker takes passwords exposed in a data breach at Company A and uses them to log in to a web app at Company B. It relies on the victim (me) having reused the same password.
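One practical defence is to check whether a password already appears in known breach corpora before you use it. Here is a minimal sketch using the Have I Been Pwned range API, which only ever sends the first five characters of the password’s SHA-1 hash off your machine; the example password is illustrative.

```python
# Sketch: check whether a password appears in known breaches via the
# Have I Been Pwned "range" API (k-anonymity: only the first five characters
# of the SHA-1 hash leave your machine). Requires the `requests` package.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times the password appears in the HIBP breach corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    response = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    response.raise_for_status()
    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a reused password like this scores very high
```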

Now, I know what you’re thinking, “But Matt, don’t you have a different password for every account?”  Well now I do, yes.  Unfortunately, the Matt of 2012 wasn’t as well versed in the field of IT Security as the Matt of present.

Security advice

How do you know that your Instagram or other social media account has been compromised?

If you find that your Instagram account has started following people you wouldn’t expect, your profile picture has changed without your knowledge, and your bio suddenly reads “click here for a private chat 😉” or something equally flirtatious, then you may have been compromised.

Instagram advises you to change your password immediately and revoke access to any suspicious third-party apps.

Whether or not your account has been compromised here’s what you can do to strengthen the security of your social media presence:

  • Don’t reuse passwords for different services! This is a great chance to set up and use a password manager. It’ll generate and store a unique password for every website you sign up to (there’s a minimal password-generation sketch after this list).
  • Set up 2FA! Instagram, along with lots of other social media platforms, now supports two-factor authentication so that you don’t have to rely solely on your password.
  • Delete the account, not just the app! If you’ve decided you’re finished with an app, don’t just remove it like I did.  Find out how you can delete your account and do that as well!
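If you’re wondering what a “unique password for every website” looks like in practice, here is a minimal sketch using Python’s standard secrets module. The length and character set are arbitrary choices, and a real password manager also handles storage for you.

```python
# Minimal sketch: generate a strong, unique password per site with Python's
# standard-library `secrets` module. Length and character set are arbitrary
# illustrative choices; a real password manager handles this (and storage).
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

for site in ("instagram.com", "example-shop.com"):
    print(site, generate_password())
```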

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KWtM7pUGdpQ/

3 Ways Companies Mess Up GDPR Compliance the Most

The best way to conform to the EU’s new privacy regulation is to assume that you don’t need to hold on to personal data, rather than the opposite.

Long gone are the days of the physical perimeter, where a company’s IT infrastructure was entirely on-site. Today’s increasingly decentralized enterprises depend on a workforce that operates both at home and on mobile devices, working together with the help of cloud-based services. Yet the death of the traditional perimeter does not mean the end of security architecture. Instead, we need to recognize that it is all about trust.

As little as a decade ago, most organizations assumed that their security protection was robust, and few had deployed security operations centers or other cyber monitoring solutions. The concept of “zero trust” was a useful spur to action: work on the assumption that any of your resources might be compromised and put monitoring solutions in place so that you can take remedial action if you find something is amiss.

But when designing underlying cyber protections, too many architects are taking zero trust to be the primary objective. This is a misinterpretation. In the first instance, we should look to protect our resources from attack. What zero trust reminds us is that we are fallible and that we should put in place backup plans in the form of monitoring and incident response for the (hopefully rare) cases where our protection plans fail.

Zero Trust at the Endpoint
Where we see this misinterpretation most frequently is in the context of the user endpoint, where many enterprises are making plans that can be summarized as “Don’t worry about the endpoint: We’ll just assume zero trust.” There are cases where this is a reasonable decision, made in the full understanding of the risk. But in too many cases, the risks are poorly understood.

In many ways, this is a legacy of decades of remote access solutions built around the traditional security perimeter. In the threat environment of years past, the critical risk for remote access was that an unauthorized individual would seek to connect to the remote access portal. The critical controls were passwords and two-factor authentication. But in the future threat landscape, this risk is joined by another: a legitimate user connects but the machine they are using to do so is not fully under their control.

This is not hypothetical. It is a risk that has played out in the real world, albeit in a different context — Internet banking. Here is a real-world case study where high-value systems are accessed from endpoints that have few, if any, controls, and which must indeed be treated as zero trust.

Man-in-the-Browser: A Cautionary Tale
In the early days of Internet banking, the risk was unauthorized access, and banks developed varying levels of protection ranging from passwords, of which only some characters are used for each logon, to two-factor authentication. But the more sophisticated attackers then turned to a far more pernicious mode of attack: man-in-the-browser.

With a man-in-the-browser attack, a user connects using his or her valid authentication methods. But the web browser has been compromised, and what the user subsequently sees is not what the website says, but rather what the attacker displays. What the website sees is not the user’s input, but the attacker’s input.

Even two-factor authentication (2FA) techniques can be subverted in a man-in-the-browser attack. We have seen real-world instances where users have entered their 2FA details to approve a valid transfer, but what is actually approved instead is a malicious transfer set up by the attacker.

Furthermore, 2FA works best when used sparingly. If 2FA is used too frequently, two things happen. First, users get frustrated and efficiency suffers. And second, users become too accustomed to entering their 2FA details and are more easily convinced to enter them by an attacker (such as the man-in-the-browser) — making it easier for an attacker to bypass the control.

Benefits vs. Risks
Clearly, banks have decided to persist with Internet banking despite these risks; the business benefits are worth the risk. But despite heavy investments in cyber and fraud monitoring, there are significant losses suffered every year. The calculus is that (given transfer limits) any individual loss will be manageable and that the aggregate costs can be passed on to customers.

In other contexts, however, that calculus may be different. Individual cyber incidents that affect an enterprise’s core systems may have far higher impact than the loss of funds from any single bank account. In these cases, man-in-the-browser (or equivalent attacks) could be catastrophic — anything that the valid user can do, the attacker can, too.

Where this is the case, we must see zero trust as a backup in case of failure rather than the primary plan. In today’s enterprise architecture, user endpoints are probably the hardest elements to secure. But regardless of how frustrating it may be, where security really matters, the enterprise is only ever as secure as the endpoints it allows to access its sensitive core systems.

The General Data Protection Regulation (GDPR) has been in effect since May 2018, and companies that have done their due diligence to comply with the regulation may feel confident they have their bases covered. However, GDPR compliance rules are not as simple as they might seem at first glance, and there are special use cases that every company should consider. If compliance officers rush through checking the boxes and do not carefully assess the scope of GDPR, and how it relates to the company’s data collection practices, they most certainly will have holes in their compliance plan.

Here are three examples of frequently overlooked compliance issues that could put companies at risk.

1. It’s not just about consumer data
GDPR was designed to create more protections for consumers whose data is collected by different companies. But the scope of the regulation is much more expansive and can be applied in ways many companies didn’t account for in their initial compliance plans. In addition to consumer personal data, companies are also required to handle the personal data of employees, job applicants and non-customers (e.g., people who fill out a form but don’t purchase) with a new standard of care.

The regulation mandates that all data processing activities have a legal justification, so the best practice is to collect only the data that is necessary for essential data processing activities for consumers, job applicants, and everyone in between. Companies should evaluate their data processing practices with the goal of data minimization in order to stay compliant with GDPR.

Recommendation: Don’t just review data capture practices; review data retention practices for all data. Make sure you’re properly disposing of old resumes, employee personal data, and any other records whose usefulness has expired.
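As a hedged illustration of what “disposing of records whose usefulness has expired” can look like operationally, the sketch below flags records older than an assumed two-year retention window. The record layout and the window are illustrative assumptions, not figures mandated by the GDPR.

```python
# Minimal sketch: flag records whose retention period has expired so they can
# be reviewed and deleted. The record structure and the two-year window are
# illustrative assumptions, not GDPR-mandated figures.
from datetime import date, timedelta

RETENTION = timedelta(days=730)  # assumed two-year retention policy

records = [
    {"id": 1, "type": "resume",        "collected": date(2016, 3, 1)},
    {"id": 2, "type": "employee_file", "collected": date(2018, 11, 20)},
]

def expired(record, today=None):
    today = today or date.today()
    return today - record["collected"] > RETENTION

for record in records:
    if expired(record):
        print(f"Record {record['id']} ({record['type']}) is past retention; review for deletion.")
```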

2. Policy vs. Reality
Any company that aims to process personal data must establish policies governing how data is collected, stored, and processed to stay compliant with GDPR. While good data governance is the cornerstone to GDPR compliance, simply having policies in place is not sufficient for compliance. Companies must go a step further to ensure that employees fulfill the obligations of data processing defined under GDPR. Functionally, this means companies are obligated to make sure that what people do on a day-to-day basis aligns with the GDPR policies. And if the behavior of employees doesn’t meet a company’s standards, then corrective action must be taken. 

Often, breach of policy is unintentional — for example, if a customer support agent is on a call with a customer and saves personal information about the customer in a system where it does not belong. Or if an enterprising employee experiments with new software or establishes free software-as-a-service accounts and forgets to report them to the compliance officer at the company. While these scenarios may seem like little issues, they expose companies to big risk because both examples are GDPR violations.

Recommendation: To mitigate risk, we recommend running frequent “mini” audits. Our security and compliance team has learned firsthand that compliance is easiest to incorporate into daily workflow when audits are part of workflows. While most companies run quarterly audits at best, annual audits at worst, mini audits that are time-boxed will signal to your company that compliance isn’t a quarterly event but, rather, a continuous practice. Better yet, automate the audit process with tools so when policy and reality drift apart, the deviation is spotted right away.

3. Edge Cases
The data that encapsulates “personal information” under GDPR isn’t always as straightforward as basic demographic information. For example, job title is an unexpected category of personal information. Around 99.9% of the time, job title is not considered personal information protected under GDPR, but it certainly can be, depending upon the situation. For example, consider this job title: Chancellor of Germany. There is only one person in the world today who holds this position, meaning the identity of the individual can be revealed by this particular detail. So, in this case, job title must be considered personal information under GDPR and is therefore a protected class of data. The catch is that if one job title counts as personal information, then all job titles must be considered potential personal information and treated as such.

Recommendation: As part of your regular data audits, allocate some time to look at the information you collect that you don’t mark as personal information. Just using the “non-personal” information, can a clever person deduce if a data point belongs to a specific person? If so, then you might want to rethink what’s personal information and what is not.
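One simple way to run that check is to look for values in supposedly non-personal columns that occur only once and could therefore single out an individual, as in this minimal sketch with illustrative sample data.

```python
# Minimal sketch: flag "non-personal" field values (such as job titles) that
# occur only once and could therefore single out an individual. Sample data
# is illustrative.
from collections import Counter

job_titles = [
    "Software Engineer", "Software Engineer", "Accountant",
    "Chancellor of Germany", "Accountant",
]

counts = Counter(job_titles)
unique_values = [title for title, n in counts.items() if n == 1]
print("Values that identify a single record:", unique_values)
# ['Chancellor of Germany'] -> treat as potential personal data
```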

Complying with GDPR is more involved and extensive than it initially appears, but it is not an impossible standard. The best advice is to assume that you don’t need to keep the data, rather than the opposite, that you do. In this way, and in the spirit of the GDPR, companies will inevitably provide the highest-caliber personal data protection for their users and ensure accountability for personal data processing throughout the organization.


Jason Wang is the founder and CEO of TrueVault, a data security company that is transforming how companies handle personal data. Businesses use personal data to shape customer experience, but security risks mount as more sensitive data is collected. TrueVault tackles this … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities-and-threats/3-ways-companies-mess-up-gdpr-compliance-the-most/a/d-id/1333734?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple