
LoJack Attack Finds False C2 Servers

A new attack uses compromised LoJack endpoint software to take root on enterprise networks.

Researchers have identified a new attack that uses computer-recovery tool LoJack as a vehicle for breaching a company’s defenses and remaining persistent on the network. The good news is that it hasn’t started specific malicious activity — yet.

Arbor Networks’ ASERT, acting on a third-party tip, found a subtle hack involving legitimate LoJack endpoint agents. The threat actors didn’t change any of the legitimate actions of the software. All they did was strip out the legitimate C2 (command and control) server address from the system and replace it with a C2 server of their own.

Richard Hummel, manager of threat research at NETSCOUT Arbor’s ASERT, says that the subtlety of the action means that most existing security software and systems will not provide any indication that something is going wrong. “If they’re running as an admin, they might not see it,” Hummel says. “This doesn’t identify as malicious or malware, so they might just see a subtle warning.”

The addresses of the new C2 servers were among the clues that led researchers to attribute the campaign to Fancy Bear, a Russian hacking group responsible for a number of well-known exploits. On initial execution, the infected software contacts one of the C2 servers, which logs its success. Then, the new LoJack application proceeds to … wait. It simply does what LoJack does, with no additional communication or activity.

Hummel says that the lack of activity and lack of steps to prevent legitimate activity means that most security software won’t recognize that the app is anything but legitimate.

Once in place, the nature of the LoJack system’s activities means that it’s very persistent, remaining in place and active through reboots, on/off cycles, and other disruptive events.

“This is basically giving the attacker a foothold in an agency,” Hummel says. “There’s no LoJack execution of files, but they could launch additional software at a later date.” And the foothold that the software gains is a strong one.

“If they’re on a critical system or the user is someone with high privileges, then they have a direct line into the enterprise,” Hummel explains, adding, “with the permissions that LoJack requires, [the attackers] have permission to install whatever they want on the victims’ machines.”

There is, so far, nothing about the attack that contains a huge element of novelty. As for the code, Hummel says that this particular mechanism for attack has been around since 2014, when the software that is now LoJack was called Computrace. Even then, Hummel says, “researchers talked about how LoJack maintains its persistence.”

And while Hummel’s team has suspicions about infection mechanisms, they aren’t yet sure what’s happening. “We did some initial analysis on how the payload is being distributed and other Fancy Bear attacks, and we can’t verify the infection chain,” Hummel says, though he’s quick to add, “We don’t think that LoJack is distributing bad software.”

With stealth and persistence on its side, how does an enterprise prevent this new attack from placing bad software on corporate computers? Hummel says everything begins with proper computer hygiene. “Users will be prompted for permission warnings — don’t just blow by them,” he says.

Next, the IT security team can scan for five domains currently used by the software:

  • elaxo[.]org
  • ikmtrust[.]com
  • lxwo[.]org
  • sysanalyticweb[.]com (2 forms)

Each of these domains should be blocked by network security mechanisms.
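For a first pass, defenders could simply search DNS or proxy logs for those indicators. A minimal sketch in Python follows; the log format is an assumption, so adapt the parsing to your own DNS or proxy logs (the defanged domains above are re-armed here for matching):

```python
# Sketch: flag log lines that reference the published C2 indicator domains.
# Assumes a plain-text log where each line may contain a queried hostname;
# adapt the parsing to your DNS/proxy log format.

IOC_DOMAINS = {
    "elaxo.org",
    "ikmtrust.com",
    "lxwo.org",
    "sysanalyticweb.com",
}

def flag_ioc_lines(log_lines):
    """Return (line_number, line) pairs that mention an indicator domain."""
    hits = []
    for i, line in enumerate(log_lines, start=1):
        lowered = line.lower()
        if any(domain in lowered for domain in IOC_DOMAINS):
            hits.append((i, line))
    return hits

if __name__ == "__main__":
    sample = [
        "2018-05-01 query: www.example.com",
        "2018-05-01 query: sysanalyticweb.com",
    ]
    for num, line in flag_ioc_lines(sample):
        print(f"line {num}: {line}")
```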

It’s rare to have warning of an active infection method prior to damaging attacks using the infection target. Organizations now have time to learn about the tactic and disinfect compromised endpoints before the worst occurs.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/lojack-attack-finds-false-c2-servers/d/d-id/1331691?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Are You Protecting Your DevOps Software ‘Factory’?

New study highlights insecurities in DevOps toolchain implementations.

A new study out today shows that DevSecOps could stand to use a healthier dose of OpSec, as many DevOps tools are left exposed on the public Internet with little to no security controls.

So much of the education about the intersection of DevOps and security focuses on application security testing and secure development practices. But DevSecOps is about more than just securing the software product itself. It’s also crucial to protect the “factory” that produces those applications — namely, the development infrastructure and DevOps toolchain.

Unfortunately, according to a new study conducted by researchers with IntSights, a significant share of DevOps organizations are falling down on that second part of the equation.

“Given DevOps tools sit in the cloud, they are more vulnerable to reconnaissance by hackers,” says Alon Arvatz, co-founder of IntSights. “As opposed to traditional IT tools and servers that are still protected by the company network, a misconfigured DevOps tool will expose its data directly to the Internet, meaning hackers don’t need to use any special hacking tools, just simple scanning tools that are available online.”

Arvatz and his team examined nearly 26,000 URLs of different DevOps tools and servers from a range of organizations and did a simple test by trying to connect to them through a browser.

“No fancy attack tools, or port scanning, or any preliminary data, except for using open OSINT [open source intelligence] tools and websites to create the list,” the report explains.
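That browser test amounts to little more than an unauthenticated GET and a look at the response. A rough sketch of such an exposure check is below; the classification buckets are a simplification of the report's findings, and it should only ever be pointed at infrastructure you are authorized to test:

```python
# Sketch: classify a DevOps tool URL by what an unauthenticated GET returns.
# Only run this against infrastructure you are authorized to test.
import urllib.request
import urllib.error

def classify_exposure(url, timeout=5):
    """Very rough classification of an endpoint's exposure level."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # 200 with no auth challenge suggests the tool UI is wide open.
            return "open" if resp.status == 200 else f"status {resp.status}"
    except urllib.error.HTTPError as e:
        # 401/403 means *something* is asking for credentials.
        return "auth required" if e.code in (401, 403) else f"status {e.code}"
    except (urllib.error.URLError, OSError):
        return "unreachable"
```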

They found that more than 23% of those tested were accessible from the Web, with a range of access levels.

“Some were totally exposed without any user/password combination, exposing company data, user lists, internal server names, etc.,” the report explained. “Most were protected with a simple login page, and a minority with a more robust cloud access security broker.” 

The trouble is that even those tools and servers that did have nominal security controls in place still left enough breadcrumbs and openings to make it easier for attackers to gain entry. For example, many organizations used DevOps tool names for a Web-facing server — such as Jenkins, Kibana, Trello, Jira, and so on. Additionally, most tools don’t have built-in multifactor authentication, leaving the security of the system up to a simple username and password combo.

Ideally, many standard technologies and practices of DevOps can be used advantageously for security purposes. For example, the use of infrastructure as code and automation of systems can provide a very efficient means for helping organizations consistently lock down their development and production application infrastructure. 

“Because we have this infrastructure as code, we’re getting a lot of reuse,” explains Paula Thrasher, director of digital services for General Dynamics, a large federal IT integrator, who says that prior to a DevOps-oriented pipeline, her teams could expect 60% reuse of design patterns and infrastructure. Now that number is pushing 90%. “Which basically means 90% of the stuff in our production is a reusable standard, and not a special-snowflake bespoke server. That’s huge, because it just takes down the attack surface.”

Of course, on the flip side, that means that one mistake is amplified across an entire organization. As an organization scales, a misconfiguration makes it into every single instance a team fires up rather than just the one in a single insecure bespoke server. So, the stakes are higher. 

Arvatz explains that DevOps can do a lot to help raise security posture in an organization and most of the tools involved have the capability to be used securely. But at the same time, organizations need security-conscious administrators in charge.

“Some [DevOps] tools do offer inherently higher security in their basic configuration, and most cloud platforms offer robust security defenses, but some tools rely entirely on their operator’s knowledge and expertise,” he says. “The general shortage of experienced employees in the DevOps and security fields and the fact that most if not all tools are cloud-based make them prone to human errors.”

In addition to renaming DevOps tools on Web-facing servers and implementing multifactor authentication, Arvatz and his team suggest a number of other best practices for these tools. These include using proxy servers, stopping the use of default ports, and meticulously keeping up on the patching of infrastructure. They also suggest blocking access to the servers altogether from the Web when feasible, though they acknowledge that may not always be practical.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: https://www.darkreading.com/operations/are-you-protecting-your-devops-software-factory/d/d-id/1331692?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The hacker who broke into jail and had to stay for 7 years

A 27-year-old Michigan man who tried to hack a “Get out of jail early” card for his friend is now going to be in jail himself for 87 months – 7 years 3 months.

On Thursday, US Attorney Matthew Schneider’s office announced that besides the jail term and 3 years of supervised release to follow, Konrads Voits is giving up all his bitcoins, some of his electronics—including a laptop—an integrated circuit component, and several mobile phones.

In total, Voits has been ordered to pay restitution in the amount of $238,517.

That will be going to Washtenaw County, whose jail network Voits hacked in an attempt to alter his buddy’s prison record. In December, Voits pleaded guilty to damaging a protected computer.

The Attorney General’s office says Voits used a classic phishing scheme laced with typosquatting. According to his guilty plea, in January 2017, Voits set up a phishing domain. It looked just like a legitimate county domain name for Washtenaw, except Voits swapped the final W for a double V.
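The “double V” swap works because “vv” is nearly indistinguishable from “w” in many fonts. A small illustrative check that normalizes that substitution before comparing a suspect domain to a legitimate one (the domain names below are illustrative, not necessarily the county's actual domains):

```python
# Sketch: detect the "vv"-for-"w" typosquat by normalizing both domains.
# The domain names used below are illustrative.

def normalize(domain):
    """Collapse the common 'vv' -> 'w' homoglyph substitution."""
    return domain.lower().replace("vv", "w")

def looks_like_typosquat(suspect, legitimate):
    """True if the domains differ as written but match once normalized."""
    return suspect != legitimate and normalize(suspect) == normalize(legitimate)

print(looks_like_typosquat("ewashtenavv.org", "ewashtenaw.org"))  # True
```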

Then, he called and emailed employees of Washtenaw County, claiming that he was “Daniel Greene” and that he needed help with court records. Over the phone, he pretended to be “T.L.” or “A.B.”, a county IT employee. The emails tried to entice employees into clicking on a hyperlink so they’d be whisked off to Voits’s malware-poisoned site, while the object of the phone calls was to get his victims to type that phishing site domain into their browsers so as to download an executable malware file.

It was to “upgrade the county’s jail system,” Voits claimed.

Voits hit the jackpot when he called county jail employees, posing as members of the jail’s IT staff, and tricked workers into installing a fake update package for the county jail’s application.

Voits got full access to the county network, including to the XJail system—which is a program used to monitor and track county prison inmates—as well as to search warrant affidavits, internal discipline records, and personal information of county employees. Voits managed to steal passwords, user names, email addresses and other personal information of more than 1,600 county employees.

Once he had full access to the county’s network, he accessed jail records for several inmates and altered the record of at least one of them to try to get him out early.

According to the guilty plea, mopping up after Voits’s intrusion was onerous and costly. The county had to hire an incident response company to suss out the extent of the damage he had caused, reimage numerous hard drives, verify the accuracy of the electronic records for nearly every single current inmate, and purchase ID theft protection for the 1,600 employees whose data Voits got his hands on.

The AP reports that Voits’s scheme was thwarted by an employee who checked records by hand. The attorney general’s office didn’t give details beyond the fact that the county’s IT employees responded quickly.

As a result, Voits’s plan failed: nobody was released early, though jail employees and county IT employees had to put in extra hours to investigate and to clean it all up.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pmdaG93NLa4/

Surveillance watchdog learns that old domains never die

Forgetting to renew your domain name can be embarrassing. Letting the domain slip out of your control because, well, maybe your organization doesn’t think it’s necessary anymore … can have equally embarrassing repercussions, as the latest domain stumble makes clear.

As far as the memory-lapse variety goes, we’ve seen a telco get fined $3 million for the flub-up, which led to deaf, hard-of-hearing and speech-disabled people losing access to emergency services for three days.

The Dallas Cowboys did it. Microsoft did it. Twice (buh-bye, Hotmail!). Foursquare did it. Dell’s auto-backup and recovery vendor let a domain slip into the grasp of a typosquatter that started showing up in malware alerts about two weeks later. Ketchup king Heinz did it with a label-design contest, Fundorado.com, that wound up as a porn site.

And now, we have the retired-domain screwup. This most recent example of URL dysfunction takes shape at the site for the UK’s Interception of Communications Commissioner (IOCCO).

It might have seemed like the agency didn’t need the domain anymore. The IOCCO was a watchdog created by the UK government to produce annual reports on the government’s use of its surveillance powers. It was folded into the Investigatory Powers Commissioner’s Office (IPCO) in September 2017 as part of the UK’s Investigatory Powers Act, also known as the Snooper’s Charter.

IPCO or no IPCO, the IOCCO website can still be found on Google.

The IOCCO’s Twitter account is still up too and, at the time of writing, links to the now-defunct domain – a domain now hosting a company advertising Easy Solutions to Premature Ejaculation To End Your Headache NOW!

According to WHOIS records, the domain is now registered to a man in Washington DC with a phone number that’s no longer in service.

The mothballed website can be seen in all its original, nonsexual fustiness in the national archives.

The IPCO didn’t respond to media requests for comment.

Perhaps the IPCO thought that the old IOCCO domain name had outlived its usefulness, or maybe renewing the domain was a detail easily overlooked (or it simply wasn’t part of anyone’s job anymore).

Unfortunately, there are plenty of spammers and typosquatters who are more than happy to jump on a suddenly available, well-trafficked domain name and redirect visitors to a malware-infested hell hole or a spammy solution to their sexual problems.

For anyone worried about this happening to their business, the lesson is that domains are cheap and your organization’s reputation is expensive: just set your old domains to auto-renew and keep them all, forever.
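To make “keep them all, forever” operational, an organization could run a periodic audit that flags registrations nearing expiry. Here is a minimal sketch of the flagging logic; in practice the expiry dates would come from WHOIS/RDAP lookups, which this sketch simply takes as input:

```python
# Sketch: flag domains whose registration expires within a warning window.
# Expiry dates would normally come from WHOIS/RDAP; here they are supplied.
from datetime import date, timedelta

def expiring_soon(domains, today, window_days=60):
    """Return (domain, expiry) pairs expiring within the window, soonest first."""
    cutoff = today + timedelta(days=window_days)
    at_risk = [(d, exp) for d, exp in domains.items() if exp <= cutoff]
    return sorted(at_risk, key=lambda pair: pair[1])

if __name__ == "__main__":
    inventory = {
        "example.org": date(2018, 6, 1),
        "example.com": date(2020, 1, 1),
    }
    for domain, exp in expiring_soon(inventory, today=date(2018, 5, 1)):
        print(f"renew {domain} (expires {exp})")
```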


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ujto2SLZcNk/

Twitter sold user data to Cambridge Analytica’s Aleksandr Kogan

It might surprise many Twitter users to be told this, but every tweet they post could, at some point, end up being sold on to private companies to pore over.

Tweets might seem innocuous individually but put them together with millions of others and deeper patterns emerge – or at least that’s what a lot of companies believe.

There’s no secret about this access, and it’s been happening through a developer API that reveals everything said on the platform since the first ever tweet in 2006.

This is the context for a slightly embarrassed admission by Twitter that in 2015 the people with access included a Cambridge academic called Aleksandr Kogan, whose company Global Science Research (GSR) created the personality quiz now infamous for its role in the Facebook-Cambridge Analytica data harvesting scandal.

There is no suggestion of wrongdoing on Twitter’s part for granting access to GSR, but the fact it felt it necessary to mention the relationship at all tells its own story.

Said Twitter:

In 2015, GSR did have one-time API access to a random sample of public tweets from a five-month period from December 2014 to April 2015.

The use of “random” is significant because it implies this was not a targeted, demographic trawl. Furthermore:

Based on the recent reports, we conducted our own internal review and did not find any access to private data about people who use Twitter.

Which sounds reassuring because it seems to be saying that the tweets couldn’t have been correlated to the real user profiles behind them.

This stands in contrast to the accusation that Facebook allowed the personally-identifiable information of at least 87 million users to end up in the hands of a third party without permission.

Still, it’s not hard to imagine Twitter would be nervous about associations, which is why senior director for product management Rob Johnson went to some pains last week to clarify its developer access parameters in more detail.

Johnson said that Twitter never sells access to direct messages (frankly, it would be disturbing if they did), that protected tweets are not shared with developers, likewise deleted tweets.

Pointing to Twitter’s detailed list of restrictions, he added:

We prohibit developers from inferring or deriving sensitive information like race or political affiliation or attempts to match a user’s Twitter information with other personal identifiers in unexpected ways.

But having strict terms and conditions is all very well as long as Twitter has some way of monitoring how the data is being used and enforcing its policies.

What we saw with Facebook is that once you’ve granted somebody access to your data it’s very difficult to control what they do with it or who they then give it to.

Hanging over all of this is the EU General Data Protection Regulation (GDPR), the most significant piece of data protection legislation ever, which amongst its many provisions for EU citizens changes the model of consent that has allowed data to be traded without that being apparent.

Separately, Twitter has published its GDPR-happy privacy policy, as well as pointing out how users can download their data archive to view, using any desktop browser.

If Facebook’s Cambridge Analytica troubles have achieved one thing it is surely to have encouraged more people to read these documents.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2cq6jvGOu_4/

Google Maps open redirect flaw abused by scammers

This morning I received a Skype message from a friend I’d not heard from in quite a while. They didn’t have much to say, though; they just sent me a link, out of the blue.

Being a career geek I’m no stranger to no-frills, taciturn, often blunt, communication with my peers… but this was from somebody friendly, somebody who worked in marketing for goodness sake.

There was no way this was legit.


If you look closely you’ll see that the scammers have used Google’s soon-to-be-discontinued URL shortening service, goo.gl.

URL shorteners work by sending your browser through at least one HTTP redirect. They’re an obvious choice for somebody wanting to hide a phishy, scammy or otherwise iffy link because they’re a form of obfuscation that users have been trained to understand and trust.

In this case, the fact that the link comes from somebody you know adds an extra veneer of legitimacy.

Of course Google doesn’t stand for iffy links, so spammy goo.gl URLs are almost as easy to report as they are to create.

Crooks can get around that by using the shortener to redirect to another HTTP redirect, perhaps on a legitimate but compromised domain, before bouncing victims to a not-at-all-trustworthy-looking domain that’s hosting the scam.

With a little help from curl -I, I followed the chain of URL redirects to see where I’d end up.

There were two redirections in the chain before the final you-wouldn’t-click-it-if-you-saw-it Russian URL hosting an English language scam. The scam was the usual breathless guff and faux endorsements – in this case lies about the folks on Shark Tank – trying very, very hard to convince me that a turmeric diet pill can overcome my daily efforts to eat all the biscuits.

Must be some pill.

Much more interesting than the shortener at the start, or the scam at the end, was the URL in the middle of the redirect chain.

It turns out that the URL shortening service isn’t the only way that Google is unwittingly assisting this scammer.


Between the legitimate Google URL shortener you’d probably trust, and the Russian URL you probably wouldn’t, the redirection chain bounces you through another Google URL belonging to Google Maps.

The crooks have turned a service designed for shortening and sharing Google Maps URLs into an impromptu redirection service for sharing whatever the heck they like, thanks to an open redirection vulnerability in the maps.app.goo.gl service.

Open redirect vulnerabilities allow attackers to abuse code that’s intended to perform an HTTP redirect to one specific destination, turning it into code that redirects to anything.

For example, here’s a Google Maps URL that redirects to example.org:

https://maps.app.goo.gl/?link=https%3A%2F%2Fexample.org
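You can see the mechanics by pulling the link parameter out of such a URL with nothing more than Python’s standard library; the redirect target is entirely caller-controlled:

```python
# Sketch: extract the caller-controlled redirect target from the
# maps.app.goo.gl-style URL's "link" query parameter.
from urllib.parse import urlsplit, parse_qs

def redirect_target(url):
    """Return the decoded value of the 'link' parameter, if present."""
    query = urlsplit(url).query
    params = parse_qs(query)
    return params.get("link", [None])[0]

print(redirect_target("https://maps.app.goo.gl/?link=https%3A%2F%2Fexample.org"))
# -> https://example.org
```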

Because this isn’t an official Google redirection service there’s no easy-to-use interface where the URLs can be reported.

The scammers also don’t have to risk unmasking themselves in the Google surveillance apparatus when they set up their URLs because they don’t have to use a Google-owned interface or API to do it, they can just concoct them at will.

Open redirects aren’t as dangerous as SQL injection, XSS or CSRF vulnerabilities but they are common, easily avoided and, undoubtedly, useful to crooks.

And not just for obscuring scam URLs.

Just last week I wrote about how open redirects on a multitude of US government websites are being abused to stuff Google Search results pages with links to porn sites.


The lesson for Google is the same as it was for those government websites, and anyone else with code that allows HTTP redirects: Any and all user input should be treated as hostile until it has been checked and sanitised.

To avoid being abused, code that performs redirections should only send users to URLs that match a specific pattern or list of links thought to be OK.

In the case of Google Maps that should be simple – if the URL in the link parameter isn’t a Google Map, there’s no reason to allow the redirection.
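A sketch of what that kind of allowlist check might look like; the allowed hosts and the HTTPS-only rule are illustrative assumptions, not Google’s actual logic:

```python
# Sketch: honor a redirect only when the target host is explicitly allowed.
# The allowed-host list is illustrative; a real service would derive it from
# the destinations it actually needs to link to.
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"www.google.com", "maps.google.com"}

def safe_redirect_target(url):
    """Return the URL if its scheme and host pass the allowlist, else None."""
    parts = urlsplit(url)
    if parts.scheme != "https":
        return None
    if parts.hostname not in ALLOWED_HOSTS:
        return None
    return url
```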

Google appears to have known about this flaw since September 2017.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aeSHZHjNzAs/

North Korea’s AV Software Contains Pilfered Trend Micro Software

Researchers get hold of a copy of Kim Jong Un regime’s mysterious internal ‘SiliVaccine’ antivirus software, provided only to its citizens, and find a few surprises.

A rare hands-on analysis of the antivirus software that North Korea provides its citizens shows the proprietary tool is based on a 10-year-old version of Trend Micro’s AV scanning engine that also was customized to ignore a specific type of malware rather than flag it.

Researchers at Check Point today published new research from their exclusive study of the so-called SiliVaccine AV program that is used only inside the cloistered nation. North Korea blocks its citizens from the public Internet and runs its own intranet; only North Korea’s ruling elite are allowed access to the global Internet.

Check Point obtained a sample of the software from a freelance journalist specializing in North Korean technology who had received a suspicious email message with a link to the AV program. The researchers say it’s unclear just how North Korea got its hands on Trend Micro’s AV engine, but since Trend doesn’t do business with North Korea, it’s most likely a case of stolen intellectual property.

Jon Clay, director of global threat communications at Trend Micro, told Dark Reading that the software was not stolen via a hack of Trend Micro systems. Rather, Trend Micro suspects its software was pirated somehow. “We strongly believe this is a case of software piracy, in which our software is being used illegally. North Korea has been repackaging software for sale locally for years, including Adobe Reader in 2013,” for example, he says.

“This was not a data breach and no evidence suggests they are using stolen source code,” he says. “It appears they obtained a public version of our scan engine DLL and modified it.”

What is clear is that the North Korean AV was built to appear as homegrown software. “Every aspect was well-written and they had a lot to hide … the signatures are encrypted and the fields are protected,” says Michael Kajiloti, malware research team leader at Check Point.

SiliVaccine uses Trend Micro AV pattern files but renames Trend’s malware signatures with names of its own, for example, and the Trend Micro engine’s identity is well-masked, according to Check Point.

“They went the extra mile to hide the fact they stole intellectual property,” says Mark Lechtik, one of the Check Point security researchers who studied SiliVaccine.

But Trend Micro’s Clay maintains that North Korea’s SiliVaccine does not have access to Trend Micro’s AV signature updates, and that the AV program instead is using homegrown signatures of its own.

Malware Whitelist

SiliVaccine operates with another hidden twist: it whitelists a specific malware signature that Trend Micro identifies as MAL_NUCRP-5, which detects files that employ behavior patterns used in various types of malware, including fake antivirus installers and droppers, Check Point found. That may allow the North Korean government to run malware on its citizens’ machines without their knowledge, possibly for some type of surveillance, according to the researchers. “Or the signature gives them the option to create any malware they want to target citizens and build it in such a way that the AV will never catch it,” says Kajiloti.
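In scanning-engine terms, such a whitelist amounts to dropping one signature from the match results before verdicts are reported. A deliberately simplified, hypothetical sketch of that logic (only the MAL_NUCRP-5 name comes from the research; everything else here is invented for illustration):

```python
# Simplified, hypothetical sketch of a scanner that suppresses one
# whitelisted signature. MAL_NUCRP-5 is the signature named in the research;
# the other names and the matching itself are invented for illustration.

WHITELISTED = {"MAL_NUCRP-5"}

def report_detections(matched_signatures):
    """Return the detections a SiliVaccine-style scanner would actually report."""
    return [sig for sig in matched_signatures if sig not in WHITELISTED]

print(report_detections(["EICAR_TEST", "MAL_NUCRP-5"]))  # -> ['EICAR_TEST']
```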

Lechtik says Check Point’s team concluded that the development of SiliVaccine has been ongoing for several years. “I highly doubt it was reverse-engineered,” he says. “We think it’s more likely that it’s much more a part of their” getting access to the software.

Check Point shared its findings with Trend Micro, which confirmed that the software uses a module based on an older version of its AV scanning engine from more than ten years ago – VSAPI Scan Engine 8.9x – and that no source code is included in the software. Trend believes it’s a case of software piracy, and that the fraudsters repackaged the software as their own.

“It appears that a compiled code library was illegally copied, repacked, and then wrapped with additional application code not originating from Trend Micro to build a normal AV scanning application called SiliVaccine,” Trend Micro’s Clay says. “The authors of the SiliVaccine product intentionally removed a specific heuristic detection in their product’s version of the pattern file.”

In the end, there doesn’t appear to be any risk to legitimate users of Trend Micro’s AV software since it’s such an old version, and SiliVaccine has its own encrypted files that can’t be used by existing Trend Micro AV products. “The result is that it would be impossible for a Trend Micro product to accidentally or even intentionally use a SiliVaccine modified pattern file since Trend Micro products perform pattern integrity checks,” Clay says.

Clay says the incident suggests that North Korea has programmers with reverse-engineering skills. “As such, any software vendor should be concerned that North Korea could do the same with their code.”

It also indicates they didn’t want to develop their own AV scanner: “They needed an AV scanner and did not want to put in the time or effort to develop their own so they illegally obtained a publicly available scanner and modified it for their own use,” Clay says.

Dark Hotel Clue

Journalist Martyn Williams in July 2014 received a sketchy email from a purported Japanese engineer with a news tip that included a Dropbox-hosted zip file with SiliVaccine software and a file posing as a patch for the AV program. The phony patch turned out to be a camouflaged piece of JAKU malware, a Trojan dropper that has been tied to DarkHotel, a North Korean cyber espionage group.

The JAKU file was also signed with a certificate from the same “company” that had also signed malware files for the Dark Hotel nation-state hacking group thought to be out of North Korea.

“We can’t really say the JAKU bundled in was part of SiliVaccine; it might be … but more likely Martyn [Williams] was the target here” of a cyber espionage campaign, Lechtik says.

JAKU to date has infected 19,000 victims mostly via malicious BitTorrent share files. It’s typically known for targeting and monitoring individuals in South Korea and Japan who work for non-governmental organizations, engineering firms, government, as well as academia.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/endpoint/privacy/north-koreas-av-software-contains-pilfered-trend-micro-software-/d/d-id/1331686?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

A Data Protection Officer’s Guide to GDPR ‘Privacy by Design’

These five steps can show you how to start building your foundational privacy program for the EU’s General Data Protection Regulation.

Part 2 of our series DPO’s Guide to the GDPR Galaxy 

Today, we’re mapping our GDPR journey and focusing on the key steps to implement the basic building blocks of a “privacy by design” program.

1. Assemble your privacy building blocks. If you don’t already have a foundational privacy program, now is the time to roll up your sleeves and start one. While we’ve been focusing on GDPR, chances are this isn’t the only privacy law your organization is subject to, which is why it’s critical to set clear goals. The very first goal should be to create the privacy vision and mission statement. The intention is to craft a clear, concise message that communicates the purpose of the program to all key stakeholders, then circulate it and gain buy-in from your business executives.

2. Define the scope of your privacy program. Once you’ve established your privacy vision and mission statement, you can begin to determine the program scope. If GDPR is your sole focus, then you would want to find out if you are in scope for GDPR by reviewing the 99 articles that comprise the law. For organizations outside the European Union, one rule of thumb is that if you process data belonging to EU citizens, your organization falls under GDPR. From there, you must figure out whether you are a controller (the natural or legal person or body that, either alone or jointly, determines the purposes and means of processing personal data), a processor (the natural or legal person or body that processes personal data on behalf of the controller), or possibly both.

3. Build your privacy and GDPR army. Start by referencing GDPR Articles 37–39 to determine if you need to appoint a data protection officer (DPO). If after reviewing those articles, your company decides to appoint a DPO, even if not legally required to, you must then adhere to GDPR requirements. If you don’t need a DPO, it’s a good idea to appoint a privacy champion. Ensure your DPO or privacy champion is empowered to execute on the program vision and mission statement and that this person has the support of the C-suite.

GDPR responsibilities don’t solely rest on the shoulders of your DPO or privacy champion, so you’ll need to look at all affected areas of your organization. With GDPR, that’s probably every department if you have employees in the EU. There are many resources on the internet to help you with this decision if you need a quick gut check.

Begin building support and gathering information from key internal partners such as security, human resources, legal and information technology teams, and any other relevant departments. Work with your partners/stakeholders to help them understand why it’s necessary to document how and what type of data flows through each group in order to meet the rigorous records of processing activities requirements in GDPR Article 30.
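Article 30’s records requirement is concrete enough to sketch. The skeleton below is illustrative only; the field names paraphrase Article 30(1) and are not an official schema, so adapt them to your own documentation tooling:

```python
# Illustrative skeleton of one GDPR Article 30 "record of processing
# activities" entry. Field names paraphrase Article 30(1); every value
# here is a made-up example, not real organizational data.
processing_record = {
    "controller": "Example Corp (privacy@example.com)",
    "purpose": "Payroll processing for EU employees",
    "data_subjects": ["employees"],
    "data_categories": ["name", "bank details", "salary"],
    "recipients": ["external payroll provider"],
    "third_country_transfers": None,  # none in this example
    "retention_period": "7 years after end of employment",
    "security_measures": ["encryption at rest", "role-based access"],
}
```

Maintaining one such entry per processing activity, per department, is what the data-flow interviews with your stakeholders should ultimately produce.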

4. Next stop on your privacy program road map — creating an actual road map! Every program needs a framework to build from and expand over time. Frameworks help provide structure and guidance such as requirements in policy or process. They help reduce risk and show compliance with best practices. Some examples of privacy frameworks to reference and leverage when building your own include, but are not limited to, Privacy by Design, Fair Information Practice Principles (FIPPs), and ISO 29100 Privacy Data Protection Standards.

As you start to work with your internal technical stakeholders, it’s imperative they are committed to building privacy into every aspect of the product. Walk them through GDPR Article 25, which focuses on data protection by design and by default, and be sure to include security in the conversation because you can’t have privacy without security. Baking those in from the very beginning is an important foundational piece of a solid privacy program and of GDPR. As a best practice (and per GDPR Article 32), the controller and processors must implement appropriate technical and organizational measures to ensure a level of security that is appropriate to the risk.

As a key part of your road map, privacy impact assessments and, when appropriate, data protection impact assessments (DPIAs) must be performed if processing data could result or lead to identity theft, damage to a person’s reputation, discrimination, or any other sort of physical, material, or nonmaterial damage. (For more information on DPIAs, see GDPR Article 35.)

5. Establish processes and metrics for accountability and benchmarking. Other elements key to a successful privacy and security program include identifying and establishing metrics, and instituting a life cycle management process (think software development life cycle, but for privacy) to ensure that controls are working as designed and remain effective in supporting the program over time. Additionally, there must be measures in place to respond to incidents and events such as breaches, legal holds, and requests for information.

At any point in the process of building or enhancing your program, you may decide to perform a gap assessment against your requirements or hire an independent third party to carry out the assessment. Performing a gap assessment against your privacy framework will highlight the areas that still need addressing. There is nothing wrong with self-assessments; however, using a third party helps demonstrate to customers that the program has been reviewed by independent, impartial parties.


Jen Brown is Sumo Logic’s compliance and data protection officer (DPO) and is responsible for leading compliance, risk, and privacy efforts for the company, including GDPR, PCI DSS, ISO 27001, HIPAA, SOC2, and FedRAMP, as well as several other regulations. Prior to Sumo …

Article source: https://www.darkreading.com/endpoint/a-data-protection-officers-guide-to-gdpr-privacy-by-design/a/d-id/1331640?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Password Reuse Abounds, New Survey Shows

Despite heightened awareness of the security implications, many users continue to reuse passwords and rarely, if ever, change them, a LogMeIn survey shows.

When it comes to the password behaviors of computer users, there’s bad news and there’s more bad news.

A new survey by LogMeIn of some 2,000 individuals in the United States, Australia, France, Germany, and the UK has revealed what can only be described as broad apathy among a majority of users on the issue of password use.

Though 91% of the respondents profess to understand the risks of using the same passwords across multiple accounts, 59% said they did so anyway. For 61%, fear of forgetting passwords was the primary reason for reuse. Fifty percent say they reuse passwords across multiple accounts because they want to know and be in control of their passwords at all times.

The situation is equally depressing around the issue of password change. More than half – 53% – of the respondents confess to not changing their passwords in the past 12 months, even though they were aware of the risks and despite news of data breaches involving password compromise. Not only did nearly six in 10 of the users polled use the same password across accounts, they rarely if ever change that password over time. In fact, 15% of respondents say they would rather do a household chore, and 11% would rather sit in traffic, than change their passwords.

Exacerbating the situation is the sheer number of online accounts that users have these days. Nearly eight in 10 (79%) of the LogMeIn survey takers have between one and 20 online accounts for work and personal use.

Forty-seven percent don’t do anything differently when creating passwords for personal and work use; less than one-fifth (19%) create more secure passwords for work. A surprisingly high 62% reuse the same password for work and personal accounts.

The LogMeIn survey results are another reaffirmation of the notoriously poor password behaviors of online users. Previous studies have revealed a similarly lackadaisical attitude when it comes to selecting and managing passwords controlling access to online accounts.

Default and easily guessable passwords (12345 anyone?) have led to countless individual and corporate account takeovers and compromises in recent years, and prompted widespread calls for a move away from password-based authentication mechanisms altogether.
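Screening for exactly this kind of guessable password is straightforward to automate. The sketch below is illustrative only (the deny list and length threshold are assumptions, not part of the survey); production systems should check candidates against a large breached-password corpus instead of a handful of hardcoded entries:

```python
# Minimal password-screening sketch: flag passwords that are too short
# or appear on a deny list of common choices. The list here is a tiny
# illustrative sample; real deployments use millions of breached entries.
COMMON_PASSWORDS = {"12345", "123456", "password", "qwerty", "letmein"}

def is_weak(password: str, min_length: int = 8) -> bool:
    """Return True if the password is obviously weak: shorter than
    min_length, or (case-insensitively) on the common-password list."""
    if len(password) < min_length:
        return True
    return password.lower() in COMMON_PASSWORDS
```

A check like this at account creation is one of the cheapest ways to block the default and dictionary passwords that drive account takeovers.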

“I’d say the biggest surprise is that even though people are aware of the major cyberattacks and increases in costly data breaches, it’s still not translating to better password security practices,” says Sandor Palfy, CTO of identity and access management at LogMeIn.

Risky Business

This password neglect is creating huge risks and undermining overall security both for individual users and for employers. “The lesson for enterprises is most of their employees do not recognize the critical role that passwords have in protecting their personal and work information,” Palfy says.

Many employees seem unaware or uncaring of the fact that weak passwords can potentially put the organization at risk, he says. “Enterprises need to rethink security policies and implement ways to centralize, automate, and securely store employee passwords.”

The most common mistake that organizations can make is to underestimate the danger posed by weak passwords. The reality is that each password is like an entry point into the enterprise and needs to be secured like any other entry point. “Organizations need to take steps to both regularly educate and communicate password best practices to all employees, including how and why to use strong passwords,” Palfy says.

Organizations also need to ensure they have the right password management tools and processes for enforcing strong password practices across the enterprise. “The organizations that can rapidly and effectively address that challenge are well-positioned to keep their businesses safe,” Palfy notes.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/informationweek-home/password-reuse-abounds-new-survey-shows/d/d-id/1331689?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bitcoin hijackers found at least one sucker for scam Chrome extension

Security researchers have caught a Bitcoin-hijacking Chrome extension that only managed to grab one Bitcoin transaction before being exposed.

Trend Micro researchers said the malicious extension uses an attack technique, dubbed FacexWorm, that first emerged last year, and added that they noticed re-emerging activity earlier this month.

FacexWorm propagates in malicious Facebook Messenger messages, the company said, and only attacks Chrome; if another browser is detected, the user is directed to an innocuous advertisement.

Victims were tricked into installing the malicious extension as a codec extension, offered when they clicked a Facebook Messenger link to a YouTube video.

“FacexWorm is a clone of a normal Chrome extension but injected with short code containing its main routine,” the post said. “It downloads additional JavaScript code from the CC server when the browser is opened. Every time a victim opens a new webpage, FacexWorm will query its CC server to find and retrieve another JavaScript code (hosted on a Github repository) and execute its behaviours on that webpage”.

[Figure: The FacexWorm infection chain]

To that is added the ability to steal account credentials for websites of interest to FacexWorm, while redirecting victims to cryptocurrency scams. The Trend post added that it also “injects malicious mining codes on the webpage, redirects to the attacker’s referral link for cryptocurrency-related referral programs, and hijacks transactions in trading platforms and web wallets by replacing the recipient address with the attacker’s.”
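The transaction hijack Trend Micro describes boils down to a find-and-replace on address-shaped strings. The simulation below is a sketch for illustration only: the regex and placeholder wallet are assumptions, and the real FacexWorm runs as injected JavaScript in the browser, not as Python:

```python
import re

# Ethereum-style addresses are "0x" followed by 40 hex characters.
ETH_ADDRESS = re.compile(r"0x[0-9a-fA-F]{40}")

# Placeholder attacker wallet for the simulation; not a real address.
ATTACKER_WALLET = "0x" + "ab" * 20

def hijack_form_field(field_value: str) -> str:
    """Simulate the reported swap: any Ethereum-style address found in
    a form field is silently replaced with the attacker's wallet."""
    return ETH_ADDRESS.sub(ATTACKER_WALLET, field_value)
```

Because the page otherwise renders normally, the only defense at confirmation time is re-reading the recipient string character by character, which is precisely what most users never do.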

In case it got nowhere trying to hijack transactions, the extension would also try to pick up pennies with referral scams targeting Binance, DigitalOcean, FreeBitco.in, FreeDoge.co.in, HashFlare, and others.

Once infected with the extension, a user searching for cryptocurrency-related words in the URL bar – “blockchain” or “ethereum”, for example – would be hijacked to a fraudulent page. That page asks users to send 0.5 to 10 ether to the attacker’s wallet “for verification”, promising 5 to 100 ether in return. “We have so far not found anyone who has sent ETH to the attacker’s address,” Trend’s researchers said.

It seems there’s a limit to people’s folly, after all. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/01/bitcoin_hijackers_found_at_least_one_sucker_for_scam_chrome_extension/