
How to Build a Path Toward Diversity in Information Security

Hiring women and minorities only addresses half the issue for the IT security industry — the next step is retaining these workers.

BLACK HAT USA – Las Vegas – Some 1.8 million information security professionals will be needed in the next five years worldwide, further driving home the need to expand the pool of potential candidates by bringing more women and minorities into the mix, speakers on the “Making Diversity a Priority In Security” panel said here today at Black Hat.

Not only are companies looking to fill vacant job openings, but they are increasingly seeking to add diversity to the workforce.

Diversity goes beyond a person’s gender and race; it also brings the benefit of diversity of thought, says panelist Anthony Johnson, managing director and business information security officer at JPMorgan Chase & Co.

Panelist Aubrey Blanche, global head of diversity and inclusion at Atlassian, noted that empirical research has shown that when employees are working with people who are different than they are, they process information differently. As a result, one potential benefit may be coming up with ideas and innovation by studying an issue from a different perspective, Blanche says.

It’s this potential benefit that prompts some companies to hire women and people of color for information security roles, even when their experience is less extensive than that of other candidates, the panelists noted.

“You can say hire more people, but that doesn’t solve the problem. You need to have a diversity program that gets the pipeline flowing,” said Johnson.

Some of the panelists said their organizations are working on initiatives to encourage high school, middle school, and even elementary school-aged students, to learn about the cybersecurity field.

Palo Alto Networks, for example, teamed up with the Girl Scouts of the USA, announcing last month that it would help deliver a national Girl Scout cybersecurity badge for students in kindergarten through 12th grade.

“We partnered with the Girl Scouts to offer cybersecurity badges to K-12 girls, so all these girls will be exposed to cybersecurity,” says Rick Howard, Palo Alto Networks chief security officer.

Another way to get hiring managers, internal recruiters, and others involved in the hiring process to reach out and interview a diverse pool of job applicants is to tie doing so to performance bonuses or some other form of financial reward, says Mary Chaney, vice president of the International Consortium of Minority Cybersecurity Professionals (ICMCP).

Job descriptions often present a list of must-have and want-to-have requirements that preclude women and minorities. One way to bridge that gap is to write more approachable and realistic job descriptions that open the door for entry-level applicants as well.

“Women don’t apply for jobs, even if they are 80% qualified. They won’t apply because they don’t meet the other 20%,” Chaney says.

Maintaining a Diverse IT Security Workforce

Hiring women and minorities only addresses half the issue for the IT security industry. The next step is retaining these workers, according to the panel.

The number one reason women and minorities leave is because of mistreatment, Blanche says. One way her company sought to address attrition was by eliminating the subjective portions of performance evaluations, she added.

“Sometimes if a woman’s voice is silenced during meeting after meeting, she goes silent,” Chaney explained. She added that women tend to stay where they are valued and have a good support system.

Palo Alto Networks’ security team has marching orders from Howard that sexist jokes and comments will not be tolerated, he noted.

In the Black Hat keynote address here earlier in the day, Alex Stamos, CISO of Facebook, recounted how two male engineers were treating a female security team member with disdain and disrespect. Stamos chastised the two engineers, and was surprised when the employee afterward called him over and asked him not to rush to her defense in the future, explaining that it would be harder for her to gain respect and credibility with the engineers and the team if he kept doing so.

Atlassian’s Blanche said one way she dealt with finding her voice to speak up in meetings – after feeling she was frequently dismissed – was to call on a peer who created “space for her.” In the meetings, he would ask Blanche what she thought, and over time she began to participate in the discussions.

“Over time I could say something and didn’t feel like I would die,” she said.


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/threat-intelligence/how-to-build-a-path-toward-diversity-in-information-security/d/d-id/1329476?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft adds all of Windows – including Server – to extended bug bounty program

Microsoft has extended its Windows Insider bug bounty program to cover the whole of the OS, made the program run indefinitely, and added Windows Server Insider to the eligibility list.

Redmond previously offered bounties for specific Windows features only. Now you can score sweet Seattle-sourced dollars for finding a problem with any aspect of Windows. Rewards of up to US$15k are yours for the reaping.

Microsoft’s also trying to get you to devote most attention to its preferred ‘focus areas’. Hyper-V is currently top priority: a bad bug in that code can earn you up to US$250k, which is $50k more than is on offer for any other bug and an increase on previous payments for those who find critical remote code execution, information disclosure and denial of service vulnerabilities in the virtualization code.

Windows Defender Application Guard is also a new focus, having been added to the program just this week. There’s $30k on offer for those who find critical vulns in the slow Windows Insider release track.

Mitigation bypass and Microsoft Edge are the other focus areas and attract bounties of up to $100k and $15k respectively.

Microsoft’s being quite generous with this program, because it will pay ten per cent of the prize on offer to the first researcher who finds a flaw its own people have already discovered. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/27/microsoft_extends_windows_bug_bounty_program/

‘SambaCry’ malware scum return with a Windows encore

Malware authors continue to chip away at Samba bugs similar to those that helped spread WannaCry/WannaCrypt.

Kaspersky researchers writing at Securelist say they’ve spotted a Windows variant of SambaCry, which was first spotted in June. The new variant has been dubbed “CowerSnail”.

The researchers strongly suspect CowerSnail comes from SambaCry’s developers, as it points to the same command-and-control (C&C) server.

The authors have designed their malware to be cross-platform, writes Kaspersky’s Sergey Yunakovsky: it’s compiled using Qt, with a library framework that provides “cross-platform capability and transferability of the source code between different operating systems.”

The only penalty the developers suffer in trying to make the malware cross-platform is that the user code is only “a small proportion of a large 3 MB file”.

Yunakovsky reckons Qt was chosen so the creators could stick with familiar environments, and save themselves the pain of learning the details of Windows APIs, preferring to “transfer the *nix code ‘as is’”.

Unlike SambaCry, the CowerSnail authors don’t try to turn targets into cryptocurrency miners. Instead, infected machines get in touch with the C&C server (over the IRC protocol) and provide “standard backdoor functions”.

These include receiving updates, executing shell commands, and self-removal if needed. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/27/sambacry_malware_scum_return_with_a_windows_encore/

Reminder: Spies, cops don’t need to crack WhatsApp. They’ll just hack your smartphone

Police in Germany will forego seeking decryption keys for secure messaging apps, like WhatsApp, and instead simply hack devices to snoop on suspects.

Given the grumblings coming from Australia, the UK, and other Five Eyes states about encrypted messaging, we suspect these nations will follow suit – if they’re not there already.

While everyone freaks out about strong encryption, and how you can’t change the laws of math to only allow the good guys to decrypt messages, don’t forget: if crypto can’t be tamed, the authorities will just exploit software and firmware bugs to compromise targets’ phones, PCs and tablets.

When politicians talk of mandatory backdoors, this is probably what they mean: not necessarily backdoors in the cryptography, but back passages into suspects’ software and apps.

According to leaked documents from the German Interior Ministry this week, the Euro nation’s authorities will use a new version of remote communication interception software (RCIS) – better known as spyware – to pull communications directly from targets’ devices, rather than intercepting and decrypting traffic.

It is claimed the updated RCIS tools can be covertly installed on PCs and handsets, silently hijacking the gear so that communications can be monitored after they’ve been received and decrypted.

Dubbed “RCIS 2.01,” the toolkit is slated for release later this year, and works with desktop and mobile operating systems – including iOS, Android and BlackBerry. It can access chats in WhatsApp and Telegram, we’re told. Exactly how this spyware lands on a device is not clear: we imagine it can be physically installed, smuggled in an app or other download, injected wirelessly via a baseband or operating system exploit, or similar.

As broadcaster Deutsche Welle notes, thanks to a law passed last month, German police have the authority to hack and install software on the handsets and desktops of people they suspect to be terrorists.

Of course, governments keep a secret stash of bugs to exploit. The US and its Five Eyes allies have a suite of zero-day vulnerabilities and intrusion tools to attack handsets and desktops in order to eavesdrop on targets.

With government officials still struggling to convince the public of the need to give law enforcement the ability to decrypt communications on demand, the report out of Germany may well point the way toward future efforts to thwart encrypted chat apps. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/26/german_cops_pwn_phones/

Greek police arrest chap accused of laundering $4bn of Bitcoin

Police in Greece have arrested a Russian national they accuse of running the BTC-e Bitcoin exchange to launder more than US$4bn worth of the cryptocurrency.

According to Greek language news outlet the Daily Thess, FBI agents tracked 38-year-old Alexander Vinnik for more than a year before his arrest.

Another local outlet, Capital, says the Russian embassy in Greece has confirmed the arrest. The US will now begin extradition proceedings against Vinnik.

Since 2011, 7 million Bitcoins have gone into the BTC-e exchange and 5.5 million have been withdrawn, the police claim.

Vinnik was cuffed in possession of two laptops, two tablet computers, mobile phones, a router, a camera, plus four credit cards, all of which were seized.

The group Wizsec, which came to prominence in 2015 investigating the Mt Gox collapse, has gone on the record accusing Vinnik of laundering the proceeds of the Bitcoin theft that helped bring down the once-dominant exchange.

The group stops short of accusing Vinnik of stealing the Bitcoin himself, and while it doesn’t claim direct credit for the arrest, Wizsec says it and others have been working on the case and passing their findings to law enforcement.

In this post, Wizsec claims that after Mt Gox’s “hot wallet” private keys were stolen in 2011, Bitcoin from that wallet were transferred to wallets “controlled by Vinnik”. By 2013, “the thief had taken out about 630,000 BTC from Mt Gox”, the post says.

Wizsec spotted some of the stolen keys turning up at BTC-e wallets, and the same wallets were used to launder “coins stolen from Bitcoinica, Bitfloor and several other thefts” between 2011 and 2012.

The post says Vinnik transferred some coins back onto Mt Gox, where the “accounts he used could be linked to his online identity ‘WME’” – an identity the Wizsec folk believe was Vinnik.

Earlier this month, the trial of Mt Gox’s head Mark Karpelès kicked off in Japan. Documents filed with the Tokyo District Court say nearly 307,000 BTC went to BTC-e wallets.

In the July 26 court hearing in Thessaloniki, Vinnik denied the prosecutor’s accusations.

After a Greek appeals court hearing, the country’s Minister for Justice will make a final decision on extradition, and the Daily Thess says the maximum period of detention in Greece is two months. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/27/greek_police_arrest_alleged_russian_bitcoin_launderer/

Adobe’s Move to Kill Flash Is Good for Security

In recent years, Flash became one of the buggiest widely used apps out there.

Adobe this week announced plans to finally kill off its Flash media player by the end of 2020, citing obsolescence as one of the primary drivers for its decision. But the reason many want to see the end of the product is security.

For over two decades, Adobe Flash has powered video and interactive content on the Web, especially in areas such as gaming, education, and advertising. But in the past few years, it also became one of the buggiest apps out there.  

Statistics maintained by Mitre show that in 2016 alone a total of 266 vulnerabilities were disclosed in Flash Player, the vast majority of them critical remote-code-execution or denial-of-service flaws.

Despite widespread concern over security issues, the number of vulnerabilities in Flash Player actually increased in recent years instead of trending down. In fact, more than half of all vulnerabilities in Flash since 2005 — or 652 vulnerabilities out of 1,030 — were disclosed in just the last three years.

Many of the flaws in Flash have enabled widespread attacks against users running Windows, Chrome, and other platforms. In 2015, Flash accounted for some 17% of all zero-day vulnerabilities discovered that year. Four of the five most exploited zero days in 2015 were in Flash.

Eight of the top 10 security flaws leveraged by exploit kit makers in 2015 were in Flash, according to Recorded Future.

“Flash had the most vulnerabilities of any application — not operating systems — in 2016, and that is after years of Adobe ‘fixing’ Flash,” says John Pescatore, director of emerging security threats at the SANS Institute. So the company’s decision to pull the plug on the product is a good one, he says.

“To me, Flash was pretty much just built on a rotted foundation. No amount of added plywood or new shingles was ever going to make it structurally sound or anywhere near safe.”

As far back as 2010, Apple’s Steve Jobs cited Flash’s relative lack of security as one of multiple reasons why Apple would not pre-install the technology on iPhones, iPads, and iPods.

In recent years, all major browser makers — including Google, Apple, Microsoft, and Mozilla — have announced plans to gradually phase out support for the technology. Browsers such as Safari and Microsoft Edge already require users’ explicit permission to run the Flash plugin on websites instead of allowing it to run by default. Google has said it will do the same with Chrome soon.

Adobe itself portrayed its decision to end-of-life Flash as being driven by technology trends. In an alert Tuesday, Adobe said technologies such as HTML5, WebGL, and WebAssembly have matured to a point where they have become viable alternatives to Flash for multimedia content on the Web.

Most browser makers have begun integrating capabilities directly into their browsers that were once available only via plugins like Flash. Given this progression, Adobe has decided to terminate support for Flash at the end of 2020, the company said.

Microsoft, Google, and Apple issued simultaneous alerts endorsing the move while reminding users of previously announced plans for phasing out support for Flash in each of their products soon.

Facebook, a platform for which many developers have built Flash-powered games, noted how the evolution of WebGL and HTML5 standards had almost made Flash obsolete, and it urged developers to follow the deadlines set by browser makers. Games built on Flash will continue to run through the end of 2020, but developers should make plans for migrating to other technologies soon, the company noted.

“Adobe Flash has been heavily leveraged in advertising, media, and e-learning spaces,” says Mark Butler, CISO of Qualys. “But unfortunately, Adobe has not kept pace with the necessary security updates in order to outweigh the benefits of using the product.”

Organizations that rely on Flash should consider moving to HTML5 quickly as it meets the functional needs that Flash previously met. Unlike Flash, HTML5 doesn’t require any plugins and allows for seamless inclusion of audio and video files into code, he says. HTML5 is also an open technology that all new browsers have begun to incorporate.

“Security best practice dictates removal or maintaining current patch levels of Adobe’s Flash and Java software versions,” he said.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/adobes-move-to-kill-flash-is-good-for-security/d/d-id/1329472?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Attackers Use Machine Learning to Predict BEC Success

Researchers show how scammers defeat other machines, increase their success rate, and get more money from their targets.

BLACK HAT USA – Las Vegas – Researchers from Symantec demonstrated how threat actors can employ machine learning models to drive the success rate of business email compromise (BEC) attacks.

BEC scams are targeted attacks on high-level executives. Attackers rely on social engineering to craft emails and convince execs to perform financial transactions, such as wire transfers, on short notice. The more a victim trusts a fraudulent email, the more likely an attack will succeed.

These scams have targeted more than 400 organizations and caused more than $3 billion in losses, said security response lead Vijay Thaware during the presentation. Attackers exploit three “defects” in human psychology: fear, curiosity, and insecurity.

BEC doesn’t require a lot of funding, and most of the information attackers need is available for free online. Twitter, LinkedIn, and Facebook give a well-rounded picture of targets’ lives. Company websites reveal corporate hierarchies, names of C-suite execs, and the amount of time each has been with the organization, all information that could be useful to attackers.

“It’s all about how you present yourself over the Internet,” said Thaware. “This data can reveal many things about us.”

To illustrate his point, he presented a screenshot of a basic Google search: “chief financial officer” + “email.” It was an easy and effective way to get execs’ contact information, and in some cases their email addresses were available directly from the results page.

Ankit Singh, threat analyst engineer, explained how this reconnaissance and profiling prepares threat actors to launch BEC attacks. They can use machine learning to increase their success rate and extract more money from their targets.

“Machine learning can help the attacker to bypass signature-based detection systems,” he explained. “It can be used to predict various outcomes of new data based on patterns of old data.” These models can also defeat other machines and anti-spam telemetry, he added.

Singh said this project involved supervised machine learning. In his demonstration, he showed how emails sent to BEC targets were marked as a “success” if the attack worked and “failure” if it didn’t. The demo included targets’ personal information like age, sex, number of LinkedIn connections, and number of followers and posts on Twitter.

All of this personal information was fed into the training model, which could then predict whether an attack would be successful. If an attack worked, its details were fed back into the model to improve accuracy for future attacks.

“We feed data back into the model so the machine can learn what kind of profile is not attackable,” said Singh.
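The supervised setup Singh describes (label each attempted attack a success or failure, train on profile features, feed outcomes back in) can be sketched as a toy logistic-regression loop. Everything below is illustrative: the feature names, numbers, and training data are invented for the sketch, not taken from the Symantec demo.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.1, epochs=3000):
    """Fit a tiny logistic-regression model by per-sample gradient descent."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            error = pred - y
            weights = [w - lr * error * xi for w, xi in zip(weights, x)]
            bias -= lr * error
    return weights, bias

def predict(weights, bias, x):
    """Modelled probability that an attack on this profile succeeds."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# Invented feature vectors: [normalised age, connections/1000, tweets/1000]
X = [[0.3, 0.1, 0.2], [0.6, 0.9, 0.8], [0.4, 0.2, 0.1], [0.7, 0.8, 0.9]]
y = [0, 1, 0, 1]  # 1 = attack succeeded, 0 = attack failed

w, b = train_logistic(X, y)
p = predict(w, b, [0.65, 0.85, 0.8])  # score a new, unseen profile
```

Retraining as each new labelled outcome arrives is what gives the attacker’s model its feedback loop; a real implementation would use a proper ML library rather than hand-rolled gradient descent.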

He emphasized the importance of timing in a BEC attack; threat actors can use targets’ schedules to plan their attacks on organizations. When they know who is doing what at a specific time, they can better plan when to send an email and what it should say.

Singh demonstrated this idea with the example of an executive traveling to an event, showing how the exec’s Twitter timeline, keynote schedule, and travel plans could indicate when he might be in transit or working.

To make their fraudulent email more believable, attackers can register domain names similar to those of the companies they are trying to imitate. This can be done for little money and effectively trick individuals and organizations, he explained.
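The lookalike-domain trick also suggests a simple defensive check: flag sender domains that sit within a small edit distance of your own. The sketch below is a minimal, hypothetical example (the domain names are made up), not a production detector.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

LEGIT = "examplecorp.com"  # hypothetical company domain

def is_lookalike(candidate, legit=LEGIT, max_distance=2):
    """Flag a domain that is close to, but not identical to, the real one."""
    if candidate == legit:
        return False
    return levenshtein(candidate, legit) <= max_distance

suspicious = is_lookalike("examp1ecorp.com")  # digit '1' swapped for 'l'
```

Real homoglyph attacks also swap visually identical Unicode characters, which plain edit distance alone won’t catch.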

Singh advised his Black Hat audience to be “very, very suspicious” when replying to emails. More than enough personal data is available publicly and can be used for social engineering. As attackers label successful and unsuccessful attacks, their model can better determine when their tactics will work.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/vulnerabilities---threats/how-attackers-use-machine-learning-to-predict-bec-success/d/d-id/1329475?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Garbage in, garbage out: a cautionary tale about machine learning

Here’s the thing about machine learning: use the right datasets and it’ll help you root out malware with great accuracy and efficiency. But the models are what they eat. Feed them a diet of questionable, biased data and they’ll produce garbage.

That’s the message Sophos data scientist Hillary Sanders delivered at Black Hat USA 2017 on Wednesday in a talk called “Garbage in, Garbage Out: How Purportedly Great Machine Learning Models Can Be Screwed Up By Bad Data”.

The machine learning movement

A lot of security experts tout machine learning as the next step in anti-malware technology. Indeed, Sophos’ acquisition of Invincea earlier this year was designed to bring machine learning into the fold.

Machine learning is considered a more efficient way to stop malware in its tracks before it becomes a problem for the end user. Some of the high points:

  • Deep learning neural network models lead to better detection and lower false positives.
  • It roots out code that shares common characteristics with known malware, but whose similarities often escape human analysis.
  • Behavioral-based detections provide extensive coverage of the tactics and techniques employed by advanced adversaries.

But it would be dishonest to suggest that machine learning is the silver bullet – the security remedy that can do no wrong. As Sanders noted, no technology is perfect and its creators should always analyze weaknesses and come up with bigger and better models.

Biased data

In her talk, Sanders explained the problem this way:

  1. Model accuracy claimed by security machine learning researchers is always wrong.
  2. It’s almost always biased in an overly optimistic direction.
  3. Estimating the severity of that bias is important, and will help ensure your model isn’t garbage.

She said:

Standard model validation results can be misleading. We want to know how our model is going to actually do in the wild, so we can make sure it doesn’t fail horribly. This is impossible. But we can still estimate. If we have access to an unbiased sample of deployment-like data, we can simulate our model’s deployment errors via time decay analysis. However, if we don’t have access to deployment-like data, then it’s impossible to accurately estimate how well our model will do on deployment, because we don’t have the right data to test it on.

The next best option, she said, is to test how sensitive one’s models are to new datasets they weren’t trained on, and pick training datasets and model configurations that perform consistently well on a variety of test sets, not just the test datasets that originate from the same parent as the model’s training dataset.

That helps give us a sort of very rough ‘confidence interval’ surrounding deployment accuracy, and also improves the likelihood that our model won’t do poorly on deployment.
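Sanders’ advice boils down to a train-on-one, test-on-all grid. As a minimal sketch (with an invented majority-label stand-in for a real classifier and made-up URL feeds), it might look like:

```python
def fit_majority(data):
    """Toy 'model': always predict the majority label of the training set."""
    ones = sum(label for _, label in data)
    majority = 1 if ones * 2 >= len(data) else 0
    return lambda sample: majority

def evaluate(model, dataset):
    """Accuracy of a fitted model on one labelled dataset."""
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def sensitivity_matrix(sources, fit):
    """Train on each source, test on every source, return an accuracy grid."""
    return {
        train_name: {test_name: evaluate(fit(train_data), test_data)
                     for test_name, test_data in sources.items()}
        for train_name, train_data in sources.items()
    }

# Hypothetical URL feeds: (url, label) pairs, 1 = malicious
sources = {
    "feed_A": [("http://a/1", 1), ("http://a/2", 1), ("http://a/3", 0)],
    "feed_B": [("http://b/1", 0), ("http://b/2", 0), ("http://b/3", 1)],
}

matrix = sensitivity_matrix(sources, fit_majority)
```

A large gap between the diagonal (same-source) and off-diagonal (cross-source) accuracies is the warning sign that a claimed accuracy is optimistically biased.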

Minimize the probability of failing spectacularly

Since machine learning in security is still relatively new, there’s no bullet-proof answer to how to root out the garbage. But Sanders suggested some starting points.

To select the best possible training set and model configuration, one must map the limitations of the fitted model to establish a more accurate starting point, she said.

To get a more accurate measurement, Sanders ran Black Hat attendees through some sensitivity results from the same deep learning model designed to detect malicious URLs, trained and tested across three different sources of URL data.

By simulating the errors, we can better develop training datasets and model configurations that are most likely to perform reliably well on deployment, Sanders said.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EChZL88oii0/

Revealed: 779 cases of data misuse across 34 British police forces

A freedom-of-information request by Huntsman Security has discovered that UK police forces detected and investigated at least 779 cases of potential data misuse by personnel between January 2016 and April 2017.

Despite the high number of cases, the same request also revealed that the vast majority of the 34 police forces approached [1] are taking steps to improve their monitoring systems. For all but one of the forces, those plans include monitoring IT systems to ensure they aren’t being accessed or used for unethical purposes.

The findings, out Wednesday, come just months after the government’s official PEEL: Police Legitimacy 2016 report, which found that forces needed to do more to investigate and prevent staff abuse of IT systems and sensitive personal data, in order to assure public confidence.

Published in January 2017, the PEEL report investigated whether police forces and personnel were treating their privileged status ethically, and how this affected their legitimacy. The study reviewed issues such as whether personnel were accessing and abusing stored personal data, concluding that more than a third (37 per cent) of forces “required improvement” one way or another. The report recommended that forces be more proactive in finding and investigating data handling issues, rather than waiting for complaints from members of the public or from within the organisation.

The FoI also found that the rate of cases being investigated has risen, even without counting ongoing investigations: the 603 investigations across 2016 work out to roughly 1.6 a day, while the 176 in the first 100 days of 2017 come closer to 1.8 a day.
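The per-day comparison behind that claim is easy to sanity-check (using 366 days for the leap year 2016):

```python
# Figures from the FoI response quoted above
cases_2016, days_2016 = 603, 366   # 2016 was a leap year
cases_2017, days_2017 = 176, 100   # first 100 days of 2017

rate_2016 = cases_2016 / days_2016
rate_2017 = cases_2017 / days_2017
print(f"2016: {rate_2016:.2f} cases/day; early 2017: {rate_2017:.2f} cases/day")
# prints: 2016: 1.65 cases/day; early 2017: 1.76 cases/day
```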

While at a glance this might imply the problem is getting worse, it could also show improved ability to detect when staff are accessing data that they shouldn’t, Huntsman Security said. The FoI also showed that the majority of forces have begun introducing better monitoring systems, as recommended in the PEEL report. Better monitoring will assist police in becoming more adept at identifying misuse of systems and applications.

“Public trust and legitimacy is critical for the police: without these, a modern police force risks losing the confidence of the people it aims to serve,” said Peter Woollacott, chief exec of Huntsman Security. “If there is any prospect of the safety and security of information being at risk, then every action should be taken to safeguard it before damage is done. The PEEL report highlighted that forces cannot rely on abuses being reported.

“Implementing systems that don’t themselves intrude on privacy, but can identify when someone is accessing data that they shouldn’t be, is a good way for forces to ensure all personnel are behaving in an ethical manner when it comes to sensitive data.” ®

[1] Huntsman Security contacted 45 police forces across the UK. Ten forces did not respond, while one responded but did not provide any information.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/26/uk_police_data_handling_foi/

Details of 400,000 loan applicants spilled in UniCredit bank breach

Italian bank UniCredit admitted on Wednesday that a series of breaches, undetected for nearly a year, exposed the personal data of 400,000 loan applicants.

In an English-language statement, UniCredit blamed an unnamed third-party provider for exposing Italian customer data – including International Bank Account Numbers (IBANs).

A first breach seems to have occurred in September and October 2016, and a second, identified only recently, in June and July 2017. Data of approximately 400,000 customers in Italy is assumed to have been impacted during these two periods. No data such as passwords allowing access to customer accounts or allowing for unauthorised transactions has been affected, while some other personal data and IBAN numbers might have been accessed.

Milan-based UniCredit said that it had closed the breach and informed authorities while embarking on a security audit that will likely tap into at least some of the €2.3bn budget previously allocated towards upgrading and strengthening its IT systems.

The breach at Italy’s biggest lender was detected 10 months after the initial compromise, according to UniCredit, a matter of some concern. Affected customers are at heightened risk of follow-up phishing attacks that leverage the spilled data in order to coax out yet more sensitive information.

Nick Pollard, security intelligence and analytics director at Nuix, noted that the breach took place less than a year before tougher data protection rules in the shape of the General Data Protection Regulation (GDPR) come into force in Europe. “This latest data breach goes to show the importance of a unified regulation such as GDPR in making third parties accountable for security concerns. GDPR ensures that data is accounted for, protected and access to it is managed,” he said.

“The recent UniCredit data breach is a prime example of knowing where the data is, but not ensuring it is properly protected and managed. 400,000 customers’ data was put at risk by a third-party supplier. Whilst the fact they know this shows they are doing a better job than most, the delay in revealing this goes to show that any business with large amounts of data must have full understanding of where, how and who manages it.”

Donato Capitella, senior security consultant at MWR InfoSecurity, added: “This compromise of UniCredit customer data confirms the risks that organisations face by interconnecting their own IT systems with the ones belonging to their third-party suppliers.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/26/unicredit_bank_breach/