STE WILLIAMS

Train to become an expert cyber crime fighter

Promo As cyber threats seem to multiply and mutate at ever-increasing speed, it becomes difficult to be sure you are able to defend your organisation against an attack that could come from any direction.

Security training leader SANS is running a series of courses at the Grand Connaught Rooms in London from 16 to 21 April that promise to give IT professionals the immersion training they need to defend their systems against the cyber criminals.

SANS London will deliver a range of six-day courses covering the latest cyber security topics and preparing attendees for valuable GIAC certification.

Teaching by expert security practitioners will be backed by intensive hands-on sessions, and SANS makes a point of reassuring students they will be able to use their new skills as soon as they return to work.

There’s a bunch of courses available here, including the following:

  • Defeating advanced adversaries: implementing kill chain defenses Recent attacks are analysed through in-depth case studies that illustrate the types of attacks and outline the advanced persistent threat attack cycle. A hands-on exercise will require students to compromise a virtual organisation, “SyncTechLabs”.
  • Windows forensic analysis This will teach how to recover, analyze, and authenticate forensic data on Windows systems for use in incident response, internal investigations, and civil/criminal litigation.
  • Intrusion detection in-depth This course emphasises that intrusion detection system (IDS) alerts are a starting point for examination of traffic, not a final assessment. You will learn to investigate activity to decide whether it is noteworthy or a false indication.
  • Hacker tools, techniques, exploits and incident handling This course addresses the latest attack methods and provides a step-by-step process for responding to computer incidents. It also explores legal issues such as employee monitoring, working with law enforcement and handling evidence.
  • Web app penetration testing and ethical hacking This course aims to teach how to better secure organisations through penetration testing and will help you demonstrate the true impact of web application flaws. It culminates in a web application pen test tournament, powered by the SANS NetWars Cyber Range.

You can read more details about these courses and sign yourself up for some top grade training from SANS right here.

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/02/train_to_become_an_expert_cyber_crime_fighter/

A Sneak Peek at the New NIST Cybersecurity Framework

Key focus areas include supply chain risks, identity management, and cybersecurity risk assessment and measurement.

The National Institute of Standards and Technology’s (NIST) updated Cybersecurity Framework, scheduled for release later this year, should provide some welcome new advice for organizations struggling to manage cyber-risk in the current threat environment.

The key areas where the framework will provide guidance are supply chain risks, identity management, and cybersecurity risk assessment and measurement. NIST released two draft framework updates containing the changes last year – the second in December 2017. It is currently reviewing public comments and will release a finalized version in the spring.

A De Facto Standard
First published in February 2014, the Cybersecurity Framework was originally developed to help critical infrastructure operators assess cyber risk and implement business-appropriate countermeasures for dealing with those risks. Over the years, the framework has been adopted by critical infrastructure organizations along with other industries of all sizes. Its most important contribution has been to create a common vocabulary for identifying, protecting, detecting, responding and recovering from cyber threats. The guidelines in the framework have become a standard for cyber-risk management for many enterprises and, since last May, a mandated requirement for US federal agencies.

The updates in version 1.1, according to NIST, are designed to amplify the framework’s value and make it easier to use. Here are some key features:

Descriptions, Definitions, Processes
The new version of the NIST Cybersecurity Framework will introduce simple descriptions and definitions for identifying all the stakeholders and associated cyber-risks in an organizational supply chain. It will also highlight methods for identifying security gaps within the supply chain itself and in related management processes.

Measuring Risk
Risk assessment is another area where organizations can expect to find fresh insight. There is now a revised section on measuring and demonstrating cybersecurity effectiveness, along with a new section on self-assessing cyber-risk. The section will highlight how organizations can identify, measure and manage cyber-risk to support their broader business goals and outcomes. The updated framework will also provide a basis for organizations to not only assess their current cybersecurity risk but to convey it in a standard way to suppliers, partners and other stakeholders in order to reduce the chances of miscommunication.

Identity and Access Control
This section has been revised to provide more clarity around concepts like user authentication, authorization and identity-proofing. The goal is to help organizations identify the best processes for managing access amid the rapid growth of cloud, mobile and other computing paradigms.

The NIST Cybersecurity Framework was, and continues to be, completely voluntary. Except for federal agencies, no organization is required to follow any of the implementation practices contained in the framework. But considering how widely the framework is used these days, smart organizations will want to consider the distinct possibility that someday their security practices will be assessed against it.

 

Laurence Pitt is the Strategic Director for Security with Juniper Networks’ marketing organization in EMEA. He has over twenty years’ experience of cyber security, having started out in systems design and moved through product management in areas from endpoint security to … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/juniper/a-sneak-peek-at-the-new-nist-cybersecurity-framework/a/d-id/1331144?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

HTTPS cert flingers Trustico, SSL Direct go TITSUP after website security blunder blabbed

The websites for HTTPS certificate reseller Trustico, and one of its partners, SSL Direct, took a dive on Thursday – after a critical and trivial-to-exploit security flaw in Trustico.com was revealed on Twitter.

The vulnerability could be leveraged by miscreants to execute arbitrary commands on the website’s host server. A lack of input sanitization allowed carefully crafted commands, submitted as a URL in a web form, to be run on the underlying Linux-powered system, as root no less, meaning anyone who found and exploited the bug could take over the dot-com’s web servers.
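The bug class described here (user input flowing unfiltered into a shell running as root) and its textbook fix can be sketched in a few lines of Python. The function name and validation rule below are illustrative assumptions, not Trustico's actual code:

```python
import re

# Allow-list pattern for DNS hostnames: dot-separated labels of letters,
# digits and interior hyphens (a simplified version of RFC 1123 rules).
HOSTNAME_RE = re.compile(
    r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?"
    r"(\.[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?)*$"
)

def build_lookup_command(hostname: str) -> list:
    """Build an argv list for a DNS lookup of a user-supplied hostname.

    The vulnerable pattern looks like:
        subprocess.run("host " + hostname, shell=True)   # DO NOT DO THIS
    where input such as "example.com; rm -rf /" becomes a second shell
    command. Validating against an allow-list and passing discrete argv
    elements (shell=False) means no shell ever parses the input.
    """
    if not HOSTNAME_RE.fullmatch(hostname):
        raise ValueError("rejected input: %r" % hostname)
    return ["host", hostname]  # pass to subprocess.run(argv, shell=False)
```

Running the web service as an unprivileged user, rather than root, would additionally have contained the damage even if the validation failed.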

On Wednesday, UK-based Trustico hit the headlines after its CEO emailed the private keys to 23,000 Trustico-sold, Symantec-branded SSL/TLS certs to certificate authority DigiCert, forcing the latter to revoke the certs as per the industry’s security standards. DigiCert owns and operates the Symantec umbrella of HTTPS certificate issuers.

Trustico stopped selling Symantec-branded certificates in mid-February, and will in future resell Comodo’s HTTPS certs, ahead of Google Chrome and Mozilla Firefox automatically rejecting Symantec-branded SSL/TLS certificates later this year. Trustico appears to have wanted to move its customers onto Comodo-issued certificates, and one way of doing this was to demand DigiCert revoke 50,000-odd Symantec-branded certificates sold via Trustico.

DigiCert will now cancel the 23,000 certs linked to the emailed private keys. What’s happening with the other 27,000 isn’t clear amid all this messy drama. Trustico said it recovered the “private keys from cold storage,” having kept them for revocation purposes. Generally speaking, the only parties who ought to retain an HTTPS certificate’s private key are the holder and owner of the certificate, not a reseller or other intermediary.

Trustico’s staff have insisted the Brit biz has done nothing wrong: it just wanted the certs revoked. DigiCert was not impressed.

Now the website goes down

On Thursday morning, Serbian security researcher Predrag Cujanović tweeted details of a critical flaw in Trustico’s website. The site was pulled offline – it just returns a 503 error – a move that also took out the website of SSL Direct, which uses Trustico as its “technology and solution provider.” SSLDirect.com was sharing Trustico.com’s server, it appears.

“This vulnerability was public already (that’s how I found it), I only pointed out how bad it is (a web service running as root user),” Cujanović later explained. “There was no protection in place and I didn’t read any sensitive information.”

Perhaps someone ran rm -rf --no-preserve-root / on the box. No, don’t try that at home. Or work.

At time of going to publication, Trustico’s website was still down, and there was no official word on the cause from the company, which has been silent on social media and has not returned our requests for comment. ®

Updated to add

Trustico director Zane Lucas has been in touch to say the website’s server was not connected to systems holding customer information, and the vulnerable web app was a tool for inspecting websites’ certificates rather than a service involving customer data. The site was taken down while the biz investigates, we’re told.

“We can’t go into the specifics, but what I can say to you is that we shut down the development tools and the web server they were running on temporarily to investigate the tool in question,” Lucas told us.

“We haven’t found any evidence of a breach, though we disabled the tools pending a full investigation.

“It should be noted that the server that the tools are running on are not connected with any databases or services that contain customer data. The tools in question are development tools that customers can use to learn the intricacies of an SSL certificate, as indicated on the page – they are not designed for production use.”

Bootnote

TITSUP, abbr.: Total Inability To Sell Usual Products

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/01/trustico_website_offline/

Microsoft lobs Skylake Spectre microcode fixes out through its Windows

Microsoft is pushing out another round of security updates to mitigate data-leaking Spectre side-channel vulnerabilities in modern Intel x64 chips.

Redmond said those who run Windows 10 Fall Creators Update and Windows Server Core with Skylake (aka 6th-generation Core) CPUs can go through the Microsoft Update Catalogue to get KB4090007, which contains Intel’s latest microcode patches to address Spectre design flaws in the processor silicon.

Specifically, the update will give those machines patches for CVE-2017-5715, also known as Spectre Variant 2. The branch target injection flaw would potentially allow malware on a PC or server to steal sensitive data, such as passwords, from kernel, hypervisor, or application memory.

The Skylake fixes are part of a larger line of microcode updates for the Spectre flaws that Intel is planning to roll out in the coming weeks. Chipzilla said people should obtain the security patches from their computer manufacturers, or via Microsoft.

Microsoft also gave an update on its work to address the compatibility issues that have arisen between some antivirus apps and its Meltdown/Spectre mitigations.

Redmond said that while it believes the “vast majority” of commercial anti-malware products are now able to handle the mitigations without triggering a blue screen of death, there are still some packages that may have problems. Microsoft will therefore continue to check which antivirus packages are in use, and whether they are compatible with the fixes, before a system is allowed to install the updates.

“We will continue to require that an AV compatibility check is made before delivering the latest Windows security updates via Windows Update until we have a sufficient level of AV software compatibility,” Microsoft explained. “We recommend users check with their AV provider on compatibility of their installed AV software products.”

Microsoft’s next scheduled security update for all of its products (read: Patch Tuesday) is March 13. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/01/intel_microsoft_skylake_spectre/

Securing the Web of Wearables, Smartphones & Cloud

Why security for the Internet of Things demands that businesses revamp their software development lifecycle.

Wearables are only a small part of the Internet of Things (IoT), a complicated mesh of smart devices, mobile phones, and several applications working together in a digital ecosystem.

The IoT “user experience” is a product of interactions between wearables, smartphones, and applications and analytics software hosted in the cloud. Securing this web of hardware and software is a tricky challenge for companies accelerating into the IoT, an environment that didn’t exist until a few years ago, says Deep Armor founder and CEO Sumanth Naropanth.

“What we’re seeing industry-wide is that this class of products is somewhat initiating a paradigm shift in the entire security development lifecycle,” he explains. “[Businesses] are now responsible for changing the old security development lifecycle (SDL) frameworks and best practices into something more agile.”

At this year’s Black Hat Asia, taking place March 23–26 in Singapore, Naropanth will discuss security and privacy research related to the development of IoT devices, including a custom SDL designed to incorporate wearables, phones, and the cloud. The session will elaborate on flaws and privacy issues related to IoT, and best practices for building new connected products.

[Learn more about the IoT security shift in Black Hat Asia session “Securing Your In-Ear Fitness Coach: Challenges in Hardening Next Generation Wearables,” in which Naropanth will discuss gaps in IoT security and necessary changes to the software development lifecycle.]

Naropanth says it’s time for businesses to think about the bigger picture and secure the broader IoT ecosystem rather than getting bogged down with ingredient-level IoT security. This means securing not only individual devices but also the software and services connecting them.

“Looking at a fitness tracker or IoT device, what you see is really not everything that exists,” he explains. “It’s like the tip of the iceberg.” Behind the small activity monitor on your wrist is an array of APIs, Web portals, cloud services, and more often than not, a mobile application.

Speed vs. Security
Businesses in the IoT market are learning how to be more agile, Naropanth explains. This is especially relevant to startups, which often view security as an “expense without returns.”

That isn’t true in wearables and IoT. Companies are now worried about things like securing the hardware, doing secure boot, updating mobile phones securely, and doing crypto on a very limited IoT software stack — all while racing to launch their products before anyone else. Most enterprises working on IoT and wearables are taking security seriously, he adds.

“The challenge for them is more about how to balance their time to market with adequate enough security so the product is at least reasonably secure when it goes out the door,” Naropanth says.

A key component of this updated SDL is evaluating the ecosystem. Your company may be building a fitness tracker that has to work with Wi-Fi, Bluetooth, or a cloud-based component someone else has developed. It’s your job to navigate the interoperability challenges related to the hardware and software connecting to the wearable.

Developing for the IoT: What to Keep in Mind
Naropanth explains two best practices for IoT businesses to prioritize as they create and connect new products. The first: getting the security and product teams on the same page.

“Catching security weaknesses earlier and earlier in the product lifecycle — it helps everyone,” he says. “For the company and for the enterprise, it saves a lot of money.” It’s better to catch security weaknesses early than when code is about to ship; if errors are found later on, a larger team is needed to address the problem, he adds.

“We strongly encourage product teams to engage with the security team early in the process. It helps us find the weaknesses early on and reduce the number of bugs that get caught later.”

Naropanth also recommends that IoT developers look at existing vulnerabilities from an IoT point of view. Often, old flaws can have a “butterfly effect” in the IoT and lead to wearable devices getting bricked. Furthermore, vulnerabilities in some parts of the IoT — for example, a smartphone — can affect other connected devices.

Black Hat Asia returns to Singapore with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier solutions and service providers in the Business Hall. Click for information on the conference and to register.

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/iot/securing-the-web-of-wearables-smartphones-and-cloud/d/d-id/1331174?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Chafer’ Uses Open Source Tools to Target Iran’s Enemies

Symantec details operations of Iranian hacking group mainly attacking air transportation targets in the Middle East.

Iran’s hacking activity has increased against targets in its geographical neighborhood and one group has taken aim at commercial air travel and transport in the region.

Symantec says the group, which it calls Chafer, has increased both its level of activity and the number of tools used against organizations in the Middle East.

Chafer is not a new group: Reports of its activities go back more than two years. And according to Symantec, in addition to air travel, Chafer’s hit list includes airlines, aircraft services, software and IT services companies serving the air and sea transport sectors, telecom services, payroll services, engineering consultancies, and document management software.

Vikram Thakur, technical director and a lead researcher at Symantec, says that Chafer thus far has been engaged in intelligence-gathering activities rather than any activity that could be seen as directly disruptive. “Chafer is looking for information on how the airlines work; what things cost, the process, how to acquire things. We don’t have any insight on precisely what they want,” Thakur says, emphasizing that there are many different uses for the kind of information harvested by the group.

Adam Meyers, vice president of intelligence at CrowdStrike, says that the motivation behind the information-gathering may not be economic. “The thing that you need to keep in mind is that regionally there have been a lot of issues around air traffic, for example some of the kerfuffle between the UAE and Qatari aircraft,” he explains. “So understanding who’s traveling where is important.”

Equally important is understanding the tools Chafer (which Crowdstrike calls Helix Kitten, and others call Oil Rig) is now using for its attacks. “Malware authors and attackers are making much higher use of open source and multi-purpose tools,” Thakur says, including several that companies could find themselves using as part of their legitimate network and application delivery infrastructures.

According to Symantec’s research, among the new tools Chafer uses are:

  • Remcom: An open-source alternative to PsExec, which is a Microsoft Sysinternals tool used for executing processes on other systems.
  • Non-sucking Service Manager (NSSM): An open-source alternative to the Windows Service Manager; it can be used to install and remove services, and will restart services if they crash.
  • GNU HTTPTunnel: An open-source tool that can create a bidirectional HTTP tunnel on Linux computers, potentially allowing communication beyond a restrictive firewall.
  • UltraVNC: An open-source remote administration tool for Microsoft Windows.
  • NBTScan: A free tool for scanning IP networks for NetBIOS name information.

These are in addition to other open source tools, such as Pwdump and Plink, that the group has been using for some time.

“Companies should be looking at these tools on a case-by-case basis to see if they’re being used by their administrators or have been put in place by hackers,” Thakur says. “They need to look at their own network to see if [these tools are] out there.”
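Thakur's advice about auditing your own network for these dual-use tools could start with something as simple as a filename sweep of managed hosts. This is a minimal sketch under stated assumptions: the filename list is inferred from the tools named above, and a real triage would also compare file hashes and code signatures, since attackers routinely rename binaries.

```python
import os

# Filenames associated with the dual-use tools named above. This list is
# illustrative, not exhaustive -- pair name checks with hash/signature checks.
SUSPECT_NAMES = {"remcom.exe", "nssm.exe", "htc", "hts",
                 "winvnc.exe", "nbtscan.exe", "pwdump.exe", "plink.exe"}

def find_suspect_binaries(root: str) -> list:
    """Walk a directory tree and return paths whose filename matches a
    known dual-use tool, for comparison against the list of software
    administrators are actually sanctioned to use."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower() in SUSPECT_NAMES:
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

Any hit is a triage question, not a verdict: the whole point of these tools is that legitimate admins use them too.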

Chafer’s most recent attacks are based on spear-phishing techniques that entice victims to open an Excel spreadsheet containing a malicious VBS file that runs a PowerShell script. Once the spreadsheet is opened, the script installs several data-gathering applications and begins spreading laterally through the network. The attack makes use of the Helminth malware that has been used, and continues to be developed, by Chafer and related groups.
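One hedged way to hunt for the delivery chain described above (an Office document spawning a script interpreter) is to flag suspicious parent/child pairs in endpoint process-creation logs. The process names and event format here are illustrative assumptions, not a complete detection rule:

```python
# Each event is a (parent_process, child_process) pair; in practice these
# would come from EDR or Sysmon process-creation telemetry.
OFFICE_APPS = {"excel.exe", "winword.exe", "powerpnt.exe"}
SCRIPT_HOSTS = {"wscript.exe", "cscript.exe", "powershell.exe", "cmd.exe"}

def flag_office_script_chains(events):
    """Flag process-creation events where an Office application spawns a
    script interpreter -- the shape of the Excel/VBS-to-PowerShell chain
    described above. Benign macros can trigger this too, so treat hits
    as leads for investigation rather than confirmed compromise."""
    return [(parent, child) for parent, child in events
            if parent.lower() in OFFICE_APPS
            and child.lower() in SCRIPT_HOSTS]
```
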

While Chafer so far has limited its attention to targets in the Middle East, those limits are organizational choices, not technical barriers. “There’s no technological barrier that they can’t cross to expand their target list. It’s very doable,” Thakur says. “If you compare their activity today versus three years ago, they’ve already expanded their mandate. We feel that, with a little time, they could easily expand out of the Middle East.”

Related Content:

Interop ITX 2018

Join Dark Reading LIVE for two cybersecurity summits at Interop ITX. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop ITX 2018 agenda here.

Curtis Franklin Jr. is executive editor for technical content at InformationWeek. In this role he oversees product and technology coverage for the publication. In addition he acts as executive producer for InformationWeek Radio and Interop Radio where he works with … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/chafer-uses-open-source-tools-to-target-irans-enemies/d/d-id/1331175?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Number of Sites Hosting Cryptocurrency Miners Surges 725% in 4 Months

The dramatic increase in cryptocurrency prices, especially for Monero, is behind the sudden explosive growth, says Cyren.

A new report from security vendor Cyren this week confirms assumptions about the recent explosive growth in the number of websites that host cryptocurrency mining software.

Cyren monitored a sample of 500,000 websites between September 2017 and January 2018 and found a 725% increase in the number of domains running cryptocurrency scripts on one or more pages over that period.

As fast as that growth has been, it’s still accelerating. According to Cyren, the number of sites knowingly or unknowingly hosting software for mining cryptocurrency registered a threefold jump between last September and October. It plateaued in November before nearly doubling in December and then doubling again in January.

In other words, half the total increase has happened in just the last two months, suggesting that the growth is accelerating, the company said. A total of 7,281 — about 1.4% of the 500,000 websites that Cyren monitored — ran cryptocoin mining scripts as of January 2018.
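As a sanity check, the growth figures above are simple arithmetic; note that the implied starting count below is back-derived from the article's numbers, not a figure Cyren states:

```python
def percent_increase(start: float, end: float) -> float:
    """Percentage increase from start to end: a 725% increase means the
    final value is 8.25x the starting value."""
    return (end - start) / start * 100.0

# A 725% increase ending at 7,281 mining sites implies roughly
# 7281 / 8.25, i.e. about 880 such sites at the start of the
# measurement window (an inferred, approximate figure).
implied_start = 7281 / (1 + 725 / 100)
```
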

Much of the growth is being fueled by the insane run-up in cryptocurrency prices in recent months. For instance, the value of Monero, the most widely mined cryptocurrency at the moment, increased by 250% during the four-month period when Cyren was monitoring the websites.

Tinna Thuridur Sigurdardottir, malware analyst at Cyren, says the sites hosting cryptocoin mining tools include both high-traffic and low-traffic destinations. “Most of the sites we’ve seen are not in the top 10,000 sites globally,” says Sigurdardottir. “But there are instances of top 10,000 sites.”

Sites can host cryptocurrency mining tools knowingly or — as in a growing number of cases — unknowingly.

A growing number of website operators have begun voluntarily installing cryptocurrency mining software on their sites as a way to supplement revenues generated by ads. As Sigurdardottir notes, two well-known websites doing this are Showtime and Salon magazine.

The operators make money by allowing the mining software to use the systems belonging to website visitors to mine for cryptocurrency. Some sites alert users to the mining activity, while many others do it surreptitiously.

In many other cases, cybercriminals have begun installing mining tools in websites without the knowledge of the operators. They are then quietly using the computing resources of people visiting these sites to mine for digital currency. 

Mining tools often consume a lot of CPU resources and can seriously affect system performance. Not all cryptocoin miners are scripts, Cyren said in its report. Some are executables that could be used at any time to download and install something other than a cryptocoin mining script.

Cryptocurrency miners are extremely easy for site owners to install, and just as easy for attackers to plant once they have hacked a website through the usual techniques, Sigurdardottir says. The proliferation of simple JavaScript code from sources such as Coinhive and other Monero miners like Crypto-Loot and deepMiner makes embedding a mining script on a website relatively simple, she says.

Often, all website owners — or attackers — need to do to embed a miner in a site is insert two lines of JavaScript. “Coinhive even offers generic code for a website owner to copy and paste into the website’s HTML code, replacing ‘SITE_KEY’ with their own Coinhive site key,” Sigurdardottir notes. “Once the code is embedded, all that is required is a visitor coming to the site and spending time there.”
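A site owner worried about a surreptitiously embedded miner could start with a rough scan of their own pages for script tags pointing at known miner hosts. The domain list and regex below are illustrative assumptions, not a complete detection method (obfuscated or proxied miner scripts would evade it):

```python
import re

# Script-source domains associated with the in-browser miners named in
# the article; a deliberately short, illustrative deny-list.
MINER_DOMAINS = ("coinhive.com", "crypto-loot.com", "deepminer")

# Matches src attributes of <script> tags; a crude pattern that is good
# enough for a first-pass sweep of your own, well-formed HTML.
SCRIPT_SRC_RE = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']',
                           re.IGNORECASE)

def find_miner_scripts(html: str) -> list:
    """Return script-src URLs in a page that point at known miner hosts."""
    return [src for src in SCRIPT_SRC_RE.findall(html)
            if any(domain in src.lower() for domain in MINER_DOMAINS)]
```

Beyond static scanning, sustained high CPU usage in visitors' browsers is the other common tell.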

Sigurdardottir says it is impossible to know whether sites that are hosting cryptocurrency mining tools are doing so knowingly or unknowingly without speaking to each operator individually. But attackers have broken into everything from thousands of government sites to basic WordPress sites to embed mining software in recent months, she says.

One entity that has been making these miners widely available is Coinhive.com, whose Coinhive Monero miner is easily the most widely deployed in-browser miner in use. Though Coinhive by itself is a legitimate mining tool, many anti-malware products have begun blocking it because the tool is often embedded in sites without the site owner’s knowledge.

Other less widely distributed miners include Crypto-Loot and Coinhave, both of which are also Monero miners, says Sigurdardottir. “Monero is the most common currency,” she says. “Monero bills itself as a ‘secure, private, and untraceable cryptocurrency,’ employing a technology that makes it virtually impossible to track transactions to any individual or IP address.”

Black Hat Asia returns to Singapore with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier solutions and service providers in the Business Hall. Click for information on the conference and to register.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/number-of-sites-hosting-cryptocurrency-miners-surges-725--in-4-months/d/d-id/1331176?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Equifax finds ANOTHER 2.4 million Americans hit by breach

Just when you thought the Equifax clustermuck couldn’t get any muckier, the credit broker found another 2.4 million Americans affected by its 2017 breach.

The regurgitation of these fresh people’s data isn’t quite so unpalatable as it was when the original 145.5 million Americans (and another 15.2 million Brits… plus some 100,000 Canadians) had their taxpayer IDs exposed, given that less sensitive personal data was involved, Equifax said.

In a statement posted this morning (1 March), Equifax said that these doxxed newbies only had partial drivers’ license information and their names stolen. That means that in the “vast majority of cases,” the data sets didn’t include home addresses, the respective drivers’ license states, dates of issuance or expiration dates…

…as opposed to the people originally identified in the breach, whose personal details, including taxpayer numbers and addresses, were stolen, leaving them vulnerable to identity theft.

The credit monger said that the 2.4 million aren’t part of that previously identified mass of affected Americans: a group whose number represented nearly half the country’s population. It identified the new group “as a result of ongoing analysis of data stolen” from the breach.

In its announcement on Thursday, Equifax gave a few details about the forensic examination that’s been under way since 29 July, when it first discovered the breach. (The incident was publicly announced on 7 September.)

Namely, forensic investigators have been using names and taxpayer IDs – Social Security Numbers (SSNs) – as “key data elements” to figure out who was affected by the cyberattack. That’s partly because forensics experts determined that the attackers’ main focus was to steal those SSNs. Because the SSNs of the newly identified victims hadn’t been stolen along with their partial drivers’ license information, they haven’t been informed before now.

Actually, this isn’t even about newly discovered stolen data, Equifax interim CEO Paulino do Rego Barros Jr. said in the post. Rather, it’s about…

…sifting through the previously identified stolen data, analyzing other information in our databases that was not taken by the attackers, and making connections that enabled us to identify additional individuals.

Well, OK. But it’s still the first time that those 2.4 million Americans are hearing about it, so it’s still new to a whole lot of somebodies. They’ll all be hearing about it directly from Equifax, the company says, and they’ll be offered the free identity theft protection and credit file monitoring services the credit broker has been offering to other affected people. The notifications will include information about how to register for those services.

Newcomers to the growing club of those who’ve been Equifax-ified should note that critics don’t much like the services that Equifax has offered in the wake of this string of nonpearls.

Those crummy pearls include the breach itself, the PIN screwup that put frozen credit files at risk, Equifax’s leaky customer portal in Argentina, the plunking of a breach info site onto the easy-to-typosquat and bafflingly convoluted domain equifaxsecurity2017.com (which Equifax then proceeded to scramble at least three times, sending customers to a fake phishing site for weeks). Then too, let’s not forget the understaffed call centers and underprepared operators, leaving alarmed customers facing delays and agents who couldn’t answer questions.

On Wednesday, after Massachusetts Senator Elizabeth Warren introduced legislation targeting credit bureaus’ bottom lines, she said that Equifax is “still making money off their own breach.”

Equifax may actually make money off this breach because it sells all these credit-protection devices, and even consumers who say, ‘Hey, I’m never doing business with Equifax again’ — well, good for you, but you go buy credit protection from someone else, they very well may be using Equifax to do the back office part.

So, what to do in light of the new findings? The same things we all should have done in light of the old findings: check our credit reports, and consider putting credit freezes in place.

It’s astonishing how many Americans haven’t bothered to take those precautions since the breach was announced in September. It’s dismaying that that includes friends and family who apparently don’t read news (AHEM!) about this or other breaches. Nor do they take such credit-protective, identity-theft-thwarting advice to heart.

According to a recent study from CreditCards.com, half of US adults said they haven’t looked at a credit report since the Equifax pratfall. Another 18% said they’ve never checked out their credit report or credit score.

What the muck!!?! Somebody please set up some credit-score-checking afternoon teas or something!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iaIzZetn7bA/

Machine learning self defence: how to not shoot yourself in the foot

Thanks to Dr. Richard Harang and Madeline Schiappa of SophosLabs for their work on this article.

In case you hadn’t noticed, machine learning is big, really big. I don’t just mean “blockchain on the crest of the Hype Curve” big, I mean actually, you know, useful.

Like the much talked about blockchain, machine learning is being touted as a technology that could change everything. Unlike the blockchain, it probably will.

It’s already proven itself to be a disruptive technology in tasks as diverse as spotting bank fraud, driving cars, understanding human speech and identifying malware.

As a result, organisations are planning to spend tens of billions of dollars on it in the next few years, which means that in the near future lots of people will be building and using machine learning-based solutions for the very first time.

Hopefully they’ll do it securely but, unfortunately, computing’s gold rushes have a terrible record when it comes to cybersecurity. New technologies often usher in new ways to get compromised by hackers (or even old ways we thought we’d seen the back of – yes, I’m looking at you, Internet of Things).

We wondered: what kind of threats will organisations that plan to roll their own machine learning solutions need to be aware of?

Naked Security sat down with SophosLabs’ data scientists Dr. Richard Harang and Madeline Schiappa to learn about the threats that machine learning solutions face and how they can be protected.

We’ll look at how hackers might disrupt or corrupt your machine learning in later articles, but we start with arguably the biggest threat you face: yourself.

Machine learning is new, subtle and complex, and the potential for self-inflicted wounds is high.

Before we get into why, let’s recap what machine learning actually is.

Machine Learning

Traditional software is, essentially, a set of rules that governs how a computer should behave in a particular context. It’s very good at dealing with well-structured data and tends to produce software that’s good at the things we’re bad at: executing highly complex sets of instructions within strict parameters, perfectly, over and over, at tremendous speed.

Machine learning is a branch of AI (Artificial Intelligence) that uses software models that are taught by example and figure out their own rules. According to Arthur Lee Samuel, the computing pioneer who invented the term, machine learning is:

[a] field of study that gives computers the ability to learn without being explicitly programmed.

Where conventional software is characterised by rigidity, transparency and provably correct behaviour, machine learning is fuzzy, flexible, opaque and only likely (rather than certain) to behave a particular way.

Machine learning models can generalise in a way that software based on programmed rules can’t, giving us an entirely new way to approach problem solving with computers.
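A toy sketch of that idea in Python: instead of writing rules, we hand a “model” some labelled examples and let proximity to those examples decide the answer. (This one-nearest-neighbour classifier and its made-up data are purely illustrative, not how production models work.)

```python
# A minimal learn-by-example "model": no explicit rules anywhere,
# just labelled training points and a distance measure.

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

# Labelled examples: (features, label)
train = [((1.0, 1.0), "ham"), ((1.2, 0.8), "ham"),
         ((6.0, 6.0), "spam"), ((5.5, 6.3), "spam")]

# The model generalises to points it has never seen
print(nearest_neighbour(train, (1.1, 0.9)))  # -> ham
print(nearest_neighbour(train, (6.2, 5.9)))  # -> spam
```

The “rules” here live implicitly in the data: change the examples and the behaviour changes, no reprogramming required.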

Garbage in, garbage out

Every computer programmer, going all the way back to the very first one – Charles Babbage – knows that no amount of programming wizardry can get the right answer from the wrong information.

Babbage, who invented the first mechanical computer way back in the nineteenth century, might not have used the epithet GIGO (Garbage In, Garbage Out) but he certainly understood it:

On two occasions I have been asked, “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

GIGO presents a particularly acute challenge for complex machine learning systems because the garbage can be very difficult to detect, which can lead to subtle errors and unintended consequences.

Get your training or testing data a little bit wrong and you’ve accidentally trained your model to recognise incidental, correlated phenomena rather than the things that actually matter.

Or, to put it another way, you’ve unwittingly made your face recognition system racist.

Why?

Modern machine learning models work so well because they’re extremely good at finding subtle and complex correlations in your training data. This allows them to learn ways of recognising things – whether it’s faces, patterns of fraud or spam – that human programmers can’t match.

But this powerful capability can backfire in unexpected ways. If your training data contains a correlation that’s spurious or doesn’t exist in the wild, your model can easily learn the wrong lessons.

And Big Data is full of spurious correlations (increasing global average temperatures are correlated with the decline in pirate numbers for example, as any good pastafarian can attest).

Let’s imagine we’re training a machine learning model to recognise spam email. Our training data is a database of emails that humans have diligently labelled as either ‘ham’, the emails we like, or ‘spam’, the emails we don’t. Our machine learning system will read the emails and figure out what separates the fresh pork from the canned meat.

Now let’s imagine that our training data contains a plausible but spurious correlation: by chance, every email with an image attachment that came from an IP address ending in 12 has ended up in the spam pile, just because.

There are many other spam emails in our training data that don’t have those two properties, it just so happens that every email that does is in the spam pile. And the humans making the training data didn’t mark those particular emails as spam because of the IP address and the image attachment, it’s just a coincidence.

According to the kinds of checks we might use on a traditional computer program, we haven’t introduced any garbage into our system. The emails in our training data aren’t garbage: they’re well formatted, standards compliant emails. Our labelling isn’t garbage either: the hams are ham and the spams are spam.

Nonetheless we have introduced some garbage.

If our model is sufficiently complex it may infer that the presence of a sender’s IP address ending in 12 in an email with an image attachment is a sure-fire indicator of spam when, in fact, outside of our training data it isn’t.

When it’s deployed in the real world our anti-spam engine will block a lot of perfectly good emails from people whose IP address ends with a 12.

We broke our example with bad data but we could just as easily have done it by accidentally labelling some of our spam as ham and some of our ham as spam.
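The scenario above can be sketched in a few lines of Python. The data set and the deliberately memorising “model” below are hypothetical, built only to show how a coincidence in the training pile becomes a blocking rule in the wild:

```python
from collections import Counter

# Hypothetical training set: (has_image_attachment, ip_ends_in_12, label).
# By sheer coincidence, every email with BOTH properties landed in spam.
train = [
    (True,  True,  "spam"), (True, True,  "spam"), (True, True, "spam"),
    (False, True,  "ham"),  (True, False, "ham"),
    (False, False, "ham"),  (False, False, "spam"),
]

def classify(train, has_image, ip12):
    """Pick the label most often seen with this exact feature combination
    (an extreme case of a model memorising its training data)."""
    votes = Counter(label for img, ip, label in train
                    if (img, ip) == (has_image, ip12))
    return votes.most_common(1)[0][0]

# In the wild, a perfectly legitimate email arrives with an image
# attachment from an IP ending in 12 -- and gets blocked as spam.
print(classify(train, True, True))   # -> spam
```

Nothing in the training data is garbage by any conventional check, yet the learned behaviour is: the coincidence was perfect in the sample and worthless outside it.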

In our example the flaw we introduced was quite subtle, but machine learning systems aren’t limited to making subtle errors.

Machine learning algorithms are non-deterministic and we can only say what a model is likely to do in the real world, not what it will do. Deciding if our software is working correctly means first deciding what ‘correctly’ will look like and then testing to see if that’s what our software does.

The less thorough our testing is, the bigger the flaws that can escape into the real world.

Sophos data scientist Hillary Sanders presented at the Black Hat 2017 conference, showing how even slight mismatches between the kind of data a model is trained on and the kind of data that a model sees in the wild can lead to a significant drop in performance.

Even with great labels and a lot of data, if the data we use to train our deep learning models doesn’t mimic the data it will eventually be tested on in the wild, our models are likely to miss out on a lot.

This kind of mismatch will likely lead your model to learn a core of rules that work well in most cases, as well as a number of less important rules that don’t apply to the real world.

And that’s a recipe for Heisenbugs (or shooting yourself in the foot and being unable to find the gun).


So, in a nutshell: a model doesn’t know what you don’t tell it, and you haven’t always told it what you think you’ve told it.

What to do?

There isn’t an easy solution to the GIGO problem but, then again, when are silver bullets not in short supply? We suggest you start here:

  • Use good data! It’s obvious, yes, but no less true for that. Be careful to feed your model only lots of well-labelled data from sources that represent what it will see in the real world.
  • Get used to cleaning because it’s a big, unglamorous and important job. When you discover a self-inflicted wound you’ll need to change the way you clean and label your data to account for your mistake. Rinse and repeat.
  • Try not to overtrain your model or you’ll overfit it. Remember, you aren’t trying to recognise your training data with perfect clarity, you’re trying to recognise things that share similarities with your training data.
  • Expect “label noise” and use training methods that reduce its negative effects. Says Schiappa: “This usually means using a robust loss function, to represent the cost of inaccuracy in the model during training”. Your training process should work to minimise this cost.
  • Pay attention to false positives and false negatives during testing, and let your model help you clean your data. Sometimes your labels were wrong, and the model was right.
  • Consider deep learning as your machine learning method of choice. Research indicates that it’s better at dealing with label noise than shallower learning methods (which is one of the reasons that Sophos uses deep learning to build its machine learning models).
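As a tiny illustration of why the choice of loss function matters under label noise (the numbers below are invented): squared-error loss is minimised by the mean, absolute-error loss by the median, and only one of them shrugs off a mislabelled point.

```python
import statistics

# Hypothetical labels for the same kind of item; 100.0 is label noise.
labels = [1.0, 1.1, 0.9, 1.0, 100.0]

# The mean (what squared-error loss pulls you toward) is wrecked by a
# single bad label; the median (what absolute-error loss pulls you
# toward) barely notices it.
print(statistics.mean(labels))    # -> 20.8
print(statistics.median(labels))  # -> 1.0
```

Robust loss functions for real models are more sophisticated than this, but the principle is the same: pick a cost that a few wrong labels can’t drag around.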


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9PUs5kHWVl4/