STE WILLIAMS

Required: Massive email fraud bust. Tired: Cops who did the paperwork. Expired: 281 suspected con men’s freedom

US prosecutors say 281 suspected criminal hackers around the world have been arrested as part of a massive takedown operation against so-called business email compromise operations. That’s the type of caper in which crooks hijack executives’ email accounts, or impersonate the staffers, to trick colleagues into wiring funds to fraudsters’ accounts.

In a coordinated effort dubbed Operation reWired, law enforcement officers from the US, UK, Nigeria, Turkey, Ghana, France, Italy, Kenya, and Malaysia arrested people they believe to have been running targeted phishing and fraud campaigns against businesses worldwide.

In total, 281 arrests were made and around $3.7m in cash was seized by police. In America alone, 74 people have been arrested, while Nigerian police cuffed 167 suspects. According to US attorneys, the massive law enforcement action included agents from the FBI, Secret Service, Postal Inspection Service, IRS, the State Department, and Homeland Security Investigations, as well as more than two dozen US Attorneys’ offices.

Prosecutors believe that those arrested were running sophisticated scams that involved either taking over or impersonating the email account of someone who runs or partners with a targeted business, then instructing workers at the business to send a check or wire transfer to an account controlled by the scammer. Other arrestees are accused of laundering this money.

“The Department of Justice has increased efforts in taking aggressive enforcement action against fraudsters who are targeting American citizens and their businesses in business email compromise schemes and other cyber-enabled financial crimes,” said Jeffrey Rosen, deputy US attorney general.

“In this latest four-month operation, we have arrested 74 people in the United States and 207 others have been arrested overseas for alleged financial fraud.”

The alleged activities of those arrested did not end with business email schemes. The IRS says that some of the suspects are also believed to be responsible for 250,000 stolen identities and 10,000 fake tax returns that yielded roughly $91m in refunds.

Additionally, police believe the suspects also ran dating, real-estate, employment, vehicle sales, and lottery email scams to further pad their wallets.

The Department of Justice says that anyone who believes they have been duped by a business email compromise scam should file a complaint with the IC3 to help police track and keep records of scamming operations. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/10/rewired_bec_criminal_takedown/

Emerging Trends

New Privacy Features in iOS 13 Let Users Limit Location Tracking

Apple will introduce other features that allow more secure use of iPhones in workplace settings as well.

Apple’s soon-to-be-released iOS 13 includes multiple features designed to give iPhone users substantially better control over their privacy and security settings for both personal and business use.

Apple today announced it will release iOS 13 on September 19, one day before the company’s scheduled rollout of the new iPhone 11 Pro and iPhone 11 Pro Max phones. The new operating system will be available as a free update to users of Apple’s iPhone 6S and later.

On the consumer side, one of the most significant privacy improvements in the operating system is a feature that allows users to limit the location data a mobile application can collect about them. They will have the option of deciding whether to share their precise location with a mobile app always, only when the app is in use, or never at all. In essence, users will have the ability to stop a mobile app from tracking their location in the background when they’re not using it.

Other location-related features include a control that prevents apps from accessing location information without permission when a user is using Wi-Fi or Bluetooth, and another that alerts users when an app is using location data in the background. Significantly, the notification will include a map of the location data that an app has collected, along with the app’s own explanation for why it is collecting the data. A user can then decide whether to limit the app’s location collection ability.

“From an application perspective, the new iOS is adding some more detailed controls around the location data that apps receive from the operating system,” says Harrison Van Riper, strategy and research analyst at Digital Shadows. “Over the last few months there have been several high-profile location data exposures, so this could be a response to that.”

For third-party application vendors and services that depend on location tracking, the changes could make a huge difference. Facebook, whose privacy practices have come under searing scrutiny in recent months, is easily the most prominent among those that will be impacted by the changes. With iOS 13, users will be able to restrict the social media giant’s ability to track their movements and ensure it obtains their permission before accessing location data when the user is using Wi-Fi and Bluetooth.

In a sign of its concern, Facebook warned users in a blog post on Monday about the impending changes in iOS 13. The company, whose revenue depends largely on its ability to monetize user data, stressed that the platform works better with location information.

The company reminded users that location information powers features such as check-ins and event planning, besides enabling better ad targeting. “Features like Find Wi-Fi and Nearby Friends use precise location even when you’re not using the app to make sure that alerts and tools are accurate and personalized for you,” Facebook said.

Beyond Location
Meanwhile, other consumer-facing enhancements in iOS 13 include a new Sign in with Apple feature that will give users the option of signing into applications and websites using their Apple IDs.

“With the new feature, iOS 13 will highly encourage two-factor authentication,” Digital Shadows’ Van Riper says. “If a user was to select ‘Sign in with Apple’ to create a new account, their Apple ID has to have two-factor authentication enabled in order to use the feature.” They also will be required to use either Apple’s Touch ID or Face ID when creating the account, Van Riper notes.

Users who don’t want to share their email with a particular app will have the option of letting Apple create a unique single-use email address that forwards to the user’s actual email address. The feature will work on all supported Apple devices, as well as on Android and Windows applications.

All iOS applications that require a user sign-in will be required to support the new Apple sign-in option. “These are all great steps in the right direction in preventing unauthorized account takeovers and illegitimate account creations,” Van Riper says.

Importantly, since a third-party website will also be using temporary credentials generated by Apple, user data isn’t persistent and cannot be commoditized and resold as it can with similar options from Google and Facebook, says Russ Mohr, Apple evangelist at MobileIron.

Enterprise Use
With iOS 13, Apple has also introduced some new features for protecting user privacy in a workplace setting. The most important among them is “User Enrollment,” a feature designed specifically for bring-your-own-device (BYOD) environments. It creates an entirely new and encrypted partition on a device and associates a Managed Apple ID that is used only for work apps and data, Mohr says.

“Apple is clearly targeting a population of users that were hesitant to give IT visibility into their devices,” he notes. “With User Enrollment, iOS and MacOS users can be sure that IT departments can’t view their apps, can’t wipe their device, and aren’t privy to other [identity] information like SIM information or even their personal Apple ID,” he says.

With iOS 13, Apple is also providing new extensions that make single sign-on easier for enterprise organizations to implement. The technology is underpinned by the use of certificates and modern authentication, which is both easier for employees and more secure for the enterprise, Mohr says.

Unlike Google’s Managed Profiles on Android, Apple still does not allow two instances of the same app to run on a device. Support for such a capability would allow users who rely on an app for both personal and business purposes — email, for instance — to keep their data completely separate, Mohr says.

“It seems like Apple will go down the path of personas, though, as they have with the Notes app, which has some data tied to the business account and some to the personal account,” he notes. Currently, only business data is backed up to a business iCloud account, and the data is automatically removed when the device leaves company management, he says.

Details of Apple’s new iPhones, together with pricing and availability information, are available on Apple’s website.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Phishers’ Latest Tricks for Reeling in New Victims.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/mobile/new-privacy-features-in-ios-13-let-users-limit-location-tracking/d/d-id/1335775?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security Pros’ Painless Guide to Machine Intelligence, AI, ML & DL

Artificial intelligence, machine learning, or deep learning? Knowing what the major terms really mean will help you sort through the morass of words on the subject and the security uses of each.

In the hands of enthusiastic marketing departments, the terms “artificial intelligence,” “machine learning,” and “deep learning” have become fuzzy in definition, sacrificing clarity to the need for increasing sales. It’s entirely possible that you’ll run into a product or service that carries one (or several) of these labels while exhibiting few of their attributes.


Talk of machine intelligence can often lead to its own special rabbit-hole of jargon and specialized concepts. Which of these will form an important part of your future security infrastructure — and does the difference really matter?

Three Branches
Broadly speaking, machine “intelligence” is a system that takes in data, produces results, and gets better — faster, more accurate, or both — as more data is encountered. Within the broad category are three labels frequently applied to systems: machine learning, deep learning, and artificial intelligence. Each has its own way of dealing with data and providing results to humans and systems.

The differences between how the three function make them appropriate for different tasks. And the sharpest difference divides AI from the other two. Put simply, AI can surprise you with its conclusions, while the other two can “only” surprise you with their speed and accuracy.

Machine Learning
Machine learning uses statistical models (often marketed as “heuristics”) rather than rigid algorithmic programming to reach results. Looked at from a slightly different perspective, machine learning can use an expanding universe of inputs to achieve a specific set of results.

Many techniques fit within the category of machine learning, including supervised and unsupervised learning, anomaly detection, and association rules. In each of these, the machine can learn from each new input to make the model on which it bases its actions richer, more comprehensive, and more accurate.

With all of these, the key is “a specific set of results.” For example, if you wanted a machine learning system to differentiate between cats and dogs, you could teach it all kinds of parameters that go into defining cats and dogs. The system would get better at its job given more data to build its models, and ultimately could predict — based on an ear or a tail — whether something was a dog or cat. But if you showed it a goose, it would tell you it was a cat or dog because those are the only options for results.

When the goal is sorting diverse input into specific categories, or directing specific actions to be taken as part of an automation process, machine learning is the most appropriate technology.
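That closed-world behavior can be sketched in a few lines. Everything below is invented for illustration (the feature vectors and the simple nearest-neighbour rule are not from the article); the point is only that the model has no “neither” option:

```python
# Toy nearest-neighbour "classifier" trained only on cat and dog
# examples. The feature vectors [weight_kg, ear_length_cm] are
# made up for illustration.
from math import dist

training = [
    ([4.0, 4.5], "cat"), ([4.5, 5.0], "cat"), ([5.0, 4.0], "cat"),
    ([20.0, 10.0], "dog"), ([25.0, 12.0], "dog"), ([30.0, 9.0], "dog"),
]

def classify(sample):
    # Pick the label of the closest training example. Every input is
    # forced into a known class -- there is no "other" answer.
    return min(training, key=lambda t: dist(t[0], sample))[1]

print(classify([22.0, 11.0]))  # -> dog
print(classify([5.0, 2.0]))    # a goose-ish input still comes back "cat"
```

More training data would sharpen the cat/dog boundary, but never create a goose category: that is the “specific set of results” limitation in miniature.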

Deep Learning
Deep learning stays within the realm of machine learning, but in a very specific way. “Deep learning” implies that neural networks are the family of techniques being used for processing. While neural networks have been around for quite a while, developments in the last decade have made the technique more accessible to application developers.

In general, neural networks today use a layered technique to pass input through multiple layers of processing. This is one of the ways in which the neural network is designed to mimic animal intelligence. And that mimicry makes deep learning applicable to a wide range of applications.

Deep learning is frequently the technology behind speech recognition and image recognition applications outside of security. Within security, deep learning is often seen in malware and threat detection systems. The number of connections between nodes in the neural network (which can range up into the hundreds of millions) makes deep learning a technique often used in applications where most of the learning and processing happens in a central, cloud-based system, with the application of that learning performed at the network’s edge.

To use our earlier examples, deep learning would also be able to learn how to tell cats from dogs, and could be trained to tell breeds of dogs apart, as well as breeds of cats. It could even get to the point of being shown mutts (or “All-American Dogs” as the American Kennel Club dubs them) and assigning them a likely breed based on physical characteristics. But it would still be separating cats and dogs — the poor goose would still be left out in the cold.
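The layered idea can be sketched minimally. The weights below are arbitrary hand-picked numbers, not learned values (a real deep-learning system learns millions of them from data); the sketch only shows input flowing through successive transforming layers:

```python
# Illustrative only: a tiny fixed-weight network. Each layer takes the
# previous layer's output, forms a weighted sum per neuron, and
# squashes it through a sigmoid.
import math

def layer(inputs, weights, biases):
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.2]                                        # input features
h = layer(x, [[1.0, -0.5], [0.3, 0.8]], [0.0, 0.1])    # hidden layer
out = layer(h, [[2.0, -1.5]], [-0.2])                  # output layer
print(out)  # a single score between 0 and 1
```

Stacking many such layers, with weights tuned by training rather than by hand, is what makes the technique “deep.”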

Artificial Intelligence
Both machine learning and deep learning involve systems that take an expanding set of data and return results within a specific set of parameters. This makes them technologies that can readily be incorporated into automation systems. Artificial intelligence, on the other hand, is capable of reaching conclusions that are outside defined parameters. It can surprise you with the results it finds.

If you ask many academic AI researchers, they will say that there is no “real” AI on the market today. By this, they mean there’s no general AI — nothing that remotely resembles HAL from “2001” or the Majel Barrett-voiced computer in Star Trek.

There are, however, AI systems that apply advanced intelligence to specific problems. IBM’s Watson is the most widely known, but there are many application-specific AI engines in use by various vendors. Much of the concern about “deep fake” audio and video is fed by AI capabilities used in different applications and services. Robotics, including autonomous vehicles, is another area where such engines are at work.

To complete our example, an AI system would be able to take all the model information built in deep learning and extend it. Given a bit more information, it might be able to tell that a new image showed a mammal or some other type of animal — and if presented the photo of a fire-hydrant could tell the human operator that this was a novel “animal” never seen before and deserving of more study. AI can go beyond narrow categories of results.

Within cybersecurity, AI is being used to help analysts sort through and classify the vast array of input data coming into the security operations center (SOC) every day. The important note is that, today, the possibility for an unexpected result means that AI is used to assist or augment human analysts rather than merely drive security automation.

Not Quite Skynet
With each of these types of machine intelligence, operators have to be aware of the possibility of two huge issues, one driven by internal forces and the other driven from external agents. The internal issue is called “model bias” — the possibility that the data used for learning in the system’s model of its world will push it in a particular direction for analysis, rather than allowing the system to simply reach the mathematically correct answers.

The external problem comes through “model poisoning,” in which an external agent makes sure the model will deliver inaccurate results. The poisoning can produce results that are embarrassing or, depending on the application, devastating, and the IT or security staff has to be aware of the possibility.
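As a hedged illustration of how poisoning plays out, consider a naive baseline-deviation detector whose training data an attacker can influence. All numbers below are invented; real detectors and real poisoning attacks are far more sophisticated:

```python
# Invented illustration of model poisoning: an attacker who can feed
# the training set seeds the "normal" baseline with inflated values,
# so a later mean/deviation check waves genuine abuse through.
from statistics import mean, stdev

clean = [10, 12, 11, 13, 12, 11]
poisoned = clean + [400, 450, 500]   # attacker-contributed "normal" days

def flags(sample, data, threshold=3.0):
    mu, sigma = mean(data), stdev(data)
    return abs(sample - mu) / sigma > threshold

print(flags(300, clean))      # honest baseline: clearly abnormal
print(flags(300, poisoned))   # poisoned baseline: accepted as normal
```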


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/security-pros-painless-guide-to-machine-intelligence-ai-ml-and-dl-/b/d-id/1335773?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Two Zero-Days Fixed in Microsoft Patch Rollout

September’s Patch Tuesday addressed 80 vulnerabilities, two of which have already been exploited in the wild.

This month’s Patch Tuesday arrived with fixes for 80 vulnerabilities, 17 of which are categorized as Critical and two of which had been exploited in the wild before Microsoft issued patches. Three of the bugs patched today were already known at the time fixes were released.

Updates delivered today cover apps and services including Microsoft Windows, Microsoft Edge, Internet Explorer, Office and Microsoft Office Services and Web Apps, ChakraCore, Visual Studio, Skype for Business, Microsoft Lync, .NET Framework, Exchange Server, Yammer, and Team Foundation Server. Of the bugs patched, 62 were ranked Important, one Moderate, and three were publicly known.

The two zero-days are elevation of privilege vulnerabilities. CVE-2019-1215 is a local privilege escalation bug in the Winsock2 Integrated File System Layer. The flaw exists in the way Winsock handles objects in memory. A locally authenticated attacker could exploit this by running a specially crafted application; if successful, they could execute code with elevated privileges.

CVE-2019-1214 is a flaw that exists when the Windows Common Log File System (CLFS) improperly handles objects in memory. This could be exploited by an attacker who logs on to the system and runs a specially crafted application. If successful, they could run processes in an elevated context. Both zero-days patched today affect all supported versions of Windows.

“Elevation of privilege vulnerabilities are utilized by attackers post-compromise, once they’ve managed to gain access to a system in order to execute code on their target systems with elevated privileges,” says Satnam Narang, senior research engineer at Tenable, of how someone could abuse this access.

Four critical vulnerabilities in Remote Desktop Client were patched this month: CVE-2019-1290, CVE-2019-1291, CVE-2019-0787, and CVE-2019-0788. The former two affect all supported versions of Windows; the latter two affect only non-Server editions of the operating system. Remote Desktop Client patches are a Patch Tuesday pattern of the past few months: May brought the wormable BlueKeep vulnerability, for example, and August brought DejaBlue. Experts say the latest flaws aren’t as urgent.

Unlike BlueKeep and DejaBlue, in which attackers target vulnerable remote desktop servers, the recently patched vulnerabilities require attackers to convince users to connect to a malicious remote desktop server, or to compromise vulnerable servers, host malicious code on them, and then wait for users to connect.

“A user would have to somehow be convinced to connect to such a server, either via social engineering or by using something like a DNS poisoning attack,” says Greg Wiseman, senior security researcher for Rapid7, of these client-side vulnerabilities. Microsoft did not confirm if any of the RDP vulnerabilities patched today were wormable, as those from earlier months are.

The three publicly known bugs include CVE-2019-1235, an elevation of privilege vulnerability in Windows Text Service Framework, and CVE-2019-1253, another elevation of privilege flaw that exists when the Windows AppX Deployment Server improperly handles junctions. CVE-2019-1294 is a Windows Secure Boot Security Feature Bypass bug affecting Windows 10 and Server 2019; it exists when Windows Secure Boot improperly restricts access to debugging. An attacker with physical access to a target could exploit this to disclose protected kernel memory.

CVE-2019-1235 is an additional fix for vulnerabilities in Windows CTF, part of the Windows Text Services Framework. The bug was discovered by Google Project Zero’s Tavis Ormandy, who in August published research on flaws that had existed for nearly twenty years. The unsecured CTF protocol could let attackers with access to a target machine take control over any app or the operating system. Microsoft issued an initial patch in August with CVE-2019-1162.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/risk/two-zero-days-fixed-in-microsoft-patch-rollout/d/d-id/1335776?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Third-Party Features Leave Websites More Vulnerable to Attack

A new report points out the dangers to customer data of website reliance on multiple third parties.

In an effort to make websites attractive and easy to use for their customers, companies have also made them attractive targets for criminals. That’s one of the broad conclusions in a new report that points out where the companies with the largest Web presence have introduced vulnerabilities to go along with their ease of use.

One of the major threats described in Tala’s “2019 State of the Web Report” is credential theft via attacks like Magecart. The report, based on surveys of American companies in the Alexa 1000 list of firms with the largest Web presences, shows that more than 60% of the websites use dynamic JavaScript loaded by static JavaScript — a significant potential attack surface. And the websites analyzed used an average of 31 third-party features, apps, or services.

The data collected by the websites is exposed to an average of 15.7 third-party domains, which, the report points out, amounts to 15.7 opportunities for an attacker to attempt to steal data. According to the report, the threat prompted the PCI Security Standards Council and the Retail Hospitality ISAC to issue a joint bulletin in August warning of the danger.

For more, read here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/third-party-features-leave-websites-more-vulnerable-to-attack/d/d-id/1335777?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The NetCAT is out of the bag: Intel chipset exploited to sniff SSH passwords as they’re typed over the network

Video It is possible to discern someone’s SSH password as they type it into a terminal over the network by exploiting an interesting side-channel vulnerability in Intel’s networking technology, say infosec gurus.

In short, a well-positioned eavesdropper can connect to a server powered by one of Intel’s vulnerable chipsets, and potentially observe the timing of packets of data – such as keypresses in an interactive terminal session – sent separately by a victim that is connected to the same server.

These timings can leak the specific keys pressed by the victim because people move their fingers over their keyboards in particular patterns, with noticeable pauses between button pushes that vary by character. These pauses can be analyzed to reveal, in real time, those specific keypresses sent over the network, including passwords and other secrets, we’re told.

The eavesdropper can pull off this surveillance by repeatedly sending a string of network packets to the server, directly filling one of the processor’s memory caches. As the victim sends in packets of their own, the snooper’s data is pushed out of the cache by the incoming traffic. By quickly refilling the cache, the eavesdropper can sense whether its data was still present or had been evicted, and an eviction reveals that the victim just sent some data. This can ultimately be used to determine the timing of the victim’s incoming packets, and thus the keys pressed and transmitted by the victim.

The attack is non-trivial to exploit, and Intel doesn’t think this is a big deal at all, though it is still a fascinating technique that you may want to be aware of. Bear in mind, the snooper must be directly connected to a server using Intel’s Data Direct I/O (DDIO) technology. Also, to be clear, this is not a man-in-the-middle attack nor a cryptography crack: it is a cache-observing side-channel leak. And it may not work reliably at all on a busy system with lots of interactive data incoming.

How it works

DDIO gives peripherals, particularly network interfaces, the ability to write data directly into the host processor’s last-level cache, bypassing the system RAM. In practice, this lowers latency and speeds up the flow of information in and out of the box, and improves performance in applications, from web hosting to financial trading, where I/O could be a bottleneck.

Unfortunately, as boffins at VUSec – the systems and network security group at Vrije Universiteit Amsterdam in the Netherlands – have found, that leap into the CPU cache opens up the potential for side-channel holes. Earlier this year, the white-hat team uncovered and documented the aforementioned method in which a miscreant can abuse DDIO to observe other users on the network, and, after privately disclosing the flaw to Intel, went public today with their findings.

DDIO is, for what it’s worth, enabled by default in all Intel server-grade Xeon processors since 2012.

The technique, dubbed NetCAT, is summarized in the illustration below. This particular exploitation approach, similar to Throwhammer, requires the eavesdropper to have compromised a server with a direct RDMA-based InfiniBand network connection to the DDIO-enabled machine in use by the surveillance target. This would require the snooper to gain a strong foothold in the organization’s infrastructure.

Block diagram of the NetCAT snooping technique … Credit: VUSec

Once connected, the spy repeatedly fills the processor’s last-level cache over the network, flooding the cache with its own data. The snoop then observes slight variations in the latency of its connection to detect when its data has been evicted to RAM by another network user, a technique known as prime+probe. A portion of the last-level cache is reserved for this direct I/O use, so the prime+probe measurements are not disturbed by code and application data running through the CPU cores.
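The prime+probe cycle can be illustrated with a deliberately crude software simulation. This models a single one-line cache “set” and replaces the real latency measurement with a direct lookup; it is a sketch of the principle only and reflects nothing of VUSec’s actual code:

```python
# Highly simplified prime+probe simulation: the attacker primes the
# set with its own line, the victim's traffic may evict it, and the
# probe step reveals which happened. In a real attack, "hit" vs
# "miss" is inferred from access latency, not inspected directly.
class CacheSet:
    def __init__(self):
        self.line = None

    def access(self, owner):
        hit = (self.line == owner)
        self.line = owner          # accessing a line installs it, evicting the old one
        return hit

def probe(cache):
    # Probe = re-access our own line. A miss means someone else's
    # data passed through the cache since we primed it.
    return "no activity" if cache.access("attacker") else "victim sent data"

cache = CacheSet()
cache.access("attacker")           # prime
cache.access("victim")             # a victim packet lands in the cache
print(probe(cache))                # -> victim sent data

cache.access("attacker")           # prime again; victim stays quiet
print(probe(cache))                # -> no activity
```

Repeating this loop at high frequency is what turns a shared cache into a timeline of the victim’s network activity.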

All this means the underlying hardware may inadvertently divulge sensitive or secret information. More technical details on NetCAT are due to be formally published in May next year.

“Cache attacks have been traditionally used to leak sensitive data on a local setting (e.g., from an attacker-controlled virtual machine to a victim virtual machine that share the CPU cache on a cloud platform),” explained the VUSec team, which was awarded a security bug bounty by Intel for its discovery.

“With NetCAT, we show this threat extends to untrusted clients over the network, which can now leak sensitive data such as keystrokes in a SSH session from remote servers with no local access.

“In an interactive SSH session, every time you press a key, network packets are being directly transmitted. As a result, every time you type a character inside an encrypted SSH session on your console, NetCAT can leak the timing of the event by leaking the arrival time of the corresponding network packet.”

Armed with the timing of the packets, an attacker could potentially match intervals to specific keystrokes by comparing observed delays to a model of the target’s typing patterns.
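A toy sketch of that interval-matching step follows. The packet timestamps and the per-key-pair “typing model” are invented; a real attack would build such a model statistically from many observations rather than from a single lookup:

```python
# Given packet arrival times for an interactive session (one packet
# per keypress), recover inter-keystroke delays and pick the
# likeliest key pair from a hypothetical per-bigram delay model.
timestamps = [0.000, 0.210, 0.350, 0.545]           # seconds, invented
intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]

# Hypothetical model: mean delay (seconds) observed between key pairs.
typing_model = {"s->s": 0.21, "s->h": 0.14, "h->e": 0.19}

def likeliest(delay):
    # Nearest-mean match against the model.
    return min(typing_model, key=lambda k: abs(typing_model[k] - delay))

for d in intervals:
    print(f"{d:.3f}s -> {likeliest(d)}")
```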

The end result, as shown below, is a method for spying on SSH sessions in real time with nothing more than a shared server:

Youtube Video

If all of this sounds rather complex and unfeasible in real life, well, it is. What VUSec laid out is mostly a proof-of-concept rather than a likely attack scenario, as Intel pointed out in its statement to El Reg.

“Intel received notice of this research and determined it to be low severity (CVSS score of 2.6) primarily due to complexity, user interaction, and the uncommon level of access that would be required in scenarios where DDIO and RDMA are typically used,” a Chipzilla spokesperson said.

“Additional mitigations include the use of software modules resistant to timing attacks, using constant-time style code. We thank the academic community for their ongoing research.”

As with most side-channel attacks, the process of actually exploiting the bug is rather tedious and unlikely, though it demonstrates a fundamental flaw that can be difficult, if not impossible, to address with anything short of a hardware redesign.

While VUSec agrees with Intel that adding software protections against timing attacks will make the spying harder to carry out, the only sure way to remove the vulnerability is to disable DDIO entirely, and lose the performance benefits.

“As long as the network card creates distinct patterns in the cache,” the VUSec team said, “NetCAT will be effective regardless of the software running on the remote server.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/10/intel_netcat_side_channel_attack/

AI Is Everywhere, but Don’t Ignore the Basics

Artificial intelligence is no substitute for common sense, and it works best in combination with conventional cybersecurity technology. Here are the basic requirements and best practices you need to know.

The fourth industrial revolution is here, and experts anticipate organizations will continue to embrace artificial intelligence (AI) and machine learning (ML) technologies. A forecast by IDC indicates spending on AI/ML will reach $35.8 billion this year and hit $79.2 billion by 2022. Though the principles of the technology have been around for decades, the more recent mass adoption of cloud computing and the flood of big data has made the concept a reality. 

The result? Companies based around software-as-a-service are best positioned to take advantage of AI/ML because cloud and data are second nature to them. 

In the past five years alone, AI/ML went from a technology that showed lots of promise to one that delivers on that promise, thanks to the convergence of easy access to inexpensive cloud computing and the integration of large data sets. AI and ML have already begun to see accelerated adoption for cybersecurity uses. Security teams are dealing with mountains of data that only continue to grow, and machines that analyze that data bring immense value: they can operate 24/7, and humans can’t.

For your cybersecurity team to effectively launch AI/ML, be sure these three requirements are in place:

1. Data: If AI/ML is a rocket, data is the fuel. AI/ML requires massive amounts of data to train models that can classify and predict with high accuracy. Generally, the more data that flows through the AI/ML system, the better the outcome.

2. Data science and data engineering: Data scientists and data engineers must be able to understand the data, sanitize it, extract it, transform it, load it, choose the right models and right features, engineer the features appropriately, measure the model appropriately, and update the model whenever needed.

3. Domain experts: They play an essential role in constructing an organization’s dataset, identifying what is good and what is bad and providing insights into how this determination was made. This is often the aspect that gets overlooked when it comes to AI/ML.
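The three requirements can be sketched in miniature: raw events stand in for the data, a feature function stands in for the data engineering, and analyst-assigned labels stand in for the domain expertise. Everything here is illustrative, with invented field names and a toy nearest-centroid model in place of a real classifier:

```python
# 1. Data: raw login events (in practice, millions of rows from a SIEM).
events = [
    {"user": "alice", "failed_logins": 0, "bytes_out_mb": 12,  "label": "benign"},
    {"user": "bob",   "failed_logins": 9, "bytes_out_mb": 850, "label": "malicious"},
    {"user": "carol", "failed_logins": 1, "bytes_out_mb": 30,  "label": "benign"},
    {"user": "dave",  "failed_logins": 7, "bytes_out_mb": 900, "label": "malicious"},
]

# 2. Data engineering: extract and scale the features the model consumes.
def features(event):
    return [event["failed_logins"] / 10.0, event["bytes_out_mb"] / 1000.0]

# 3. Domain expertise: the "label" field encodes an analyst's judgment.
#    A trivial nearest-centroid classifier stands in for a real model.
def train(rows):
    centroids = {}
    for lbl in ("benign", "malicious"):
        pts = [features(r) for r in rows if r["label"] == lbl]
        centroids[lbl] = [sum(c) / len(c) for c in zip(*pts)]
    return centroids

def predict(centroids, event):
    f = features(event)
    # Pick the label whose centroid is closest in squared distance.
    return min(centroids,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(f, centroids[l])))

model = train(events)
print(predict(model, {"failed_logins": 8, "bytes_out_mb": 700}))  # -> malicious
```

A real pipeline would add the steps item 2 lists above (sanitization, model selection, measurement, retraining), but the division of labor is the same.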

Once you have these three requirements in place, the engineering and analytics teams can move on to solving very specific problems. Here are three example categories:

1. Security user risk analysis: Much like a credit score, you can compute a risk score from user behavior, and with AI/ML you can now scale it to a very large user base.

2. Data exfiltration: With AI/ML, you can more readily identify which patterns are normal and which are abnormal.

3. Content classification: Classifying variants of web pages, ransomware strains, destinations, and more.
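To make the first category concrete, here is a minimal sketch of a credit-score-style user risk score. The signal names and weights are invented for illustration; in a real AI/ML deployment the weights would be learned from data rather than hand-assigned:

```python
# Hypothetical behavioral signals and hand-picked weights (illustrative only).
WEIGHTS = {
    "off_hours_login": 25,
    "new_device": 20,
    "impossible_travel": 40,
    "mass_download": 35,
}

def risk_score(signals):
    """Sum the weights of observed signals, capped at 100 like a bounded score."""
    return min(100, sum(WEIGHTS[s] for s in signals if s in WEIGHTS))

print(risk_score(["new_device"]))                          # -> 20
print(risk_score(["impossible_travel", "mass_download"]))  # -> 75
```

The cap mirrors how a credit score bounds risk into a fixed range, which makes scores comparable across a very large user base.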

Adopting AI/ML in your cybersecurity measures requires you to think differently and to plan and pace the project differently, but it doesn't replace common sense or conventional best practices. Nor is AI/ML a substitute for a layered security defense. In fact, we've seen AI/ML do far better when combined with traditional cybersecurity technology.

Here are three tenets to execute an AI/ML project:

1. “Not all data can be treated equal.” Enterprise data has custom privacy and access control requirements; the data often is spread around different departments and encoded with a long history of “tribal knowledge.”

2. “Wars have been won or lost primarily because of logistics,” as noted by General Eisenhower. In the context of the AI/ML battleground, the logistics is the data and model pipeline. Without an automated and flexible data and model pipeline, you may win one battle here and there but will likely lose the war.

3. “It takes a village” to raise a successful AI/ML project. Data scientists need to have tight alignment with domain experts, data engineers, and businesspeople.

In the past, there have been two main criticisms of AI/ML: 1) AI is a black box, so it’s hard for security practitioners to explain the results, and 2) AI/ML has too many false positives (that is, false alarms). But by combining AI/ML and tried-and-true conventional cybersecurity technology, AI/ML is more explainable, and you get fewer false positives than with conventional technology alone.
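One way to picture that combination is a detector that only raises an alert when both a statistical anomaly test and a conventional rule agree. The baseline numbers, threshold, and allowlist below are made up for illustration:

```python
import statistics

# One user's recent daily egress in MB (hypothetical baseline).
baseline = [40, 42, 38, 41, 39, 43, 40]

# Conventional layer: destinations the security team already trusts.
allowlisted_dests = {"backup.internal.example"}

def is_alert(today_mb, dest):
    # Rule first: a trusted destination suppresses the alert outright,
    # which removes a whole class of false positives.
    if dest in allowlisted_dests:
        return False
    # Statistical layer: flag only if today's volume is a 3-sigma outlier.
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return (today_mb - mu) / sigma > 3

print(is_alert(400, "backup.internal.example"))  # rule suppresses: False
print(is_alert(400, "files.example.net"))        # anomalous and untrusted: True
```

The anomaly score alone would flag the trusted backup job every night; the rule alone would miss a novel exfiltration destination. Together, each layer covers the other's weakness, which is the "fewer false positives, more explainable" point above.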

AI/ML has already proved it can help businesses in a number of ways, but it still lacks context, common sense, and human awareness. That's the next step toward perfecting the technology. In the meantime, cybersecurity defense still requires domain experts, but now these experts are helping shape the future with a new paradigm shift for AI/ML methodology.


Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Phishers’ Latest Tricks for Reeling in New Victims.”

Howie Xu is Vice President of AI and Machine Learning at Zscaler. He was the CEO and Co-Founder of TrustPath, which was acquired by Zscaler in 2018. Howie was formerly an EIR with Greylock Partners and the founder and head of the VMware networking unit.

Article source: https://www.darkreading.com/cloud/ai-is-everywhere-but-dont-ignore-the-basics/a/d-id/1335714?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Power Grid Cyberattack Due to Unpatched Firewall: NERC

A firewall vulnerability enabled attackers to repeatedly reboot the victim entity’s firewalls, causing unexpected outages.

The North American Electric Reliability Corporation (NERC) reports that a cyberattack on the US power grid earlier this year was caused by a target entity’s network perimeter firewall flaw.

On March 5, 2019, an incident targeted a "low-impact" grid control center and small power generation sites in the western US, according to an EE News update. No outage lasted longer than five minutes, and the disruption didn't cause any blackouts. Still, the 10-hour attack was significant enough to prompt the victim utility to contact the US Department of Energy.

A "Lesson Learned" post from NERC says attackers exploited a vulnerability in the web interface of a vendor firewall, enabling them to repeatedly reboot the devices and cause a denial-of-service condition. The unexpected reboots led to outages in the firewalls that controlled communications between the control center and multiple remote generation sites, and between equipment on those sites. All firewalls were network perimeter devices.

Analysis revealed the target utility hadn’t installed a firmware update that would have patched the vulnerability, and the outages stopped when the patch was applied. The victim reviewed its process for assessing and implementing firmware updates and has chosen to implement a more formal, frequent review of vendor updates monitored by internal compliance tracking software.
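The "more formal, frequent review of vendor updates" NERC describes can be as simple as regularly diffing installed firmware versions against the vendor's latest advisory. The device names and version numbers below are invented; this is a sketch of the idea, not the utility's actual tooling:

```python
def parse_version(v):
    """Turn '6.2.1' into (6, 2, 1) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

# Hypothetical inventory of perimeter firewalls and their firmware.
installed = {"fw-edge-01": "6.2.1", "fw-edge-02": "6.3.0"}

# Hypothetical version from the vendor advisory that patches the flaw.
vendor_latest = "6.3.0"

needs_patch = [name for name, ver in installed.items()
               if parse_version(ver) < parse_version(vendor_latest)]
print(needs_patch)  # -> ['fw-edge-01']
```

Running a check like this on a schedule, and feeding the result into compliance tracking, is essentially the process the victim utility adopted after the incident.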

Read more details here.


Dark Reading's Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/risk/us-power-grid-cyberattack-due-to-unpatched-firewall-nerc/d/d-id/1335772?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Data Is the New Copper

Data breaches fuel a complex cybercriminal ecosystem, similar to copper thefts after the financial crisis.

If you feel as if there's a new data breach in the news every day, it's not just you. Breaches announced recently at Capital One, MoviePass, StockX, and others have exposed a variety of personal data across more than 100 million consumers, spurring lawsuits and generating thousands of headlines.

Other companies compromised this year include Citrix, which lost 6TB of sensitive data; First American Financial (885 million records exposed); and Facebook (540 million records exposed). The attack vector or leaked data might vary, but these breaches all have one thing in common: the exposed information provides raw materials that fuel a complex cybercriminal ecosystem, and these headlines are just the tip of the iceberg.

Most victims don't know how cybercriminals use their stolen data. One way to understand this is to consider the epidemic of copper theft that hit the country following the mortgage crisis. As buildings were left abandoned, thieves stole copper wiring and piping. The copper could then be sold for $3 a pound to buyers willing not to ask questions about where it came from. It's a similar story with data, where the breach itself is rarely the end goal of cybercriminals but simply provides a means to obtain money through a multistage scheme. And unlike copper, the same data can be stolen, sold, and used many times.

Copper thieves use crowbars and wrenches. Cybercriminals use programs that exploit software vulnerabilities and automatically test millions of passwords to opportunistically take over online accounts. Copper thieves find industrial middlemen to sell their wares, while cybercriminals find underground marketplaces to connect with other criminals who specialize in using stolen data in different ways. Addresses and birth dates are used in identity fraud, such as applying for loans. Stolen credit cards can be used to make fraudulent purchases, and stolen passwords are keys to other accounts that, once compromised, enable criminals to empty bank accounts or turn gift cards into cash.

Cutting Off the Supply
Curbing the trade of stolen copper is easier than cutting off the supply of stolen data. With copper, law enforcement goes after the resellers, fining them when stolen materials are found in their possession. For data, the mitigation options vary considerably depending on the type of information that is exposed.

With stolen credit cards, the damage can actually be somewhat contained. Increased EMV (chip-based) adoption and improved fraud detection help limit the impact of any given breach of credit card data.

Personal data in the wrong hands is harder to mitigate. You can't change your birth date. Your physical address is often publicly available information, accessible to cybercriminals with no data breach required. The fact that these data types, as well as "security questions" like mother's maiden name, are still commonly relied on for authentication reveals a systemic problem that must be addressed.

Credential theft (e.g., stolen email addresses and passwords) is the most pernicious and least understood type of breach. Most people have lost track of all of the different places where they have reused passwords. You can’t blame them: The average user has more than 100 accounts with various websites, apps, and services that they have created over time. This means that cybercriminals using automated fraud tools in credential stuffing attacks have a reliable rate of success when they try passwords from one site against another, often around 2%. With only 1 million stolen passwords from any one website, a criminal can quickly take over tens of thousands of accounts on a completely unrelated website and repeat this on other sites to ultimately breach more accounts than the original breach.
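The back-of-the-envelope math from that paragraph is worth making explicit, since it shows why even a "small" 2% hit rate is devastating at scale:

```python
# Figures taken from the paragraph above: 1 million stolen passwords
# replayed against an unrelated site at a ~2% credential-stuffing hit rate.
stolen_credentials = 1_000_000
success_rate = 0.02

compromised = int(stolen_credentials * success_rate)
print(compromised)  # -> 20000
```

Twenty thousand takeovers per target site, repeatable against every other site where users recycled the same password, is how a single breach cascades into many.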

Protecting the Data
Governments are trying to address these problems. The EU's General Data Protection Regulation prohibits some insecure data storage practices. The California Consumer Privacy Act grants consumers more control over and insight into how their personal information is used online. The Digital Identity Guidelines from the US National Institute of Standards and Technology recommend that companies check passwords against lists of known stolen passwords. The US Federal Trade Commission settled its complaint against a company last year for having inadequate protection against credential stuffing, which led to compromised customer accounts. These efforts will all help over time.

The complexity of our online lives poses many challenges, and the global situation may get worse before it gets better. As long as there’s a market for copper or data, there will be criminals trying to steal them. But by improving corporate security standards, defending against the use of exposed information, and adopting better security practices, we can make it much harder for cybercriminals to turn stolen data into gold.



Shuman Ghosemajumder is CTO at Shape Security, which operates a global defense platform to protect web and mobile applications against sophisticated cybercriminal attacks. Shape is the primary application defense for the world’s largest banks, airlines, retailers, and …

Article source: https://www.darkreading.com/risk/data-is-the-new-copper/a/d-id/1335721?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple