STE WILLIAMS

US military given the power to hack back/defend forward

Hacking back – what’s also called offensive hacking, or what the Defense Department is calling “defending forward” in its new cyber strategy, or what we can think of as plain old “attacking” but without the need for the military to get an OK from the president’s National Security Council – is back.

The new version of cyber strategy, first reported by CNN on Tuesday, says that the Department of Defense (DoD) will “defend forward” to confront threats before they reach US networks: in other words, the military has gained the power to launch “preventative” cyberattacks, be they to protect election systems or the energy grid.

Our primary role in this homeland defense mission is to defend forward by leveraging our focus outward to stop threats before they reach their targets.

“The United States cannot afford inaction,” the summary reads. As it is, the US is in a “long-term strategic competition” with China and Russia, it says, which have both launched persistent cyber campaigns that pose “long-term” risk to the country, its allies and its partners.

References to state-sponsored hacks

The strategy references China-sponsored hacking and Russian tinkering with US elections and US discourse.

North Korea also rated a mention. Earlier this month, the US unsealed a criminal complaint charging a North Korea regime-backed programmer over multiple devastating cyberattacks, including the global WannaCry 2.0 ransomware outbreak in 2017, the 2014 attack on Sony Pictures, and the 2016 cyber-heist that siphoned $81m from Bangladesh’s central bank.

From the new strategy, which is the DoD’s first formal cyber strategy document in three years:

China is eroding U.S. military overmatch and the Nation’s economic vitality by persistently exfiltrating sensitive information from U.S. public and private sector institutions. Russia has used cyber-enabled information operations to influence our population and challenge our democratic processes. Other actors, such as North Korea and Iran, have similarly employed malicious cyber activities to harm U.S. citizens and threaten U.S.

The new strategy gives the military the power to unleash attacks within countries that are allies, as it goes after hackers who use such countries’ networks as a launching pad for attacks against the US, CNN notes.

A risky move?

The new strategy gives the military the power to act far more independently than it has until recently. Previously, if the National Security Agency (NSA) observed Russian hackers building a network in a Western European country, the president’s National Security Council would have to sign off on action before it was taken.

Jason Healey, a senior research scholar at Columbia University and a former George W. Bush White House cyber official, told CNN that this sign-off won’t be necessary from here on in.

It’s a risky move, Healey said:

It’s extremely risky to be doing this. If you loosen the rules of engagement, sometimes you’re going to mess that up.

The new strategy still prevents the US from attacking civilian infrastructure in other countries, citing a United Nations agreement “against damaging civilian critical infrastructure during peacetime.”

From the strategy:

The Department will work alongside its interagency and international partners to promote international commitments regarding behavior in cyberspace as well as to develop and implement cyber confidence building measures (CBM). When cyber activities threaten U.S. interests, we will contest them and we will be prepared to act, in conjunction with partners, to defend U.S. interests.

This is only the most recent of the Trump administration’s moves to give the military a longer leash when it comes to cyberwarfare. Last month, Washington rolled back an Obama-era directive that outlined how to launch cyberattacks on foreign soil.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fB1DuvVy8d0/

Congrats on keeping out the hackers. Now, you’ve taken care of rogue insiders, right? Hello?

Comment It’s exasperating how each high-profile computer security breach reveals similar patterns of failure, no matter the organization involved.

One such recurring theme is that IT departments find it hard to stop employees going rogue, or spilling their login details into the wrong hands, ultimately leading to damage or sensitive data leaking out. Now why is that?

Insider attacks are difficult to detect and thwart because businesses prefer their staff to access networks and be productive without security barriers, false positives, and complexity getting in the way and slowing them down.

However, lacking effective controls, organizations discover a network breach the hard way when customer data turns up on the dark web, or a sample is emailed to the boss as part of an extortion threat.

It’s an issue that lights up like a beacon in Verizon’s most recent Data Breach Investigations Report (DBIR). This dossier covers 2,216 reported network breaches and 53,000 security incidents across 65 countries in the 12 months to October 2017 – and concludes that 28 per cent of the breaches were classified as involving insiders in one way or another.

Intriguingly, while cyber-espionage is often seen as the bigger menace, it accounted for only 310 security incidents leading to 151 known breaches. This stands in striking contrast to privilege misuse – defined as “any unapproved or malicious use of organizational resources” – which accounted for 212 breaches and 10,556 incidents, almost one in five of the total incidents recorded.

The breakdown for the health sector in Verizon’s Protected Health Information Data Breach Report (PHIDBR) is even more stark, with more than half of all 1,368 network breaches traced to insiders. Where motivation could be discerned, money topped the list, but 94 incidents were blamed on “fun and curiosity,” a reference to employees peeking at the medical records of famous people or relatives and friends.

Groundhog Day

In the past, insiders were thought of as being employees sitting on the organization side of a firewall. This perspective has become almost meaningless. Today’s networks are accessed by numerous partners and contractors, who count as insiders despite being outside the network, as well as a mass of remote users. What matters is where a user’s credentials are, not where the user is.

Reading dossiers on IT security blunders is a depressing pastime, but it offers some important lessons.

The first is that focussing cybersecurity defenses solely on external actors is a flawed strategy. The second is that breach reports, and the failures that led to the intrusions, tell us about the past, not what might be happening in the present. Many of the companies whose hacker invasions made it into Verizon’s pages had probably been doing things the same way for years or even decades. Months or years later, many organizations will have no clear idea what, if any, role an insider played in a security breach.

Monitoring ‘exfiltraitors’

The logical answer to misbehaving insiders is user activity monitoring (UAM) and/or user and entity behavior analytics (UEBA), but what is it that should be monitored? Traffic is one possibility. All traitorous insiders have to get their stolen data out of the network at some point, so defenders inspect traffic for outbound connections, the creation of new and possibly unauthorized accounts, unusual emails and database searches, large print jobs, and suspicious use of USB drives – any one or combination of which might be tied to accessing and exfiltrating valuable data.

In practice, spotting a skilled insider adversary this way can be like hunting for a needle in a field of haystacks. There are simply too many layers and protocols to analyze, too many security logs to check, and not enough time to translate this kind of monitoring into real-time detection. At the very least it can lead to sprawl, with organizations deploying layers of tech such as data loss prevention (DLP) to keep a lid on insider risks. Arguably, crude user profiling would be quicker: picking out users based on risk and analyzing their computer activity for bad behavior.
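For illustration, the indicator-based approach described above can be sketched as a crude rule set. Everything here – the event fields, thresholds, and rule names – is hypothetical; a real deployment would tune far more signals than this:

```python
# Hypothetical sketch of rule-based flagging of the exfiltration
# indicators named above: large outbound transfers, previously
# unseen destinations, and off-hours activity.
from dataclasses import dataclass

@dataclass
class TransferEvent:
    user: str
    dest_ip: str
    bytes_out: int
    hour: int            # 0-23, local time of the transfer

def flag_event(ev: TransferEvent,
               known_dests: set[str],
               max_bytes: int = 50_000_000) -> list[str]:
    """Return the list of rules this event trips (empty = clean)."""
    reasons = []
    if ev.bytes_out > max_bytes:
        reasons.append("large-transfer")
    if ev.dest_ip not in known_dests:
        reasons.append("new-destination")
    if ev.hour < 6 or ev.hour > 22:
        reasons.append("off-hours")
    return reasons

# A 200 MB upload to an unknown host at 3am trips all three rules.
ev = TransferEvent("alice", "203.0.113.9", 200_000_000, 3)
print(flag_event(ev, {"10.0.0.5"}))
# → ['large-transfer', 'new-destination', 'off-hours']
```

The weakness is exactly the one described above: static thresholds generate floods of false positives, which is what pushes organizations toward behavioral baselining instead.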

Standing back, it’s not hard to understand why user behavior analytics (UBA) and its big brother, user and entity behavioral analytics (UEBA), have started to look like one path out of the morass. Instead of simply measuring an insider or insider account against a static series of rules, UEBA asks whether that user is behaving as they normally do or departing from that pattern. Getting to this point of understanding a “normal” state takes time, of course, but once in place offers the chance of more quickly detecting anomalous deviations from that.

Realtime detection v insiders

It’s a truism that frontline security systems, including UEBA, set out to detect threats in real (or near-real) time. The complex part has always been designing what happens when an anomalous event is detected: how much of any subsequent action should be automated, and at what point humans need to step in.

This is tough enough when detecting external threats but throws up even bigger challenges when pitted against insiders. Attacks arriving from the outside generate network traffic and traditional indicators of compromise (malware contacting unusual domains from a PC for instance), none of which are present when insiders do something risky or go rogue. The conceptual strength of UEBA is that it makes no distinction between internal and external – what counts is what is defined as a “normal” state for that user and user account. An event is either anomalous, non-anomalous, or somewhere in between.


Making this work requires an organisation to first identify its valuable assets and create a baseline of access to them from every user. This means monitoring how data is copied to and from different points in the network, and especially out of the network as has become common when integrating with cloud services. Numerous indicators must be assessed, including the privilege level of the user, the size of a transfer, its time, place, the IP of the destination of data, and even failed login attempts.

In UEBA, this will also be correlated to devices and user accounts (admins having more than one) and compared to the history of behaviour associated with these. UEBA’s claim is that by overlaying machine intelligence built specifically to spot subtle changes in the patterns of connectivity and behaviour, an alert can be generated in real time (usually defined as anything from seconds to minutes) so that a security operations center team member can review and intervene.
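As a toy illustration of that baselining idea, the sketch below learns a per-user mean and standard deviation of daily outbound volume, then flags observations more than three standard deviations out. The 3-sigma threshold and the sample data are arbitrary stand-ins for what a real UEBA model would learn:

```python
# Illustrative baselining sketch: real UEBA products use far richer
# models over many correlated signals; this shows only the core idea
# of "normal state" versus "anomalous deviation".
import statistics

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Summarize past observations as a mean and standard deviation."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, mean: float, stdev: float,
                 threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# 30 days of roughly 100 MB/day of outbound traffic, then a 2 GB day.
history = [100 + (i % 5) for i in range(30)]   # MB per day
mean, stdev = build_baseline(history)
print(is_anomalous(2000, mean, stdev))   # sudden spike → True
print(is_anomalous(103, mean, stdev))    # within normal range → False
```

The cost, as noted above, is the learning period: until enough history accumulates, the baseline itself is unreliable.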

The precise chain of interventions varies from system to system – some will intervene to stop data from being copied unless the user elevates his or her privileges – but human intervention is usually a priority. UEBA has been characterized as a three-dimensional way of understanding security monitoring but it is not yet an automated system for stopping bad things from happening without the need for trained eyes.

There’s always a ‘but’

The appeal of UEBA is that deploying it doesn’t require binning existing security technologies, which instead serve as sensors feeding its big-data engine. The question is how organizations differentiate one UEBA product from another.

All work along similar-sounding principles, yet up close not all turn out to be the same thing. The first consideration is that a UEBA should handle the “entity” piece of the puzzle – monitoring things like devices, applications, servers, IP addresses, and even the data itself – which is essential for putting what users are doing into context. Another is the big data itself: does the architecture underpinning this part of the system stand up to technical scrutiny? A UEBA system should do as much of this data handling as possible out of the box, without the need for complex customizations.

Arguably, the biggest challenge of all is that UEBA should be something a network’s security team understands. Given how much of machine-driven UEBA depends on specialised, hard-to-find skills, this can’t be taken for granted, especially when asking vendors to explain the opaque algorithms they use to conduct baselining. Containing insider risk with a UEBA should always be about a system that delivers on its promises today, not at some idealised point in the future. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/insider_threat_real_time/

What’s that smell? Oh, it’s Newegg cracked open by card slurpers

Netizens buying stuff from Newegg had their bank card details skimmed by hackers who, for a whole month, stashed the Magecart toolkit on the dot-com’s payment pages.

From August 16 to September 18, shoppers’ sensitive card data was silently copied by the Magecart code during the site’s checkout process, and sent to neweggstats.com, which was created on August 13 by fraudsters to collect this information. The domain also had a digital certificate from Comodo to make it look nice and legit.

The spyware would only appear at a certain point when completing a purchase, allowing it to evade researchers until it was finally discovered, reported, and removed on September 18. The malware was put there by miscreants compromising Newegg’s systems.
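One simple defensive check that can catch this class of attack is inventorying the external script domains a checkout page loads and flagging anything not on an allowlist – a freshly registered host like neweggstats.com would stand out immediately. Below is a hedged, stdlib-only sketch; the allowlist and sample HTML are invented for illustration:

```python
# Hypothetical sketch: extract every external <script src> domain from
# a page and report any that fall outside a known-good allowlist.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSources(HTMLParser):
    """Collect the hostnames of all external scripts on a page."""
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc
                if host:
                    self.domains.add(host)

def unexpected_scripts(html: str, allowlist: set[str]) -> set[str]:
    parser = ScriptSources()
    parser.feed(html)
    return parser.domains - allowlist

page = '''<script src="https://www.newegg.com/checkout.js"></script>
<script src="https://neweggstats.com/skim.js"></script>'''
print(unexpected_scripts(page, {"www.newegg.com"}))
# → {'neweggstats.com'}
```

In practice the same idea is enforced continuously with a Content Security Policy or periodic crawling, rather than a one-off scan.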

And, yes, Magecart is the same malicious JavaScript toolset that was secretly deployed on Ticketmaster‘s website to nab revelers’ card data, smuggled onto British Airways’ site and app to snoop on roughly 380,000 flight bookings, and injected into Feedify’s libraries to compromise hundreds more e-commerce outfits.

However, as infosec shop Volexity explained on Wednesday, the miscreants who targeted Newegg shrunk the card-siphoning Magecart code from the 22 lines deployed against BA to a mere eight lines, or 15 if the code is beautified.

[Image via Volexity] Part of the attack code – specifically, the JavaScript on the checkout page used to nick payment card data.

RiskIQ, which previously investigated the BA and Ticketmaster security breaches, also noted on Wednesday that the rest of the approach against Newegg was the same: “The elements of the British Airways attacks were all present in the attack on Newegg: they integrated with the victim’s payment system and blended with the infrastructure, staying there as long as possible.”

Newegg confirmed its systems had been hacked to squeeze Magecart into its webpages, and Volexity has published an analysis of the data theft process.

As for the aftermath, you know the drill: check your bank statements for any weird or unexpected purchases, and report them, especially if you shopped with Newegg between the above dates. Your card details were probably nicked, and may be used by crooks with a penchant for spending sprees using other people’s money. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/newegg_hacked_magecart/

No, the Mirai botnet masters aren’t going to jail. Why? ‘Cos they help Feds nab cyber-crims

The three brains behind the Mirai malware, which infects and pressgangs Internet-of-Things devices into a botnet army, have avoided jail.

In December, Paras Jha, 22, Josiah White, 21, and Dalton Norman, 22, pleaded guilty in the US to breaking the Computer Fraud and Abuse Act after developing and masterminding the Mirai malware, as well as the Clickfraud botnet.

This week, Alaska’s chief district judge Timothy Burgess sentenced them to five years of probation, 2,500 hours of community service, and $127,000 in damages to their victims.

Such light sentences are uncommon in America for computer crime; however, there is one clear reason in this case: the trio became cyber-crimefighters for the FBI, and have already helped take down other botnets. Couple that with a guilty plea agreement, thus avoiding a pointless trial, and jail time is taken off the table.

For instance, let’s take Jha. “Special Agent Elliott Peterson … who is recognized as one of the FBI’s top investigators for cybercrime, has described Paras’ cooperation not only as substantial, but extraordinary in its scope, breadth, results, and amount of time expended,” Jha’s lawyers told the Alaskan district court.

“Paras worked tirelessly to uncover identities, information, methods and timing of attacks, and other information that lead to the identity and location of individuals later charged with computer crimes. His cooperation has been more than substantial, it has been outstanding. We wholeheartedly agree with the government’s motion for an 85% reduction in the sentence, combined with continued education, and community service that includes continuing cooperation with the FBI.”

Jha was already known to the authorities as a hacker before Mirai burst on the scene: he was known for launching botnet-powered distributed denial-of-service (DDoS) attacks. Specifically, he began his botnet-herding career by taking down Minecraft players. He also unleashed a couple of network tsunamis to knock down his own university, including one to delay one of his examinations.


After dropping out of Rutgers, Jha worked at a DDoS mitigation firm called Protraf, but carried on his own DDoS tool development in his spare time. The Feds claim he used these tools to take down organizations so that they could be approached and offered Protraf’s anti-DDoS services.

Jha said that the idea for the Mirai code came after he was challenged by a Dutch Minecraft player to build a better botnet. The code was highly successful, and Jha and his two mates charged fees to carry out DDoS attacks using their malware-infected army, before publishing the source code online to cover their tracks.

Since his arrest, Jha has become a reformed character, we’re told. He was treated for an undiagnosed case of ADHD, has scored a part-time job with a security company, and still helps out the FBI and police with computer crime cases – think Frank Abagnale with a keyboard.

“The plea agreement with the young offenders in this case was a unique opportunity for law enforcement officers, and will give FBI investigators the knowledge and tools they need to stay ahead of cyber criminals around the world,” US Attorney for Alaska Bryan Schroder said this week in announcing the sentencing.

“After cooperating extensively with the FBI, Jha, White, and Norman were each sentenced to serve a five-year period of probation, 2,500 hours of community service, ordered to pay restitution in the amount of $127,000, and have voluntarily abandoned significant amounts of cryptocurrency seized during the course of the investigation.

“As part of their sentences, Jha, White, and Norman must continue to cooperate with the FBI on cybercrime and cybersecurity matters, as well as continued cooperation with and assistance to law enforcement and the broader research community.

“According to court documents, the defendants have provided assistance that substantially contributed to active complex cybercrime investigations as well as the broader defensive effort by law enforcement and the cybersecurity research community.”

The trio began working for the Feds even before being charged with the Mirai case. Given the problems faced by the FBI in recruiting hackers, flipping botnet masters is an interesting new way to swell the ranks of defenders in US law enforcement. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/makers_of_mirai_free/

Oi, you. Equifax. Cough up half a million quid for fumbling 15 million Brits’ personal info to hackers

The UK’s privacy watchdog wants to fine Equifax £500,000 ($660,000) after hackers siphoned off 15 million Brits’ info from the credit-score agency’s databases.

Or, in other words, barely 3p for each of the affected British citizens. The fine could have been much larger had the breach fallen under Europe’s GDPR.

However, the security breach predates the hardline regulations, which kicked in this year, leaving the UK Information Commissioner’s Office (ICO) to hand out its largest possible monetary penalty under the nation’s old Data Protection Act: half a million quid.

American biz Equifax was ransacked in 2017 when miscreants exploited an Apache Struts 2 security vulnerability for which a patch existed yet hadn’t been installed by the biz’s IT staff. As a result, the cyber-intruders made off with sensitive personal information on roughly 150 million Americans, 15 million Brits, and others.

Out of that 15 million, 20,000 records included people’s names, dates of birth, telephone numbers, and driving license numbers, 637,000 records included names, dates of birth, and telephone numbers, and the rest: names and dates of birth.


Some 27,000 Brits also had their Equifax account email addresses swiped, and 15,000 UK individuals had their names, addresses, dates of birth, account usernames and plaintext passwords, account recovery secret questions and answers, obscured credit card numbers, and spending amounts stolen by hackers.

That last lot’s information was stored in a document called the “standard daily fraud” report, which was built from production data, and held in a file share accessible by sysadmins and other IT staff. Thus, it was accessible to the hackers. Ironically, the file was crafted on a daily basis for Equifax’s fraud investigations team to use for probing allegations of credit card scams.

Criminals broke into Equifax’s systems between May 13 and July 20, 2017, even though the biz was warned in March that year by US Homeland Security that its IT infrastructure was insecure. Uncle Sam literally told the company that its Struts 2 framework had a remotely exploitable security hole (CVE-2017-5638) in it.

Due to poor internal processes and auditing, though, the software wasn’t patched, allowing crooks to tiptoe through the hole and into the US-based network. We’re told Homeland Security’s warning was passed through the ranks at Equifax; however, its sysadmins did not realize that the public-facing customer dispute-handling portal running the Struts 2 framework needed updating, and thus it was left unpatched.
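That Struts 2 hole (CVE-2017-5638) was triggered by OGNL expressions smuggled into the HTTP Content-Type header, which means even a crude log sweep could have surfaced exploitation attempts. The sketch below is a hypothetical illustration – the log format and field layout are invented, not Equifax’s actual logs:

```python
# Hedged sketch: CVE-2017-5638 payloads carry OGNL markers such as
# "%{" or "${" in the Content-Type header, so scan access logs for
# any request whose Content-Type value contains one.
import re

OGNL_MARKER = re.compile(r"[%$]\{")

def suspicious_content_types(log_lines: list[str]) -> list[str]:
    """Return log lines whose Content-Type value carries an OGNL marker."""
    hits = []
    for line in log_lines:
        # Assumed line shape: "<ip> <path> Content-Type: <value>"
        _, _, ctype = line.partition("Content-Type:")
        if ctype and OGNL_MARKER.search(ctype):
            hits.append(line)
    return hits

logs = [
    "198.51.100.7 /dispute Content-Type: application/x-www-form-urlencoded",
    "203.0.113.4 /dispute Content-Type: %{(#_='multipart/form-data')...}",
]
print(suspicious_content_types(logs))  # → only the second line
```

A sweep like this is no substitute for patching, but it shows how noisy this particular exploit was on the wire.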

Miscreants were poking around Equifax’s insecure systems as early as March 10, prior to the May incursion.

On July 29, the US side of Equifax realized it had been hacked, and in late August worked out British folks were hit, too. Its IT staff had to replay, on test installations, the database queries run by the hackers in order to figure out what had been extracted.

On September 7, that year, the US side told its UK-based Equifax Ltd the bad news, and a day later, the agency admitted to the ICO that it had been pwned – initially suggesting fewer than 400,000 Brits were affected, then nudging that figure to 1.5 million before finally upgrading it with an extra zero.

The ICO probed the computer security breach in parallel with the UK’s Financial Conduct Authority, we’re told, before settling on handing out the maximum penalty possible.

Elizabeth Denham, Blighty’s Information Commissioner, said on Thursday:

The loss of personal information, particularly where there is the potential for financial fraud, is not only upsetting to customers, it undermines consumer trust in digital commerce.

This is compounded when the company is a global firm whose business relies on personal data.

We are determined to look after UK citizens’ information wherever it is held. Equifax Ltd has received the highest fine possible under the 1998 legislation because of the number of victims, the type of data at risk and because it has no excuse for failing to adhere to its own policies and controls as well as the law.

Many of the people affected would not have been aware the company held their data; learning about the cyber attack would have been unexpected and is likely to have caused particular distress.

Multinational data companies like Equifax must understand what personal data they hold and take robust steps to protect it. Their boards need to ensure that internal controls and systems work effectively to meet legal requirements and customers’ expectations. Equifax Ltd showed a serious disregard for their customers and the personal information entrusted to them, and that led to today’s fine.

Equifax can appeal the penalty, and if it does cough up the cash, it will be funneled into the UK government’s public coffers. We note that, to date, no fine has been levied against the agency in its home nation. Equifax made a $587m profit in 2017 from revenues of $3.4bn. As such, one of its executives could perhaps put the $660,000 fine on expenses.

The company had no comment to offer at time of publication. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/equifax_ico_fine/

As Tech Drives the Business, So Do CISOs

Security leaders are evolving from technicians to business executives as tech drives enterprise projects, applications, and goals.

The tasks topping the CISO’s to-do list are slowly shifting, as their core priorities transition from primarily technical expertise to securing business applications and processes.

It’s the key takeaway from a new report, conducted by Enterprise Strategy Group (ESG) and commissioned by Spirent, on how CISO responsibilities are shifting as cybersecurity becomes more complex. Researchers polled 413 IT and security pros with knowledge of, or responsibility for, the planning, implementation, and/or operations of security policies and processes.

“There’s a transition from a technology focus to a business focus,” says Jon Oltsik, ESG senior principal analyst. “And that doesn’t preclude the oversight of technology, but the technology is sort of guided by business initiatives, business applications, business goals, things like that.”

About 80% of experts say security knowledge, skills, operations, and management are more difficult now compared with two years ago. They attribute the complexity to growth in the number and sophistication of malware, IT projects, targeted attacks, and connected devices.

Nearly all (96% of) respondents say the CISO’s role has expanded, with the increasing difficulty of protecting enterprise data the primary driver of their prominence. Nearly 80% point to malware in particular; many claim 80% to 90% of malware attacks target a single device, and that 50% to 60% of malicious Web domains are active for one hour or less.

Organizations are increasingly digital and cyberattackers are taking precise aim to poke holes in their defenses. Oltsik calls it “death by a thousand cuts”. CISOs have seen breaches and regulations increase as more people realize the business is driven by tech. “Regardless of what business you’re in or process you’re talking about, there’s an IT underpinning,” he notes.

CISOs are becoming part of more board-level discussions to prevent breaches.

“There’s a real shift from reactivity to proactivity,” says Oltsik. In the past, companies built their defenses and hoped nothing bad would happen. When something eventually did happen, their responses were poorly organized, inefficient, and took a long time to put into practice. What’s more, responses were tech-oriented – not business oriented. The answer to compromise was “let’s fix the system” and not, “how do we fix the business,” he explains. Now, this has changed.

The CISO’s Growing To-Do List

How the CISO’s responsibilities change depends on the size of the organization, he continues. In a smaller organization they’ll be more involved with technology; less so in a larger enterprise.

“They’re being asked to participate in board-level meetings, business planning meetings,” Oltsik says of CISOs who manage within larger organizations. Especially in larger companies, the CISO is moving more toward business skills and away from technical skills.

Business leaders used to ask the CISO what controls they needed; now they want security embedded in business planning and application development. “You want security expertise in the operations groups, you want that in development groups, you want that in each component of operations, including the cloud,” he adds.

CISOs also have a responsibility to convey security data to business professionals, adds Amie Christianson, director of Operations Application Security at Spirent. High-level executive summaries help board members understand the threats affecting their business.

She uses a medical example. “When I get my lab results, I want to see at a high level what they are, and am I within a certain range,” she explains. “And that gives me peace of mind.” A doctor might see more details and act differently on the data, but a summary tells her everything she needs to know about her health. The same applies for CISOs and security summaries.

More Projects, More Problems

The increase in corporate IT projects is the second-biggest driver of complexity, researchers found, and projects related to IoT and cloud make security a greater challenge. Oltsik says he’s seeing more digital transformation applications, more IoT apps, more social media use, and greater reliance on mobile devices and applications.

Business processes and initiatives “are happening at a faster pace than they did in the past; they’re being done in an agile manner,” he continues. Applications have gone from six-month release cycles to multiple releases per day, and all of that affects security. Security teams used to plan for risk assessments and controls every few months; now, it’s every day.

When they face a new project, CISOs who have responsibility from the get-go can address security at the beginning and continuously test it throughout development. Most (86% of) respondents agree integrating security in project planning can lessen the likelihood of a breach, and 79% agree businesses should more frequently test security controls.

As security budgets continue to grow – and researchers found they will among 92% of respondents – businesses are shifting their spending from point tools to more integrated architectures. Professional and managed services are becoming popular as CISOs realize they lack the staff to handle the many security tasks they’re assigned.

As for outsourcing, “pedestrian areas” like email security and Web security are the first to leave the business, says Oltsik. While these are the most frequently outsourced, he says he’s beginning to explore the implications of using outside firms for threat detection and response.

Ultimately, he anticipates, we’ll see the role of the CISO split in two: a chief business security officer, who focuses on the enterprise, and a chief technical security officer who focuses on the systems. Christianson agrees: as security becomes part of the risk conversation, the business-focused CISO will be required to communicate with risk and compliance officers.

Related Content:

 

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/analytics/as-tech-drives-the-business-so-do-cisos/d/d-id/1332850?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cryptojackers Grow Dramatically on Enterprise Networks

A new report shows that illicit cryptomining malware is growing by leaps and bounds on the networks of unsuspecting victims.

Cryptojacking — threat actors placing illicit cryptocurrency miners on a victim’s systems — is a growing threat to enterprise IT, according to a just-released report from the Cyber Threat Alliance (CTA). CTA members have seen miner detections increase 459% from 2017 through 2018, and there’s no sign that the rate of infection is slowing.

The joint paper, written with contributions from a number of CTA members (including Cisco Talos, Fortinet, McAfee, Rapid7, NTT Security, Sophos, and Palo Alto Networks), points out that there is little unique in the methods cryptojackers use to infect their victims; defending against cryptojackers is identical in almost every respect to defending against other threats.

“If you have evidence of cryptomining on your networks, you probably have other bad stuff on your network, as well,” says Neil Jenkins, chief analytics officer for the CTA and principal author of the report. “The way the actors are getting their miners on the networks, they’re basically exploiting bad practices,” he explains.

Jenkins points out that the cryptojackers tend to exploit known vulnerabilities on networks and take the greatest toll on networks with poor visibility into the existing state of the system. And, contrary to what some may think, cryptojackers are anything but a “victimless crime.”

“We highlight in the report that, if mining is in a lot of places, you’re going to have trouble with doing your regular tasks because it will chew up resources,” Jenkins says. Beyond that, when an illicit cryptocurrency miner gains persistence, this indicates that other malware could take up residence, as well.
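Those two symptoms — resources being chewed up and persistent outbound connections — suggest a simple detection heuristic. The sketch below is illustrative only (it is not taken from the CTA report, and the pool hostnames and telemetry format are invented): flag any process whose CPU usage stays pegged across a sampling window, or that contacts a host on a mining-pool blocklist.

```python
# Illustrative cryptojacking heuristic: sustained high CPU over a
# sampling window, or contact with a known mining-pool host, flags a
# process for investigation. Hostnames below are made up.
MINING_POOL_HOSTS = {"pool.minexmr.example", "xmr.pool.example"}

def suspect_miners(samples, cpu_threshold=90.0):
    """samples: dicts with 'pid', 'cpu_pct' (readings over a window), 'remote_hosts'."""
    flagged = []
    for proc in samples:
        sustained = min(proc["cpu_pct"]) >= cpu_threshold   # pegged for the whole window
        pool_contact = bool(set(proc["remote_hosts"]) & MINING_POOL_HOSTS)
        if sustained or pool_contact:
            flagged.append(proc["pid"])
    return flagged

# Example telemetry snapshot:
telemetry = [
    {"pid": 101, "cpu_pct": [97, 99, 98], "remote_hosts": ["pool.minexmr.example"]},
    {"pid": 202, "cpu_pct": [12, 30, 8],  "remote_hosts": ["cdn.example.net"]},
]
```

In practice a real deployment would feed this from endpoint telemetry and a maintained threat-intelligence feed rather than a static set, but the shape of the check is the same.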

The possibility of other malware is critical because the CTA’s members have seen the activity of cryptominers rise and fall in lockstep with the price of cryptocurrencies. And while each infected system may mine a small fraction of a currency unit per day, the threat actors seem to see this as a “long game.” According to the report, “Illicit mining often occurs undetected within an enterprise over a long time period, generating a steady stream of revenue while not calling attention to itself. It is a quieter crime than ransomware and DDoS, which by their very nature are disruptive and cause an obvious issue.”

Jenkins says that cryptomining’s lack of unique infection vectors works in favor of victims trying to prevent or mitigate damage. “We’re trying to push organizations to see that this is a threat they can directly impact by improving their cyber hygiene, their best practices, by sharing information, and by upgrading their technology,” he says. “It will help them recover from the problem and pay dividends down the road.”

Related content:

 

 

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/cryptojackers-grow-dramatically-on-enterprise-networks/d/d-id/1332852?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

NSS Labs Files Antitrust Suit Against Symantec, CrowdStrike, ESET, AMTSO

Suit underscores longtime battle between vendors and labs over control of security testing protocols.

Security product testing firm NSS Labs today filed an antitrust lawsuit against cybersecurity vendors CrowdStrike, ESET, and Symantec as well as the Anti-Malware Testing Standards Organization (AMTSO) over a vendor-backed testing protocol.

The lawsuit accuses the three security vendors and the nonprofit AMTSO, of which they and other endpoint security vendors are members, of unfairly allowing their products to be tested only by organizations that comply with AMTSO’s testing protocol standard. NSS Labs, which also is a member of AMTSO, earlier this year voted against adoption of the standard and says it has no plans to comply with it.

A majority of AMTSO members voted in favor of the standard in May of this year, and most plan to adopt the protocol.

Friction between security vendors and independent testing labs is nothing new. Vendors and labs traditionally have had an uneasy and sometimes contentious relationship over control of the testing process and parameters. NSS Labs’ suit appears to represent an escalation of that age-old conflict, security experts say.

NSS Labs is calling foul in its lawsuit: “NSS Labs has suffered antitrust injury as a result of the acts herein alleged because it is the direct and principal target of the concerted refusal to deal/group boycott” any testing organizations that don’t adopt AMTSO’s testing standard, the lawsuit says.

In an interview with Dark Reading, Jason Brvenik, chief technology officer at NSS Labs, said the AMTSO standard falls short. “Our fundamental focus is that if a product is good enough to sell, it’s good enough to test,” and NSS Labs shouldn’t be forced to comply with AMTSO’s standard, he says. “It should be an independent test.”

Brvenik says the AMTSO standard does not support independent testing. “It’s driven by vendors to create a picture of capabilities that are not true,” for example, he says. “The standard is more like guidelines to interact with than a standard, and it doesn’t make things better for products” or the way they are tested, he says.

According to the NSS Labs suit, other organizations that spoke out against the adoption of AMTSO’s standard included AV-Comparatives, AV-Test, and SKD Labs. None of them is named as a party in NSS Labs’ case. Efforts to reach AV-Test, AV-Comparatives, and SKD Labs were unsuccessful as of this posting.

CrowdStrike declined to comment on the NSS Labs suit but said in a statement: “CrowdStrike supports independent and ethical testing — including public testing — for our products and for the industry. We have undergone independent testing with AV-Comparatives, SE Labs, and MITRE and you can find information on that testing here. We applaud AMTSO’s efforts to promote clear, consistent, and transparent testing standards.”

ESET said it had not been officially contacted about the suit, but that it denies the allegations. “We are aware of the allegations stated in the blog post from NSS Labs, however, we have yet to receive official legal communication. As legal proceedings appear to have been initiated, we are unable to say more at this time, beyond the statement that we categorically deny the allegations,” an ESET spokesperson said. “Our customers should be reassured that ESET’s products have been rigorously tested by many independent third-party reviewers around the world, received numerous awards for their level of protection of end users over many years, and are widely praised by industry-leading specialists.”

Symantec would not comment on the case, and efforts to reach AMTSO were unsuccessful as of this posting.

In a blog post earlier this month, AMTSO president Dennis Batchelder wrote that the protocol is a voluntary framework for testing anti-malware software “fairly and transparently.”

For enterprises, there aren’t many options for vetting security software. Most don’t have the resources to perform their own in-house testing of security products, so they rely on consulting firms’ recommendations, third-party testing organizations — or the claims of their vendor.

Jon Oltsik, senior principal analyst with consulting firm Enterprise Strategy Group, says he’s seen enterprises struggle with the testing dilemma. “Customers don’t know how to test the efficacy of next-generation endpoint security products,” he says. “No one trusts vendors to test their own product.”

The concept of a vetted product testing standard is a “very good idea,” says Oltsik, who notes that he has not specifically studied AMTSO’s protocol.

Bottom Line
NSS Labs, meanwhile, argues that AMTSO and its standard are anti-competitive. “They claim to try to improve testing but what they’re actually doing is actively preventing unbiased testing. Further, vendors are openly exerting control and collectively boycotting testing organizations that don’t comply with their AMTSO standards — even going so far as to block the independent purchase and testing of their products,” Vikram Phatak, CEO of NSS Labs, wrote in a blog post today announcing the suit.

Meanwhile, NSS Labs claims in its lawsuit that AMTSO’s efforts have hurt its bottom line. “NSS Labs has lost sales and profits from the sale and license of its public testing reports, including its AEP Group Test reports, because it cannot charge customers who purchase reports that do not include all market participants as much as it could charge for reports that included all market participants.” 

 

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/endpoint/nss-labs-files-antitrust-suit-against-symantec-crowdstrike-eset-amtso/d/d-id/1332851?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

URL spoofing – what it is and what to do about it [VIDEO]

We take on the problem of URL spoofing, where the address bar in your browser doesn’t tell you the truth about the identity of the website you’re looking at.

Apple products had a recently disclosed bug of this sort (now fixed), which led to a lot of coverage of the issue, so we thought we’d explain what URL spoofing is, and what you can do about it.
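One classic spoofing trick worth knowing: a URL may carry a “userinfo” portion before an @ sign that is made to look like a trusted domain, while the real destination follows the @. As a small illustration (the hostnames here are invented), Python’s standard URL parser shows which host a browser would actually connect to:

```python
# Sketch: the text before '@' in a URL is userinfo, not the host.
# urlsplit().hostname reveals the real destination.
from urllib.parse import urlsplit

def real_host(url):
    """Return the host a client would actually connect to for this URL."""
    return urlsplit(url).hostname

# Despite appearances, this URL points at evil.example, not mybank:
deceptive = "https://www.mybank.example@evil.example/login"
```

Most modern browsers strip or warn about userinfo in URLs for exactly this reason, but links arriving by email or chat can still exploit the visual confusion.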

Enjoy…

(Watch directly on YouTube if the video won’t play here.)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GDaP3KaRw5s/