
UPnP joins the ‘just turn it off on consumer devices, already’ club

Universal Plug and Play (UPnP), that eternal feast of the black hat, has been identified as helping to amplify denial-of-service attacks.

Researchers at Imperva looked into misbehaving UPnP implementations after spotting odd attack traffic while analysing a Simple Service Discovery Protocol (SSDP, an Internet proposal absorbed into UPnP) amplification attack during April 2018.

The company’s Avishay Zawoznik, Johnathan Azaria, and Igal Zeifman wrote that while some of the attack packets came from familiar UDP ports, others were randomised.

In trying to replicate the behaviour, the three researchers concluded that attackers were abusing UPnP on badly-secured devices like routers (turn it off, people).

It’s not especially difficult, particularly with Shodan to help. The required steps are:

  • Discover targets on Shodan by searching for the rootDesc.xml file (Imperva found 1.3 million devices);
  • Use HTTP to access rootDesc.xml;
  • Modify the victim’s port forwarding rules (the researchers noted that this isn’t supposed to work, since port forwarding should be between internal and external addresses, but “few routers actually bother to verify that a provided ‘internal IP’ is actually internal, and [they abide] by all forwarding rules as a result”);
  • Launch the attack.
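As a defensive illustration of the first two steps, a minimal Python sketch can check whether a device serves its UPnP root description file over plain HTTP, the same exposure Shodan indexes. The host, port, and path below are illustrative assumptions; real devices vary.

```python
# Defensive check: does a host serve a UPnP root description file over
# plain HTTP? Host, port, and path are illustrative assumptions; real
# devices vary (common ports include 1900, 5000, and 49152).
import urllib.request
import urllib.error

def looks_like_upnp_description(body: str) -> bool:
    """True if the body appears to be a UPnP device description document."""
    return "urn:schemas-upnp-org:device" in body

def upnp_descriptor_exposed(host: str, port: int = 5000,
                            path: str = "/rootDesc.xml",
                            timeout: float = 3.0) -> bool:
    """Return True if `host` serves a UPnP device description at the URL."""
    url = f"http://{host}:{port}{path}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read(4096).decode("utf-8", errors="replace")
    except (urllib.error.URLError, OSError):
        return False  # unreachable or refused: nothing exposed here
    return looks_like_upnp_description(body)

if __name__ == "__main__":
    # 203.0.113.10 is a TEST-NET-3 documentation address, used as a placeholder.
    print(upnp_descriptor_exposed("203.0.113.10", timeout=1.0))
```

Running this against your own router’s WAN address (from outside your network) is a quick way to tell whether you are part of the problem.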

That means an attacker can create a port forwarding rule that spoofs a victim’s IP address – so a bunch of ill-secured routers can be sent a DNS request which they’ll try to return to the victim, in the classic reflection DDoS attack.

The port forwarding lets an attacker use “evasive ports”, “enabling them to bypass commonplace scrubbing directives that identify amplification payloads by looking for source port data for blacklisting”, the post explained.

The researchers noted that this style of attack isn’t limited to reflecting DNS queries – late in April 2018, they observed a low-volume attack (probably probing) using Network Time Protocol responses over irregular ports.

The lesson is simple: sysadmins need to block UPnP from Internet-facing access; and vendors making consumer-grade devices need to make that block the device default. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/16/upnp_amplifies_ddos_attacks/

Mining apps? We’re cool so long as they admit to it, says Canonical

Canonical has responded to last week’s discovery that its Snap store carried apps containing embedded crypto-currency miners, by pledging to introduce a “verified developer” program.

When users complained that apps published by Nicholas Tomb included the mining code, the apps were pulled from the Ubuntu Snap store, with Canonical promising an investigation.

The company’s follow-up post explained how it resolved the incident and mentioned developer verification as a possible solution.

Canonical wrote that one thing “we are working on is the ability to flag specific publishers as verified. The details of that will be announced soon, but the basic idea is that it’ll be easier for users to identify that the person or organisation publishing the snap are who they claim to be.”

In explaining its response to the Tomb case, Canonical asks a question that most other app souks don’t ask: is crypto-mining evil?

The first question worth asking, in this case, is whether the publisher was in fact doing anything wrong, considering that mining cryptocurrency is not illegal or unethical by itself.

In this case, it wasn’t the mining that got the apps pulled, but misleading users while trying to “monetise software published under licenses that allow it, unaware of the social or technical consequences.”

Tomb, the post says, promised to play nice in future.

As for code review, we noted last week that even Apple and Google, both of which are rather better-resourced than Canonical, sometimes get caught out with malware dressed as apps.

Snaps go through similar steps to iOS or Android apps, the post says: “automated checkpoints that packages must go through before they are accepted, and manual reviews by a human when specific issues are flagged”.

However, “the inherent complexity of software means it’s impossible for a large scale repository to only accept software after every individual file has been reviewed in detail”, and there’s no way to ensure that all software can be trustworthy before using it. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/16/canonical_snaps_review_process_improvements_promised/

25% of Businesses Targeted with Cryptojacking in the Cloud

New public cloud security report detects a spike in cryptojacking, mismanaged cloud storage, account takeover, and major patches getting overlooked.

It seems like only yesterday we learned about the first of many Amazon S3 bucket leaks to expose troves of personal information in 2017. Poor cloud configuration was only one security issue to plague businesses last year; now, they have several to worry about.

RedLock’s second annual Cloud Security Trends report digs into lessons learned from attacks and breaches over the past year. Researchers found the top issues in the cloud are account compromises, which affected 27% of organizations, cryptojacking (25%), risky configurations (51%), and missing high-severity patches in the cloud (24%).

“Twelve months ago, in everyone’s minds, the biggest thing was cloud misconfigurations, S3 buckets,” says RedLock cofounder and CEO Varun Badhwar. By now, the threats have escalated.

On average, 27% of organizations experienced potential account compromise, including major companies Uber, Tesla, OneLogin, Aviva, and Gemalto. Risky configurations affected 51%; among them were FedEx, Deep Root Analytics, and Under Armour. Nearly one-quarter (24%), including Drupal, MongoDB, Elasticsearch, and Intel, missed high-severity patches in the cloud.

Cryptojacking has gone mainstream as attackers have unprecedented access to high-powered public cloud computing resources, affecting major corporations like Tesla, Gemalto, and Aviva. One-quarter of organizations had cryptojacking in their environments, compared with just 8% last year. Badhwar says activity has ramped up 300% in the last quarter, partially because the bar to enter the world of cryptomining is low and the payoff is high.

“Once the attackers are in, the ease of the cloud is really what makes it possible,” he notes. “They can spin up an infinite number of resources … it’s really a question of how aggressive they want to be.” For some organizations, cryptomining incidents can cost between $50,000 and $100,000 a day for as long as an attacker is mining digital currency undetected on their network.

Researchers also noticed account compromise has led to new attack vectors. Leaked credentials have been found in GitHub repositories, unsecured Kubernetes administrative interfaces, and public Trello boards; now, public cloud instance metadata APIs are an attack vector.

“Public cloud instance metadata is data about your instance that can be used to configure or manage the running instance,” researchers report. “Essentially, an instance’s metadata can be queried via an API to obtain access credentials to the public cloud environment by any process running on the instance.” Account compromise will continue to evolve, they anticipate.
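To make that concrete, here is a hedged sketch of how any process on an instance can query the link-local metadata endpoint. The AWS-style address and path are illustrative; other cloud providers expose similar services.

```python
# Sketch: querying a cloud instance metadata service from a process on the
# instance. The link-local address and AWS-style path are illustrative;
# other providers expose similar endpoints.
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def fetch_metadata(path: str, base: str = METADATA_BASE, timeout: float = 2.0):
    """Return the metadata document at `path`, or None if unavailable."""
    try:
        with urllib.request.urlopen(f"{base}/{path}", timeout=timeout) as resp:
            return resp.read().decode()
    except OSError:
        return None  # not on an instance, or the metadata service is blocked

if __name__ == "__main__":
    # On an instance with an attached role, this would list role names whose
    # temporary credentials any local process can then fetch; this is why
    # code execution or SSRF on an instance so often escalates to
    # account-level compromise.
    print(fetch_metadata("iam/security-credentials/"))
```

Blocking or proxying access to the metadata endpoint for workloads that don’t need it is one common mitigation.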

There was a spot of good news in the report, Badhwar notes. One year ago, 82% of corporate databases were not encrypted. Now the number has dropped to 49%, marking a 67% improvement. However, he continues, the decline is most likely due to compliance and not because companies are becoming more security-conscious overall.

“I have to say, because this is one of those outlier trends related to improvement … I’m more inclined to say the majority is driven by the visibility and audits that GDPR is driving,” he says. This isn’t to say companies are excelling in compliance, however. Researchers found on average, businesses fail 30% of CIS Foundations’ best practices, 50% of PCI requirements, and 23% of NIST CSF requirements.

Visibility and speed of innovation are two factors causing challenges in the cloud, Badhwar adds. As organizations rapidly adopt cloud services, security teams are often left in the dark when it comes to the applications developers are putting in the cloud. The cloud provides new tools and features to developers, who want to use them right away and don’t consult with security as they’re building.

“When you don’t have visibility, how can you do any security or policy enforcement?” Badhwar points out.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/cloud/25--of-businesses-targeted-with-cryptojacking-in-the-cloud/d/d-id/1331813?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Julian Assange said to have racked up $5m security bill for Ecuador

The government of Ecuador spent nearly $5m to provide protected internet access to asylum-seeker Julian Assange, and he responded by hacking its systems, an anonymously sourced report has claimed.

According to a report from The Guardian, internal documents show that the Wikileaks boss required surveillance and security services that had to be paid out of a secret intelligence fund.

The report claims that the South American nation allocated around $66,000 per month to support “Operation Hotel”, a covert project that covered both the security costs associated with Assange’s asylum as well as the costs of monitoring the activist’s various personal visits. The scheme was first named “Operation Guest” before changing to Hotel, which may reflect the length of Assange’s stay at the embassy.

The operation is said to have been signed off by then-president Rafael Correa and later by Ecuadorian foreign minister Ricardo Patiño.

What’s worse, the money may not have even been well spent. The documents also claim that Assange was able to compromise the network at the embassy to utilize a satellite internet service and intercept communications intended for other staff members.

Lies!

WikiLeaks issued a denial of the allegations shortly after the story was published.

Assange has been staying at the Ecuadorian embassy in London since 2012, when he took refuge from an interview request from Swedish law enforcement. While the Swedes have since dropped their investigation, he still faces immediate arrest in London on charges of jumping bail, and the US government has also expressed interest in pursuing charges against the leakmonger.

The report notes that much of the money was spent on meatware security measures: undercover operatives who took monthly cash payments to keep tabs on Assange. Other money was said to have gone to security service providers, and payments were also made to Italy’s Hacking Team, a security company whose correspondence WikiLeaks would later publish.

While Assange remains under the protection of Ecuador, reports have surfaced in recent months suggesting he was wearing out his welcome at the embassy.

For the time being, however, he remains a guest (albeit an [allegedly] expensive one) of the government. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/15/julian_assange_bill/

Ex-CIA man named as suspect in Vault 7 leak

A former CIA employee has been named as the prime suspect in last year’s dump of thousands of documents on the agency’s hacking practices.

A report from The Washington Post cites court documents that name Joshua Adam Schulte as the person authorities believe to be behind the massive Vault 7 data dump.

Transcripts [PDF] from the case contain multiple references to searches related to the Vault 7 case.

“In March of 2016, there was a significant disclosure of classified material from the Central Intelligence Agency. The material that was taken was taken during a time when the defendant was working at the agency,” prosecuting attorney Matthew Laroche is quoted as saying.

“The government immediately had enough evidence to establish that he was a target of that investigation. They conducted a number of search warrants on the defendant’s residence.”

Another January transcript [PDF] made public also notes that attorneys were discussing “national security evidence that might be present in the case.”

Here’s where things get tricky: the government says it does not have enough evidence to charge Schulte with the leak. However, he is facing unrelated charges in the New York Southern District court for possession and distribution of child abuse images.

He has pleaded not guilty to the charges.

The report says that, while the government thinks Schulte was the one who handed the cache of documents over to WikiLeaks, they do not currently have enough evidence to bring charges. Rather, he is being charged with operating a server that contained a 54GB container of child abuse content (we’re not going to label it as ‘pornography’ out of respect for adult entertainment performers).

Schulte’s lawyers have argued that he simply ran a public server and had no idea as to the contents of the encrypted container. Interestingly, court transcripts show that Schulte’s team has offered his work with the CIA, and the rigorous screenings that come with it, as arguments in his defense.

According to the report, Schulte worked for the CIA’s engineering development group until 2016, a position that would have given him access to the thousands of agency documents that were handed over to WikiLeaks in 2017.

That cache would eventually be disclosed as the ‘Vault 7’ data dump. While it was embarrassing for the CIA to lose so many documents, the dump itself provided little in the way of juicy intel: mostly it just showed that, yes, the CIA engages in covert intelligence operations.

Most notably, the dump included details on hacking tools the agency used to compromise Windows, MacOS and iOS devices. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/15/vault_7_leak/

Feds Name Suspect in CIA ‘Vault 7’ Hacking Tool Leak

Ex-CIA employee currently in jail on unrelated charges.

Former CIA employee Joshua Adam Schulte has been named as a suspect who may have handed WikiLeaks a massive trove of the intelligence agency’s cyber espionage tools, which the activist group then published online.

The Washington Post reported today that federal prosecutors named Schulte as the suspect in a hearing earlier this year, and that he is now imprisoned in Manhattan on separate, unrelated charges. Schulte worked on a CIA team that built hacking tools to conduct cyber espionage against foreign adversaries, according to the report.

WikiLeaks in March of 2017 began publishing more than 8,700 confidential CIA documents under the title “Vault 7” that laid bare the intel agency’s global hacking arsenal. Among the leaked documents were details of various zero-day vulnerabilities in Android, iOS, and Windows as well as exploits against network routers, smart TVs, and connected vehicles.

Materials gathered from a search warrant of Schulte’s home did not provide sufficient evidence for charges in the Vault 7 case, but he reportedly remains a target in the probe.

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/feds-name-suspect-in-cia-vault-7-hacking-tool-leak/d/d-id/1331809?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Don’t Roll the Dice When Prioritizing Vulnerability Fixes

CVSS scores alone are ineffective risk predictors – modeling for likelihood of exploitation also needs to be taken into account.

The way that organizations today decide which software vulnerabilities to fix and which to ignore reduces risk no better than if they rolled dice to choose, according to a new study today from Kenna Security and Cyentia Institute. The report’s authors argue that enterprises need to get smarter about how they prioritize flaws for remediation if they want to really make a dent in their risk exposure. 

The fact is that organizations today are drowning in software vulnerabilities. A different report out today from Risk Based Security highlights this reality. It found that last quarter alone there were nearly 60 new vulnerabilities disclosed every single day. Among the 5,375 flaws published in the first 90 days of the year, approximately 18% had CVSS scores of 9.0 or higher.

Those numbers in part demonstrate why some organizations can’t fix every vulnerability in their environment – which means they must prioritize their efforts. The question is, what makes for a good prioritization system? 

Techniques like using CVSS vulnerability severity scores to guide vulnerability management activities have long been the stand-in methodologies. But those can’t necessarily predict how likely attackers will be to actually exploit any given flaw in order to carry out an attack. And that’s the real fly in the ointment, because according to the Kenna and Cyentia report, just 2% of published vulnerabilities have observed exploits in the wild. 

So, say an organization had the resources to miraculously fix 98% of the flaws in its environment; if it chose the wrong 2% to miss, it could still be wide open to the full brunt of the vulnerabilities attackers are actually targeting. And given the breach statistics against mature organizations that presumably use some standardized method of prioritization, one must question the efficacy of the same old way of picking flaws for remediation.

“Security people know intuitively that what they’ve been doing historically is wrong, but they have no data-driven way to justify a change internally,” says Michael Roytman, chief data scientist for Kenna. “That’s what we hope this report provides people.” 

Cyentia examined prioritization techniques statistically in terms of two big variables that were measured in light of whether exploits exist: coverage and efficiency.

Coverage measures how thoroughly organizations were able to fix flaws in their environment for which an exploit exists. If there are 100 vulnerabilities in an environment that have exploits and the organization only fixes 15 of them then the coverage of that prioritization is 15%. The leftover 85% is the organization’s unremediated risk.

On the flip side, efficiency measures how effective the organization is in choosing vulnerabilities that are being exploited in practice by the bad guys. If the organization fixes 100 flaws but only 15 of them are being exploited, then that’s a prioritization efficiency rating of 15%. The other 85% are those for which time might have been better spent doing something other than fixing them.  
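The two metrics reduce to a few lines of Python. This sketch restates the definitions above over sets of CVE IDs (the IDs are hypothetical); it is not Kenna’s or Cyentia’s actual methodology.

```python
# Coverage and efficiency as defined above, computed over sets of CVE IDs.
# `remediated` is what the organization fixed; `exploited` is the set of
# vulnerabilities with exploits observed in the wild.

def coverage(remediated: set, exploited: set) -> float:
    """Share of exploited vulnerabilities that were actually fixed."""
    if not exploited:
        return 1.0  # nothing exploited, so nothing was missed
    return len(remediated & exploited) / len(exploited)

def efficiency(remediated: set, exploited: set) -> float:
    """Share of remediation effort spent on vulnerabilities that matter."""
    if not remediated:
        return 0.0
    return len(remediated & exploited) / len(remediated)

if __name__ == "__main__":
    exploited = {f"CVE-X-{i}" for i in range(100)}   # 100 flaws with exploits
    remediated = {f"CVE-X-{i}" for i in range(15)}   # only 15 of them fixed
    print(coverage(remediated, exploited))    # 0.15, as in the example above
    print(efficiency(remediated, exploited))  # 1.0: every fix hit a real exploit
```

Note how the toy numbers reproduce the 15% coverage example while scoring perfectly on efficiency, which is exactly the trade-off the report describes.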

“Ideally, we’d love a remediation strategy that achieves 100% coverage and 100% efficiency,” the report explains. “But in reality, a direct trade-off exists between the two.” 

So a strategy that goes after only the really bad CVEs, those with CVSS scores of 10, would have a good efficiency rating but terrible coverage. But the other mode of going after everything CVSS 6 and above means that efficiency goes through the floor, because many of those flaws will never be exploited.

When measuring coverage and efficiency of prioritization using simplistic remediation rules such as CVSS scores, Cyentia found that the various choices tended to be no better than choosing at random. It then analyzed coverage and efficiency using a more complex model that tries to predict which vulnerabilities are most likely to be exploited – using variables like whether the flaw includes key words like “remote code execution,” predictive weighting of the vendor, CVSS score and the volume of community chatter around a given flaw in reference lists like Bugtraq. This kind of modeling was able to outperform historical rules with better coverage, twice the efficiency, and half the effort.

“We’ll never, of course, have perfect in vulnerability remediation. What we have to do is figure out where we are and then figure out how to get better,” says Jay Jacobs, chief data scientist and founder of Cyentia. “Being exploit-driven, I think, is one of the better approaches.”

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/vulnerabilities---threats/vulnerability-management/dont-roll-the-dice-when-prioritizing-vulnerability-fixes/d/d-id/1331811?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Kaspersky Lab to Move Some Core Operations to Switzerland

Most customer data storage and processing, software assembly, and threat detection updates will be based in Zurich.

Moscow-based Kaspersky Lab plans to relocate most of its core infrastructure and operations to Switzerland in a bid to allay concerns the company is vulnerable to Russian-government influence.

By the end of 2019, customer data storage and processing for most regions including the US and North America will be based in Switzerland. So too will most software assembly operations and threat detection updates, the security vendor said this week. Kaspersky Lab will arrange for all activity in its Switzerland facility to be supervised by an independent third party to ensure full transparency.

The move is part of a broader effort by Kaspersky Lab to reestablish market trust following accusations by the US government that the company is vulnerable to interference from Russian intelligence and the government in Moscow. The concerns are primarily tied to an incident where the AV firm allegedly collected classified data belonging to the US National Security Agency (NSA) from the computer of an NSA contractor.

Kaspersky Lab has said its AV software automatically collected a file containing source code for a secret NSA hacking tool as part of its usual malware analysis process. Kaspersky Lab has maintained that its AV technology flagged the file as potentially malicious and uploaded the software to its network for analysis. But the company quickly deleted the data after determining what it was, Kaspersky has claimed. Critics, meanwhile, accused the company of helping Russian intelligence steal the data as part of a broader and systematic data theft campaign.

The Trump administration last December formally banned US government agencies from using Kaspersky Lab’s range of antivirus and anti-malware tools. The ban, included in a broader spending bill, required all federal agencies to purge their systems of Kaspersky Lab software in 90 days.

The security vendor has sued the US government over the ban while also committing to make its operations more transparent to show it is not operating under Russian government influence. Last year, Kaspersky Lab announced the company would establish a total of three Transparency Centers worldwide from where it will carry out a majority of its operations under supervision by a trusted third-party. The company has also offered up its source code for third-party inspection under the transparency program.

The Switzerland center is the first of those transparency centers and demonstrates Kaspersky Lab’s commitment to openness, a spokeswoman says. “The Transparency Center will be created and operated by Kaspersky Lab and will serve as a facility for trusted responsible third-parties from both the public and private sectors to review and evaluate the source code of Kaspersky Lab software and software updates,” she said. Source code for public releases will be stored in Switzerland and will be available for independent review.

Assembly Tools

The security vendor’s new facility in Zurich will also host Kaspersky’s software build conveyor — a set of tools the company uses to assemble its anti-malware software. By the end of this year, Kaspersky Lab will start assembling all products and threat detection rule databases for worldwide use out of its Swiss center.

“A third party organization will have all necessary access to processes and products and will decide for itself what to review,” the spokeswoman said. The third party organization will be a non-profit entity that will be established independently for the purpose of producing professional technical reviews of Kaspersky Lab products. “On a regular basis the third-party organization will report publicly on its activities, and everyone will have an opportunity to access these reports,” she said.

The third-party overseer will have access to Kaspersky’s software development documentation, source code of publicly released products and access to the rules and databases the vendor uses for threat detection. Kaspersky Lab will also provide access to the source code of cloud services handling and storing data belonging to customers in North America, Europe and other regions.

Kaspersky Lab will continue to use the current software build conveyor in Moscow for creating products and AV bases for the Russian market.

Wesley McGrew, director of cyber operations at security consultancy Horne Cyber, says the measures that Kaspersky Lab is taking should help increase confidence among private businesses and individuals. But the vendor will still have its work cut out among potential government clients in the US and elsewhere.

“With competitors to choose from that haven’t had the same accusations placed against them, governments aren’t going to be quick to place trust back in Kaspersky,” McGrew predicts. A lot will depend on the extent and the type of the visibility that the independent observer will have over Kaspersky’s operations.

“The nature of antivirus software, with its high degree of privileged access to systems and networks, requires a lot of trust in the software, and how it is maintained and operated over time,” McGrew notes. “Oversight will need to be comprehensive across the entirety of Kaspersky operations to convince people of the lack of Russian government influence.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/kaspersky-lab-to-move-some-core-operations-to-switzerland/d/d-id/1331814?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Government Cybersecurity at a Crossroads

Trump reportedly kills cybersecurity coordinator position, while meantime many agencies continue to play catch-up in their defenses.

Amid a report today that the Trump White House plans to cut the administration’s cybersecurity coordinator position altogether, new data shows how US federal government agencies continue to struggle to close security holes in their software.

Politico reported that the administration has eliminated the White House cybersecurity coordinator position, recently vacated by Rob Joyce, who has returned to the National Security Agency. According to Politico, it obtained an email to the White House National Security Council staff from John Bolton aide Christine Samuelian stating that “The role of cyber coordinator will end” in an effort to “streamline authority” within the NSC, which already includes two senior cybersecurity directors.

As of this posting, there was no official word from the White House. But Sen. Mark Warner, D-Va., tweeted in response to the news report:

“Mr. President, if you really want to put America first, don’t cut the White House Cybersecurity Coordinator — the only person in the federal government tasked with delivering a coordinated, whole-of-government response to the growing cyber threats facing our nation.”

According to the US Department of Homeland Security’s newly published cybersecurity strategy, released today, cyber incidents reported to the DHS by federal agencies increased more than tenfold between 2006 and 2015, culminating in the 2015 Office of Personnel Management breach, which compromised the personal data of 4 million employees and 22 million people overall.

App Gap

So how are the feds doing security-wise to date? New software scan data from Veracode reflects a major element of security challenges for federal agencies: secure software. The scan data shows that federal agencies have the least secure applications of all industry sectors, with nearly half sporting cross-site scripting (XSS) flaws; 32%, SQL injection; and 48%, cryptographic flaws.
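The flaw classes Veracode cites are well understood. SQL injection, for instance, typically comes down to building query text from user input instead of parameterizing it. A minimal illustration (the table and data here are invented for the example):

```python
import sqlite3

# Toy database standing in for a federal app's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: the input is concatenated straight into the SQL text,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal string value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # injected OR clause returns all users
print(safe)        # no user is literally named "' OR '1'='1", so empty
```

The fix is a one-line change per query, which is part of why the persistence of these flaws in federal apps points more to process than to difficulty.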

Unlike most industries, just 4% of federal apps are scanned weekly, 21% monthly, 24% quarterly, and about half, less than quarterly, Veracode found.

In the first scan of an app, there were 103.36 flaws per megabyte of code.

“How often a customer is testing an app gives us an understanding of where they are” waterfall to DevOps-wise, says Chris Wysopal, CTO and co-founder of Veracode, which only handles nonclassified apps for the feds. With 4% of apps tested weekly by agencies, “that tells us there’s not a lot of DevOps or Agile going on,” Wysopal says.

Wysopal notes that the government has challenges that other industries do not. The technology sector not surprisingly fared best in Veracode’s security scans. “They are culturally completely different: technology is focused on building … process, tooling, languages and DevOps and automating things,” Wysopal says. “We see them building security in as part of the process of developing their software.”

Government, on the other hand, approaches software development more from an audit perspective, he says. “There’s huge room for improvement.”

Agencies tend to run older systems for longer periods of time, mainly due to long procurement processes and budget constraints, for instance. “The biggest determinant of whether software has vulnerabilities is how old it is,” he says.

Even this relatively dire data comes from agencies that are being proactive about their application security by opting for a scanning service, he notes. “Our data is slightly rosy” in that context of the government sector, he says.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/vulnerabilities---threats/us-government-cybersecurity-at-a-crossroads/d/d-id/1331815?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple