STE WILLIAMS

Cisco, ISARA to Test Hybrid Classic, Quantum-Safe Digital Certificates

The goal is to make it easier for organizations to handle the migration to quantum-safe cryptography before practical quantum computers become available.

Cisco Systems and security firm ISARA are collaborating on an initiative to test digital certificates that work with conventional public key cryptography as well as quantum-safe algorithms.

The goal is to demonstrate how a single digital certificate supporting multiple public key algorithms can reduce the costs and risks of migrating the public key infrastructure to quantum-safe algorithms.

The need for such measures stems, ironically enough, from the power of quantum computing itself: while it has the potential to enable a new generation of applications, it also has the ability to break current encryption schemes.

“Quantum computing allows us to efficiently solve the hard math problems underlying the public key cryptography we rely upon today for Internet banking, connecting to work remotely, and doing ecommerce,” says Mike Brown, CTO of ISARA. That fact necessitates new approaches to public key cryptography, he says.

Quantum computers are designed to harness the behavior of atoms and subatomic particles to handle computationally intensive applications — in areas like medicine — that are well beyond the capabilities of current generation computers.

Traditional crypto certificates that are used to authenticate digital transactions and IDs will not withstand attackers equipped with quantum computers. So, at least for the duration of the migration from classical to quantum-safe cryptography, digital certificates will need to support both environments.

“As a technology industry, we have been extremely successful at making the use of cryptography nearly ubiquitous,” Brown says. So successful in fact that cryptography has become integral to the plumbing of the Internet, he says.

“So that means migrating cryptography, and specifically authentication tools, will involve changes to nearly everything. For a company, this will be a multi-year IT project with all of the associated complexity.”

The approach on which Cisco and ISARA are collaborating uses dual-algorithm certificates, in which one algorithm provides security against quantum-capable attackers while the other maintains backwards compatibility with traditional environments.

To demonstrate the viability of the approach, Cisco and ISARA have set up a public server that uses so-called PQ hybrid X.509 certificates (PQ for post-quantum) to authenticate to Transport Layer Security (TLS) clients.
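Conceptually, a hybrid certificate carries two signatures over the same to-be-signed data: the usual classical one, plus a post-quantum one tucked into an extension that older clients ignore. The following Python sketch illustrates that verification logic only; the HMAC "signatures", field names, and keys are stand-ins for illustration, not Cisco's or ISARA's actual certificate format.

```python
import hashlib
import hmac

def toy_sign(key: bytes, msg: bytes) -> bytes:
    # Stand-in for a real signature algorithm (e.g. ECDSA or LMS);
    # HMAC is used purely to keep the example self-contained.
    return hmac.new(key, msg, hashlib.sha256).digest()

def make_hybrid_cert(tbs: bytes, classical_key: bytes, pq_key: bytes) -> dict:
    return {
        "tbs": tbs,  # the to-be-signed data
        "sig_classical": toy_sign(classical_key, tbs),
        # The PQ signature would ride in a non-critical X.509 extension,
        # so legacy clients can simply ignore it.
        "sig_pq": toy_sign(pq_key, tbs),
    }

def verify_legacy(cert: dict, classical_key: bytes) -> bool:
    # A pre-quantum client checks only the classical signature.
    return hmac.compare_digest(cert["sig_classical"],
                               toy_sign(classical_key, cert["tbs"]))

def verify_pq_aware(cert: dict, classical_key: bytes, pq_key: bytes) -> bool:
    # A quantum-aware client requires both signatures to verify.
    return (verify_legacy(cert, classical_key) and
            hmac.compare_digest(cert["sig_pq"],
                                toy_sign(pq_key, cert["tbs"])))
```

The key point is that one certificate serves both client populations: a legacy verifier and a PQ-aware verifier accept the same blob, which is what makes a gradual migration possible.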

“We use authentication to ensure that it was ‘you’ making that bill payment through your bank account online and that the amounts haven’t been tampered with,” Brown says. “Quantum-safe authentication is a way to accomplish that using mathematics that quantum computers can’t solve.”

Under the collaborative effort, ISARA is bringing its expertise in quantum-safe cryptography and PKI to work with Cisco and its Enrollment over Secure Transport (EST) system for issuing backwards-compatible hybrid certificates.

“The collaboration between Cisco and ISARA began with a focus on the use of a particular quantum-safe authentication scheme called LMS or Leighton-Micali Signatures,” Brown says. “The next phase will introduce support for additional algorithms.”
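LMS is a stateful hash-based signature scheme: its security rests on the hash function alone, which quantum computers are not known to break efficiently. As a rough illustration of the underlying idea, here is a toy Lamport one-time signature in Python (an ancestor of schemes like LMS, not LMS itself):

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    # Sign the 256-bit digest of the message, bit by bit.
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret from each pair, chosen by the message bit.
    # Each key pair must be used only once, hence "one-time".
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pk[i][bit]
               for (i, bit), s in zip(enumerate(msg_bits(msg)), sig))
```

LMS makes this practical by organizing many such one-time keys under a Merkle tree, so a single public key can sign many messages.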

Interop ITX 2018

Join Dark Reading LIVE for two cybersecurity summits at Interop ITX. Learn from the industry’s most knowledgeable IT security experts. Check out the security track here. Register with Promo Code DR200 and save $200.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/endpoint/cisco-isara-to-test-hybrid-classic-quantum-safe-digital-certificates/d/d-id/1331532?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Power Line Vulnerability Closes Air Gap

A new demonstration of malware shows that air-gapped computers may still be at risk.

Security professionals love to talk about the “air gap” as the ultimate in safety for a computer: When it’s not attached to network cables or a wireless network, it’s presumed to be safe. Presumed, that is, until now. This week, researchers from the Ben-Gurion University of the Negev announced that they have come up with a way to exfiltrate data from air-gapped computers via malware that can control the computer’s power consumption.

By adding workload to CPU cores that aren’t doing anything else, the malware will change how much power (how many watts) the computer is using. Done carefully, the result is, essentially, an FM transmission over the power line. When a probe is placed near the power cable, the modulation can be detected and decoded — and information will have left the system.
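The principle can be sketched as simple on-off keying: busy-loop to raise the CPU's power draw for a 1, idle for a 0. This toy Python sketch uses hypothetical timings and a far cruder modulation than the researchers' actual technique; it only illustrates the idea.

```python
import time

def bits_of(data: bytes):
    # Most-significant bit first.
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def load_schedule(data: bytes, bit_period: float = 0.1):
    """Map each bit to (busy?, duration): busy-loop for a 1, idle for a 0."""
    return [(bit == 1, bit_period) for bit in bits_of(data)]

def transmit(data: bytes, bit_period: float = 0.01) -> None:
    # Busy-looping drives the CPU's power draw up; sleeping lets it fall.
    # A probe on the power cable could demodulate this back into bits.
    for busy, duration in load_schedule(data, bit_period):
        deadline = time.monotonic() + duration
        if busy:
            while time.monotonic() < deadline:
                pass
        else:
            time.sleep(duration)
```

Running this on several otherwise-idle cores at once, as the researchers describe, strengthens the signal on the power line.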

The researchers call the malware that controls the power consumption PowerHammer; so far, it’s a research proof-of-concept that hasn’t been seen in the wild. That’s good, because while ways to thwart a PowerHammer-like attack exist, none are perfect.

PowerHammer isn’t the first time control or information signals have been sent over power lines. Electric motors are frequently controlled via pulse-width modulation (PWM) sent over the power lines, building control systems have used power-line carriers, and some electrical utilities have experimented with broadband internet access over power lines. This is, however, a reminder that capabilities can be used by individuals and groups with many different agendas.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/power-line-vulnerability-closes-air-gap/d/d-id/1331534?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fake Hillary porn just the tip of Russia’s Reddit penetration

A fake porn video that claimed to show Hillary Clinton engaging in a sex act has been traced back to a Reddit account which the platform acknowledged on Tuesday is linked to a Russian troll farm.

The account, u/rubinjer, was banned but is being kept up for the time being for purposes of transparency, Reddit said. The account was used to post pro-Trump, racially divisive, anti-Clinton messages.

The fake porno was titled “This is How Hillary gets black votes.” It linked to an animated gif that NBC News said was still available on the platform as of Tuesday. Links to the video and gif have now been deleted, according to the BBC.

NBC News said that the same faux gif was posted five times to PornHub under the name “Leaked Hillary Clinton’s Hotel Sex Tape with Black Guy,” and also onto the porn site SpankBang.

NBC News reports that it had been viewed more than 250,000 times on PornHub.

Over the past month, Reddit has been investigating how its platform might have been used to spread Russian propaganda.

When it posted its yearly transparency report on Tuesday, Reddit said that so far, it’s found and removed a few hundred accounts that are suspected of having come from the Russian propaganda factory called the Internet Research Agency (IRA).

Specifically, it’s found 944 suspicious accounts, which it listed here.

Out of all the 944 suspicious accounts, the rubinjer account was the IRA’s most popular: it had 99,493 upvotes at the time Reddit closed it down.

In Tuesday’s post, Reddit’s CEO Steve Huffman said this is how the platform is identifying Russian propaganda accounts:

There were a number of signals: suspicious creation patterns, usage patterns (account sharing), voting collaboration, etc. We also corroborated our findings with public lists from other companies (e.g. Twitter).

As for the fake Clinton sex video, its existence was first revealed when IRA whistleblower Alan Baskaev told the Russian television channel TV Rain about it. He said that the troll factory had hired a black man and a Hillary Clinton look-alike to make the sex tape. It wasn’t until Reddit posted the list of suspended accounts that the link to Russian trolls was credibly made.

The Reddit community didn’t fall for much of this stuff. Huffman said that one heartening aspect turned up by the investigation is the community’s clear-eyed skepticism: moderators banned more than half of the Russia-linked accounts, and a majority of the posts, “before they got any traction at all.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0sUEbnGvJ_g/

Interview: Sarah Jamie Lewis, Executive Director of the Open Privacy Research Society

This article is an interview with Sarah Jamie Lewis, Executive Director of the Open Privacy Research Society, a new privacy advocacy and research non-profit based in Vancouver, Canada.

Its goal is to make it easier for people, especially marginalized groups (including LGBT persons), to protect their privacy and anonymity online by helping app and technology firms more easily build privacy-by-default services via open source software that they’re spearheading.

We asked Sarah a few questions about the Open Privacy Research Society and the state of privacy in tech in general, and have reprinted her responses in full below.

What was the impetus for this project?

Last year I published a book, Queer Privacy, a collection of essays written by people in queer and trans communities. While all the essays were ostensibly about technology, they cover broad topics like coming out, dating, sex work, intimate partner violence, and even death and media representation.

It was a hard project to work on, but my goal was to finally start documenting how modern technology fails to protect the privacy, or uphold the consent, of marginalized people.

I’m not a fan of simply documenting though, and it’s no coincidence that Open Privacy emerged roughly a year after I finished the first cut of Queer Privacy.

I have had a year to sit and think about the kinds of technology we need to build, as well as the kind of organization we need to ensure that technology exists. And I’ve also had a year to find some amazing people to work with me and help guide that.

Can you give some examples of things that you saw and why they were problematic?

Real-name policies are one example; forced account correlation is another. For instance, it is common for LGBT people in certain parts of the world to have two Facebook accounts: a regular one for family and work, and an “out” one which they use for dating and meeting other LGBT people.

These people have to go through a lot of effort to make sure that those accounts stay separate/unidentifiable, with one of the major risks being their “out” account showing up on a “people you may know” sidebar for one of their family members, and thus potentially outing them.

We have recently seen issues even in LGBT-focused apps like Grindr, which shared location data insecurely and disclosed users’ HIV status to third-party platforms. This kind of non-consensual data sharing is something that should be impossible.

A key phrase in the Open Privacy Research Society mission statement is “We believe that moral systems enable consent.” Can you elaborate a bit on this?

We believe that it is possible to use technology to protect fundamental human rights. It is not just enough to give people the building blocks e.g. encryption – we need to build systems that actively help them achieve their goals and protect them from harm.

We define moral systems as those that protect people by default and are built to withstand abuse by those with malicious intent. We believe in systems that distribute power and resist attempts to centralize it.

Our priorities right now are on making metadata resistant communication platforms usable. We have good tools for protecting the content of communications, but we believe we can do better.

Technology shouldn’t be able to collect information on who someone is talking to, when they [are talking], where they are talking from etc. This kind of communications metadata is pervasive and enables corporations and governments to build surveillance and censorship systems.

We are working on an open protocol (Cwtch) that makes that kind of metadata collection impossible and allows us and others to build messaging apps, discussion forums, advertising boards or any other imaginable application in a way that is privacy-preserving in the truest sense of the word.

Cwtch is based on Ricochet which uses Tor onion services to provide peer-to-peer instant messaging without third parties. There is no one in a position to take data without consent because the only data shared is between you and the person you are talking with. Everything is as private as can be and metadata is kept as small as technically possible.

We’ve been working on a way to extend this concept to groups (which will be version 1 of Cwtch), and then eventually to higher level applications. The idea being that the data is controlled by you, and the only time you give it away is either directly to another person you trust or via privacy-preserving structures that only you and other people you trust have access to.

Why has so much of our tech failed to protect marginalized communities?

Quite frankly it’s because catering to the protection of marginalized communities is not aligned with the incentives of modern surveillance capitalism. There are countless examples of social networks, messaging apps, and other tools placing marginalized people at risk through simple ignorance.

What should organizations do to build better tools and privacy controls for all their users?

I think we have to give people tools and involve them in the research necessary to produce those tools. Too often we have produced technically brilliant tools that are unusable by those with limited time to devote to learning them. We have entire movements centered around training people how to use these tools; that is unsustainable.

“Nothing about us, without us” isn’t just a catchy saying; it is a reminder that when we build technology we must involve as many voices as possible, from as many communities as possible. Only with those voices can we hope to build technology that protects the most vulnerable and the most marginalized.

Your career began in the context of government work and at a large corporation (Amazon), both of which are known for tracking citizens/consumers. What is it that caused you to not just walk away from that kind of work but choose to fight against it?

I believe that when you make a mistake, regardless of your intent, you must work to undo the damage you have caused. I helped build surveillance systems when I was early in my career. It is something I regret doing, and I think the only way to reduce the harm done by those actions is to build new systems and structures that resist censorship and surveillance. Systems and structures that enable consent. That’s what Open Privacy is and will be.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RP0YqyyMF0s/

Instagram bends to GDPR – a “download everything” tool is coming

Following criticism about lack of data portability – unlike parent Facebook, it doesn’t have a Download Your Data tool – Instagram now says it’s building a tool to let users download everything they’ve ever shared.

Everything, as in everything? We’re still waiting to hear details.

An Instagram spokesperson told TechCrunch that the new tool – available “soon” – will enable users to download a copy of their photos, videos and messages. What’s not clear yet is if the tool will also enable users to export following and follower lists, Likes, comments, Stories, and the captions they put onto posts.

Nor was it clear what quality the downloadable photos and videos will have: will they export in the high resolution at which they were uploaded or displayed, or will they come through compressed?

Hang tight, Instagram told TechCrunch: more details are coming soon.

We’ll share more details very soon when we actually launch the tool. But at a high level it allows you to download and export what you have shared on Instagram.

If the tool launches by 25 May, it will help Instagram to comply with the European Union’s upcoming General Data Protection Regulation (GDPR) privacy law, which requires data portability.

The new law requires that individuals be able to demand deletion of data, to opt out of future data collection, to view what personal data a company holds, and to download that data in a format that they can move to competitors.

Not that there’s a lack of tools to get data out of Instagram now. Digital Trends lists the tools and techniques that people have used to extract their Insta-goodies. But the third-party apps that get the job done aren’t necessarily safe, and there’s no guarantee they’ll play nice with your data or get everything. For one, you have to hand over not just your content, but also your Instagram login.

The Guardian suggests that we’ll be seeing a flurry of similar announcements as the clock ticks GDPR-ward. This is the first time we’ve heard from Instagram on the subject, but we heard plenty from Facebook CEO Mark Zuckerberg this week when it comes to GDPR privacy tools.

On Wednesday, Zuckerberg told the House Energy and Commerce Committee that the GDPR changes Facebook’s making will be made available to all users worldwide.

That goes not just for the same privacy controls, Zuckerberg said. Facebook will also provide the same European-level, GDPR-required data protections and disclosures to Americans.

Well, kinda. Maybe.

It was a clear “yes” when Rep. Gene Green asked the CEO if he would “commit today that Facebook will extend the same protections to Americans that Europeans will receive under the GDPR?”

But that “yes” was a bit more ambiguous during various points in the marathon questioning sessions.

When Rep. Janice Schakowsky asked if all the rights required under the GDPR will be applied to Americans as well, Zuckerberg reverted to describing privacy controls the company is adding.

Congresswoman, the GDPR has a bunch of different important pieces. One is offering controls over – that we’re doing. The second is around pushing for affirmative consent and putting a control in front of people that walks people through their choices. We’re going to do that, too… We’re going to put a tool at the top of people’s apps that walks them through their settings…

He was similarly ambiguous when testifying before the Senate. That just might have something to do with the big, bold letters on his crib sheet, which was photographed by the AP and which read:

GDPR (Don’t say we already do what the GDPR requires).


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bntk_B2DPR4/

The ransomware that says, “I don’t want money” – play a violent game instead!

Thanks to Simon Porter of SophosLabs for his behind-the-scenes work on this article.

Not all ransomware is made equal.

To be clear, we’re not for a moment suggesting that any form of ransomware is technically, ethically, morally or legally acceptable.

After all, ransomware is guilty of unauthorised access as soon as it reads your files, and of the more serious crime of unauthorised modification as soon as it overwrites them.

Worse still, most ransomware follows up those offences with the yet more odious crime of demanding money with menaces – what is known on the street as blackmail, extortion, standover, or plain old criminal b*****dry.

But it’s Friday the Thirteenth today, historically the “day of madness” for computer virus writers, so we thought we’d feature a recent ransomware sample with an unusual twist.

This one explicitly and unusually says, “I don’t want money.”

Instead, the PUBG Ransomware has a weirder aim: to get you to play a recently released online game called PLAYERUNKNOWN’S BATTLEGROUNDS, or PUBG for short.

Sophos products proactively detected this malware as Mal/Genasom-A.
The sample used to prepare this article has the SHA256 hash: 3208efe96d14f5a6a2840daecbead6b0f4d73c5a05192a1a8eef8b50bbfb4bc1

PUBG is a game of the “last player standing” sort, a genre based on an ultra-violent, dystopian and unsurprisingly controversial Japanese novel of 1999 (made into a film in 2000) called Battle Royale, in which adolescent schoolchildren are forced to fight to the death under the terms of a government law known as the BR Act.

Edifying stuff, indeed.

Anyway, the malware author wants you to play PUBG, offering to unscramble your files once you’ve clocked up an hour of time in the game.

Your files is encrypred by PUBG Ransomware!
but don't worry! It is not hard to unlock it.
I don't want money!
Just play PUBG 1Hours!

In theory, this means buying a copy of the game (it’s currently £26.99 in the UK) and installing the software, but the ransomware doesn’t make any effort to take a slice of your purchasing pie.

There’s no download link, affiliate code, keylogger, credit card sniffer or other malware mechanism by which the author could sneakily take advantage of your purchase, assuming you didn’t have the game already.

Quite why he chose PUBG, and what he’s hoping to achieve by urging you to play it, is a mystery.

In practice, there’s no need to buy the game at all, because the malware detects that you are “playing” simply by monitoring the list of running apps for a program called TSLGAME.EXE, which is the name of the file you launch to start the PUBG game. (No, we don’t know what TSL stands for.)

So you can rename any handy utility to TSLGAME.EXE, run it, and the malware will assume you have obeyed its instructions to play the game.
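That check can be reproduced in a few lines. The sketch below (in Python, mirroring the behaviour described above rather than the malware’s actual code) shows why renaming any utility is enough:

```python
import subprocess
import sys

def running_process_names() -> set:
    """Names of currently running processes, upper-cased."""
    if sys.platform == "win32":
        # First column of tasklist output is the image name.
        out = subprocess.check_output(["tasklist"], text=True)
        names = {line.split()[0] for line in out.splitlines() if line.strip()}
    else:
        out = subprocess.check_output(["ps", "-A", "-o", "comm="], text=True)
        names = {line.strip() for line in out.splitlines() if line.strip()}
    return {n.upper() for n in names}

def looks_like_playing(names: set) -> bool:
    # The malware's entire "are you playing PUBG?" test: a name match.
    return "TSLGAME.EXE" in names
```

Any process whose name matches passes the test, which is precisely the loophole described above.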

The malware shows you a counter so you can keep track of how many seconds you’ve been playing, but instead of waiting for you to clock up 3600 seconds of game time (that’s 60 minutes’ worth of 60 seconds, or one hour), it decrypts your data after just three seconds.

Mostly harmless?

We’re assuming that the author of this malware – we don’t know who they are, but they left the username Ryank inside the compiled code, for what that’s worth – intended this as a rather sleazy and slightly risky joke.

Indeed, at first sight, you might be inclined to dismiss this sort of malware as “mostly harmless”, because it includes a built-in decryptor.

Also, it uses a hard-coded encryption process (AES in CBC mode with the key GBUPRansomware) so that you, or perhaps a technically-inclined friend, could probably knit your own recovery tool if all else failed.

Nevertheless, programs like PUBG Ransomware simply aren’t acceptable: it’s not up to someone else to take any sort of unauthorised risks – no matter how carefully calculated or cautiously programmed – with your data.

For instance, a bug or an unexpected error condition in the encryption or decryption code could have disastrous side-effects, not least because this malware simply ignores most run-time errors, and ploughs on regardless if something goes wrong.

The risk of data corruption caused by badly written and inadequately tested code is obvious.

Add to the equation that this particular badly-written code is acting without authorisation, and comes from an anonymous author who can’t be contacted for support or otherwise held to account if your data goes down the drain…

…and you will realise why malware is still malicious even if it isn’t overtly about money.

What to do?

If you’re a hobbyist coder looking to have some programming fun…

…avoid the temptation to muck about with malware.

Find an online coding community that you can contribute to openly and be proud of taking part in.

There are loads of open source projects that would love to have you if you are willing to play by the rules.

Don’t let yourself get sucked into writing malicious software that you’ll spend the rest of your life hoping no one finds out that you were part of.

Learn more

Ransomware that openly proclaims it’s not interested in money is very rare.

Most ransomware is all about money – your money, paid over to the crooks to get your data back.

Why not read our guide to staying ahead of the cybercriminals?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kwFyC-XtpfY/

Great Western Railway warns of great Western password reuse: Brits told to reset logins

Great Western Rail is urging all customers to change their GWR.com passwords after miscreants gained access to strangers’ accounts over the last week.

The British train company said circa 1,000 accounts were directly affected out of more than a million, and has written to those customers and the UK Information Commissioner’s Office.

It appears scumbags took username and password combinations leaked from other hacked websites and services, and used those to log into GWR.com accounts that had reused those credentials. This is a common attack known as credential stuffing.
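In code, credential stuffing amounts to nothing more than replaying leaked username-and-password pairs against another site. A minimal sketch with entirely hypothetical data:

```python
def stuffing_hits(leaked_pairs, site_credentials):
    """Leaked (username, password) pairs that also unlock accounts here."""
    return sorted(set(leaked_pairs) & set(site_credentials))

# Hypothetical data: one user reused a leaked password.
leaked = {("alice@example.com", "hunter2"), ("bob@example.com", "letmein")}
site = {("alice@example.com", "hunter2"), ("carol@example.com", "s3cret")}
at_risk = stuffing_hits(leaked, site)
```

The matches in the intersection are the accounts at risk, which is why using a unique password per site defeats the attack outright.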

“We are now asking other account holders to do the same as a precaution against potential further attempts,” GWR told The Register.

“This kind of attack uses account details harvested from other areas of the web to try and catch out consumers with poor password habits. Sadly, it is the kind of attack that is experienced on a daily basis by businesses across the globe, and is a reminder of the importance of good password practice.

“We have acted quickly and decisively with our partners to protect our customers’ data, and have taken clear steps to stop it happening again.”

In a general email to account holders, GWR said it has reset all GWR.com passwords as a precaution: “To ensure the security of your personal information you will need to do this when you next log in to the GWR.com website.

“You should use a unique password for each of your accounts for maximum security, and we recommend you review all your online passwords and change any that are the same.”

However, some customers who received the email were concerned the note may have been sent by scammers.

The Register has asked GWR for further comment. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/11/great_western_rail_password_reset/

From Bangkok to Phuket, they cry out: Oh, Bucket! Thai mobile operator spills 46k people’s data

TrueMove H, the biggest 4G mobile operator in Thailand, has suffered a data breach.

Personal data collected by the operator leaked into an Amazon Web Services S3 cloud storage bucket. The leaked data, which includes images of identity documents, was accessible to world+dog before the mobile operator finally acted to restrict access to the confidential files yesterday, 12 April.
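Leaks of this kind usually come down to a bucket ACL that grants read access to S3’s global AllUsers group. A hedged sketch of the check, using grant structures shaped like the output of S3’s GetBucketAcl API (the example ACLs and IDs are hypothetical):

```python
# URI that S3 uses to denote "everyone on the internet".
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_readable(grants) -> bool:
    """grants: a list shaped like the 'Grants' field of S3's GetBucketAcl."""
    return any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS
        and grant.get("Permission") in {"READ", "FULL_CONTROL"}
        for grant in grants
    )

# Hypothetical ACLs: one world-readable bucket, one private.
leaky = [{"Grantee": {"Type": "Group", "URI": ALL_USERS},
          "Permission": "READ"}]
private = [{"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
            "Permission": "FULL_CONTROL"}]
```

A periodic audit running a check like this against every bucket in an account would have flagged the exposure long before a researcher did.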

The issue was uncovered by security researcher Niall Merrigan, who told us he had tried to disclose the problem to TrueMove H, but said the mobile operator had been slow to respond.

The researcher told El Reg that he’d uncovered around 46K records that collectively weighed in at around 32GB. Merrigan attempted to raise the issue with TrueMove H, but initially made little headway beyond an acknowledgement of his communication.

When he asked for the contact details of a security response staffer, representatives of the telco initially told him to ring its head office; some two weeks later, after El Reg began asking questions on the back of Merrigan’s findings, they told him his concerns had been passed on.

In the meantime, other security researchers have validated his concerns.

“There were lots of driving licences and I think I saw a passport,” said security researcher Scott Helme. “I guess they have to send ID for something and the company is storing the photos in this bucket, which can be viewed by the public.”

El Reg approached TrueMove H about the incident. The mobile operator responded last month with a holding statement saying that it was investigating the matter, and we hung fire on publication until the data was no longer public-facing.

Please kindly be informed that this matter has been informed to a related team for investigation. If they have any queries or require any further information from you, they will contact [you] later.

Merrigan said the exposed data was still available up until yesterday, when it was finally made private, allowing the security researcher to go public with his findings. A blog post by Merrigan that explains the breach – and features redacted screenshots of the leaked identity documents – can be found here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/13/thai_mobile_operator_data_breach/

Federal Agency Data Under Siege

Seventy-one percent of IT security professionals in US federal agencies have reported breaches in their organizations.

The US government continues to grapple with the same cybersecurity challenges faced by most organizations, but it has a different set of hurdles to overcome than its private-sector counterparts. As a result, federal agencies are experiencing more data breaches than other industry sectors. Despite skyrocketing IT security spending, successful attacks are escalating across the board. Federal agencies in particular are weathering a perfect storm around data that puts agency secrets — and the personal data of over 330 million American citizens — at risk.

According to Thales’ 2018 Data Threat Report—Federal Government Edition, 57% of federal respondents reported data breaches, a threefold increase over the 18% recorded back in 2016. As many as 12% experienced multiple breaches in 2017 and in previous years.

Many agencies are in a difficult position. Federal agencies must protect sensitive data, thwarting both criminals hunting for citizens’ private data and nation-state hackers pursuing their own agendas, all while grappling with perennial underfunding, understaffing, and antiquated systems that commercial enterprises tossed into the dumpster years ago. At the same time, they need to make government more accessible and transparent via digital transformation, which inevitably exposes them to more cyber threats.

But these factors don’t completely explain the growing numbers of breaches at federal agencies.

Catching Up with the Private Sector
Despite these troubles, agency IT security professionals are trying to stay positive, partly because spending is sharply increasing this year. “Like most other sectors, data security spending plans in the US federal sector are up compared to last year — way up,” says Garrett Bekker, 451 Research’s principal analyst for information security, as highlighted in the Thales report. “Perhaps more importantly, for the first time, the US federal government ranks the highest of any US vertical in terms of spending increase plans — more than nine out of 10 (93%) plan to increase security spending in 2018.”

In fact, a staggering 73% of federal agencies say their IT security spending will be much higher in 2018, according to the report. This comes after several years of IT security spending well below that of commercial enterprises.

“The bad news is that reports by US federal respondents of successful breaches last year (57%) are far ahead of the global average (36%), and also the global federal sector (26%). Further, 70% of US federal respondents say their agencies were breached at some point in the past,” says Bekker.

Digital Transformation Compounds the Problem
As in the private sector, digital transformation is a big cause of the data threats plaguing federal agencies. According to the report, an increasing number of federal agencies are adopting cloud services, with many operating multi-cloud environments at rates that outstrip even those in the private sector. A staggering 45% of federal agencies use five or more infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) providers, as opposed to just 20% in the private sector. Nearly half (48%) of federal agencies use more than 100 software-as-a-service (SaaS) applications, where data is harder to control, versus the global average of 22%.

However, a paltry 23% of federal agencies use encryption in the cloud — and in more than a third of all cases where encryption is applied (34%), the encryption keys are in the hands of the cloud provider. “US and global federal show preference for allowing cloud providers to control encryption keys,” says Bekker. “This is a potential problem since they don’t really have full control over their data if they don’t control the keys.”
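Why key control matters can be seen in a minimal sketch: if the agency encrypts data client-side and keeps the key, the cloud provider stores only ciphertext it cannot read. The snippet below is a toy illustration in Python (the XOR keystream is for demonstration only; a real deployment would use an audited AEAD cipher such as AES-GCM), not a description of any specific agency's practice.

```python
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy repeating-keystream cipher, for illustration only.
    # Production systems should use an audited AEAD cipher (e.g. AES-GCM).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The agency generates and retains the key; the provider never sees it.
key = secrets.token_bytes(32)
record = b"SSN=123-45-6789;name=Jane Doe"

ciphertext = xor_stream(record, key)   # only this is uploaded to the cloud
assert ciphertext != record            # provider sees opaque bytes
assert xor_stream(ciphertext, key) == record  # keyholder recovers plaintext
```

If instead the provider generates and holds the key, it (or anyone who compromises it) can decrypt the data — which is exactly the loss of control Bekker describes.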

Strengthening Cyber Resilience
To keep the government’s digital initiatives alive and strengthen cyber resilience, agencies report — at rates of 77% or higher — that they will be implementing, or are planning to implement, better encryption technologies to protect sensitive data. This includes data masking (89%), database and file encryption (88%), encryption in the cloud (84%), and application layer encryption (77%).
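Data masking, the most widely planned technique in that list, replaces sensitive values with innocuous placeholders so that datasets remain usable for testing or analytics. A minimal sketch (the SSN pattern and record format here are hypothetical, chosen only for illustration):

```python
import re

def mask_ssn(text: str) -> str:
    # Replace all but the last four digits of US-style SSNs (hypothetical
    # 3-2-4 dashed format) so records stay identifiable but not sensitive.
    return re.sub(r"\b\d{3}-\d{2}-(\d{4})\b", r"***-**-\1", text)

row = "Applicant 4471: SSN 123-45-6789, cleared"
print(mask_ssn(row))  # Applicant 4471: SSN ***-**-6789, cleared
```

Unlike encryption, masking is typically irreversible, which makes it suitable for sharing data with environments that never need the original values.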

However, each IaaS and PaaS deployment and environment needs a specific data security plan, enforced by policy, operational methods, and tools. Agencies clearly recognize the need for action, but they must rethink their priorities. Case in point: data-in-motion and data-at-rest defenses are ranked nearly equally, at 78% and 77% respectively, as the most effective tools for protecting data, according to the report. Unfortunately, this isn’t where IT security spending is being directed. In fact, data-at-rest defenses — which are the most effective at protecting large data stores — are seeing the lowest spending increases, at only 19%, while endpoint and mobile defenses are garnering the biggest increases (56%).

Says Bekker: “The largest amount of respondents plan to increase spending on endpoint and mobile devices, despite ranking endpoint and mobile devices as least effective at protecting sensitive federal data — a major disconnect.”

Governments must rethink their priorities. The adoption of digital technology (cloud, Internet of Things, big data, mobile payments, etc.) requires new approaches to protecting citizen data, government secrets, and other sensitive information. In the digital world, there is no room for breaches, outages, or even service interruptions. Customers expect an instant, seamless, and hassle-free user experience. In times of digitalization, the competition is just one click away, and even reduced availability can cause financial harm.

Besides encryption technology, firewalls, and intrusion-detection systems, a distributed denial-of-service (DDoS) mitigation solution can help prevent service outages. Especially as the IoT matures and billions of devices come online, the threat landscape is evolving fast. Technologies such as artificial intelligence pose an additional threat to organizations, as they can be used maliciously to amplify cyberattacks such as DDoS attacks.

Thus, it’s essential for federal agencies to constantly review their cyber capabilities and make adjustments where necessary. Relying solely on traditional, on-premises security solutions is simply not sufficient given the rapid pace of technological change in the digital revolution.

Interop ITX 2018

Join Dark Reading LIVE for two cybersecurity summits at Interop ITX. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop ITX 2018 agenda here.

Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across …

Article source: https://www.darkreading.com/attacks-breaches/federal-agency-data-under-siege/a/d-id/1331467?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Former Airline Database Administrator Sentenced for Hacking Reservation System

Former PenAir IT staffer gets five-year probation sentence via plea deal.

Under a plea deal, a former PenAir database administrator will serve a five-year probation sentence for hacking the airline’s Sabre ticketing and reservation database.

Suzette Kugler, 59, of Desert Hot Springs, California, hacked into PenAir’s system between April and May 2017, following her dismissal from the airline in February 2017 and apparently in retaliation for her firing. She used her insider knowledge of the database system to create phony privileged employee credentials, which she used to destroy critical data and to prevent airline employees from booking, ticketing, modifying, and boarding passengers during the attack.

US District Judge Sharon L. Gleason sentenced Kugler to five years of probation, 250 hours of community service, and ordered her to pay PenAir $5,616 for damages to the airline.

Read more here

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/former-airline-database-administrator-sentenced-for-hacking-reservation-system/d/d-id/1331530?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple