STE WILLIAMS

Surprise! Student receives $36,000 Google bug bounty for RCE flaw

What’s the only thing better than a bug bounty cheque? A bug bounty cheque you weren’t expecting.

In the case of an 18-year-old student researcher at Uruguay’s University of the Republic in Montevideo, the cheque was to the tune of $36,337, awarded by Google for finding a surprisingly big hole in the security of its App Engine (GAE) cloud platform.

The story began when the researcher gained access to GAE’s restricted non-production environment earlier this year and found it was possible to rummage around in the platform’s internal and hidden APIs.

Google is not in a hurry to document this to outsiders, which made searching for vulnerabilities of any size a question of trial and error. This made the ease with which it was possible to find and interact with some of these APIs even more surprising.

Inside GAE’s deployment environment, the dangerous vulnerability turned out to be in one service, “app_config_service”. This proved significant because commands sent to it:

Allowed me to set internal settings such as the allowed email senders, the app’s Service Account ID, ignore quota restrictions, and set my app as a “SuperApp” and give it “FILE_GOOGLE3_ACCESS”.

In response to this revelation, someone at Google “bumped up the severity”, which raised its bug bounty value. However, Google’s bounty assessors added in an email:

Please stop exploring this further, as it seems you could easily break something using these internal APIs. When issuing a reward, we’ll take into account what you could have achieved if you wanted to.

A second email on 13 March confirmed the unexpectedly large bounty, and for good reason. Writes the student discoverer:

I was not aware until then that this was regarded as Remote Code Execution (The highest tier for bugs), it was a very pleasant surprise.

This means that an attacker could, in theory, bypass the researcher’s fiddling in Google’s API innards and go straight to this vulnerable service from a network or the internet, assuming they knew about it.

Google’s alarm at that is not surprising because an RCE of a GAE API could get very messy. The company has now fixed the issue.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uQbeCki3Ju8/

The State of Information Sharing 20 Years after the First White House Mandate

Finally! Actionable guidance for ISACs and enterprises on what threat intel to share, how to share it, and which key technologies will automate redaction and protect privacy.

Much has been made of the need to share information among companies since President Clinton signed Presidential Decision Directive 63 exactly 20 years ago today, on May 22, 1998. Commonly referred to as PDD-63, the directive called for the creation of information sharing and analysis centers (ISACs) for critical sectors of the economy. President Obama widened the aperture to include other constituencies that desired to work together, including small businesses, sports organizations, and Internet of Things communities. Congress stepped up and passed the Cybersecurity Act of 2015, which clarified what information can (and cannot) be shared and relieved concerns about liability and antitrust.

But even with all of this activity, progress has been very slow. Robust organizations like the FS-ISAC have been established to address sharing within the financial sector, but most organizations would agree that we have struggled with the “what, when, and how” of information sharing. In fact, the use of the word “sharing” in cybersecurity has almost become pejorative. Very basic questions have surfaced within small and large organizations, such as, “How do I decide what to share?” “Do I only need to share information after a breach?” “How do I share securely?” and perhaps most importantly, “What value will I receive in return?”

Prior to the anniversary of PDD-63, the Cloud Security Alliance (CSA) with little fanfare made a significant contribution to enabling the free flow of sharing by releasing a research paper on its experiences in operating the Cloud Cyber Incident Sharing Center (C-CISC). The organization’s work started nearly two years ago when Jim Reavis, CEO of CSA, started a voluntary exchange among member companies to share data. CSA member experiences yielded some straightforward lessons that can be adopted by ISACs and individual organizations alike.

Fixing a Broken Information Sharing Process
First, we must acknowledge there are vast differences between legacy information sharing systems and what organizations should look for today. The working group discovered that many organizations would hold data until after a breach was confirmed, which is of little value to others seeking to prevent a similar attack. Most data was being shared through noisy email listservs, and the review and approval process for sharing data was burdensome, resulting in reports that lacked proper context.

Through trial and error, CSA discovered what to share and how to share it, and identified key technologies to automate redaction and protect privacy.

The Hardest Part Is Getting Started
CSA’s working group also found that the majority of enterprises it encountered wanted to participate in a threat intelligence exchange but didn’t know where to start. The recommended path is straightforward: begin by leveraging events generated by security information and event management systems or other tools that require review by an analyst; then gather event data, with context, into a secure repository; and finally exchange data with others using automated redaction tools.

CSA learned that most organizations did not have the means to see all of their suspicious event data in a common repository. In some cases, organizations were using multiple case management or orchestration tools that did not allow for easy correlation or real-time chronology of event data. The CSA guidance advises selecting a system that gives the user immediate feedback and is extensible, letting you choose what to share and with whom.

CSA’s research paper includes other useful guidance on developing supporting security knowledge management policies. It also helps organizations think about evolving toward mature cyber intelligence knowledge management, rather than the purely reactionary threat intelligence practiced in the wake of the breaches against Target and others several years ago.

Twenty years is far too long to wait for such guidance, but it has arrived just in time. You can download the paper here.

Paul Kurtz is the CEO and cofounder of TruSTAR Technology. Prior to TruSTAR, Paul was the CISO and chief strategy officer for CyberPoint International LLC where he built the US government and international business verticals. Prior to CyberPoint, Paul was the managing partner … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/the-state-of-information-sharing-20-years-after-the-first-white-house-mandate/a/d-id/1331849?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

LA County Nonprofit Exposes 3.2M PII Files via Unsecured S3 Bucket

A misconfiguration accidentally compromised credentials, email addresses, and 200,000 rows of notes describing abuse and suicidal distress.

Los Angeles County 211, an LA-based nonprofit providing information and referrals for health and human services in LA County, accidentally exposed 3.2 million files through a misconfigured Amazon Web Services S3 storage bucket, the UpGuard Cyber Risk Team reported this week.

UpGuard discovered the bucket on March 14, 2018. While not all files were publicly downloadable, those that were downloadable included Postgres database backups and CSV exports of the data, which contained thousands of rows of personal information. UpGuard reached a member of the security team on April 24; soon after, the bucket was no longer accessible.

Downloadable files included access credentials for employees operating the 211 system, email addresses for contacts and registered sources of LA County 211, and most concerningly, call notes pertaining to conversations with people in need. Notes include the reasons for calls and the personally identifiable information of people reporting problems, and in some cases, reported abusers. Within three million rows of call logs are 200,000 rows of notes describing abuse and suicidal distress, raising major privacy concerns.

It’s not the first time an organization has compromised data in a misconfigured S3 bucket; however, the LA County 211 incident is significant because of the population it serves. The nonprofit provides badly needed services to hundreds of thousands of people each year. All types of reports are collected in a single database, which makes sense from a functional perspective but simultaneously creates a “crown jewel” of data for attackers, says UpGuard.
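For teams wanting to check their own buckets, the misconfiguration UpGuard keeps finding is usually visible right in the bucket’s access control list. Below is a minimal, illustrative Python check for ACL grants to Amazon’s “everyone” groups; the grant list here is invented for the example, but the group URIs are the standard S3 identifiers that mark a bucket as world-readable.

```python
# Illustrative only: flag the classic S3 misconfiguration described above.
# The grant structure mirrors what S3's GetBucketAcl API returns; the
# sample grants below are made up.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def public_grants(grants):
    """Return grants that expose a bucket to everyone (or to any
    AWS account holder, which is nearly as bad)."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("URI") in (ALL_USERS, AUTH_USERS)
    ]

# A grant list like the one a leaky bucket would carry:
grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
]
risky = public_grants(grants)
print(len(risky))  # 1 -- the AllUsers READ grant that makes files world-readable
```

With the real AWS SDK, the same grant list would come back from a GetBucketAcl call; any hit from a check like this means the bucket contents are one anonymous HTTP request away.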

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/la-county-nonprofit-exposes-32m-pii-files-via-unsecured-s3-bucket/d/d-id/1331875?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Big bimmer bummer: Bavaria’s BMW buggies battered by bad bugs

A security audit conducted by Tencent’s Keen Security Lab on BMW cars has given the luxury automaker a handy crop of bugs to fix – including a backdoor in infotainment units fitted since 2012.

Now that the patches are gradually being distributed to owners, the Chinese infosec team has gone public with its security audit, revealing some of the details right here [PDF].

The researchers noted that their probing found flaws that could be exploited by an attacker to inject messages into a target vehicle’s CAN bus – the spinal cord, if you will, of the machine – and engine control unit while the car was being driven. That would potentially allow miscreants to take over or interfere with the operation of the vehicle to at least some degree.

There are 14 bugs in total. Seven have been assigned standard CVE ID numbers: CVE-2018-9322, CVE-2018-9320, CVE-2018-9312, CVE-2018-9313, CVE-2018-9314, CVE-2018-9311, and CVE-2018-9318. Other flaws are awaiting CVE assignments.

Four require physical USB access – you need to plug a booby-trapped gadget into a USB port – or access to the OBD diagnostics port. That means an attacker has to be inside your vehicle to exploit them. Six can be exploited remotely, from outside the car, with at least one via Bluetooth, which is super bad. Another four require physical or “indirect” physical access to the machine.

Of those 14, eight affect the internet-connected infotainment unit that plays music, media, and other stuff to the driver and passengers. Four affect the telematics control system. Two affect the wireless communications hardware.

Forgive us if the details seem vague: Tencent is withholding all the nitty-gritty until people have had a chance to update their wagons. Below is a summary of the bugs that were uncovered and reported:

Tencent’s table of vulnerabilities found in BMW vehicles … HU_NBT is the internet-connected infotainment system, TCB is the telematics control, BDC is a wireless communications module

Bimmer’s infotainment head uses the operating system QNX, running on an Intel-based x86 board that handles multimedia and the BMW ConnectedDrive services, and an Arm-compatible board that oversees things like power management and CAN bus communication.

One of the vulnerabilities at this point – getting from the CAN bus to BMW’s K-CAN bus – was discovered because the Bimmers’ developers reused some Texas Instruments code “to operate the special memory of Jacinto chip to send the CAN messages.” The telematics unit uses a Qualcomm MDM6200 for USB comms, and a Freescale 9S12X for K-CAN communications. Its software is based on Qualcomm’s REX OS realtime operating system.

The telematics control unit handles contact with the BMW service network, meaning the researchers could trigger BMW Remote Services with arbitrary messages through a simulated GSM network. Meanwhile, the worst telematics unit bug is described thus:

After some tough reverse-engineering work on TCB’s firmware, we also found a memory corruption vulnerability that allows us to bypass the signature protection and achieve remote code execution in the firmware.

The researchers also said they found ways to “influence the vehicle via different kinds of attack chains,” which ultimately let them send “arbitrary diagnostic messages to the ECUs [electronic control units].” And there’s a lack of defenses protecting things like diagnostics, they wrote, reproduced unedited:

A secure diagnostic function should be designed properly to avoid the incorrect usage at an abnormal situation. However, we found that most of the ECUs still respond to the diagnostic messages even at normal driving speed (confirmed on BMW i3), which could cause serious security issues already. It will become much worse if attackers invoke some special UDS routines (e.g. reset ECU, etc..).

What the researchers only describe as a “backdoor” in the infotainment unit – the most common vulnerabilities to carry this description are either admin accounts with no password or a default password, or accounts configured with hardcoded credentials, but the document doesn’t specify this – means a USB connection can get past the infotainment unit to the vehicle’s K-CAN bus, and from there, can be used to attack individual devices, such as the engine controller.

The infotainment unit is also at risk of over-the-air Bluetooth-borne attacks. Under controlled conditions, the Tencent researchers said they could also wirelessly hijack the car’s hardware via the cellular phone network, adding that “a malicious backdoor can inject controlled diagnosis messages to the CAN buses in the vehicle.”

The vulnerabilities were confirmed in the BMW i3, X1, 525Li and 730Li models the researchers tested, but bugs in the telematics control unit would affect “BMW models which equipped with this module produced from year 2012.”

BMW was able to send over-the-air updates to fix some bugs, we’re told, but others will need patches through the dealer network, which explains why the researchers are withholding their full technical report until March 2019.

We’ve asked BMW for further comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/23/bmw_security_bugs/

ISP TalkTalk’s Wi-Fi passwords Walk Walk thanks to Awks Awks router security hole

A years-old vulnerability continues to menace the security of some home Wi-Fi networks in the UK.

The WPS feature in TalkTalk’s Super Router can be compromised to steal the gateway’s wireless network password, according to folks at software development house IndigoFuzz. The British ISP and telco was warned of the shortcoming in 2014, but seemingly nothing has been done about it.

According to IndigoFuzz’s advisory on Monday, the routers provide a WPS pairing option that is always turned on. Because that WPS connection is insecure, an attacker within range can exploit it using readily available hacking tools, and thus extract the router’s Wi-Fi password.

In other words, if you’re near a TalkTalk Super Router, you can probe it for the Wi-Fi password via the wonky WPS feature, and hop onto the wireless network.
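The reason WPS gives up the password so easily is structural, not specific to TalkTalk: the 8-digit PIN’s last digit is a checksum, and the protocol confirms the two halves of the PIN separately, shrinking an online brute force from ten million guesses to about 11,000. A quick sketch of the arithmetic, using the standard WPS checksum algorithm:

```python
# Why WPS PINs fall so quickly: the final digit is a checksum of the
# first seven, and the protocol acknowledges the 4-digit first half and
# 3-digit second half independently, so the worst case is
# 10^4 + 10^3 = 11,000 guesses rather than 10^7.

def wps_checksum(pin7: int) -> int:
    """Checksum digit for a 7-digit WPS PIN (standard WPS algorithm)."""
    accum, pin = 0, pin7
    while pin:
        accum += 3 * (pin % 10)
        pin //= 10
        accum += pin % 10
        pin //= 10
    return (10 - accum % 10) % 10

full_pin = 1234567 * 10 + wps_checksum(1234567)
print(full_pin)        # 12345670 -- the well-known valid example PIN
print(10**4 + 10**3)   # 11000 worst-case guesses for an online attack
```

Off-the-shelf tools simply automate this search against routers, like TalkTalk’s, that leave WPS permanently switched on; once the PIN is confirmed, the router hands over the Wi-Fi passphrase.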

“This method has proven successful on multiple TalkTalk Super Routers belonging to consenting parties which is enough to suggest that this vulnerability affects all TalkTalk Super Routers of this particular model/version,” the IndigoFuzz team explained.

“TalkTalk have been notified of this vulnerability in the past and have failed to patch it many years later.”

Normally, a computer security researcher discovering such a vulnerability would give the affected vendor the courtesy of at least a 30-day waiting period to develop and roll out patches or mitigations before going public with the details. In this case, however, IndigoFuzz went public immediately because TalkTalk subscribers publicly raised the alarm in 2014 that the WPS feature is insecure, and thus the ISP has had plenty of time to correct its equipment.

Researchers have, in fact, been unraveling various flaws in routers’ WPS functionality all the way back to 2011, if not beyond.

Since security-bungling TalkTalk has had four years to address the matter, IndigoFuzz reckoned another 30 days won’t matter much, and went ahead with the disclosure this week. “The purpose of this article is to encourage TalkTalk to immediately patch this vulnerability in order to protect their customers,” the biz noted.

The Register has asked TalkTalk for comment. At the time of publication, we, like everyone else in this case, have yet to hear back. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/22/talktalk_routers_cracked_by_four_year_old_bug/

US Senator to DOD CIO: ‘Take Immediate Action’ on HTTPS

US Senator Ron Wyden pens a letter to the Department of Defense CIO, urging stronger security on public-facing government sites.

HTTPS adoption has grown to the point where it can, and should, be considered the standard for Web security. The problem is that not all organizations have jumped on board — including the United States Department of Defense (DOD), which runs several sites that lack HTTPS encryption.

In a strongly worded letter to DOD CIO Dana Deasy, US Senator Ron Wyden, D-Ore., urges “immediate action” on the adoption of cybersecurity best practices for all publicly accessible DOD Web services.

A handful of DOD sites, including the Army, Air Force, and National Security Agency homepages, have HTTPS by default and use certificates trusted by major browsers, Wyden writes. Several others — namely, the websites for the Navy, Marines, and the CIO office itself — either don’t encrypt connections or only verify their authenticity with a DOD Root Certificate Authority.

“Many mainstream web browsers do not consider these DoD certificates trustworthy and issue scary security warnings that users are forced to navigate before accessing the website’s information,” writes Wyden in his letter. The poor user experience affects civilians and service members, all of whom must face security warnings when visiting DOD webpages.

This isn’t the first time the government has been mandated to improve its Web security. In 2015, the Office of Management and Budget (OMB) issued a memo instructing federal agencies to enable HTTPS encryption and enforce it with HTTP Strict Transport Security (HSTS) by the end of 2016. In 2017, the Department of Homeland Security issued a directive emphasizing the OMB’s requirements and requiring civilian agencies to practice better security hygiene.
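Concretely, the HSTS enforcement the OMB memo calls for is a single response header: once a browser sees it over a valid HTTPS connection, it refuses plain-HTTP connections to that site for the stated lifetime. The sketch below shows a typical header value and a minimal parser for it (a simplification, not a full RFC 6797 implementation); the one-year max-age shown is a commonly recommended value.

```python
# Illustrative: what "enforcing HTTPS with HSTS" looks like on the wire.
# A compliant site sends Strict-Transport-Security with directives like
# these; this tiny parser splits them out for inspection.

def parse_hsts(header: str) -> dict:
    """Parse an HSTS header value into a directive dict (simplified)."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # valueless directives (includeSubDomains, preload) map to True
        directives[name.lower()] = value or True
    return directives

hsts = parse_hsts("max-age=31536000; includeSubDomains; preload")
print(int(hsts["max-age"]))  # 31536000 seconds -- one year
print("preload" in hsts)     # True -- eligible for browser preload lists
```

A site that never sends this header, or that serves a certificate browsers don’t trust, gets none of the protection, which is exactly the gap Wyden’s letter describes.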

“Any public-facing website is a gateway to exposing personal information, getting any sorts of data that can be detrimental against the Department of Defense,” says Mike Chung, head of government solutions at Bugcrowd, which last year ran the Hack the Pentagon event to improve security for public-facing government websites.

The security implications if Wyden’s requests aren’t fulfilled could affect all government agencies, which hold personal data that can be exposed or extracted, he continues. “To me, it’s all that sensitive data they hold in their IT infrastructure that has the potential to be hacked into,” Chung says of the repercussions. “That could be the absolute worst-case scenario.”

HackerOne advisor Lisa Wiswell agrees. “HTTPS has been industry best practice for way too long to not have every single public facing website owned or operated by the US Federal or State Governments converted,” she says. “Plain text is not acceptable – no matter if you’re inputting personally identifiable information or just browsing a website.”

There’s little doubt the security community will be watching the response from Deasy, who was appointed to the role of DOD CIO in April and most recently held the position of CIO at JPMorgan Chase. Addressing these issues is “absolutely a must” for him, Chung notes.

Wyden says “the DoD cannot continue these insecure practices” because the consequences of staying stagnant are growing greater. Starting in July, Google plans to acknowledge HTTPS as the expected standard by removing the “secure” label from HTTPS websites and marking all HTTP sites as “not secure,” alerting users whenever they visit unencrypted pages.

Google’s warnings will weaken public trust in the DOD’s ability to defend against cybercrime, according to Wyden. The DOD sets a poor example by teaching people to dismiss critical security warnings as irrelevant. Normalizing these alerts drives up the risk of cyberattacks and foreign-government hacking: if the DOD doesn’t prioritize security, civilians have less incentive to do the same.

The letter closes with three key security recommendations. Wyden urges Deasy to adopt the guidelines described in memos from the OMB and DHS, obtain and deploy certificates trusted by major Web browsers for all publicly accessible services, and evaluate the use of shorter-lived, machine-generated certificates.

That said, Deasy will need to do more than adopt HTTPS to strengthen the government’s security posture. “My hope is that this new CIO will take it to the next level and really have the opportunity to do an assessment across all DOD public-facing websites, as well as mobile apps,” Chung says. This may mean, for example, launching more crowdsourced initiatives to mitigate the lack of skilled security pros in the government.

It’s a difficult time for government cybersecurity, which finds itself in a tough position as it loses a cybersecurity coordinator amid growing threats. Data shows federal agencies have the least-secure applications across industry sectors, with just 4% of federal apps scanned weekly.

“I don’t know if the DOD is well-prepared to fight the cyber war,” says Chung. “There’s a lack of resources, lack of preparedness, lack of understanding of where these different attack vectors can come from.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/us-senator-to-dod-cio-take-immediate-action-on-https/d/d-id/1331870?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Spectre Variants Add to Vulnerability Worries

Variants 3a and 4 build on the Spectre foundation, but how worried should enterprise security professionals really be?

The Spectre and Meltdown vulnerabilities hit the most basic level of computer hardware, striking the logical interface between instruction execution and cache. Intel and operating system publishers have since released patches to remediate these two issues, but the underlying problem with the CPU architecture remains, and new vulnerabilities disclosed this week add to the list.

The newly discovered Variants 3a and 4 are the latest speculative execution vulnerabilities in Intel (and presumably AMD, ARM, and other) CPUs. These side-channel attacks exploit vulnerabilities in the basic execution of the system rather than in any piece of software. That makes them both more involved to remediate, and perfect foundations for entire families of exploits and attacks.

These latest variations on the Spectre theme were disclosed by researchers from various organizations: Jann Horn of Google Project Zero (GPZ) and Ken Johnson of the Microsoft Security Response Center (MSRC) independently discovered Variant 4, while Zdenek Sojka, Rudolf Marek and Alex Zuepke from SYSGO AG, along with Innokentiy Sennovskiy from BiZone LLC, discovered and reported Variant 3a.

Variant 4 is interesting because it could be exploited in a language-based runtime environment. These environments are typically seen in languages that are interpreted or compiled at run-time — languages like JavaScript. In most cases, these environments are encountered in Web-based applications, which is both good and bad from a Spectre vulnerability perspective.

The downside of the equation is ubiquity: it would be difficult to find a computer without one or more Web browsers in a modern enterprise. The good news, however, is every major browser has already been updated to make Spectre and its family members unavailable to attackers.

Variant 4, if successfully exploited, could allow an attacker to see into memory and access information belonging to other programs, processes, and users. Variant 3a uses the same sort of technique to a different end; in this case, an attacker could get information on the system configuration and status rather than data from any particular user.

In the case of each new variant, the organizations with the most to worry about are the same: those in the cloud. “The original worries were, ‘I get a $5 account on a virtual account and I can run my code but share memory with neighbors,'” says Tod Beardsley, research director of Rapid7. “It’s a real problem for the Amazons or Digital Oceans of the world.”

Large cloud or hosted service providers presumably have already applied the patches provided by Intel. The existing patches for existing exploits are not what concern experts, though.

“The fact that we are seeing a new derivative of the … Spectre vulnerabilities is not surprising. Vulnerability exploits often come in series, as we’ve seen with WannaCry, and later on NotPetya, both used the same SMB vulnerability to rapidly propagate across organizations,” says Oren Aspir, CTO of Cyberbit.

And the derivatives of Spectre will continue to be a concern because they strike at a core factor in modern computer deployment.

“We as an industry have trained people to expect speed. In this case, the vulnerabilities take advantage of the very features that make them fast,” says Renaud Deraison, co-founder and CTO of Tenable. “Intel optimized for performance and later learned they were facing a tradeoff between security and performance. The vast majority of people would choose speed over security, too.”

Beardsley agrees that the market is driven by a need for speed, and prioritizing performance concerns him when the conversation turns to remediating these vulnerabilities.

“I did see an Intel write-up where they were working to ship a fix on this but it would be shipped default ‘off,'” he says. “That’s a really worrisome thing because it means that no one will apply the fix. In this class of bug, where you’re trading performance for security.”

Trading a measure of security for performance may work in this case because, while the Spectre vulnerabilities are interesting and critical, they’re not being widely used for system exploits: “I can get you to run my code just by asking nicely. I don’t have to be this clever,” Beardsley says, pointing out that phishing and other social engineering exploits are far more economical and effective than relatively sophisticated attacks like Spectre and its kin.

Both Aspir and Beardsley expect announcements of vulnerabilities based on the Spectre and Meltdown families to continue. They say Variants 5, 6, and beyond may already be in the hands of chip and operating system vendors, waiting for the expiration of the responsible disclosure period for widespread announcement.

Beardsley sees hope, though, in the rapid evolution of the exploits. “There are super-smart people looking at the issue,” Beardsley says. “It’s great that we have so much runway — good guys are finding these before bad guys are using them, at least that we know of. It gives me a good feeling that the good guys are ahead of things for a change.”

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/new-spectre-variants-add-to-vulnerability-worries/d/d-id/1331873?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

TeenSafe phone monitoring app leaks teens’ iCloud logins in plaintext

A security researcher has discovered at least two servers belonging to TeenSafe, a “secure” monitoring app for iOS and Android, that sat on Amazon Web Services (AWS) for months with no passcode required to get at their data.

TeenSafe bills itself as a “secure” monitoring app built by parents, for parents. It lets parents view their kids’ text messages, monitor who they’re calling and when, track their phones’ current and historical locations, check their browsing histories, and see what apps they’ve installed.

The leaky servers were discovered by Robert Wiggins, a UK-based security researcher who searches for public and exposed data. The company took one server down after being contacted by ZDNet. The other server apparently held only non-sensitive data: likely, test data.

Data from more than 10,000 accounts were exposed.

Wiggins said that the unprotected servers were letting anybody see Apple user IDs, parents’ email addresses, unique phone IDs, users’ attempts to “find my iPhone” and passwords stored in plaintext.

Wiggins said that if Android data were being exposed, he didn’t come across it.

The security researcher told the BBC that the data was viewable because TeenSafe lacked basic security measures, such as a firewall, to protect it.

All in spite of TeenSafe’s claims that the app is secure and uses encryption to scramble data:

TeenSafe employs industry-leading SSL and Vormetric data encryption to secure your child’s data. Your child’s data is encrypted – and remains encrypted – until delivered to you, the parent.

Contents of messages, including photos, weren’t included in the leak, and ZDNet’s Zack Whittaker reports that none of the records contained the locations of either parents or children. But that’s not just cold comfort: it’s an ice cube. A hacker could simply use those plaintext passwords to get at a teenager’s content because, as Whittaker noted, the app requires that two-factor authentication (2FA) be turned off. Thus…

A malicious actor viewing this data only needs to use the credentials to break into the child’s account to access their personal content data.

Safe and effective password storage could have made that close to impossible, even with a stolen password database.
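What that looks like in practice is storing a salted, deliberately slow one-way hash instead of the password itself. Below is a minimal sketch using PBKDF2 from Python’s standard library; a production system would reach for a dedicated scheme such as bcrypt or Argon2 and tune the work factor to its hardware.

```python
import hashlib
import hmac
import os

# A minimal sketch of the salted, slow hashing TeenSafe could have used
# instead of plaintext. A unique random salt per user defeats rainbow
# tables; the high iteration count makes each guess expensive.

ITERATIONS = 200_000

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # constant-time comparison avoids leaking how many bytes matched
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("letmein", salt, digest))  # False
```

With storage like this, a stolen database yields only salts and digests; an attacker must grind through expensive guesses per account rather than reading credentials straight off the page.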

TeenSafe claims that over a million parents use the service.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/17sGATrDls8/

Server? What server? Site forgotten for 12 years attracts hacks, fines

A web server set up by an enterprising student for a conference in 2004 and then forgotten about has left the University of Greenwich nursing a £120,000 ($160,000) fine from Britain’s Information Commissioner (ICO).

Forgetting about a web server isn’t generally a good idea, but this was a particularly dangerous oversight because it had been linked to a database containing the personal data of 19,500 University staff, students, alumni, and conference attendees.

The data also included more intimate personal data of 3,500 people covering learning difficulties, staff sickness, food allergies, and extenuating circumstances put forward by students during their studies.

You can probably guess where this is heading – eventually cybercriminals chanced upon the forgotten server and did their worst.

The initial breach is thought to have occurred in 2013, before it was broken into several times during 2016 with the help of an SQL flaw and some uploaded PHP exploits that opened the way to the databases holding the good stuff.
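The ICO’s report doesn’t detail the SQL flaw, but the classic shape of such a bug – and its standard fix – can be sketched in a few lines. This uses Python’s built-in `sqlite3` purely for illustration (the microsite itself apparently ran PHP); the table and values are invented:

```python
import sqlite3

# In-memory stand-in for the site's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.ac.uk')")

# A crafted value that escapes the intended string literal.
attacker_input = "nobody' OR '1'='1"

# Vulnerable pattern: building SQL by string interpolation lets the
# input rewrite the query, so it matches every row in the table.
unsafe = conn.execute(
    "SELECT email FROM users WHERE name = '%s'" % attacker_input
).fetchall()
assert len(unsafe) == 1  # the injection returned a row it never should have

# Fixed pattern: a parameterised query treats the input purely as data.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (attacker_input,)
).fetchall()
assert len(safe) == 0  # nobody is literally named "nobody' OR '1'='1"
```

Parameterised queries – available in every mainstream database API, including PHP’s PDO – are the single cheapest defence against this whole class of attack, which is why injection bugs in long-forgotten code are so galling.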

Eventually, one of the attackers posted the data to Pastebin in January 2016, at which point the breach became public knowledge.

What went wrong? That’s the unsettling bit because on one level – at least from the perspective of 2004 – not much.

The University’s Computing and Maths School (CMS) had held a training conference and one of the academics involved asked a student to build a web microsite. The site included a facility for conference academics to upload documents anonymously via URL, something that attackers would eventually use to their advantage.

Nobody remembered (or had the job of) shutting this down once the conference had finished and so it sat there for years as new vulnerabilities were discovered, patches were applied, skills were improved on all sides and attacks on web servers became everyday occurrences.

How it was forgotten about is not clear, but anyone working in IT will be familiar with the cold-sweat-inducing spectre of shadow IT. A lack of processes for managing servers not within the IT department and the fact that the University later reorganised itself into new faculties were probably contributing factors.

Concluded the ICO:

Whilst the microsite was developed in one of the University’s departments without its knowledge, as a data controller it is responsible for the security of data throughout the institution.

Of course, something as risky as a server connected to a database should never be left in the hands of a single person whose job doesn’t also include securing it.

Perhaps the biggest error wasn’t that the server was forgotten about but that this simple error went unnoticed for an extraordinary 12 years.

That implies that nobody was proactively assessing security – because if the criminals were able to find the microsite, surely the University could have too.

In the light of the fine, the University said that it had completely overhauled its security since 2016 into something resembling a modern security operation.

Here’s hoping that there aren’t more servers that time forgot sitting out there, still waiting for their universities to shut them down.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/du8rwVyAhys/

Brit water firms, power plants with crap cyber security will pay up to £17m, peers told

Plans to fine Britain’s national utilities and infrastructure providers £17m for shoddy cyber security will be at the forefront of industry’s mind once everyone “gets over” GDPR, peers heard at a House of Lords committee.

Speaking on a panel on cyber security for critical national infrastructure (CNI) yesterday, Elliot Rose, cyber security head at PA Consulting, warned: “We’ve all been preoccupied with GDPR, but the [EU Network and Information Systems] directive [will carry] significant fines.”

Rose added that many of these organisations – including water, electricity and telecoms companies – are facing challenges as their legacy systems increasingly interface with, and are exposed to, the internet. He said that was “a particular area of concern”, citing as one example airports introducing remote control towers to manage air traffic.


He added: “I do think that will play out more once we get over GDPR.”

Digital minister Margot James said earlier this year the measures would come into force next May. They will also cover other threats affecting IT such as power outages, hardware failures and environmental hazards. Critical infrastructure firms will be required to show they have a strategy to cover such incidents.

Britain’s CNI appears to be an increasingly attractive target for hostile state actors. Last year Ciaran Martin, chief exec of the National Cyber Security Centre, revealed hackers acting on behalf of Russia had targeted the UK’s telecommunications, media and energy sectors.


Alastair MacWillson, chair of the Institute of Information Security Professionals, said CNI companies faced problems attracting talent against higher-paying organisations.

“Because of the difference in margins, in my experience it is more difficult for a water company, say, to hire a top cyber security team than it is for a bank. There is that industry challenge.”

On the subject of the skills gap, Rob Crook, managing director of cyber security and intelligence at Raytheon, noted a 30 per cent shortfall in the number of vacancies the company would like to fill, a proportion he said was representative across the industry.

“The initiative to introduce coding into primary schools, which we welcomed in principle, may have fallen into some difficulties in practice,” he said. “For one, it is not obvious that the initiative has included cyber security in its curriculum. Secondly, I’m not sure it’s inspiring people into the profession.”

MacWillson noted that currently just 7 per cent of cyber security staffers are women, who make up just 4 per cent of his own institute’s ranks.

Part of the problem, he suggested, is the approach taken to drawing schoolchildren into the profession: by focusing on skills from computer science and STEM, the government and industry are narrowing the pool and limiting diversity. Attempts should be made to broaden the net, he said. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/22/house_of_lords_new_cybersecurity_regulations/