STE WILLIAMS

Brit MP Dorries: I gave my staff the, um, green light to use my login

UK MP Nadine Dorries revealed yesterday that she shares her parliamentary login credentials with her staff, in an attempt to defend fellow Conservative MP Damian Green against recently resurfaced allegations that porn was found on his office computer.

Tweeting on Saturday, Dorries disputed the assertion that only Green could have accessed the computer, saying that she gives out her password to her staff so that they can access her email account.

Nadine Dorries tweet 1

As is inevitable on Twitter, horrified replies soon appeared, pointing out that sharing your login details with four to six staff members is a bad idea. She then caused additional alarm by saying that she herself struggled to remember her password.

Nadine Dorries tweet 2

At one point Dorries also tweeted that her staff had delegate level access to her account, which caused users to question why she would need to give out her details in the first place.

Nadine Dorries tweet 3

The origins of the story date back to a 2008 police raid on Green’s office, originally to investigate the potential leaking of government documents to Green, then the shadow immigration minister.

The story reappeared after the BBC interviewed former detective Neil Lewis last week. Lewis claimed he was certain that Green had been the one who accessed the X-rated images, because Green’s personal account and documents were in use on the office computer while the material was accessed. Mr Green denies the allegations.

Infosec experts were quick to criticise Dorries’ approach, which is all the more questionable just months after a targeted attack against computers at Westminster.

“@NadineDorries thinks that she’s unimportant, and not a target. That’s naive in itself, but reckless when you consider the PII and confidential communications of others,” said anti-virus industry veteran Graham Cluley.

Infosec veteran Quentyn Taylor‏ added: “Given Parliament is using office 365 there is no excuse for @NadineDorries sharing her credentials. Delegation rules are there for a reason and probably easier than sharing passwords.”

Thom Langford, an experienced CSO and sometime security blogger and conference speaker, said: “The practice of sharing logins is madness and should be reviewed immediately. The integrity of parliamentary data is at stake as well as non-repudiation of any data/messages sent.”

Even more to the point, Dorries’ stated practices are contrary to guidance in the House of Commons staff handbook, which clearly advises that MPs should not share their passwords.

This criticism was dismissed by Dorries over the weekend as “trolling” by computer nerds. Another Tory, Nick Boles MP, admitted that he had to ask his staff for password reminders.

Dorries, a former inmate of the I’m a Celebrity… Get Me Out of Here! jungle, then doubled down by suggesting the work PCs of all MPs are riddled with the stuff.

That’s quite a claim, but infosec experts are at least ready to back up suggestions that lax IT security among MPs is not a party political issue. “I think this is an all party issue, probably happens across Parliament,” said Langford.

The frustrating thing for those versed in infosec is that there are plenty of simple technologies to ease password hassles – which are a problem – including password managers, two-factor authentication and more. “I’d like to see proximity card assisted login on MP accounts and computers. You walk away, you’re out. Make the card part of their ID card so they are less likely to ‘loan’ it,” said Rik Ferguson of Trend Micro.
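To illustrate how lightweight one of those fixes is: the time-based one-time passwords behind most two-factor authentication apps boil down to a few lines of standard-library code. The sketch below implements RFC 6238 (TOTP) on top of RFC 4226 (HOTP); it is a generic illustration, not any parliamentary system’s actual implementation.

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(key, t // step, digits)
```

With the RFC 6238 test secret `b"12345678901234567890"`, `totp(..., for_time=59, digits=8)` reproduces the specification’s published value 94287082, which is how an implementation like this can be verified against the standard.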

The ICO commented on the social media platform: “We’re aware of reports that MPs share logins and passwords and are making enquiries of the relevant parliamentary authorities. We would remind MPs and others of their obligations under the Data Protection Act to keep personal data secure.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/04/dorries_i_give_my_staff_my_login_details/

Damian Green: Not only my workstation – mystery pr0n all over Parliamentary PCs

Under-fire Cabinet Office minister Damian Green has reportedly told an internal government enquiry that he has proof he was not the one who downloaded porn onto his Parliamentary computer.

The minister has forwarded to investigators an email from Eleanor Laing, deputy speaker of the House of Commons, detailing how one of Laing’s staff had found porn on her boss’s computer without having accessed or watched it, according to the Telegraph.

Green was accused by ex-senior cop Bob Quick of having had pornography on his work computer 10 years ago. The former policeman was forced to step down shortly after showing a document marked “Secret” to press photographers. Immediately prior to that, he had led an operation that entered Parliament without a warrant and arrested Green for receiving documents leaked from the Home Office.

The minister has consistently denied downloading or viewing porn on his work computer, saying: “No allegations about the presence of improper material on my parliamentary computers have ever been put to me or to the parliamentary authorities by the police.”

Quick was repeating and endorsing allegations made by the BBC, which last week quoted another one of the Metropolitan Police’s finest, ex-detective Neil Lewis, who appeared to confess to having retained private copies of data he took from Green’s computers.

Met commissioner Cressida Dick has claimed her force is thinking about investigating Lewis. Dick was the officer in charge of the infamous killing of Jean Charles de Menezes, for which no police employee was ever brought before a court or even disciplined.

In defending Green over the weekend, fellow Conservative MP Nadine Dorries made the startling admission that she shares her personal login credentials with her staff, prompting howls of dismay from the information security community. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/04/damian_green_not_only_me_who_has_mystery_pr0n_on_parliamentary_pcs/

Creepy Cayla doll violates liberté publique, screams French data protection agency

The French data protection agency has issued a formal notice to a biz peddling allegedly insecure toys, just in time for Christmas.

The mass-marketed toys in question – Genesis Toys’ My Friend Cayla doll and i-Que robot – are Bluetooth-enabled so they can capture and analyse children’s speech through an app on – ideally – their parents’ mobile phones.

But the toys’ security is lax, to say the least, in that anyone nearby can pair with the device unchecked – and once they do, they’re able to listen and record conversations between child and toy.

Moreover, would-be dodgy characters don’t even need the Cayla or i-Que app installed, because phones simply identify the doll as a hands-free headset.

The French agency, CNIL, said its controllers had found that anyone within nine metres of the toy, even outside the building, can pair a mobile phone with the toys “without having to log in (for instance, with a PIN code or a button on the toy)”.

It added that it was also possible to communicate with the child through the toy, either by using a hands-free kit or by sending a pre-recorded message from the phone.

The agency said the toys were in breach of Article 1 of the French Data Protection Act, which provides that technology “shall not violate human identity, human rights, privacy, or individual or public liberties”.

However, it’s not clear whether the formal notice – which isn’t a sanction but could be followed up with a full investigation – will spur Genesis Toys into action, given the devices have already been outlawed in Germany.

The German telecoms regulator banned the sale of the dolls in February, on the grounds that the devices violate privacy laws because they can illegally transmit data collected without detection.

And US authorities have also raised concerns, with several consumer complaints being lodged with the Federal Trade Commission, including from the Electronic Privacy Information Centre (EPIC).

EPIC said last year that “the failure to employ basic security measures to protect children’s private conversations from covert eavesdropping by unauthorized parties and strangers creates a substantial risk of harm because children may be subject to predatory stalking or physical danger”.

Cayla is also listed in this year’s annual Trouble in Toyland report (PDF) from the US Public Interest Research Group, a federation of consumer nonprofits.

It listed the doll as the main example for the privacy threat posed by Internet of Things toys, although it should be noted that there is no end of creepy, privacy-invading playthings on the market right now.

Elsewhere in CNIL’s complaint was that Genesis Toys’ privacy statement (last updated on February 23, 2015) didn’t make it clear what the data collected by the toys would be used for.

This echoes concerns raised by the European Consumer Organization BEUC, which has previously pointed out that “Cayla will happily talk about how much she loves different Disney movies, meanwhile, the app-provider also has a commercial relationship with Disney”.

After all, the toy industry’s product-pushing ways are for life, not just for Christmas. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/04/creepy_cayla_doll_breaches_french_data_protection_rules_says_agency/

Hacked IV Pumps and Digital Smart Pens Can Lead to Data Breaches

Researcher to reveal IoT medical device dangers at Black Hat Europe this week.

An attack on a single IV infusion pump or digital smart pen can be leveraged into a widespread breach that exposes patient records, according to a Spirent SecurityLabs researcher.

Saurabh Harit, managing consultant with Spirent, will present his findings on flaws in IV infusion pumps and digital smart pens at Black Hat Europe this week.

“Perpetrators can use this patient information to file false insurance claims as well as to buy medical equipment and drugs using a fake ID. These products are then easily sold on the black market,” Harit says. “What makes medical data more lucrative than financial data is the low and slow detection rate of the fraud itself. While a credit card fraud can be detected and blocked in a matter of minutes these days, medical data fraud can go undetected for months, if not more.”

Harit has notified the affected IV infusion pump and digital smart pen vendors of the vulnerabilities, and the vendors have since patched the flaws. He will not reveal the names of the companies or their devices.

Smart Pen Problems

“By far the most surprising thing we came across in our research was the amount of patient information that was available with the digital smart pen,” Harit says. “We felt even if we breached it, we would not get a lot of information off of it because the healthcare organization said they did not store patient information on the device.”

Doctors use digital smart pens to prescribe medications for patients and that information is then digitally transmitted to pharmacies with the patient’s name, address, phone number, health records, and other medical information.

But after reverse-engineering the digital smart pen, Harit found a cache of information. First he peered into the device’s underlying operating system by simply connecting a monitor to the device through a serial interface.

Then, by exploiting network protocols, he obtained low-privilege access to the device. After exploiting its software and services to bypass the device’s security checks and lock-down mode, he was able to gain administrative access.

Once the on-device encryption was broken, Harit gained access to sensitive configurations for the healthcare institution’s backend servers. There he found a treasure trove of patient medical records and other sensitive data covering the many doctors and medical facilities tied to that institution that had used the digital smart pens.

“I thought this server was not connected to the Internet, but it was,” Harit says.

Fixing the vulnerability in the digital smart pen was easy, though, because it’s a new product and designed with security in mind, Harit says, noting that the pens can be updated remotely.

Lethal Pump

Harit’s research also explored the security of an IV infusion pump, a growing target when it comes to IoT medical device attacks and one that can be lethal given that it delivers fluids, medication, and nutrients to patients.

Harit discovered that a simple $7 hardware device could interface with the IV infusion pump, read its configuration data, and understand which access point it was seeking to connect to. As a result, he established a fake access point, connected with the IV pump, and then collected sensitive medical data on an individual that included a master drug list and quantity of drugs to be taken.  

“If you have 200 of the same pumps in a hospital, an attacker could write a malware script and launch it onto the hospital network and modify the attack to search for all the pumps and attack them,” he says.

Fixing the IV pump’s vulnerability, by contrast, requires the creation, testing, and remote deployment of a patch, Harit says.

An attacker would need to gain physical access to the IV pump or digital smart pen to compromise them, Harit says.

He adds that task is not difficult, given the relative ease in walking into a poorly staffed hospital room or medical clinic room. Digital smart pens are small, so they are also easy to pocket, he notes.

Meanwhile, healthcare organizations that suffer a data breach typically learn about it from a third party, such as an insurance company, end user, a security monitoring entity, or law enforcement, Harit says.

“In most cases, the breach goes undetected for months and even years.”


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/mobile/hacked-iv-pumps-and-digital-smart-pens-can-lead-to-data-breaches/d/d-id/1330536?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Tips for Writing Better Infosec Job Descriptions

Security leaders frustrated with their talent search may be searching for the wrong skills and qualifications.

INSECURITY CONFERENCE 2017 – Washington, DC – Advanced security tools can’t reach their full potential without qualified employees to operate them, said Dawn-Marie Hutchinson, executive director at Optiv, at Dark Reading’s INsecurity conference here last week.

In her presentation entitled “Finding and Hiring the Best IT Security People,” Hutchinson described her own challenges in finding the right talent. As the former CISO of Urban Outfitters, her challenge wasn’t in securing the budget for security tools, but finding people to run them.

Some businesses have both tools and talent but struggle when the two don’t align, she explained. You might be well-staffed, but with employees who lack the skills to be helpful. To find the right employees, Hutchinson suggested reevaluating job requirements and approaching the hiring process much as you would build a security program.

The key is becoming more involved in the hiring process and being more specific when looking for candidates. Security leaders looking for talent often tell their human resources department they’re seeking candidates with certain skill sets. HR then compiles a job description cobbled from descriptions they find online, which may include certifications or number of years of experience.

As a result, many companies use similar job descriptions for open positions, and applicants tailor their resumes to suit them. Many talented people who could be successful are weeded out because they lack specific credentials and experience they may not necessarily need for a specific role.

“When you go looking for people, be mindful,” said Hutchinson. “The number of years isn’t as good as the quality of years and most recent experience … let’s not [cross off] candidates because they don’t have experience in systems that don’t exist anymore.”

Compliance requirements also block potential candidates. Many qualifications added for compliance, such as familiarity with HIPAA, are things a smart individual could figure out and learn. However, because HIPAA isn’t on their resume, they don’t make it to the interview. Similarly, certifications like the CISSP are “easy hits” that make it easy to narrow down the candidate pool but could prevent security teams from gaining valuable talent.

“We need to find people where they are and develop them into what we need them to be,” she noted.

Think about what you want your employees to do, and how you want them to operate, and build your staffing strategy around it. “What’s your desired outcome?” Hutchinson asked. Shaping a job description based on outcomes will result in a completely different posting.

So where do you look for job candidates? Hutchinson recommended networking with peer groups and looking to your internal tech staff to find talent. Former military members are also potential candidates; check with veterans organizations or the Wounded Warrior Project to find people looking for work.

It’s also worth your while to hunt for candidates in new, unexplored pools. Business school graduates could be particularly valuable at a time when security pros need to explain risks and technology issues with board members. Security teams need people who can speak to the business, write, and communicate well.

But hiring is just one piece of the equation. Retention of loyal employees is another.

“What we really need to do is start figuring out how to keep them,” said Hutchinson. Most people who leave are swayed by higher salaries. Fair compensation is an obvious must-have; if you don’t pay your employees fairly, you will lose them. No perk will change their mind.

That said, it takes more than money to keep talent. You need to understand what motivates your employees. Many people swap jobs because they want more challenging or exciting work. They also highly value mobility and work flexibility, which was the most preferred employee benefit from 2014 to 2016, she pointed out.

“Make them excited about being at work, excited about being in security,” she emphasized. “It will make them loyal to you … give them something they can get behind.”

If you can’t retain talent, it will ultimately cost your business. Not only will you have to invest time and resources into vetting new candidates, you’ll have to show them the ropes. It costs about nine months of someone’s salary to get them on-boarded, Hutchinson said.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/cloud/tips-for-writing-better-infosec-job-descriptions/d/d-id/1330534?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Rising Dangers of Unsecured IoT Technology

As government regulation looms, the security industry must take a leading role in determining whether the convenience of the Internet of Things is worth the risk and compromise of unsecured devices.

Earlier this year, the Food and Drug Administration (FDA) recalled 450,000 pacemakers currently in use by patients, out of fear that these devices could be compromised. Although the agency said there has been no reported patient harm related to the devices, the FDA is rightly concerned that attackers will exploit pacemaker vulnerabilities and affect how a medical device works.

While this is perhaps one of the most potentially life-threatening examples of unsecured Internet of Things (IoT) security, it drives home the point that manufacturers are not building these devices with security as a priority. As IoT devices grow in popularity, seemingly endless security- and privacy-related concerns are surfacing. 

IoT Malware: Alive and Well
With more than 20 billion devices expected to be connected to the Internet over the next few years, it comes as no surprise that attackers are increasingly looking to exploit them. Large-scale events, like last October’s distributed denial-of-service attack targeting systems operated by Dyn, and warnings from security experts should have security professionals paying attention. But are they?

According to a recent Gartner report, by 2020, IoT technology will be in 95% of new electronic product designs. While this statistic demonstrates the success of IoT, it is also a precursor for alarm. As the adoption of IoT devices rises, manufacturers are competing to stay ahead. Creating cheap products quickly often means overlooking security and privacy measures.

In general, consumers need to have more control over privacy and how they use IoT devices (think of the pacemaker). Watches and other wearables, for instance, are good examples of devices that give consumers control: users can turn them off, take them off, and customize them. Other devices, however, such as a personal home assistant, can theoretically always be listening. In one case, according to a CNET report, a hostage victim was able to contact law enforcement through an Amazon Alexa device, despite the fact that Amazon says the technology doesn’t support “wake-up” action calls to outside phone lines.

National Security Issue?
The IoT Cybersecurity Act was introduced recently as an initiative designed to set security standards for the US government’s purchase of IoT devices. In order to steer clear of stifling innovation, the government doesn’t often insert itself into private sector manufacturing decisions. However, the proposed legislation signals that, at least in some quarters, IoT security is becoming a matter of national security. And, although this bill does not pertain to consumers, it is a step in the right direction by challenging manufacturers to prioritize IoT security and privacy in their engineering designs, and consumers, in their purchasing decisions.

At the end of the day, as consumers continue to embrace IoT technology, they should not have to sacrifice security and privacy for the convenience and enjoyment of a product and service. Instead, they should be able to decide how they use “things” and how they can control them. Until security and privacy measures are embedded in all devices, those of us in the security industry need to challenge ourselves by questioning whether the convenience is worth the risk and compromise of unsecured devices.


With 15-plus years of leadership experience implementing vendor security risk and assessment programs for startups and Fortune 500 companies, Jackson defines the security road map for SecureAuth’s suite of adaptive authentication and IS solutions. Prior to joining SecureAuth, …

Article source: https://www.darkreading.com/mobile/the-rising-dangers-of-unsecured-iot-technology--/a/d-id/1330518?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

PayPal paid $US233m for company that leaked 1.6 million records

PayPal has “identified a potential compromise of personally identifiable information for approximately 1.6 million customers.”

The good news is that PayPal is not to blame for the likely leak. Fault can instead be ascribed to TIO Networks, a Canadian payments outfit that PayPal paid US$233m to acquire in February 2017.

That deal closed in July 2017 and PayPal has since reviewed TIO’s systems and turned up problems that saw it suspend TIO’s operations on November 10th, 2017.

TIO’s canned statement said those efforts “uncovered evidence of unauthorized access to TIO’s network, including locations that stored personal information of some of TIO’s customers and customers of TIO billers.”

The company has not yet contacted all customers, billers and retailers affected by the leak, but has said it’s trying to do so as fast as possible.

In the interim, customers have been offered free credit checks and identity theft insurance. TIO’s FAQ also delivered the bad news that users have some administrivia to do:

At this point, TIO cannot provide a timeline for restoring bill pay services, and continues to recommend that you contact your biller to identify alternative ways to pay your bills. We sincerely apologize for any inconvenience caused to you by the disruption of TIO’s service.

“We will continue to communicate important updates to customers,” TIO promised in its canned statement. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/04/paypal_tio_data_breach/

Google to crack down on apps that snoop

Google has warned Android developers to give users better warnings about their apps’ data collection behaviours, or it will flag their failings.

Last Friday, the company announced revisions to Safe Browsing rules and “expanded enforcement of Google’s Unwanted Software Policy”.

If developers don’t comply within 60 days, Google said, it will warn users via Google Play Protect “or on webpages that lead to these apps”.

“Google Safe Browsing will show warnings on apps and on websites leading to apps that collect a user’s personal data without their consent”, the announcement said.

If an app handles either personal data (phone number, e-mail) or device data (such as IMEI number), developers will have to both prompt the user, and include a privacy policy in the app.

“Additionally, if an app collects and transmits personal data unrelated to the functionality of the app then, prior to collection and transmission, the app must prominently highlight how the user data will be used and have the user provide affirmative consent for such use,” the announcement said.

Google also warned developers against underhanded behaviour in crash reports: “the list of installed packages unrelated to the app may not be transmitted from the device without prominent disclosure and affirmative consent.” ®
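To make the policy concrete, here is a minimal sketch of the gate a compliant client might apply before a crash report leaves the device. The field names and policy set below are hypothetical illustrations, not part of any real Google API: the idea is that data unrelated to the app’s function (installed packages, IMEI, contact details) is stripped unless the user gave affirmative consent.

```python
# Hypothetical field names for illustration; the policy itself is the point:
# sensitive data only travels if the user has affirmatively consented to it.
SENSITIVE_FIELDS = {"installed_packages", "imei", "phone_number", "email"}


def filter_crash_report(report: dict, consented_fields: set) -> dict:
    """Return a copy of the report with unconsented sensitive fields removed."""
    return {
        field: value
        for field, value in report.items()
        if field not in SENSITIVE_FIELDS or field in consented_fields
    }
```

For example, a report containing a stack trace and the installed-package list would be reduced to just the stack trace when `consented_fields` is empty, matching Google’s requirement that such data not leave the device without prominent disclosure and consent.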

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/04/expanded_google_unwanted_software_policy/
