STE WILLIAMS

Connected car data handover headache: There’s no quick fix… and it’s NOT just Land Rovers

The perils of previous owners retaining unfettered access to the data and controls of connected cars after resale is a wider problem across the industry, The Register has discovered.

We have confirmed that BMW, Mercedes-Benz and Nissan may all have much the same issue as Jaguar Land Rover, the focus of our recent article on the topic.

Reg reader Howard B told us that BMW showed indifference when he pointed out that he was still connected to one of its vehicles even after he sold it on.


“I was still able to unlock and lock a previous vehicle I had owned, flash the lights, start the ventilation, etc, and see where the car was parked,” Howard told us. “Dealers should be making sure that the car is registered to a connected app account in their name so that the vehicle is no longer on a private individual’s account.”

Howard B said he was able to access this data for “at least” six months after the vehicle was sold on, and noted that if he’d been of a dishonest nature he could have used the information for dastardly means.

The car is now connected to another person’s ConnectedDrive account, but Howard said that when he raised his concerns with BMW Connected services and the dealership, “they weren’t interested”.

In response to an El Reg query, BMW offered an explanation of its connected car procedures. Drivers selling on internet-enabled BMWs should disconnect themselves from the car before a sale. This will happen anyway once the new owner hooks up with a BMW account, the car maker said.

The customers are able to delete all their BMW Connected app data with a click in the BMW Connected app. The data privacy policy tab in the BMW Connected app contains detailed information on data privacy for all services that explains to customers exactly how the data is used.

The customer need[s] to delete the mapped profile online at the ConnectedDrive account. Customers can delete the mapping via the Head-Unit and get a notification to delete the data online at the ConnectedDrive account as well.

Once a customer connects the car with a new ConnectedDrive account, all previous connections will be deleted.

New BMW owners are in a better position than newly minted Jaguar Land Rover owners, who cannot evict the previous owners from the data and controls of connected cars simply by connecting themselves: unlike with BMW, dealer action is needed in JLR’s case. Our tipster is nonetheless dissatisfied with BMW’s approach.
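The “last bind wins” behaviour BMW describes can be sketched as a simple VIN-to-account mapping in which connecting a new account displaces the old one. This is a minimal illustration, not BMW’s actual implementation, and the VIN and account names are made up:

```python
# Hypothetical sketch of "last bind wins" pairing: binding a car (by VIN)
# to a new account removes whatever account was bound before.
bindings = {}

def bind(vin, account):
    """Bind `account` to `vin`, returning the displaced account (if any)."""
    previous = bindings.get(vin)
    bindings[vin] = account  # the new owner evicts any earlier binding
    return previous

def can_access(vin, account):
    """Only the currently bound account can reach the car's data and controls."""
    return bindings.get(vin) == account
```

Contrast this with the JLR situation, where the stale binding persists until a dealer intervenes.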

“The vehicle is deleted from a previous owner’s connected drive account when a new owner adds the vehicle to their account, but if that new owner is not a technology type of person or does not know about apps then it will stay on the previous owner’s account,” Howard pointed out.

I was still able to unlock and lock a previous vehicle I had owned, flash the lights, start the ventilation … and see where the car was parked

He added that BMW’s approach relies on everyone following the car maker’s guidelines, a common criticism among several drivers we’ve spoken to about the topic.

Beep Mercs the spot…

Owners of other brands of connected car are also affected by much the same issue. Chris Rogers, a US-based hacker and transportation security expert, told The Reg it took a call to Mercedes-Benz to remove the previous owner’s info from a recently acquired second-hand S550.

We’ve also heard of someone who sold his previous car through a main dealer in the Netherlands more than a year ago but still has remote control over it, as previously reported.

Our initial article prompted further examples from Reg commenters.

“HWwiz” told us the JLR issue also affected newer Mercedes from approximately 2014 onward.

“If the last owner does not log in online and remove the car from their Mercedes Me account, then they can continue to remotely monitor the car, lock / unlock doors, etc,” our contact said.

“Non-Mercedes dealers have no control over this, whereas main dealers can terminate the accounts during re-sale.”

In a statement, Mercedes-Benz placed the onus on previous owners to un-register themselves when connected cars are re-sold.

The vehicle will stay available to the previous owner as long as they do not remove the vehicle from their Mercedes me connect account. The previous owner of the vehicle has agreed to the terms of use, prior to using the services, that he will remove the vehicle from his Mercedes me account when [he] sells the vehicle.

Since Mercedes-Benz is not always aware that the vehicle is sold we cannot proactively deregister the vehicle from the Mercedes me account. The new owner always has the ability to visit an official Mercedes-Benz dealership to have the vehicle deregistered and registered to his own account.

To answer your question, we consider the previous owner to be responsible for the removal of the vehicle from the account, this is also agreed upon in our terms of use (see section 7, 7.3 Obligations of the Customer). If the previous owner does not comply we offer the new owner ample options to protect his own data and privacy.

The issue of who controls the data on connected cars affects drivers of mainstream motors as well as luxury brands. Volvo, Nissan and possibly other brands such as Renault also seem to be affected.

Reg commenter “clhking” told us: “Our Volvo bought from a Volvo dealership was not unbound. But the subscription to Volvo On Call [had] expired. So the previous owner would have had to pay to retain access to our car. When I called to activate our account the VIN (vehicle identification number) was still bound to the previous owner.”

The Register asked Volvo if it had anything to say about the implied criticism that its procedure for selling on connected cars fails to block access to sensitive information and controls from previous owners. The car maker, which offered to help our reader, said that app unbinding was part of its resale process.

The comprehensive process covered under the Volvo Selekt approved used car programme does include a check that the previous owner has deactivated their links to the car.

UK infosec researcher Scott Helme told El Reg that he could access his Nissan Leaf connected car “for months” after he sold it on.

It’s like selling your phone

Used connected cars need disconnecting, as UK government cyber assurance agency NCSC pointed out after our initial report. Consumers have got used to the idea of factory-resetting their smartphone before selling it on. Cleaning out a car before resale is also a well-understood practice, but that traditionally applies only to the contents of the glove box, not to the data a connected car holds, which can include sensitive travel movements and more.

“Users are also familiar with the concept of a phone having and storing personal data [but] not with a car,” Helme told El Reg.

Other security researchers we’ve spoken to faulted car makers for failing to think the issue through when they rolled out the technology. The problem is not as simple as it might appear, however. A seemingly obvious solution, such as a button inside the car that is held for 10 seconds to disassociate the old owner from the system, could inadvertently help car thieves.

One Reg reader, “macjules”, said Tesla had come up with an example others might want to follow. “All they need is a functionality similar to Tesla. Go to Backup and Reset and select Factory Data Reset. Car is completely reset and new user can register.”

El Reg attempted to confirm with Tesla that this was how its system worked but we’ve yet to hear back. Security consultants with experience in connected cars expressed interest in the approach without endorsing it.

Although most respondents were critical of car makers in general, one reader countered that calls for automated connected car disassociation-on-sale functionality were unfair to car makers such as JLR.

“LeeE” said: “This is an unreasonable demand to make of JLR because any such automatic bullet-proof method would be dependent upon a similarly bullet-proof system/process whereby JLR is informed of the sale of any of their vehicles, including private sales.”

In general, problems arise when the seller of the vehicle fails to un-register their old account/vehicle association when they sell it. The situation is further complicated by the fact that it may not be the most recent seller but someone a few owners back that needs to have their access curtailed.

“It is the responsibility of the previous customer to disconnect and owners of cars with this tech will need to get used to checking their purchase has indeed been disconnected,” as one anonymous (coward) comment put it.

Car makers typically run the apps and manage the servers through which connected car services are delivered, making them “data controllers” under the General Data Protection Regulation. They are certainly data processors because they process personal information about owners and drivers of their cars. This could come to present legal peril for JLR and others.

Specialist IT solicitor Dai Davis has told El Reg that Jaguar Land Rover may run into GDPR regulatory issues over its role in the data held by connected cars and their resale. The same legal reasoning would apply to other car makers following the same practices.

Telematics

It could be that the telematics service platform (TSP) providers are at minimum partially culpable. “The TSP providers behind it all haven’t really figured out the problem properly,” one leading security consultant told El Reg. TSP firms such as CloudCar (strategic partner to JLR in the development of cloud-based infotainment), Kuantic and Harman (the Samsung-owned infotainment and connected car partner of BMW) work with a variety of car makers.

El Reg asked CloudCar and Harman to comment on whether they might be doing more to resolve the present situation around the sale of connected cars. We’ll update this story as and when we hear more.

At the suggestion of Volvo we also contacted the SMMT (The Society of Motor Manufacturers and Traders, a UK auto industry trade body) for comment. SMMT argued that although car makers have a responsibility for data processing, consumers also have a role to play by getting into the habit of removing their data and dissociating their smartphones when they sell on their connected cars.

Mike Hawes, SMMT chief executive, said: “Car manufacturers take privacy extremely seriously and customer consent underpins all personal data processing. While industry is committed to upholding a high level of customer data protection, including proportionate use of data, modern cars need to be treated the same as other connected devices.

“Owners should remove their digital information, and disable any associated online account, before selling a vehicle to another keeper. Personal data, including apps and paired mobile phones, can be removed from cars according to individual manufacturer instructions, giving peace of mind to motorists.”

That approach may seem fair enough but it still throws up problems. For example, commenter “andymcp” reports getting text messages about a car he’d sold on even though he’d disassociated his mobile from the motor and uninstalled the app.

“Having been through the process of unlinking a car during a private sale (not JLR), even if the app has an ‘end ownership’ option, it also likely comes with an in-car registration that’s entirely separate,” he explained. “Hence you still get phone calls when the new owner sets the alarm off. Or reinstall the app after getting an alarm notification call to find it’s been happily collecting data attributed to you for months. Or have a few buttons that offer you the chance to remote unlock, remote start, remotely activate the alarm, send destinations…”

Is it realistic to expect buyers of second-hand cars to know whether the car has been connected? The car industry’s response has been to put the onus on previous owners to delete their data, while minimising the responsibility of manufacturers to come up with a well-thought-through process and of dealers to enforce it.

“When I buy a car, I want to be able to make sure MYSELF it is no longer accessible to previous owners, not rely on their goodwill or attention to detail,” IT worker Mike Walters told El Reg, summarising the feelings of many drivers we’ve spoken to about the issue. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/21/connected_car_data_handover_mess/

TLS developers should ditch ‘pseudo constant time’ crypto processing

More than five years after cracks started showing in the Transport Layer Security (TLS) network crypto protocol, the author of the “Lucky 13” attack has poked holes in the fixes that were subsequently deployed.

Back in 2013, University of London Royal Holloway professor Kenneth Paterson popped the then-current version of TLS.

Lucky 13 was based on using fake message padding to induce timing changes in the processing of encrypted messages. Those differences in timing eventually gave an attacker enough information to compute the original message. It was fixed with patches to SSL libraries such as OpenSSL.

In the intervening years there’s been a lot of work on crypto processing, and that prompted Paterson to take a look at the patched versions – what he found isn’t encouraging.

In a paper published late last week at the International Association for Cryptologic Research (IACR), to be presented in October at the ACM SIGSAC Conference on Computer and Communications Security (CCS), Paterson said implementations that used the “pseudo constant time” approach to fixing Lucky 13 are still vulnerable.

The paper fingered “Amazon’s s2n, GnuTLS, mbed TLS and wolfSSL”, going on to say that the flaws “apply in a cross-VM attack setting and are capable of recovering most of the plaintext whilst requiring only a moderate number of TLS connections”.

“We consider our complete set of results surprising in the light of the huge amount of effort spent on correcting and verifying CBC-mode and HMAC implementations in TLS over the last 5 years,” the paper said. It added that “the vulnerable s2n HMAC code had also passed formal verification… nothing short of the full ‘belt and braces’ approach adopted in OpenSSL is sufficient to provide a robust defence against Lucky 13 style attacks in all their forms”.

The researchers’ attack on Amazon’s open-source implementation of the protocol, s2n, provides a good insight into how they worked. The team discovered an access pattern to a dynamically allocated memory location – “a buffer used to store part of the key in the HMAC [hash-based message authentication code] calculation”.

“For each handshake, we trace the cache set while processing valid messages, and find the cache set exhibiting the activity pattern we expect for the HMAC code,” the paper continued.

Access to the right part of memory is what provides the opportunity to launch a timing attack on the cryptography: the researchers split a message into two parts and hashed each part separately, watching how “the number of calls to the internal hash compression function might vary depending on the split point”.
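The split-point dependence is easy to see from SHA-256’s block structure: each 64-byte block costs one compression-function call, and the padding (a 0x80 byte plus an 8-byte length field) can push a message over a block boundary. A back-of-the-envelope sketch, ignoring the extra key block HMAC prepends:

```python
def sha256_compressions(msg_len):
    # SHA-256 appends a 0x80 byte plus an 8-byte length, then pads to a
    # multiple of 64 bytes; each 64-byte block costs one compression call.
    return (msg_len + 8) // 64 + 1

def split_cost(total_len, split_at):
    # Hash the two parts of a message as separate hashes, as in the attack;
    # the total compression count depends on where the split falls.
    return sha256_compressions(split_at) + sha256_compressions(total_len - split_at)
```

For a 60-byte message, splitting at byte 30 costs two compressions in total, while splitting at byte 56 costs three: that difference is the kind of signal the cache-timing side channel exposes.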

Here’s a simplified version of their s2n attack algorithm:

Algorithm 1: s2n simplified attack
 1: function SimplifiedS2NPadOracle(valid_msg, attack_msg)
 2:     xor_pad ← FindXorPadCache(valid_msg)
 3:     Prime(xor_pad)                      ▷ evict xor_pad's set from the cache
 4:     Send attacker's TLS record to target
 5:     Wait for verification error
 6:     if Probe(xor_pad) then
 7:         return 1                        ▷ buffer was accessed
 8:     else
 9:         return 0                        ▷ buffer was not accessed
10:     end if
11: end function

The researchers notified the relevant package maintainers of the issues. WolfSSL was patched in June 2018; mbed TLS has implemented an interim fix while it works on full patches; GnuTLS has implemented partial patches, and advised users to use Encrypt-then-MAC if users require legacy cipher suites; and Amazon’s s2n team will remove CBC-mode cipher suites and take code from BoringSSL to replace its own CBC-mode decryption.

Since researching this kind of attack demanded a close reading of the relevant source code, the researchers noted that they also notified developers of other serious “but easy to patch” bugs in their TLS implementations. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/21/tls_developers_should_ditch_pseudo_constant_time_crypto_processing/

The Uncertain Fate of WHOIS, & Other Matters of Internet Accountability


Paul Vixie discusses the uncertain fate of WHOIS in the age of GDPR, the risks of domain name homographs, and other underpinnings of the Internet that are hard to trust and harder to fix.




Augusta University Health Reports Major Data Breach

Over 400K individuals affected by the breach, which was the result of a successful phishing attack that occurred in September 2017.

Augusta University Health said it was hit with a data breach that exposed the personal information of some 417,000 patients, faculty, and students at the Georgia institution.

Names, addresses, dates of birth, lab test results, diagnoses, medications, surgeries, and health insurance information were among the data exposed, as well as a “small percentage” of driver’s license and Social Security numbers, according to the hospital system, the HIPAA Journal reported. Most of the breach victims were patients of Augusta University Medical Center and Children’s Hospital of Georgia, but the breach also exposed patient data from 80 outpatient clinics in the state.

Augusta University Health actually discovered the initial attack on Sept. 11, 2017, after several employees fell for a phishing attack and disclosed their login credentials. It wasn’t until July 31, 2018, that third-party incident investigators alerted the hospital system of the personally identifiable information (PII) exposure. 

This isn’t the first breach suffered by Augusta University Health, either: It has been hit by two previous attacks, affecting data of 10,300 people, the report said.

Read more here

Learn from the industry’s most knowledgeable CISOs and IT security experts in a setting that is conducive to interaction and conversation. Early bird rate ends August 31. Click for more info

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/augusta-university-health-reports-major-data-breach/d/d-id/1332607?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Updates: Cloud HSM Beta, Binary Authorization for Kubernetes

Google’s latest cloud security rollouts include early releases of its cloud-hosted security module and a container security tool to verify signed images.

Google is kicking off its week with a few cloud security updates: the beta release of Cloud HSM, a managed cloud-hosted hardware security module (HSM) service, and the introduction of binary authorization for its Google Kubernetes Engine to secure production infrastructure.

The idea behind Cloud HSM is to give Google Cloud Platform (GCP) users another option to protect their sensitive data and meet compliance requirements, explains product manager Il-Sung Lee in a blog post. Users can host encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs to protect workloads without managing an HSM cluster.

HSM clusters require management, scaling, and upgrades, contributing to operational overhead. The Cloud HSM service is managed via regular Cloud KMS APIs, and it handles patching, scaling, and clustering without the added downtime, Lee writes.

Because the service is integrated with Google Cloud key management service (KMS), users can secure data in Google Compute Engine, BigQuery, Google Cloud Storage, DataProc, and other encryption key-enabled services with a hardware-protected key. On the compliance side, users will be able to verify cryptographic keys were created within the hardware boundary.

In addition to the beta release of Cloud HSM, Google is also announcing asymmetric key support for both Cloud HSM and Cloud KMS. Now, in addition to creating symmetric key encryption using AES-256 keys, users can create different types of asymmetric keys for signing processes or decryption. Lee reports RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 keys will be available for signing; RSA 2048, 3072, and 4096 keys can decrypt blocks of data.

Binary Authorization for Google Kubernetes Engine
Google is also rolling out a beta release of Binary Authorization for its Kubernetes Engine, reports product manager Jianing Sandra Guo, so users in enterprise security and DevOps can trust content running on their production infrastructure.

Binary Authorization is a container security feature baked into the Kubernetes Engine deployment API, Guo writes in a blog post on the news. Its purpose is to provide a “policy enforcement chokepoint” so only signed and authorized images are used in the environment.

It’s especially handy in the age of containerized microservices, she explains. Many businesses run hundreds to thousands of jobs in production, often containing valuable data. While they could use identity-based control to restrict which people can deploy, this strategy relies on human operational knowledge that can’t be scaled for businesses with automated build and release structures, running hundreds of deployments each day across dozens of teams.

Binary Authorization runs on three principles, Guo says: establishing preventative security by only running trusted code, simplifying governance with a single path for code to move from development to production, and using open source to keep CI/CD tools interoperable. She also adds the feature is based on internal Google tech the company uses to protect deployments.

How it works: Binary Authorization integrates with desired CI/CD stages to produce signatures as images pass through, and it blocks those that don’t meet the organization’s criteria. On top of signature-based verification, the tool also lets users whitelist images using name patterns.

Unpatched third-party software is a common source of production vulnerability, Guo explains. Whitelisting lets users specify a repository, path, or set of images that are allowed to deploy, limiting the opportunities for compromise via third-party images. This option provides a centralized list of third-party images so users can identify which are vulnerable.
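Name-pattern whitelisting of this sort can be sketched in a few lines. The patterns and image names below are hypothetical, and this is not Binary Authorization’s actual policy syntax:

```python
from fnmatch import fnmatch

# Hypothetical allow-list of image name patterns, in the spirit of
# Binary Authorization's name-pattern whitelisting.
ALLOWED_PATTERNS = [
    "gcr.io/my-project/*",           # anything from our own repository
    "gcr.io/trusted-vendor/nginx*",  # a pinned third-party image family
]

def image_allowed(image):
    """Return True only if the image name matches an allowed pattern."""
    return any(fnmatch(image, pattern) for pattern in ALLOWED_PATTERNS)
```

Anything outside the listed repositories is refused at deploy time, which is the “policy enforcement chokepoint” idea in miniature.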

If you want to review failed deployment attempts, Binary Authorization also integrates with Cloud Audit Logging to record failures for further analysis, Guo adds.

With today’s beta release, users can create Kubernetes Engine clusters with Binary Authorization to access deploy-time policy controls. Users can set “attestors,” or authorities to verify images. Deployment policies can be set at both project and cluster levels for different levels of control — for example, if you want separate policies for dev and compliance clusters.



Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/google-updates-cloud-hsm-beta-binary-authorization-for-kubernetes/d/d-id/1332608?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ohio Man Sentenced to 15 Years for BEC Scam

Olumuyiwa Adejumo and co-conspirators targeted CEOs, CFOs, and other enterprise leaders in the US with fraudulent emails.

Chief US District Judge Janet Hall last week sentenced Olumuyiwa Adejumo to 15 years in federal prison for his role in a business email compromise scheme targeting organizations in the United States. His sentence will be followed by 3 years of supervised release.

Adejumo, also known by a slew of aliases, including “Ade,” “Slimwaco,” “Waco,” “Waco Jamon,” “Hade,” and “Hadey,” teamed up with co-conspirator Adeyemi Odufuye and others to target CEOs, CFOs, and other corporate leaders with fraudulent emails. Their messages were crafted to appear as though they came from the legitimate email addresses of business executives.

The actors sent fake emails with the goal of having recipients send or wire money to bank accounts they controlled. Investigators found they controlled multiple email and social media accounts related to the scheme; in some cases, they sent malicious attachments to targets.

Adejumo admitted his role in the scheme caused total losses exceeding $100,000 to at least three organizations. He was ordered to pay $90,930 in restitution.

Read more details here.



Article source: https://www.darkreading.com/attacks-breaches/ohio-man-sentenced-to-15-years-for-bec-scam/d/d-id/1332614?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

So phar, so FUD: PHP flaw puts WordPress sites at risk of hacks

Bsides Manchester A newly discovered WordPress flaw has left installs of the ubiquitous content management system potentially vulnerable to hacking.

Security shortcomings let attackers exploit weaknesses within WordPress’s PHP framework, allowing already registered users without admin privileges to run exploit code, infosec consultancy Secarma has warned.

The hole offers a previously undiscovered way to expose “unserialization” in the platform’s code using a combination of XML external entity (XXE) attacks and server-side request forgery (SSRF).

To make the attack work, a miscreant would need to upload a booby-trapped file onto the target application, then trigger a file operation through a crafted file name (that accesses the file through the phar:// stream wrapper), causing the target application to “unserialize” metadata contained in the file.

The flaw by itself would not allow an attacker to break into a targeted system and only expands the scope for mischief once a toehold on targeted systems is obtained through some other means.

Unserialization of attacker-controlled data is a known class of vulnerability that is liable to lead to the execution of malicious code. German security researcher Stefan Esser first documented the class of flaw 10 years ago.
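The hazard is not unique to PHP. Python’s pickle module, for instance, will happily invoke a callable chosen by whoever crafted the payload. Here is a deliberately benign sketch of the same class of flaw, using the analogous Python mechanism rather than the PHP one the research targets:

```python
import pickle

class Payload:
    # pickle calls __reduce__ to decide how to reconstruct the object;
    # a crafted payload can return ANY callable plus its arguments.
    def __reduce__(self):
        return (sorted, ([3, 1, 2],))  # benign stand-in for os.system etc.

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the callable runs during unserialization
```

The point is that simply *loading* the data executes attacker-chosen code, which is why both Python’s documentation and the PHP research treat unserializing untrusted input as inherently dangerous.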

Secarma’s research demonstrates a new technique that allows an attacker to escalate a class of vulnerability previously considered relatively benign into one that can have severe impact.

WordPress was informed of the issue in February 2017 but has yet to take action, according to Secarma. The PDF generation library TCPDF is similarly vulnerable. Content management system Typo3 was vulnerable until early June, when it released updates to protect users.

Research into the vulnerability was presented by Secarma’s Sam Thomas at Thursday’s BSides cybersecurity conference in Manchester, UK – days after it was first unveiled at Black Hat in Las Vegas last week. His presentation (video below) was entitled It’s A PHP Unserialization Vulnerability Jim, But Not As We Know It; the section between the 30- and 38-minute marks concentrates on the WordPress issue.

[YouTube video]

A white paper, File Operation Induced Unserialization via the phar:// Stream Wrapper (PDF), explains the issue in more depth.

Thomas told El Reg immediately after his Manchester gig that he had reported the serious PHP-related vulnerability in WordPress through HackerOne – which runs its bug bounty programme – months ago but despite this the vuln had not been properly resolved. El Reg contacted both WordPress and HackerOne for comment.

We have yet to hear back from WordPress. HackerOne confirmed it worked with WordPress but declined to offer anything much beyond that.

“Due to our confidentiality obligations to our customers, HackerOne does not comment on customer bug bounty programs,” the outfit told El Reg.

Thomas said the WordPress flaw involves a “subtle vulnerability in thumbnail processing which allows an attacker to reach a ‘file_exists’ call with control of the start of the parameter”.

As things stand, the objective scope of the vulnerability and how easy it might be to exploit is unclear. Thomas’s presentation contained a number of caveats omitted from Secarma’s press release about the presentation, which boldly claimed the flaw left “30 per cent of the world’s top 1,000 websites vulnerable to hacking and data breaches”.

After careful analysis and a review of available material, El Reg‘s security desk has concluded claims of a “massive WordPress vulnerability” are a load of tribble’s testicles.

There’s an issue here but the premise that millions of websites are at risk of “complete system compromise” above and beyond the general widely known risk of running WordPress hasn’t been substantiated by Secarma, a security business owned by hosting outfit UKFast.

WordPress hasn’t issued a patch and we have no information about mitigation from the CMS vendor to go on either. During his presentation Thomas said that the “issue is only exposed to authenticated users… they are certainly not supposed to be able to execute [code]”.

In the absence of a fix, WordPress users need to be careful about new accounts that are author level and above, Thomas advised. These accounts should be locked down because the now-public hacking technique can be used to elevate privileges to admin. “Ultimately it’s an issue within PHP,” Thomas said, adding during a Twitter exchange that “the issue works against the default configuration of WordPress and PHP, [as far as I know] it is not dependent on network or system setup”.

Chinese researcher Orange Tsai had discovered the same problem, Thomas acknowledged during his Manchester presentation.

WordPress is widely used by bloggers, news outlets and all manner of businesses as a content management system. It’s no stranger to security problems of one sort or another, to put it mildly. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/20/php_unserialisation_wordpress_vuln/

SuperProf gets schooled after assigning weak passwords to tutors

Updated Private tutor networking website SuperProf has irritated teacher clients of a firm it recently acquired – by handing out hopelessly insecure passwords.

SuperProf, headquartered in Paris, recently bought UK-based Tutor Pages. Tutor Pages teachers have been migrated to the SuperProf platform but details of their fees, subjects, location and student testimonials have not come over with them.

So would-be students of language tuition in Lincoln, for example, can’t presently find local tutoring help through the platform. Even those looking for online tutoring will not be able to search for teachers with the right qualifications to suit their needs, rather defeating the purpose of the SuperProf platform. Some tutors have asked for their money back and complaints are rife on social media.

Tutors have been further irked by the temporary passwords assigned to newly migrated users: SuperProf simply shoved the word “super” in front of each user’s first name.

Yes, you read that right.
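To show how little effort a safe alternative takes, here is a hypothetical sketch (not SuperProf’s actual code) contrasting the reported scheme with a randomly generated temporary password drawn from Python’s `secrets` module:

```python
import secrets
import string

def weak_temp_password(first_name: str) -> str:
    # The scheme tutors reported: "super" plus the user's first name.
    return "super" + first_name

def strong_temp_password(length: int = 16) -> str:
    # A random string from a cryptographically secure source -- roughly
    # what SuperProf says it now issues to affected accounts.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(weak_temp_password("lisa"))  # superlisa -- guessable from a public profile
print(strong_temp_password())      # a fresh 16-character random string
```

The difference matters because tutors’ first names are printed on their public profiles, so the “super” scheme is guessable by anyone browsing the site.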

A number of tutors complained to infosec veteran and privacy advocate Graham Cluley. “Superprof… has made its newest members’ passwords utterly predictable… leaving them wide open to hackers,” he wrote.

Clarinetist Lisa, who contacted Cluley to complain about the password and to claim that SuperProf had altered her profile, was livid.

“They changed my hourly rates, listed as ‘first lesson free’, which I can’t remove unless I pay to upgrade and changed my password to something totally hackable,” she said. “They’ve also removed all my student testimonials and my website link, which I’d paid for.”

El Reg emailed the tutoring site on Friday, asking for comment on the situation. We’re yet to hear back but SuperProf responded to Cluley at least, telling him that it had sorted out the password mess it had quite unnecessarily brought on itself and its users.

“They are replacing affected passwords with random chars, and resending email instructions,” Cluley said. ®

Update

SuperProf has been in touch since publication to say that it had already reset passwords, adding that it was in the process of repopulating tutor profiles, a particular focus of complaints.

At Superprof we take security seriously and know how key it is to the running of our business.

As Graham told you we have taken action to reset all the passwords from migrated tutors accounts with random string characters (as of 4:47pm on Friday 17th August 2018).

We also sent emails to all tutors from The Tutor Pages explaining migration corrections and password reset. We also encourage users to connect to their account to modify their password.

We are also holding a backup of all tutor profiles from The Tutor Pages in case tutors would like us to re-migrate, or update information initially present in their TTP profile, that was not migrated to Superprof.

Regarding issues with tutor profiles, we are aware that some information was not correctly transferred and we are working hard to correct this. All tutors from the tutor pages will be given a year’s premium membership on Superprof and have their accounts updated ‘star’ tutor status, that usually requires many months of activity to achieve on the platform.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/20/superprof_password_security_fail/

How Better Intel Can Reduce, Prevent Payment Card Fraud


Royal Bank of Canada machine learning researcher Cathal Smyth and Terbium Labs chief scientist Clare Gollnick discuss how they use intelligence about the carding market to predict the next payment card fraud victims. Filmed at the Dark Reading News Desk at Black Hat USA 2018.




Data Privacy Careers Are Helping to Close the IT Gender Gap

There are three main reasons why the field has been more welcoming for women. Can other tech areas step up?

It’s common knowledge that there’s a huge gender-based disparity in technology. The most pertinent questions are why and what can be done to close that gap.

In the past few years, a series of studies have explored these questions and emerged with compelling, quantifiable findings. Among other insights, they’ve revealed the common assumption that women are less interested in tech careers to be false. As a 2016 CompTIA study illustrated, the problem isn’t one of interest, but of awareness: From a young age, women aren’t given the same exposure to tech career paths as their male counterparts.

Beyond overturning assumptions about women’s disinterest in pursuing tech careers, these studies have revealed two much more potent causal factors behind the gap: Deeply entrenched sexism within the tech field and the very real gender-based pay gap. As a study conducted by Hired revealed, women in tech earn 4% less for the same work, on average.

Yet there’s one sector of technology jobs where the gender gap doesn’t exist: data privacy. In stark contrast to almost every other tech role, data privacy is unique in its 50/50 split of female and male professionals. What is it about the data privacy career path that levels the playing field? And what lessons can be applied to other tech subsectors?

An Equal Playing Field
Within the tech sector, data protection and privacy is a burgeoning field. It’s gained particular momentum over the past few years, amid large-scale enterprise data breaches and intensifying national and international conversations about the data privacy duties of both enterprises and governments to protect individuals’ private data.

In the European Union, this conversation resulted in concrete policy change, with the emergence of the EU’s General Data Protection Regulation (GDPR), which imposes massive fines for companies that breach established data standards. While the EU is outpacing the US on the data protection front, we may soon see similar policy changes in the states — especially in the wake of high-profile personal data trust breaches such as the ongoing Facebook data harvesting debacle.

But what’s bad for Facebook is decidedly great for the data privacy industry. Within the sector, there’s enormous job opportunity. According to AvePoint and the Center for Information Policy Leadership’s second annual GDPR readiness report, roughly one-third of organizations surveyed said they’re building out staff to prepare for GDPR implementation. And of the many available roles, women and men alike are filling them at an equal pace. So, what makes data privacy such a level playing field in terms of gender?

● There’s pay parity: The most important step to closing the tech gender gap is ensuring pay parity across all roles. On that front, the tech industry as a whole continues to fall short. But that’s not the case for data privacy. As a 2015 privacy industry report revealed, there is only a nominal gender-based difference in pay. Instead, certifications play a much bigger role in determining salary.

● It’s a new(ish) profession: The notion of a data privacy professional has only entered into the tech lexicon within the past few years. From a gender standpoint, this is significant. In contrast to other tech roles like developers and programmers, there’s not a popular perception of who should fill data privacy roles, nor have subcultures emerged that implicitly exclude a group or groups of people. 

● There’s not a glass ceiling: The tech gender problem won’t be solved with just a 50/50 split; there has to be equity up the ladder as well. And that’s a big issue in many tech subsectors, where women secure jobs but then don’t see a path for growth relative to their male counterparts. In data privacy, the data shows that growth is possible and is not gender-dependent: Among privacy professionals at the vice president and C-level, there’s a nearly even gender split.

As the data privacy sector continues to grow — and grow equitably — other tech fields would do well to chart and follow its course. Working ardently to close the gender pay gap and offering equal work and career advancement opportunities to women and men alike are just two of the ways industry leaders in the tech space can move toward closing the gender gap completely.

Tech industry leaders can look to the data privacy industry as an example of what happens when stereotypes, toxic subcultures and pay inequities are taken off the table. What’s left is the work — and when it comes to doing that work, women and men gravitate toward it at the same rate, and rise through the ranks at the same pace.   


Dana Simberkoff is the Chief Compliance and Risk Management Officer at AvePoint, Inc. She is responsible for executive level consulting, research and analytical support on current and upcoming industry trends, technology, standards, best practices, concepts and solutions for … View Full Bio

Article source: https://www.darkreading.com/risk/data-privacy-careers-are-helping-to-close-the-it-gender-gap/a/d-id/1332540?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple