
With New SOL4Ce Lab, Purdue U. and DoE Set Sights on National Security

The cooperative research initiative brings together faculty and students to “focus on problems and cutting-edge ways to solve them.”

Image: denizbayram/Adobe Stock

The field of cybersecurity has evolved far past its original roots as a discipline focused exclusively on protecting computer systems. As just about anyone who works in the industry knows, a career in cybersecurity today requires knowledge about physical systems, human behavior, and business strategy, as well as many other elements beyond simply bits and bytes.  

The work at Purdue University’s Center for Education and Research in Information Assurance and Security (CERIAS) is a true example of the scale and breadth of security research in 2020, says managing director Joel Rasmus.

“CERIAS is a collection of 135 faculty at Purdue, and they come from 18 academic departments across eight colleges. It’s across multiple departments that are both deeply technical as well as behavioral,” Rasmus explains. “We don’t just say we are interdisciplinary; we have embraced it. On some projects, for example, we have had psychologists involved to identify insider threats and social engineering.”

CERIAS now hopes to take those efforts to educate students on security in a new direction that could impact national security efforts. The new Scalable Open Laboratory for Cyber Experimentation (SOL4CE), launched last month, is a collaboration between CERIAS and the US Department of Energy’s Sandia National Laboratories. While the two have had a relationship for a decade, the lab now gives them the opportunity to deepen both the speed and impact of their research efforts.

“It comes down to a foundation for new innovation,” says Kamlesh Patel, manager of Purdue partnerships at Sandia National Laboratories, “beginning with getting the professors who are on the leading edge of these areas and have them focus on problems and cutting-edge ways to solve them.”

L-R: Lori Floyd, CERIAS office manager, Purdue University; Dr. Ken Patel, manager of Purdue partnerships, Sandia National Labs; Dr. Dongyan Xu, CERIAS director

Building a Pipeline of Talent
SOL4CE will also be a resource to faculty and students for quick modeling and simulation of cyber and cyber-physical systems — a critical component as the security industry struggles to fill open positions, Rasmus notes.

“There is a shortage of good cybersecurity talent out there, to the extent that by the time our students are putting together resumes, they already have job offers,” he says. “Sandia wanted to become involved in what we are doing on campus.”

To that end, Sandia sponsors cybersecurity competitions to raise more awareness of the work involved in the field, and facilitates internships and externships.

“Much of what we are working on here are critical skills areas in security. Computer science, data science, AI – these are all emerging fields that are cross-cutting to national security problems,” Patel says. “But there is a lack of talent, and we need to be thinking about our future workforce.”  

As part of the efforts to develop a pipeline of talent from the collaboration, Patel has taken up residence at Purdue. He says he’s “on loan,” so to speak, from Sandia in order to absorb the academic culture.

“It builds bridges and cultures in a meaningful way,” he says.

More Than Just a Sandbox
Rasmus says Patel’s presence is also key for one of the other goals of the lab: to speed up impactful research on national security. Because Sandia is always engaged in work on security tools for national security, the collaborative lab means taking some of those ideas into the public domain so students and faculty get exposure to them.

“It’s a wide-open program that Sandia has put here that allows us to say, ‘Let us take this tool and see what else it can solve and how we can help make the nation safer,'” Rasmus says.

Some of the lab’s initial research includes looking at airline system vulnerabilities due to their level of interconnection. The lab is also working on research into the potential for various attacks on US nuclear power plants. They’ve coined the term “emulytics,” a blend of “emulation” and “analytics,” to describe the lab’s capabilities. The lab, Rasmus says, runs real scenarios and aims to extract data that can be used to improve and protect real-world systems.

“It’s not just, ‘Let’s build a virtual machine and see what happens,'” he says. “It’s a much greater tool than just a playground.”  


Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online. View Full Bio

Article source: https://www.darkreading.com/edge/theedge/with-new-sol4ce-lab-purdue-u-and-doe-set-sights-on-national-security/b/d-id/1337225?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Avoiding the Perils of Electronic Communications

Twitter, Slack, etc., have become undeniably important for business today, but they can cause a lot of damage. That’s why an agile communications strategy is so important.

One of the more difficult and time-consuming exercises for security leaders is to analyze their company’s electronic communications channels and work to codify and implement processes that take into account proper security hygiene. In my experience, there is no one-size-fits-all approach because every company communicates in different ways and uses different tooling.

Due to the proliferation of collaboration tools and social media applications, it’s possible you don’t even realize how many tools your employees are using to communicate. For example, your CEO’s calendar probably shouldn’t be publicly available to the entire company as there can be significant risks from free access to this information. Because a calendar is a trusted application, you likely wouldn’t think twice about clicking on a link from a known source.

Evolution of Social Media
To be candid, social media applications have turned electronic communications into a difficult beast for CISOs to tackle. Take Twitter. This single application lets you reach global audiences instantly. While Twitter can be used as a mouthpiece to quickly disseminate news and spread awareness, there have been major downsides, and our society has yet to fully understand the ramifications of these.

One of the most notable incidents occurred in 2013, when a single tweet from the Associated Press’s verified account claimed that there had been explosions at the White House and that President Obama had been injured. A hacking group claimed responsibility, and the resulting stock market nosedive erased over $136 billion in equity market value within three minutes. The fact that one tweet could do this much damage was a wake-up call that we need to think long and hard about how systems are designed to curb potential abuse.

Additionally, any organization with sensitive intellectual property should take into account the lengths that sophisticated actors will go to breach its electronic communications — especially social media — including the use of insiders. For example, in late 2019, it was reported that two former Twitter employees were working for Saudi Arabia to spy on targeted users. It’s vital to account for these channels in employee training. While they might not associate Twitter, Instagram, or Facebook with a work-related threat, given the trust we place in our favorite social media apps, vulnerabilities in them can be leveraged by skilled adversaries as a foothold into an organization’s network.

While some might think of traditional electronic communications threats as simply email phishing attempts, there are dozens of channels that a CISO must consider when setting company policies. Given the impact a single tweet or post can have, access to these applications for your C-suite and senior leaders should be locked down and limited to as few people as possible. Additionally, best practices such as implementing two-factor authentication will help to protect your organization.
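For those accounts, two-factor authentication often means a time-based one-time password (TOTP) app. As a rough, generic illustration of the mechanism rather than any particular vendor’s setup, the sketch below computes an RFC 6238 code from a shared secret; the secret shown is just a placeholder example.

```python
# Minimal RFC 6238 TOTP sketch (illustrative only; the secret is a placeholder).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password from a base32-encoded shared secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval          # time step since the Unix epoch
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints the current 6-digit code
```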

Communication Policies Must Be Agile
At MongoDB, our most-used communications tool is Slack. The Slack platform is vital to asynchronous work with a global employee base and, in total, over 50 people were involved in the process of writing our new policy before the final guidelines were shared companywide. We consulted representatives from different teams across the company to get feedback on policies and wording to make sure it would resonate with everyone.

This might not be a surprise, but the feedback from members of our engineering teams was that there should be no ambiguity in the policy. It was important to set a policy that was very prescriptive without sounding condescending. We also incorporated different data retention standards for things such as attachments, direct messages, and all communication in public versus private channels.

It’s important to educate our employees on data classification. As part of our company data security policy, we classify data into four groups.


Having a prescriptive and thorough data security policy available to all employees as a living document provides a valuable resource for asynchronous work. Ongoing education throughout the year helps build a secure culture and keeps this information top of mind for employees. That can be as simple as a quarterly email or addressing security-related questions at our monthly all-hands meeting.

Why Security Enables Innovation in Our API World
Given our roots as a developer company, our modern software development tooling runs almost entirely through APIs. These integrate into Slack, creating alerts and additional communication channels. While these integrations are hugely helpful, the best way to account for security is to have each potential application vetted for security hygiene and assessed by our procurement and security teams before it is integrated into the network.

Identity and access management for your APIs in the cloud is vital whether you’re developing software or working on a different team. For instance, someone who isn’t on an engineering team at MongoDB likely doesn’t need access to our GitHub API in Slack. If there is an ad hoc reason, that request can go through the proper protocols to authorize only that user.
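As a minimal sketch of that kind of gate (the team names and Slack user IDs below are hypothetical), an integration can simply refuse to forward GitHub API actions for anyone outside an allowed group, with ad hoc exceptions granted per user:

```python
# Hypothetical per-team authorization check for a Slack-to-GitHub integration.
ENGINEERING_TEAM = {"U123ALICE", "U456BOB"}   # Slack user IDs on the engineering team
AD_HOC_EXCEPTIONS = {"U789CAROL"}             # individually approved, time-boxed access

def may_use_github_integration(slack_user_id: str) -> bool:
    """Return True if this Slack user is allowed to trigger GitHub API actions."""
    return slack_user_id in ENGINEERING_TEAM or slack_user_id in AD_HOC_EXCEPTIONS

for user in ("U123ALICE", "U999DAVE"):
    print(user, "allowed" if may_use_github_integration(user) else "denied")
```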

We believe identity and access management not only keeps us secure but also fosters greater innovation. Implementing secure processes into workflows and maintaining agile policies for your organization’s tooling are key parts of a security leader’s job, but don’t be surprised at how difficult and time-intensive that work is.


Lena joined MongoDB with more than 20 years of cybersecurity experience. Before joining MongoDB, she was the global chief information security officer for the international fintech company, Tradeweb, where she was responsible for all aspects of cybersecurity. She also served … View Full Bio

Article source: https://www.darkreading.com/operations/avoiding-the-perils-of-electronic-communications/a/d-id/1337131?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

XSS plugin vulnerabilities plague WordPress users

Several widely used WordPress plugins have been hit with a swathe of cross-site scripting (XSS) vulnerabilities that could give attackers complete control of sites. One of the affected plugins was designed to work with the popular WordPress ecommerce system WooCommerce.

Researchers at NinTechNet found a vulnerability in the WordPress Flexible Checkout Fields for WooCommerce plugin, which enhances the popular WordPress ecommerce system with the ability to configure custom checkout fields using a simple user interface.

The flaw, which the authors at WPDesk subsequently blogged about, enabled attackers to add two custom fields to the Billing and Shipping sections of a WooCommerce page. These fields injected a script that, once run, enabled the creation of four new administrative accounts using predefined email addresses. An existing site admin would have to visit either the plugin configuration screen or the checkout page for these accounts to be set up.

At this point, the infected site would also download a zip file and store it in the site’s content upload section, extracting PHP pages from it and installing them in the plugin section as Woo-Add-To-Carts.

WordPress firewall and malware scanning company Wordfence rated this vulnerability as critical in its own blog post. It added that the exploit was possible due to an XSS flaw on pages accessed by the site admin. The root cause was that the plugin didn’t check authentication or user capabilities in updateSettingsAction, a function hooked into admin_init, which runs whenever an admin page is loaded, including pages that don’t require authentication. Wordfence’s researchers said:

By crafting an array of expected settings, attackers can inject JavaScript payloads into the elements that render onscreen.
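To make the bug class concrete, here is a deliberately simplified sketch, written in Python with Flask rather than the plugin’s actual PHP: a settings endpoint with no authorization check, and an admin page that renders the stored value into HTML unescaped. The route names and the “fixed” handler are illustrative only.

```python
# A deliberately simplified sketch of the bug class (not the plugin's actual code):
# a settings endpoint with no authorization check, plus an admin page that renders
# the stored value into HTML without escaping it -- the recipe for stored XSS.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)
settings = {"banner": "Welcome"}

@app.route("/update-settings", methods=["POST"])
def update_settings():
    # VULNERABLE: nothing checks that the caller is an authenticated admin,
    # so anyone can store "<script>...</script>" as the banner text.
    settings["banner"] = request.form.get("banner", "")
    return "saved"

@app.route("/admin")
def admin_dashboard():
    # VULNERABLE: the stored value is interpolated into the page as-is,
    # so an injected script runs in the administrator's browser.
    return f"<h1>Dashboard</h1><p>{settings['banner']}</p>"

@app.route("/admin-fixed")
def admin_dashboard_fixed():
    # The fix is twofold: require authorization before accepting settings
    # (omitted here for brevity) and escape anything rendered into HTML.
    return f"<h1>Dashboard</h1><p>{escape(settings['banner'])}</p>"

if __name__ == "__main__":
    app.run()
```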

Wordfence discovered several other WordPress plugin vulnerabilities. The other bug severe enough to get a critical rating cropped up in 10Web Map Builder for Google Maps. It is similar to the bug in Flexible Checkout Fields in that it lets an attacker inject JavaScript into settings values that are then loaded during admin_init. Like the first bug, it runs on pages that don’t require authentication, and it also runs for some front-of-site visitors, Wordfence added.

Two other bugs surfaced by Wordfence got a high severity rating. The first was in the Async JavaScript plugin, which speeds up page load times by deferring JavaScript that would otherwise delay page rendering. Similar to the flaw in Flexible Checkout Fields, the plugin failed to check user capabilities in a built-in function, enabling attackers to inject malicious JavaScript that triggered when admins viewed certain areas of the dashboard, Wordfence said.

The final bug affected the Modern Events Calendar Lite plugin, which helps people manage events. This plugin uses several actions for logged-in users with low privileges that manipulate settings data. Attackers have been injecting XSS code to target admin pages and create rogue accounts for themselves, Wordfence said. It was also possible to hit the front page of an affected site to target visitors, it added.

These flaws have all been patched, but each affected a heavily used plugin. Flexible Checkout Fields for WooCommerce and 10Web Map Builder each have over 20,000 active installations, while Modern Events Calendar Lite has over 40,000 and Async JavaScript has over 100,000. In some cases, users reported hacked sites.

Anyone using these plugins should patch immediately using the blogging software’s built-in update system or by visiting the plugins’ download pages.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Bjh4-1ye-9E/

Nvidia patches severe flaws affecting GeForce, Quadro, NVS and Tesla

Denial of service, local escalation of privileges, and information disclosure are not security worries most computer users will associate with their racy graphics card or its drivers.

And yet fixes for precisely these issues are part of February’s Nvidia GPU display driver update. All of the flaws could be used to compromise Windows or Linux PCs by an attacker who has already gained local access, for example via malware.

In all, the update covers five desktop CVE vulnerabilities, including one, CVE‑2020‑5957, rated as critical. This is a flaw in the Windows GPU display driver control panel for the GeForce, Quadro, NVS, and Tesla products that could lead to a corrupt system file and escalation of privileges or denial of service.

A second control panel flaw affecting the same products is CVE‑2020‑5958, which might allow the planting of a malicious DLL file with the same results as above along with information disclosure.

The Virtual GPU Manager gets three fixes addressing CVE‑2020‑5959, CVE‑2020‑5960, and CVE‑2020‑5961, with the first of these rated critical.

Nvidia is also readying separate updates for its enterprise products, namely the Virtual GPU Manager (various hypervisors), and vGPU graphics driver for guest OS (Windows and Linux), which is also affected by some of the above flaws.

Depending on the driver version affected, these will be available in the week of 9 March 2020, with updates during April promised for organizations using either version 10.0 or 10.1 for any of the above products.

These days, updating graphics drivers needs to be part of the standalone user’s patching cycle, along with Microsoft’s Patch Tuesday and Intel’s regular CPU and product patches, not forgetting browsers and individual products such as Adobe’s PDF reader and various plugins.

Nvidia ships fixes for its products almost every month, with missed months made up for by two releases the following month.

Almost all include critical updates for severe vulnerabilities which could cause major problems if left unpatched.

November 2019’s update fixed 11 mostly severe flaws across its desktop products, while August 2019 saw a similar story.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YitOMgSO498/

GoodRx stops sharing personal medical data with Google, Facebook

GoodRx – a mobile app that saves US consumers money on prescription drugs – has apologized and sworn to do better after a Consumer Reports investigation found that it was sharing people’s data with 20 other internet-based companies.

Consumer Reports had discovered that GoodRx was sharing the names of medications that people were using the app to research, including those of a highly sensitive, personal nature. For example, the consumer-focused nonprofit found it could use the app to look for discounts on Lexapro, an antidepressant; PrEP and Edurant, used to prevent and treat HIV, respectively; Cialis, for erectile dysfunction; Clomid, a medication used in fertility treatments; and Seroquel, an antipsychotic often prescribed to control schizophrenia and bipolar disorder.

The details GoodRx was sharing could lead to companies being able to infer “highly intimate details” about users, Consumer Reports said:

With the information coming off our test phone and browser, a company could infer highly intimate details about GoodRx users suffering from serious chronic conditions, and make educated guesses about their sexual orientation.

Consumer Reports found that some of the firms that GoodRx used for marketing automation and customer engagement were receiving the names of people’s drugs, the pharmacies where users tried to fill prescriptions, and ID numbers that advertising and analytics companies use to track the behavior of specific consumers across the web.

Several companies that Consumer Reports talked to said that they don’t share data broadly with data brokers or advertising companies. Rather, they only use the data to help GoodRx target its own users with information.

Thomas Goetz, chief of research at GoodRx:

To reach new customers who might find GoodRx useful, we place advertisements for GoodRx on third-party platforms, including Facebook and Google, and retarget users who have visited GoodRx to encourage them to come back and use the service.

How is this legal under HIPAA?

Still, Google, Facebook and the other third-parties all receive the names of meds people are researching, along with other details that could let them pinpoint whose phone or laptop is being used. How is it that the country’s health privacy law – the Health Insurance Portability and Accountability Act (HIPAA) – doesn’t make it illegal to share this health data?

HIPAA requires medical professionals to keep patients’ information private and secure. In the US, we’re all likely familiar with the law: you can’t ride an elevator in a hospital without seeing reminders about patient confidentiality plastered on its walls, and we all have to read documents describing HIPAA when we visit a new doctor’s office.

But you can’t blame the doctors for this one. Many of them bring up GoodRx as a way to help patients save money on spiraling medication costs. Some of the healthcare providers Consumer Reports spoke with weren’t even aware that this data gets shared.

Dr. Erin T. Bird, a urologist in Temple, Texas, told Consumer Reports that he often brings up GoodRx to patients, particularly when he’s dealing with erectile dysfunction, urinary incontinence, and cancer – conditions that call for expensive medications.

It’s a conversation that occurs with pretty much every prescription.

Dr. Bird was surprised to learn that the GoodRx app and website was sharing his patients’ prescription information:

I think that most physicians would think that within the space of healthcare, there are some consumer protections. I would have assumed that.

He would have been wrong to assume that.

Consumer Reports spoke with Deven McGraw, chief regulatory officer at consumer health tech company Ciitizen and former deputy director of health information privacy at the US Department of Health and Human Services’ Office for Civil Rights, who said that people tend to have misconceptions about how far HIPAA goes to protect our health data:

If people think that HIPAA protects health data, then they probably believe that any health data in any context is going to be protected. That’s just not the case.

Consumer Reports delineated some of the use cases where your health data can wander freely online, outside of the protections of HIPAA, with “no more protection than your Instagram likes”:

HIPAA doesn’t apply to GoodRx or many other “direct-to-consumer” websites and apps that provide health and pharmaceutical information. It doesn’t apply to heart-rate data generated by a sports watch or Fitbit, information you enter into period-tracking apps, or running data held by running and cycling apps such as Strava. As far as the law is concerned, such information has no more protection than your Instagram likes.

On Friday, GoodRx said in a blog post that it has “never and will never sell our users’ personal health information.” Having said that, the Consumer Reports story led the company to re-examine its policies when it comes to sharing data with third parties. The review led GoodRx to determine that at least in the case of Facebook advertising, it was “not living up to our own standards.”

For this we are truly sorry, and we will do better.

GoodRx explained how it shares data with specific third parties, including, for example, how it uses Google Analytics to evaluate and improve the quality of the information it provides across its website, including its drug coupon pages.

The company listed a host of changes it’s initiating in order to better protect consumer privacy:

  • Protecting data shared with Facebook: It will henceforth never share drug names, conditions or other personal medical information with Facebook, even in encrypted form.
  • Protecting data shared with Google: GoodRx says it already de-identifies personal information shared with Google. It’s audited that data to ensure that any sharing is encrypted and meets Google’s strict privacy standards. “In other words, we sever any individual’s user information from web usage data,” it says.
  • Protecting data shared with other third parties: The company has audited agreements with third-party service providers that have any contact with its users’ personal health information, and ensured that all providers operate at the highest standards of data privacy, including HIPAA standards wherever appropriate.
  • Nationwide Rollout of California Privacy Protections: GoodRx is rolling out the data privacy protections required for Californians by the new California Consumer Privacy Act (CCPA) to all users, regardless of their location. That includes the ability to opt out of cookies and tracking, as well as the ability to delete their data.
  • Appointed new VP of Data Privacy: The new data privacy exec will oversee GoodRx’s data privacy efforts and coordinate between engineering, marketing, and other teams “to ensure we only share what’s necessary and always act in our users’ best interests.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Mjy_qyNwWEk/

Huge flaw found in how facial features are measured from images

How is it that our brains – the original face recognition program – can recognize somebody we know, even when they’re far away? As in, how do we recognize those we know in spite of their faces appearing to flatten out the further they are from us?

Cognitive experts say we do it by learning a face’s configuration – the specific pattern of feature-to-feature measurements. Then, even as our friends’ faces get optically distorted by being closer or further away, our brains employ a mechanism called perceptual constancy that optically “corrects” face shape… At least, it does when we’re already familiar with how far apart our friends’ features are.

But according to Dr. Eilidh Noyes, who lectures in Cognitive Psychology at the University of Huddersfield in the UK, the ease of accurately identifying people’s faces – enabled by our image-being-tweaked-in-the-wetware perceptual constancy – falls off when we don’t know somebody.

This also means that there’s a serious flaw with facial recognition systems that use what’s called anthropometry: the measurement of facial features from images. Given that the distance between features of a face varies as a result of the camera-to-subject distance, anthropometry just isn’t a reliable method of identification, Dr. Noyes says:

People are very good at recognizing the faces of their friends and family – people who they know well – across different images. However, the science tells us that when we don’t know the person/people in the image(s), face matching is actually very difficult.

In the abstract of a paper published in the journal Cognition, based on research by Noyes and the University of York’s Dr. Rob Jenkins into the effect of camera-to-subject distance on face recognition performance, the researchers write that identification of familiar faces was accurate, thanks to perceptual constancy.

But the researchers found that changing the distance between a camera and a subject – from 0.32m to 2.70m – impaired perceptual matching of unfamiliar faces, even though the images were presented at the same size.
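A simple pinhole-camera model shows why a face’s measured configuration shifts with camera-to-subject distance: facial landmarks sit at different depths, so their projected sizes shrink at different rates. The landmark dimensions below are illustrative assumptions rather than figures from the paper; the two camera distances are the ones used in the study.

```python
# Pinhole-camera sketch: how the ratio between two facial measurements shifts with
# camera distance, because landmarks lie at different depths. Dimensions are
# illustrative assumptions, not values from the paper.

def projected_width(true_width_m: float, depth_m: float, focal_px: float = 1000.0) -> float:
    """Width in image pixels of a feature of true_width_m at depth_m from a pinhole camera."""
    return focal_px * true_width_m / depth_m

EYE_SEPARATION = 0.063   # metres; eyes lie roughly on the face plane
EAR_WIDTH = 0.140        # metres; ears sit roughly 5 cm behind the eye plane
EAR_SETBACK = 0.05

for camera_distance in (0.32, 2.70):   # the two distances used in the study
    eyes = projected_width(EYE_SEPARATION, camera_distance)
    ears = projected_width(EAR_WIDTH, camera_distance + EAR_SETBACK)
    print(f"{camera_distance:.2f} m: eye-to-eye / ear-to-ear ratio = {eyes / ears:.3f}")

# Prints ~0.520 at 0.32 m and ~0.458 at 2.70 m: the same face, but a roughly 13%
# shift in its measured configuration, which is why anthropometry alone is unreliable.
```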

To reduce the face-matching errors that stem from this flaw in anthropometry, industry has to take the distance piece of the puzzle into account before migrating to real-world use cases – such as facial recognition in passport control or the creation of national IDs – she says.

Acknowledging these distance effects could reduce identification errors in applied settings such as passport control.

Or here’s a thought: perhaps this new finding can be used by lawyers working on behalf of people imprisoned after their faces were matched with those of suspects in grainy, low-quality photos? People like Willie Lynch, who was imprisoned even though an algorithm expressed only one star of confidence that it had generated the correct match?

Dr. Noyes said it best:

Accurate face identification is crucial in many police investigations and border security scenarios.

Noyes, a specialist in the field, was one of 20 global academic experts invited to attend a recent conference, at the University of New South Wales, which is home to the Unfamiliar Face Identification Group (UFIG). The conference’s title was Evaluating face recognition expertise: Turning theory into best practice.

The University of Huddersfield said that 20 world-leading experts in the science of face recognition assembled in Australia for a workshop and the conference, which were designed to lead to policy recommendations that will aid police, governments, the legal system and border control agencies.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/A15DxfJxLcc/

Digital piggy bank service broken into by cybercrooks

Saving money, at least in modest amounts, used to be a very simple business.

The easiest approach – many of us still do it, even in this online age – is the coin jar (or piggy bank, if you’re really old-school).

Instead of frittering away your small change on daily inconsequentials, you dump unused coins in the big glass jar in the corner of the living room, and just before it’s too heavy to pick up and move altogether…

…you drag it down to the bank and are often pleasantly surprised by how much money has accumulated in there.

But that’s a very 1990s approach! Why not put your money into a digital piggy bank, instead?

And, better yet, why not choose a piggy bank that deliberately starts out in debt?

It sounds bizarre – you essentially take out a loan you can’t touch, and clock up your “savings” by paying it off.

At the end of the period – a year, say – you’ve paid off the loan, so you not only get access to your loan capital as your “savings”, but also have a year’s worth of loan repayments that boost your credit rating.

By deliberately racking up debt to save against, your savings end up acting both as credit and as credit history.

That’s the business model of UK company Loqbox, which says it keeps the service free due to the affiliate fees it gets from the banks into which its customers release their funds after paying off a loan:

After making monthly payments for a year, your loan is repaid and you leave LOQBOX with an improved credit score and your money back into a new account for free.

[…]

We get paid by our partner banks for opening a new account for you, which is how we keep LOQBOX free. But if you’d prefer, you can opt for our Flexi Unlock premium add-on and unlock into an existing account for £30.

So far, so good…

…except that there’s a lot riding on you being able to keep up your “savings” payments for the period of the loan.

If you raid the coin jar every now and then (we’ve all done it – it’s part of the game!), the worst that can happen is you end up with nothing saved, or you take longer to fill the jar than you hoped.

But even though you can take an early exit from debt-based savings systems like Loqbox’s, and get back what you’ve put in so far, you won’t then have finished the loan process in full and, as the company warns, unlocking early could harm your credit history.

And you can’t just skip payments at will, in the same way that you can go a few weeks without putting coins in the jar, because that really would harm your credit history.

In other words, as well as keeping up your side of the repayments, and taking care of your online account, you’d better hope nothing bad happens to your account data at the other end.

Crooks in the piggy bank

Unfortunately, according to customer tweets and news reports, Loqbox has just suffered a data breach that uncovered enough personal data to make most affected customers uncomfortable, apparently including names, emails, phone numbers, postal addresses and dates of birth.

Additionally, partial bank account and card number details were stolen, too.

UK IT publication The Register claims that this “external attack” got at bank account sort codes plus two digits of the account number, as well as credit card expiry dates plus 10 digits’ worth of the card number.

Fortunately, those numbers don’t identify customers’ accounts or cards precisely enough to let them be abused directly.

Sort codes generally identify the bank and a branch, which crooks could guess at from your home address anyway; UK bank account numbers are usually eight digits long; and credit cards typically have 16 digits.

Also, the 10 card digits stolen apparently include the parts of the number that are often disclosed or can be figured out anyway, namely:

  • The first six digits, which identify the financial provider. These digits make up what’s called the BIN, short for Bank Identification Number. A glance at your credit card’s colour or design is often enough to figure out those numbers anyway.
  • The last four digits, which are routinely printed on receipts or sent in unencrypted emails. These are pretty much used as semi-public “check digits” to make it easy for you to see which card you used for what transactions.

In short, the breach sounds bad, but not that bad.
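To put a rough number on that: with the six-digit BIN and the last four digits known, six digits of a typical 16-digit card number are still missing, and the Luhn check digit only thins the field by a factor of ten. The sketch below, using a made-up BIN and last four (nothing from the breach), counts the remaining possibilities:

```python
# Counting how many 16-digit card numbers fit a known BIN and last four digits.
# The BIN and last four below are made up for illustration.

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by payment cards."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

bin_prefix = "412345"   # hypothetical six-digit BIN
last_four = "1234"      # hypothetical last four digits

candidates = sum(
    luhn_valid(f"{bin_prefix}{middle:06d}{last_four}")
    for middle in range(10**6)
)
print(candidates)  # exactly 100,000 of the 1,000,000 possible middle sections pass
```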

There’s no mention of passwords or password hashes being stolen, which almost certainly means that the crooks can’t use the breached data to wander into your Loqbox online account with ease, and there’s no mention of any transactional data or other credit history information being accessed.

What to do?

Loqbox doesn’t seem to have any information about the breach on its own website or blog, so we’re assuming that affected customers will hear by email.

Note that it doesn’t mean you are entirely off the hook if you haven’t yet heard from Loqbox – breach investigations can take quite some time to complete.

And even if you have heard from Loqbox already, the company may need to contact you again in the future as investigations continue – and you can probably see the problem with expecting an email from Loqbox some time soon: it makes a phishing email that pretends to come from the company that much more believable.

Our tips are therefore:

  1. Keep a closer eye than usual on your statements. Simply put, if you see something, say something. (But note #2.)
  2. Watch out for emails or calls that know more about you than you might expect. Even without full details of your bank account or payment card, crooks with data from this breach will be in a much more believable position to scam you into thinking they are legitimate. (And see #3.)
  3. Never contact Loqbox or any other financial provider using information from an email or a call. Get out your original paperwork (or turn your payment card over) and use contact details from there – that way, you won’t get tricked into talking to an imposter.
  4. Speak to your card provider about getting a new number. If your card provider thinks there’s now a risk of fraud on your current card, they’ll probably issue you a new card and cancel the old one.
  5. Don’t pick passwords that crooks could guess from your customer data. The more crooks know about you, even if it’s just your birthday and where you live, the more clues they have to guess poorly-chosen passwords. In fact, don’t pick guessable passwords at all – use a password manager if you’re struggling to come up with good passwords yourself.

(Embedded video: How to pick a proper password.)

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zpLp7E5OQY8/

Maersk prepares to lay off the Maidenhead staffers who rescued it from NotPetya super-pwnage

Exclusive Maersk is preparing to make 150 job cuts at its UK command-and-control centre (CCC) in Maidenhead – the one that rebuilt the global shipping company’s IT infrastructure after the infamous 2017 NotPetya ransomware attack.

The redundancies will see some jobs outsourced to India, according to employees who have been caught up in the process.

Company insiders told The Register they were first made aware of the situation in January, when confused staff found job adverts online for their own roles, posted by Indian outsourcer UCS, which is understood to be taking over the running of an outsourced CCC for Maersk.

At the beginning of February, staff in the Maidenhead CCC were formally told they were entering into one and a half months of pre-redundancy consultation, as is mandatory under UK law for companies wanting to get rid of 100 staff or more over a 90-day period.

In a memo on 5 February – seen by us – Declan Murphy, Maersk’s senior director for global IT support operations and engineering, said: “Approximately 150 roles are impacted and the consultation process for colleagues that are affected will take place over the next 45 days.”

The Technology division has seemingly been separated from a “new Services, Operations and Engineering organisation,” wrote Murphy. He said Maersk was also “introducing” roughly “55 new roles within the new structure.”

The CCC team runs the security operations centre but also covers infrastructure, DBA, network, Azure and other functions.

“In effect, our jobs were being advertised in India for at least a week, maybe two, before they were pulled,” said one source. The Register asked Maersk to comment on 28 February and yesterday emailed Murphy for comment.

The team assembled at Maersk was credited with rescuing the business after that 2017 incident when the entire company ground to a halt as NotPetya, a particularly nasty strain of ransomware, tore through its networks, encrypting and locking up everything in its path before showing messages demanding a ransom of $300 per device.

As recounted by Wired magazine, the effort to rebuild thousands of servers and individual devices was led by Maersk personnel in “an eight-stor[e]y glass-and-brick building in central Maidenhead” where the fourth and fifth floors were dedicated to the ransomware response.

Those teams rebuilt around 4,000 servers and 45,000 PCs and other devices, as we reported the year after the attack. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/03/maersk_redundancies_maidenhead_notpetya_rescuers/

Have I Been S0ld? No, trusted security website HIBP off the table, will remain independent

The popular security website Have I Been Pwned (HIBP) will remain independent – despite owner Troy Hunt’s decision last year to put the business up for sale.

Hunt’s site is a database of usernames or email addresses that have been exposed in data breaches. At the time of writing, it contains 9,543,096,417 records, which happens to be more than the population of Earth, showing the extent of such breaches.

Users can discover which breaches have included their username or email address, and can also check passwords against a list of passwords that have been leaked. The bad guys already have these lists, so checking your credentials against the service – and changing any that show up – significantly improves security.
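The password check works through a public “range” API built on a k-anonymity model: only the first five characters of the password’s SHA-1 hash are sent, and all matching hash suffixes come back for comparison on your own machine. A minimal sketch, assuming the endpoint and response format as documented by the service:

```python
# Check a password against the Pwned Passwords corpus via the k-anonymity range API.
# Only the first five hex characters of the SHA-1 hash ever leave this machine.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times the password appears in the Pwned Passwords corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "hibp-example-script"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password1"))  # a heavily breached password returns a large count
```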

In his June 2019 post, Hunt stated that thanks to the huge attention the site receives he was “getting very close to burn-out” and would look for a new owner, though he still intended to remain part of the service. He also wanted to expand its scope, publishing more breaches, reaching a larger audience and working at the tough problem of “changing the behaviour of how people manage their online accounts”. He engaged the Mergers and Acquisitions department of KPMG to manage the process.

Things have not worked out as planned. It was not for lack of interest from acquirers. Hunt said in a lengthy post: “We spoke to 141 companies from around the globe.” That was reduced to 43, in part because Hunt “culled companies that I didn’t believe should have responsibility for the sort of data HIBP has, that wouldn’t shepherd the service in the direction I believed it should go, or were simply companies that I didn’t want to work for.”

Hunt also began to have doubts about the wisdom of the sale after receiving appreciative comments about the site from numerous people who identified trust in him personally as one of the key factors. “I remember one discussion in particular where the guy was talking so sincerely about his appreciation and I just started thinking ‘what am I doing – can I really sell this thing?'”

The potential acquirers were also aware of this and included “golden handcuffs” in their offers. “They wanted me locked in for years and if I changed my mind partway through, I’d pay for it big time,” he said.

A likely acquirer was chosen and the project moved to the due diligence phase, an onerous process of examining every detail of the business. This went on for some months, and then something happened. Hunt does not reveal exactly what, but said: “The circumstances that took the bidder out of the running was firstly, entirely unforeseen by the KPMG folks and myself and secondly, in no way related to the HIBP acquisition. It was a change in business model that not only made the deal infeasible from their perspective, but also from mine; some of the most important criteria for the possible suitor were simply no longer there.”

Hunt decided that rather than go through the process again he would abandon it, especially as none of the other potential acquirers were as suitable as the one who dropped out. “Have I Been Pwned is no longer being sold and I will continue running it independently,” he said.

This does leave him with the same problem that inspired the acquisition project in the first place: that it is too much for one person. “I’ll be considering the best way to start delegating workload,” he said. He remains determined to improve the way industry handles “the flood of data breaches we’re seeing”.

The outcome for the rest of us is that for the time being HIBP continues as before, though now with firm evidence that for its owner and operator, it is more than just a business. HIBP is highly valued so the fact that it remains as-is will be welcomed, though with the caution that (as Hunt correctly identified) the service can only be sustainable long term if more people are involved in its management and operation. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/03/have_i_been_pwned_no_longer_being_sold/

How Security Leaders at Starbucks and Microsoft Prepare for Breaches

Executives discuss the security incidents they’re most worried about and the steps they take to prepare for them.

In today’s increasingly crowded threat landscape, it can be difficult to determine which threats companies should prioritize. For those who are stuck, it’s helpful to consider what major organizations are worried about and the steps they’re taking to combat those types of attacks.

This was the premise behind “Preparing and Responding to a Breach,” a panel that took place at last week’s RSA Conference in San Francisco. Security leaders from Starbucks, Microsoft, WhiteHat Security, and SecurityScorecard discussed the lessons they learned from the many breaches that took place in 2019 and how they plan to learn from these incidents to defend against threats of the future.

Last year brought 5,283 security breaches, said moderator John Yeoh, head of research for the Cloud Security Alliance, kicking off the panel. Organizations collectively lost 7.9 billion records, he said, and incidents indicate “the same things that are happening over and over again.” What types of attacks were most frequent, he asked, and what did organizations learn from them?

“As far as types of attacks we see, [they] generally tend to either be application security attacks, phishing attacks, misconfiguration of cloud environments, these kinds of things,” said WhiteHat CTO Anthony Bettini. And while these threats are old news to security pros, his fellow panelists agreed they are also the ones organizations should have at top of mind for defensive strategies.

“The reason you keep hearing about phishing from speakers like us … it’s not because we want to bore you with repetition,” said Microsoft’s cybersecurity field CTO Diana Kelley. “It’s because phishing still works.” Application vulnerabilities, misconfiguration, and phishing are the three areas where attackers are having the greatest success, which is why they should be prioritized.

Some leaders, like SecurityScorecard CISO Paul Gigliardi, are most worried about how attackers use the data they steal. “One thing I often see is the somewhat sophisticated criminal groups are starting to use the aftermath of breaches to do even more targeted social engineering or phishing attacks at scale,” he explained. “It’s not just the fact a breach occurred; it’s that all of our company’s data is somehow in there.”

Credential reuse is a primary concern for Starbucks global CISO Andy Kirkland, who spoke to a problem prevalent in the retail and hospitality industries. “Whenever these credentials become available, we become a place where people want to see if they work,” he said. The sharing of usernames and passwords across multiple platforms is “a big thing to watch” for companies. Cloud misconfigurations, which Kirkland calls “the rebranding of shadow IT,” are another worry.

“Just about anyone can get an S3 bucket and do whatever they want with it; potentially put whatever they want in there,” Kirkland noted. The onus is on security professionals to identify these instances within an organization when they happen.
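Identifying those instances can start with something as simple as enumerating an account’s buckets and flagging any that lack a public access block or carry public ACL grants. Below is a rough sketch using boto3; it assumes AWS credentials are configured, and a real check would also inspect bucket policies:

```python
# Rough sketch: flag S3 buckets without a full public access block or with public ACL grants.
import boto3
from botocore.exceptions import ClientError

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(block.values())          # all four block settings enabled?
    except ClientError:
        fully_blocked = False                        # no public access block configured

    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        g for g in acl["Grants"] if g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]

    if not fully_blocked or public_grants:
        print(f"Review bucket: {name} "
              f"(public access block: {fully_blocked}, public ACL grants: {len(public_grants)})")
```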

Practice, Practice, Practice
Panelists spoke to employee and customer training strategies, tabletop exercises, and other steps they take to better prepare for security incidents. One key takeaway was the importance of working employee training into the corporate culture for everyone. As organizations change over time, and new people are onboarded, there will be gaps in cybersecurity knowledge.

“I have to take cybersecurity training at Microsoft just like everybody else,” said Kelley. “We don’t just assume because somebody has a title, they get to be exempt from that training.” She advised annual or biannual security training for all employees. “Psychologically, humans are much better at learning when we’ve got a little bit of an adrenaline pump.” If an employee is caught getting phished, they may remember to be more cautious next time.

“The best training is in-the-moment training,” Kirkland emphasized. While some trainings are done for compliance, the unexpected phishing emails deliver real learning moments.

He also advocates tabletop exercises with all executives in order to plan for cyberattacks. Senior execs schedule a four-hour block during which they create an entire breach narrative. Sometimes, he said, it’s the first time in a while that leadership has come together to decide how they would respond to a security incident – and the results have had an effect beyond cybersecurity.

“The decisions, and the things that they’ve learned in those tabletop exercises, have informed the way that we respond as an organization to all manner of incidents; not necessarily those that were cyber-related,” Kirkland said. Learning how business leaders collaborate “is not only educational for them; it’s educational for you as a security professional,” he added.

Tabletop exercises should inform a standard operating procedure for cyberattacks, said Kelley. Whether it’s online or printed, every business should have guidance on how employees can escalate potential incidents and how they should respond to them. These procedures don’t need to be 100% accurate – after all, every breach is different – but they should provide basic information on which internal and external organizations (cloud providers, law enforcement) need to be notified.

“You’d be surprised, with these kinds of activities, how easy it is to forget what needs to be done,” she explained. If an employee doesn’t know the right information or can’t access it, they may have no idea how to move forward in the right direction.

Practitioners also pull lessons from previous security incidents: to inform annual trainings in incident response and business continuity, Gigliardi goes back into historical breach data to assess what security looked like before an incident. Breach disclosure is mandated under HIPAA and GDPR, he pointed out, and there are thousands of breaches that aren’t publicly reported but are just as significant. Businesses “can get a lot of value” in lessons from these events.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/how-security-leaders-at-starbucks-and-microsoft-prepare-for-breaches/d/d-id/1337219?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple