
Why DPOs and CISOs Must Work Closely Together

Recent data protection laws mean that the data protection officer and CISO must work in tandem to make sure users’ data is protected.

With strict data protection laws in place around the world (including GDPR and CCPA), it’s vital that the data protection officer (DPO) and CISO work closely together. Although part of the DPO’s job is to audit the CISO’s security policies, it is essential that the DPO and CISO have a good rapport. Essentially, CISOs are concerned with security and confidential data, and DPOs are focused on privacy and personal data.

The CISO examines security issues from a business and operations standpoint. While bolstering an organization’s cybersecurity posture, the CISO strives to ensure that all company information is securely processed. The DPO is primarily concerned with how the organization handles personal data. This can include data minimization, communication with data subjects, rights management, storage minimization, data collection, and data processing.

Data Minimization
One of the DPO’s main goals is to ensure that no unnecessary customer data is processed. If any personal data is processed, it should not be kept beyond the retention period committed to in the privacy policy, and customers must be informed about the nature of the data processing.

Data minimization involves storing less personal data, which shrinks the overall attack surface. This is important when it comes to the collaboration between the DPO and CISO. With the DPO helping to minimize the amount of collected data, the CISO is able to maintain a higher level of security.

For example, perhaps your organization issues a sign-up form that asks for an email address, phone number, and Social Security number. The CISO will mostly be concerned with how the data is protected. Conversely, the DPO will likely ask questions such as, “Why are we even collecting this information?” and “Do we need to process (store, use, or transfer) this data?” By asking questions like these, the DPO helps the CISO’s security team effectively — and proactively — protect data.

Create an Activity Register
In modern digital organizations, there are many data flows coming from a variety of different sources. By creating a register, the DPO can help the CISO monitor the various data flows. An effective activity register will answer questions such as “Where exactly is this information being used?,” “Who is using it?,” and “To whom is this data being transferred?” Again, the CISO is interested in this information from a security standpoint, and the DPO has privacy concerns.

During the creation of an activity register, assess whether the data is personal in nature. Sometimes, whether the data is personal depends on the context. For example, perhaps a customer only provides a company with her home address. If this home address can be traced back to the individual, then it’s personal data. Due to nuances like these, it’s helpful to have a DPO with a legal background.
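The register described above can be sketched as a simple structured record. This is a hypothetical illustration; the field names and example flows are invented for the sketch and do not come from GDPR or any standard:

```python
from dataclasses import dataclass

# Hypothetical activity-register entry; field names are illustrative only.
@dataclass
class DataFlow:
    dataset: str         # what information is held
    used_in: str         # where exactly it is being used
    used_by: str         # who is using it
    transferred_to: str  # to whom it is being transferred
    personal: bool       # context-dependent: can it identify a person?

def personal_flows(register):
    """Return the flows the DPO must review as personal data."""
    return [flow for flow in register if flow.personal]

register = [
    DataFlow("customer home addresses", "order fulfilment", "logistics team",
             "shipping partner", True),
    DataFlow("aggregate server metrics", "capacity planning", "ops team",
             "none", False),
]

for flow in personal_flows(register):
    print(f"{flow.dataset} -> {flow.transferred_to}")
```

Even a register this small answers the three questions above at a glance, which is what makes it useful to both the DPO (privacy review) and the CISO (mapping the attack surface).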

Data Protection by Design
Another way that the DPO and CISO can effectively work together is during product inception. By working closely with an organization’s developers, the DPO and CISO can proactively build data protection into the company’s products.

For example, during the creation of essential and nonessential cookies, the CISO will have concerns related to security vulnerabilities, and the DPO will have privacy concerns. From a security perspective, the CISO wants to ensure that the essential cookies — those used for tracking logged-in sessions and providing user-related functionality — are protected. This way, no impersonation can occur.

And from a privacy perspective, the DPO will be concerned about nonessential cookies, such as advertising cookies used to display ads. The DPO must ensure that the list of cookies is displayed to the website users, and that users can opt out of some cookies without significantly degrading website performance.

Thus, close collaboration between the CISO and the DPO during the cookie creation process can be effective from both a privacy and a security standpoint.
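From the security side, much of the protection the CISO wants for essential session cookies comes down to the standard Set-Cookie attributes. A minimal sketch follows; the helper function is invented for illustration, but the attribute names are the real Set-Cookie attributes:

```python
# Sketch of hardening an essential session cookie; the helper is
# illustrative, but the attribute names are standard Set-Cookie attributes.
def session_cookie_header(token: str) -> str:
    attrs = [
        f"session={token}",
        "Secure",            # sent only over HTTPS
        "HttpOnly",          # unreadable from JavaScript, limiting theft via XSS
        "SameSite=Strict",   # not attached to cross-site requests
        "Path=/",
        "Max-Age=3600",      # bound the session lifetime
    ]
    return "Set-Cookie: " + "; ".join(attrs)

print(session_cookie_header("abc123"))
```

In practice a web framework sets these flags for you; the point is that each attribute closes off one impersonation or theft avenue for the session cookie.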

Handling Breaches and Privacy Violations
Another instance in which DPOs and CISOs should work closely together is in the event of a data breach or privacy violation. Incidentally, these are often disparate events. For example, perhaps a customer is given a contact form, and the phone number is used later to sell him or her a product. If there was not a link to the privacy policy on the contact form, this would be a privacy violation, but not a breach. Alternatively, perhaps there was a data breach; however, only source code was stolen. This would be a data breach but not a privacy violation.

Nevertheless, to assess the situation, the DPO and the CISO should closely collaborate. This is especially important during a breach, as fines can be incurred if the company doesn’t alert authorities about an incident in time.

Impact Assessments
After a breach, organizations should conduct a risk assessment during which the DPO functions in an advisory role. In addition to auditing the CISO’s existing security infrastructure, the DPO should offer advice for the future. With the help of the CISO, the DPO can answer questions such as “Can an incident like this happen elsewhere?,” “How can we protect against this moving forward?,” and most importantly, “Should we be collecting this personal data at all?”

Conclusion
By working closely with the CISO, the DPO can help secure data more efficiently, ensuring that only the most necessary data is collected and that customers are kept well-informed about how their data is transferred and used. With the DPO and CISO working together, data can be moved from one place to another securely and legally, greatly reducing the chance of a security breach and ultimately helping the organization save time and money.


Rajesh Ganesan is Vice President at ManageEngine, the IT management division of Zoho Corporation. Rajesh has been with Zoho Corp. for over 20 years developing software products in various verticals including telecommunications, network management, and IT security. He has …

Article source: https://www.darkreading.com/attacks-breaches/why-dpos-and-cisos-must-work-closely-together/a/d-id/1336840?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Configuration Error Reveals 250 Million Microsoft Support Records

Some of the records, found on five identically configured servers, might have contained data in clear text.

Researchers have found five servers revealing almost 250 million Customer Service and Support (CSS) records. Each server appears to contain the same set of data, stored with no security or authentication. In a blog post, Microsoft acknowledged the exposure and blamed it on misconfigured security rules after changes made in early December.

A security research team at Comparitech, led by Bob Diachenko, discovered the five Elasticsearch servers in late December. According to Microsoft, the vast majority of the records had all personally identifiable information redacted through automated processes, though the company admitted that some records with unusually formatted data might have contained data in clear text.

In the blog post revealing its research, Comparitech noted that Microsoft acted quickly to secure the servers, completing the action within 24 hours of notification.


Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “The Y2K Boomerang: InfoSec Lessons Learned from a New Date-Fix Problem.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/application-security/database-security/configuration-error-reveals-250-million-microsoft-support-records/d/d-id/1336857?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Eight Flaws in MSP Software Highlight Potential Ransomware Vector

An attack chain of vulnerabilities in ConnectWise’s software for MSPs has similarities to some of the details of the August attack on Texas local and state agencies.

Eight vulnerabilities in ConnectWise’s software for managed service providers (MSPs) purportedly allow attackers to silently execute code on any desktop managed by the application, an exploit chain with details similar to last August’s coordinated attacks on Texas government agencies, security consultancy Bishop Fox said in an advisory today.

Individually, the vulnerabilities are mostly not severe, with only one — a cross-site request forgery (CSRF) flaw — deemed critical. Together, however, the eight issues — six of which are assigned Common Vulnerabilities and Exposures (CVE) identifiers — could have been combined to create an attack chain that could compromise a ConnectWise Control server and, from there, any attached clients, Bishop Fox stated.
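As a generic illustration of the class of flaw deemed critical here, CSRF defenses typically come down to a per-session token that a forged cross-site request cannot supply. The sketch below is a textbook pattern, not ConnectWise’s code:

```python
import hmac
import secrets

# Generic CSRF token sketch; unrelated to ConnectWise's actual fix.
def issue_token() -> str:
    # Unguessable per-session token, stored server-side and embedded in forms.
    return secrets.token_urlsafe(32)

def is_valid(session_token: str, submitted_token: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(session_token, submitted_token)

token = issue_token()
print(is_valid(token, token))          # legitimate form submission
print(is_valid(token, issue_token()))  # forged request lacks the right token
```

A request that arrives without the token tied to the victim’s session is rejected, which is what breaks the cross-site forgery.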

“An attacker that exploits the full attack chain can achieve unauthenticated remote code execution, resulting in compromise of the ConnectWise Control Server and ultimately the endpoint it has been installed on,” says Daniel Wood, the associate vice president of consulting for Bishop Fox. “This would provide full control over the vulnerable endpoint.”

The company and a third party confirmed the vulnerabilities and found that ConnectWise had patched some of the issues in the fall with little to no notice. The attack chain has similarities to some of the reported details of the August attack on Texas local and state agencies, Wood said in the published advisory.

Multifactor authentication, for example, would likely not have helped the Texas agencies, according to press reports. Bishop Fox confirmed that multifactor authentication would not help against the attack chain proposed in its advisory, either.

“This is not proof that the vulnerabilities we discovered were used in the incident,” Wood said. “What we can say is that nothing we have read about the Texas ransomware attack so far rules out the possibility that these vulnerabilities were involved.”

In a statement sent to Dark Reading, ConnectWise disputed the findings, stressing that it takes the security of its products seriously.

“Bishop Fox could not provide additional information as the attack chain for the exploits they outlined were conceptual,” the company stated. “In addition, both Bishop Fox and ConnectWise agreed that no active exploits had occurred from these potential vulnerabilities.”

In the statement, ConnectWise acknowledged that it had fixed six of the eight issues. “We appreciated the insights and based on [Bishop Fox’s] report, we did our own internal research and evaluation and addressed the points they raised in their review,” the company wrote. “With an overabundance of caution, we resolved 6 of the 8 items Bishop Fox listed in their report by October 2, 2019.”

This is not the first time ransomware attackers infiltrated a company through ConnectWise’s products and services. In November 2017, a vulnerability researcher found an issue in ConnectWise’s plug-in for Kaseya’s network monitoring system and posted an exploit to GitHub. Attackers later used that vulnerability to compromise more than 1,500 systems and install ransomware, demanding a $2.6 million ransom from the managed service provider. 

In August, a coordinated ransomware attack scrambled data at 22 local and state agencies in Texas. Subsequent press reports indicated that the attacker had used a vulnerable installation of ConnectWise software to infect the governmental agencies.

Matt Hamilton, a former senior security analyst at Bishop Fox, discovered the latest vulnerabilities in mid-September. While the initial contact with ConnectWise proceeded quickly, the software maker stopped responding a week later, Bishop Fox stated.

“ConnectWise CISO John Ford asserted that the Bishop Fox findings did not affect on-premise solutions and stated that these vulnerabilities are not exploitable because ConnectWise was unable to reproduce them using the steps that Bishop Fox provided them,” Bishop Fox’s Wood stated in the advisory. “Additionally, Mr. Ford raised the threat of a defamation lawsuit. But Bishop Fox’s research found vulnerabilities that do, in fact, impact on-premise installations.”

Huntress Labs, an MSP security provider, is conducting an analysis and verification effort at the request of Bishop Fox. Huntress Labs found that ConnectWise had patched or otherwise mitigated two of the issues, including the most critical vulnerability, partially mitigated two other flaws, and left three issues unmitigated. The testing, which is ongoing, has not yet determined the status of the eighth issue, the security provider stated in a blog post.

Companies, especially those serving less technical markets, need to be transparent and upfront with their customers, Bishop Fox’s Wood says.

“The best thing a company can do is to create an easy-to-use and secure mechanism for researchers to report vulnerabilities that go to their engineering and development teams, where they can be analyzed and confirmed,” he says. “Once that occurs, they can be prioritized for remediation activities based upon the [company’s] organizational practices.”

Because of the danger that such vulnerabilities pose, ConnectWise’s current clients should request clarity on the issues, Wood adds.

“Follow up with ConnectWise support to ensure patches have occurred — and [were] exhaustively tested — to ensure vulnerabilities no longer exist that can result in complete takeover of the Control Server,” he urges. “Don’t use the product in its current state until confidence is reached.”

For its part, ConnectWise dismissed a vulnerability — or chain of vulnerabilities — being at the heart of the Texas ransomware incident.

“[T]here are malicious actors who utilize remote control products in scams to exploit a consumer or company through misrepresentation, network vulnerabilities, or phishing,” the company said in its statement to Dark Reading. “Our understanding is that the Texas attacks were precipitated by a phishing attack that led to a user’s credentials being compromised.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/threat-intelligence/eight-flaws-in-msp-software-highlight-potential-ransomware-vector/d/d-id/1336856?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

For Mismanaged SOCs, The Price Is Not Right

New research finds security operations centers suffer high turnover and yield mediocre results for the investment they require.

The security operations center (SOC), considered a core component of many organizations’ cybersecurity strategies, is plagued with high costs and myriad challenges. Businesses running a SOC often struggle to achieve a high return for what proves to be an expensive investment.

These findings come from a new report entitled “The Economics of Security Operations Centers: What Is the True Cost for Effective Results?” conducted by the Ponemon Institute and commissioned by Respond Software. Researchers surveyed 637 IT and IT security practitioners who work in organizations running SOCs to learn about their economics and effectiveness.

The SOC has been a topic of conversation for much of the past five to six years, as experts seek to learn more about their cost and functionality, says Ponemon Institute chairman Larry Ponemon. Organizations spend an average of $2.86 million each year on their in-house SOC, researchers found. The annual cost jumps to $4.44 million if they outsource to a managed security service provider (MSSP), a number that researchers found surprising. Only 17% of respondents say their MSSP is “highly effective.”

Despite the pricey investment, only 51% of organizations surveyed are satisfied with their SOC’s effectiveness in detecting cyberattacks. Forty-four percent say their SOC’s ROI is worsening.

The most important SOC activities, they say, are the minimization of false-positives (84%), threat intelligence reporting (83%), monitoring and analyzing alerts (77%), intrusion detection (77%), use of technologies such as automation and machine learning (74%), agile DevOps (73%), threat hunting (71%), and cyber forensics (69%).

More than two-thirds (67%) of respondents say training SOC analysts is one of the most critical SOC activities. SOCs heavily rely on human expertise to prevent, detect, analyze, and respond to security incidents. Complexity and hiring challenges interfere with the ability to detect attacks.

“We found that, on average, when individuals were recruited to the SOC, it took a better part of a year to become an active member of the team,” Ponemon says. “You can’t just walk in and be an expert. It takes effort; it takes time.” Further, researchers discovered, 74% of respondents say their SOCs are “highly complex” environments, which makes management more difficult.

Staffing the SOC is expensive – about $1.46 million of average SOC spend goes toward direct labor costs – because low-level analysts make high salaries and usually don’t stay in their positions very long. The average salary for a tier-one analyst is $102,315, and 45% earn between $75,001 and $100,000. Thirty percent make $100,001 to $150,000, and 9% earn $150,000 or more. Only 16% of tier-one analysts make less than $75,000 per year.

The average SOC analyst leaves the organization after a little more than two years, and employers can’t keep up with the turnover. On average, four analysts are expected to be hired in 2020, but three will resign or be let go within the year. “It happens in security across the board,” says Ponemon of the turnover. “But in a SOC environment it’s pretty tough.”

Why the short stay? Seventy percent of respondents agree that SOC analysts burn out quickly because of the high-pressure environment and workload. “You’re constantly waiting for the next shoe to drop,” he adds. When asked about what makes SOC work painful, respondents pointed to an increasing workload (75%), being on call 24/7/365 (69%), lack of visibility into IT and network infrastructure (68%), too many alerts to chase (65%), and information overload (65%).

“The tier one analyst role traditionally has always been an entry-level job,” says Dan Lamorena, security executive with Respond Software. “It’s the building blocks of a security career for a lot of people.” Still, these employees are often hard to find. SOCs demand critical thinkers who are comfortable with technology and willing to take on tasks that tier two and three analysts don’t want to do, like sit through the night shift.

Ultimately, he continues, the time that tier one analysts spend in an entry-level role prepares them to take on higher positions at other companies, where they can demand higher salaries.

“You’re constantly learning how the adversary is acting,” Lamorena says. “You’re learning a lot of threat intelligence, the types of people attacking you. What are the tactics they’re using?”

The IT infrastructure monitored by the SOC also influences cost, researchers report. On-prem environments cost the most ($3.19 million), followed by mobile ($3.06 million) and cloud ($2.75 million). Hybrid environments combining on-prem and cloud cost the least, with $2.5 million in annual costs. Researchers also found respondents who ranked their effectiveness as higher generally spent more to improve their SOC’s ability to detect cyberattacks.

Spending also varies by industry. Financial services firms spend the most ($4.6 million) on their SOC each year, followed by industrial and manufacturing companies ($3.16 million), technology and software ($3.02 million), services ($2.56 million), and the public sector ($2.25 million).


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/risk/for-mismanaged-socs-the-price-is-not-right/d/d-id/1336864?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Big Microsoft data breach – 250 million records exposed

Microsoft has today announced a data breach that affected one of its customer databases.

The blog article, entitled Access Misconfiguration for Customer Support Databases, admits that between 05 December 2019 and 31 December 2019, a database used for “support case analytics” was effectively visible from the cloud to the world.

Microsoft didn’t give details of how big the database was. However, consumer website Comparitech, which says it discovered the unsecured data online, claims it was on the order of 250 million records containing:

…logs of conversations between Microsoft support agents and customers from all over the world, spanning a 14-year period from 2005 to December 2019.

According to Comparitech, that same data was accessible on five Elasticsearch servers.

The company informed Microsoft, and Microsoft quickly secured the data.

Microsoft’s official statement says that “the vast majority of records were cleared of personal information,” meaning that it used automated tools to look for and remove private data.

However, some private data that was supposed to be redacted was missed and remained visible in the exposed information.

Microsoft didn’t say what type of personal information was involved, or which data fields ended up un-anonymised.

It did, however, give one example of data that would have been left behind: email addresses with spaces added by mistake were not recognised as personal data and therefore escaped anonymisation.

So if your email address were recorded as “name@example.com”, your data would have been converted into a harmless form, whereas “name[space]@example.com” (an easy mistake for a support staffer to make when capturing data) would have been left alone.
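The described failure mode is easy to reproduce with a naive redaction pass. This is a generic sketch; Microsoft’s actual tooling is not public, and the pattern below is only an assumed stand-in:

```python
import re

# Naive redaction pass, similar in spirit to (but not the same as)
# the automated tooling described in the article.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str) -> str:
    return EMAIL.sub("[REDACTED]", text)

# A well-formed address is caught...
print(redact("contact name@example.com for details"))
# ...but a stray space before the @ defeats the pattern, so the
# address survives redaction, as described above.
print(redact("contact name @example.com for details"))
```

The first call prints `contact [REDACTED] for details`; the second leaves the broken address untouched, which is exactly the kind of record that stayed visible.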

Microsoft has promised to notify anyone whose data was inadvertently exposed in this way, but didn’t say what percentage of all records were affected.

What to do?

We don’t know how many people were affected or exactly what personal data was opened up for those users.

We also don’t know who else, besides Comparitech, may have noticed in the three weeks it was exposed, although Microsoft says that it “found no malicious use”.

We assume that if you don’t hear from Microsoft, even if you did contact support during the 2005 to 2019 period, then either your data wasn’t in the exposed database, or there wasn’t actually enough in the leaked database to allow anyone, including Microsoft itself, to identify you.

It’s nevertheless possible that crooks will contact you claiming that you *were* in the breach.

They might urge you to take steps to “fix” the problem, such as clicking on a link and logging in “for security reasons”, or to “confirm your account”, or on some other pretext.

Remember: if ever you receive a security alert email, whether you think it is legitimate or not, avoid clicking on any links, calling any numbers or taking any online actions demanded in the email.

Find your own way to the site where you would usually log in, and stay one step ahead of phishing emails!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Cst7PO49oB0/

WindiLeaks: Microsoft exposes 250 million customer support records dating back to 2005 (Not on purpose though)

Five identical Elasticsearch databases containing 250 million records of Microsoft customer support incidents were exposed on the internet for all to see for at least two days right at the end of 2019.

On 28 December 2019, these databases were found by BinaryEdge, which crawls the internet looking for exposed data. This was then picked up by security researcher Bob Diachenko, who reported the problem to Microsoft.

Microsoft secured the databases over 30-31 December, winning praise from Diachenko for “quick turnaround on this despite [it being] New Year’s Eve”.

That is cold comfort for customers whose data was exposed. What has been picked up by security researchers may well also have been found by criminals.

What data was published? These are logs of customer service and support interactions between 2005 and now. The good-ish news is that “most of the personally identifiable information — email aliases, contract numbers, and payment information—was redacted”, according to Comparitech. However, a subset contained plain-text data including email addresses, IP addresses, case descriptions, emails from Microsoft support, case numbers and “internal notes marked as confidential”.

Armed with this information, there is plenty of scope for identifying the customers, learning more about their internal IT systems if they are businesses, and using the data for activities such as impersonating Microsoft support and thereby gaining access to personal computers or business networks. “Just a quick follow-up on case xxxx…”

Eric Doerr, general manager of Microsoft’s Security Response Center (MSRC), said: “We’re thankful to Bob Diachenko for working closely with us so that we were able to quickly fix this misconfiguration, analyze data, and notify customers as appropriate.”

It is not yet clear how many of the records include identifiable information, nor how they break down in terms of business versus consumer interactions. We have asked Microsoft for comment and will update with information received. Microsoft has posted further information about the incident here.

Despite the absence of financial or username/password data in the leaked database, the incident is embarrassing for Microsoft, undermining its efforts to keep its customers secure.

Calls from fake Microsoft support staff are nothing new; they are so widespread that most of us have received a few. What’s different now is that they may be better informed than before, so the solution is to be even more wary. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/22/250_million_micrsoft_customer_support_records_exposed/

Academics call for UK’s Computer Misuse Act 1990 to be reformed

Britain’s main anti-hacker law, the Computer Misuse Act 1990, is “confused”, “outdated” and “ambiguous”, according to a group of pro-reform academics.

A report launched this morning by the Criminal Law Reform Now Network (CLRNN) described a “range of measures to better tailor existing offences in line with our international obligations and other modern legal systems” in a call for the 30-year-old Act to be overhauled.

CLRNN mostly consists of academics from the University of Birmingham. In its report (PDF), the network described the current Act as “preventing cyber security professionals from carrying out threat intelligence research against cyber criminals and geo-political threat actors”, something it said is “leaving the UK’s critical national infrastructure at increased risk”.

Broadly, the network calls for new public interest defences for infosec professionals, academics and journalists as well as specific guidance for prosecutors and sentencing judges alike.

It also calls for the introduction of civil penalties for computer misuse naughtiness (mischief, in the legal jargon), with the Investigatory Powers Commissioner being suggested as a civil “regulator” to decide who should and should not be slapped with a civil fine.

Down in the detail

The CLRNN’s specific recommendations are:

  • Reforming the section 1, CMA90 offence to make it summary-only, or to narrow its current scope by “specifying required harms beyond simple unauthorised access.”
  • Narrowing sections 3 and 3ZA to require an intention to commit a criminal act, or to enable someone else to do so.
  • Creation of a “corporate failure to prevent offence” so companies can be held criminally liable for employees acting as such who commit computer misuse crimes.
  • Adding a new defence of assumed consent to accessing someone else’s computers “if [the other person] had known about the access and the circumstances of it, including the reasons for seeking it.”
  • A public interest defence allowing accused hackers to “prove that in the particular circumstances the act or acts (i) was necessary for the detection or prevention of crime, or (ii) was justified as being in the public interest”.

Ollie Whitehouse, global CTO of British infosec biz NCC Group and spokesman for the CyberUp campaign, commented in a canned statement: “This report shines a welcome light on the UK’s outdated cyber security crime laws, which leave the cyber industry tackling one of the biggest threats facing our national security within a regime drawn up 30 years ago.”

Neil Brown of tech law firm decoded.legal told The Register: “The current CMA is either showing its age, or else [is] just a bit of a pig’s ear, depending on how charitable you are feeling. The devil is in the detail, but the proposals look sensible. In particular, offering greater security to those looking to offer security – a public interest defence – would be welcome.”

It won’t be plain sailing

Peter Sommer, a professor of digital forensics at Birmingham City University and one of the CLRNN’s contributors, published an insightful LinkedIn post about how the current CMA90 impacts the cyber security sector.

“The key to understanding the Act was that from the outset it was designed to fill in gaps in the existing legislation rather than to provide a comprehensive response to whatever you think ‘cybercrime’ is,” he wrote, an observation that makes sense when read in the context of the Prestel hack.

He added: “Indeed there are frequent occasions in which the Computer Misuse Act has clearly been breached but where prosecutors decide not to pursue charges with any vigour or indeed at all because success would be unlikely to alter the court’s view of punishment in the event of conviction.”

This was the case in the recent sentencing of National Lottery hacker Anwar Batson. At the start of the hearing, prosecutors added a new charge to the indictment to which Batson pleaded guilty. It was this charge that weighed heaviest on the judge’s mind when he gave Batson nine months in prison.

The CLRNN report comes hot on the heels of separate calls from industry to reform the CMA. A launch event is being held this afternoon in Parliament which may lead to further calls. The National Crime Agency is also known to hold internal views about the suitability of the CMA. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/22/clrnn_computer_misuse_act_reform_call/

Cybersecurity Lessons Learned from ‘The Rise of Skywalker’

They’re especially relevant regarding several issues we face now, including biometrics, secure data management, and human error with passwords.

The Star Wars films have fascinated society with unprecedented fervor for over 40 years, and it’s easy to see why: they’re Shakespearean tales with lightsabers and spaceships. But aside from timeless lessons about love and friendship and good versus evil, there are tertiary lessons about technology that can be useful for our progression toward a truly safe Internet.

For instance, it’s clear that the Empire has unlimited funding, and yet the Rebels manage to sneak in and out of Imperial facilities in every film with light-speed effortlessness. The Empire clearly has the best security in the galaxy, yet it is unable to keep a 7-foot-6 Wookiee and his rowdy cohorts from grabbing whatever assets they’d like, time after time. No wonder Darth Vader had anger management issues.

Each Star Wars film has been influenced by the time and events during which it was developed. The cybersecurity lessons learned in Star Wars: The Rise of Skywalker are especially relevant to issues we face today with biometrics, secure data management, and human error with passwords. 

Warning: Spoilers are coming.

(No kidding: You’ve been warned!)

Betrayal from the Inside
Early in the film, we learn that the First Order has a spy in its midst, supplying the Resistance with valuable information. After sneaking aboard a First Order ship (yet again), lead characters Rey, Finn, Poe, and Chewbacca are discovered, and the evil-yet-sensitive villain, Kylo Ren, orders the ship to be locked down. The spy dramatically reveals himself to be General Hux, a top member of the First Order’s leadership, who bypasses the lockdown procedures and allows the heroes to make their escape.

Security protocols are only as good as the individuals who run them. Even the most hardened security can crumble when the bad actor comes from the inside. 

IBM’s “Cyber Security Intelligence Index” found that six out of 10 security attacks were carried out by insiders, and that a quarter of those insider attacks involved “inadvertent actors.” In addition to investing heavily in typical security standards, thorough background checks and monitoring for suspicious employee activity can save an organization time and money, and preserve its peace of mind.

Biometrics: A Two-Sided First Order Coin
How did our intrepid heroes manage to sneak aboard the First Order ship? With a First Order Officer’s medallion, conveniently provided by friendly scoundrel Zorii Bliss. This medallion makes any spacecraft appear as if it is being operated by an officer in the First Order and allows undetected travel anywhere in the First Order’s jurisdiction. 

This medallion reflects the upside and potential downside of biometrics. Biometrics technology is a great convenience and can be immensely secure — you only have one face, after all — but if attackers gain a copy of your fingerprints and face scan, the impact can be disastrous. It’s like handing them a First Order Officer’s medallion to your social media, bank account, 401(k), and more.

If a password is stolen, it can be reset. But if your biometric data is stolen, you can’t just change your body to secure your accounts again. Once that First Order coin is being passed around the rebel fleet, you can never get it back.

Be wary of storing biometric data in the cloud; use it only for local hardware access. Otherwise, it could be exposed to anyone — and there’s no telling what they’ll do with it.

Limiting Potential Gains from a Hack
In order to obtain valuable information about the location of the Sith Temple, C-3PO needed to decode Sith runes found on a stolen knife. However, his operating system wouldn’t allow him to divulge the critical information because it could have been used for nefarious purposes. A hacker accessed C-3PO’s forbidden memories, but in doing so fully wiped his memory, restoring the iconic bot to his factory settings. That built-in safety mechanism was a smart move on Anakin Skywalker’s part: it would dissuade any casual hacker who knew what the cost would be.

The iPhone and other smart devices have implemented similar security protocols. Enter the wrong passcode too many times and, with the erase-data option enabled, an iPhone wipes itself and must be restored before it’s usable again. That’s a brilliant tactic when it comes to safeguarding data. After all, if the hack requires extreme effort for a relatively useless payoff, hackers don’t have an incentive to act.
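The lockout-and-wipe idea is simple to model. Here is a minimal, hypothetical sketch (class and method names are invented, not Apple’s actual implementation) of a device that destroys its secret after too many failed attempts, making a brute-force attack worthless:

```python
class WipeOnFailure:
    """Toy model of a device that erases its secret after too many wrong
    passcode attempts, like an iPhone with the erase-data option enabled."""

    def __init__(self, passcode, max_attempts=10):
        self._passcode = passcode
        self._max_attempts = max_attempts
        self._attempts_left = max_attempts
        self.wiped = False

    def unlock(self, guess):
        if self.wiped:
            return False  # nothing left to unlock
        if guess == self._passcode:
            self._attempts_left = self._max_attempts  # counter resets on success
            return True
        self._attempts_left -= 1
        if self._attempts_left == 0:
            self._passcode = None  # destroy the secret: the payoff is gone
            self.wiped = True
        return False
```

Once the counter runs out, even the correct passcode no longer unlocks the device — the data, not the lock, is what gets destroyed.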

Security companies can go further to design systems that reduce the value of any attack. Using unique passwords for every account, for example, means that a hack only gets attackers into one service — not all of them. Limiting the payoff means hackers will think harder about targeting you in the first place.
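That per-account isolation is easy to put into practice. A minimal sketch (function name and character set are illustrative) using Python’s standard secrets module to mint an independent random password for each service:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a cryptographically random password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One independent password per service: compromising one account
# reveals nothing about the others.
passwords = {site: generate_password() for site in ("email", "bank", "forum")}
```

Because each password is generated independently, a breach at one service gives attackers nothing they can replay anywhere else.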

Winning the Rebellion
Keeping the bad guys at bay is an eternal fight. When new security is implemented, bad actors inevitably contrive new ways to counteract it. But the Rebellion never gives up — and neither should you. Make sure there are no spies in your midst, don’t rely on cloud-based biometrics, and reduce the potential payoff from an attack. Star Wars has always shown us the consequences of even the smallest breaches — paying attention to the details keeps your galaxy safe.

Matt Davey is the COO (Chief Operations Optimist) at 1Password, a password manager that secures identities and sensitive data for enterprises and their employees. In a previous life working with agencies and financial companies, Matt has seen first-hand how important security …

Article source: https://www.darkreading.com/vulnerabilities---threats/cybersecurity-lessons-learned-from-the-rise-of-skywalker/a/d-id/1336841?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Startup Privafy Raises $22M with New Approach to Network Security

The company today disclosed an approach to data security designed to protect against modern threats at a lower cost than complex network tools.

Data security startup Privafy has officially entered the market with a new security-as-a-service application and $22 million in minority investment to continue scaling its cloud-based business.

Privafy, founded by executives of Verizon and NXP Semiconductors, aims to secure data in motion as it travels across on-prem locations, clouds, mobile, and the Internet of Things. Its app relies on the integrated functionality of encryption systems and VPNs, firewalls, distributed denial-of-service (DDoS) protection, intrusion detection and prevention systems, data loss prevention, and deep content inspection technology.

The goal is a new approach to network security. Technologies developed by the networking industry to protect data are quickly becoming obsolete for today’s cloud- and mobile-based workloads, said co-founder and CEO Guru Pai in a statement. SD-WAN and cloud-based point products are often more focused on cost reduction than addressing underlying security flaws.

Privafy aims to do both. Its Cloud Services are built to protect data across business environments from unauthorized intrusions, malware, DDoS, ransomware, and other threats at a lower cost than legacy tools. A central dashboard monitors and manages Privafy security services, including NetEdge to secure on-prem connectivity, CloudEdge to secure public and private clouds, and AppEdge to protect workers on iOS, Android, Windows, macOS, and Linux.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/startup-privafy-raises-$22m-with-new-approach-to-network-security/d/d-id/1336853?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Nobody boogies quite like you

That spasmodic jerking around that some of us refer to as “dancing”?

It’s the latest biometric: we can be identified by our twerking, our salsa, our rumba or our House moves with an impressive 94% accuracy rate, according to scientists at Finland’s University of Jyväskylä.

To be specific, the researchers asked 73 volunteers to dance to eight music styles: Blues, Country, Dance/Electronica, Jazz, Metal, Pop, Reggae and Rap. The dancers weren’t taught any steps; rather, they were simply told to “move any way that felt natural.”

Their study, described in a paper titled “Dance to your own drum,” was published in the Journal of New Music Research last week.

Identifying people by their dance moves is not what the researchers were after. They had set out to determine how music styles affect how we move:

Surely one does not move the same way in response to a song by Rage Against the Machine as to one by Bob Dylan – and research has indeed shown that audio features extracted from the acoustic signal of music influence the quality of dancers’ movements.

The original question: could they determine the style of music just by watching how people are dancing? Previous research has indicated that you can: low-frequency sound generated by kick drum and bass guitar relates to how fast you bop your head around, while high-frequency sound and beat clarity have been associated with a wider variety of movement features, including hand distance, hand speed, shoulder wiggle and hip wiggle. Dancers also increase their movements as a bass drum gets louder. Jazz is associated with lower head speed.

It could all have to do with music’s audio features, but then again, cultural norms tell us how we’re supposed to move. Jazz? Let’s swing dance! Metal? HEADBANG!

In short, testing the idea that different music will elicit different movement patterns from listeners is complicated.

There’s already a fairly large body of work using machine learning to differentiate between musical genres. Work has also been done regarding how humans identify individuals based on their distinctive bodily movements.

Building on that previous work, University of Jyväskylä researchers set out to similarly use machine learning to explore the degree to which genre can be distinguished from volunteer dancers’ bodily movements.

They designed a 12-camera optical motion-capture system to collect free dance movement data from participants moving to commercially available music from eight different genres. They also employed a machine learning model to do two things: identify participants and music genre.
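The paper’s exact pipeline isn’t reproduced in the article; as a rough, invented illustration of how per-dancer movement features could identify individuals, a nearest-centroid classifier (feature names and numbers made up) might look like this:

```python
import math
from collections import defaultdict

def train_centroids(samples):
    """samples: list of (dancer_id, feature_vector). Average each dancer's
    feature vectors into one centroid -- their 'motoric fingerprint'."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for dancer, vec in samples:
        if sums[dancer] is None:
            sums[dancer] = list(vec)
        else:
            sums[dancer] = [a + b for a, b in zip(sums[dancer], vec)]
        counts[dancer] += 1
    return {d: [x / counts[d] for x in s] for d, s in sums.items()}

def identify(centroids, vec):
    """Return the dancer whose centroid is nearest (Euclidean) to vec."""
    return min(centroids, key=lambda d: math.dist(centroids[d], vec))

# Illustrative features: [head speed, hand distance, hip wiggle]
training = [
    ("alice", [0.9, 0.2, 0.5]), ("alice", [1.0, 0.3, 0.4]),
    ("bob",   [0.2, 0.8, 0.1]), ("bob",   [0.3, 0.9, 0.2]),
]
model = train_centroids(training)
print(identify(model, [0.95, 0.25, 0.45]))  # prints "alice"
```

The same features drive both tasks the researchers attempted; the sketch only shows the identification half, which is the one that worked well in the study.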

The upshot: how we move our heads and limbs is our dance fingerprint, or what the paper refers to as our “motoric fingerprint” – we move in mathematically similar ways regardless of what kind of music we’re bopping to.

In theory, different individuals’ movements may covary differently between any markers in any dimensions, but as it is highly unlikely that participants were consciously controlling these aspects of their movements, the fact that these movement features could be used to accurately classify individuals across various musical stimuli suggests that we each have our own ‘motoric fingerprint’ which is evidenced in our free dance movements, regardless of what music is playing.

But while we can be identified by how we dance to any type of music, how we dance doesn’t tell anybody what kind of music we’re listening to. Despite researchers’ expectations, their machine-learning model did a lousy job at analyzing somebody’s movements to figure out what music they were listening to. Specifically, at its best, their model’s accuracy rate was less than 25%. That’s “well below” accuracy rates for most models that classify genre from acoustic signals, according to the paper.

Once the researchers’ model had established which person danced in which way, it could subsequently identify them, based on only their dance moves, with 94% accuracy. That rate varied based on genre, though: for example, the model had a tougher time identifying individuals who were dancing to Metal. That could be because most people won’t choose to do their own, individualistic moves, the researchers suggested. Instead, they’ll adopt the stereotypical moves – like headbanging – that the Metal culture has widely adopted.

Worried that the FBI is going to start asking you to cut a rug at the airport? That’s not what we were after, according to Dr. Emily Carlson, first author of the team’s paper. What she told New Atlas:

We’re less interested in applications like surveillance than in what these results tell us about human musicality.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uprytRWl518/