
What Has Cybersecurity Pros So Stressed — And Why It’s Everyone’s Problem

As cyberattacks intensify and the skills gap broadens, it’s hard not to wonder how much more those in the industry can take before throwing in the towel.

I often find myself at industry events and meetings with colleagues engaging in casual chitchat about the challenges we currently face as information security professionals. We typically share sighs of empathy as we relate common stories of how our weekends were nonexistent because we were responding to a Priority 1 event. To make matters worse, we have continuing professional education credits due by the end of the month for one of our many expensive certifications, and we also need to walk our dogs and find time to cut the grass.

I often ask, why do we do this to ourselves? The hours are often brutal, the service is often thankless, the cyberattacks never seem to stop, and the strategies seem to be dated — all leading to a physically and mentally taxing game of whack-a-mole. As cyberattacks intensify and the skills gap broadens, it’s hard not to wonder how much more infosec pros can take before throwing in the towel. Indeed, many of my colleagues are beginning to question whether the time and energy they are investing in developing their professional skill sets is netting them a positive ROI in the department of personal well-being.

It comes down to this: A mass exodus of overworked infosec professionals is a very real threat if we, as a community, don’t take a closer look at the multitiered problems that are creating an environment ripe for job turnover and employee dissatisfaction.

What’s Going On?
According to a 2018 study published by (ISC)², more than 84% of cybersecurity professionals said they were either open to new job opportunities or already planned to pursue a new opportunity that year. Close to half (49%) said salary was not the main reason for their sentiment. Rather, 63% of respondents said they wanted to work at an organization where their opinions on the existing security posture were taken seriously.

The fact that more than four out of five people in the industry are willing to jump ship at any given time — or have at least given thought to the idea — should be setting off alarm bells, especially given the number of job vacancies market-wide. The latter does not seem to be getting better anytime soon: In a recent study conducted by ESG, 53% of companies reported a problematic shortage of cybersecurity skills.

I wonder, though: Is there truly a skills gap, or are other factors at play that give cybersecurity professionals their pick of the 2.93 million security jobs (ISC)² calculates are open (or at least needed) across the globe?

As a cybersecurity architect who has consulted for a number of different businesses across multiple sectors, I have noticed many common refrains among security and IT operations: “Our network security specialist just left for company B last week,” “I just took over the security program a month ago,” “We had a security director who was working on a maturity plan, but she was offered a position for $50K more than what she makes now,” and the infamous, “We hired a new security engineer, but he didn’t show up on the first day because he ended up taking another offer.” Given the time it takes to recruit, interview, screen, onboard, and train a new employee, one can see how this is problematic for any business. But for a hyperfocused, specialized industry such as cybersecurity, which is already experiencing a labor shortage, it can be downright detrimental.

What’s happening? There’s no single answer. It seems to be a perfect storm: a competitive job landscape, the cost and scarcity of continuing education programs aimed at cybersecurity, and employee dissatisfaction with their companies’ stance on information security, all leading to resentment and, in extreme cases, job burnout.

Frustration Factors
Compensation in an extremely competitive market can be a driver of turnover in itself. According to recent data from tech staffing agency Mondo, salaries ranged from $120,000 to $185,000 for an information security manager and from $175,000 to $275,000 for a chief information security officer.

Yet, while these numbers sound great, salary is far from the sole indicator of job satisfaction. Dissatisfaction can arise when job expectations become unclear and an employee who was hired as a response analyst, for example, now finds himself wearing the hats of an integrations engineer, threat researcher, and fire watch captain.

Exacerbating the situation, many companies lack the personnel to fill critical security roles, which places a heavier demand on existing staff, often resulting in frustration, burnout, and overall job dissatisfaction. When there are close to 3 million well-paying infosec positions open and more vacancies expected as demand grows, it is extremely easy for a jaded employee to begin looking elsewhere, particularly when they have a highly in-demand skill set.

Speaking of skills, the demand for professionals to stay up to date with their training and education is driving many of them to look for higher-paying positions, given the increasingly hefty price tags that accompany higher education and professional certifications. Often the onus of obtaining the certifications needed for specific positions falls on employees, another source of frustration, because the cost of education continues to escalate.

In an industry where retraining and constant learning are at the core, it is easy to see how this can be a major stressor on the average infosec professional. Also of note, many universities are struggling to keep up with a curriculum that may change within a matter of months and often lack the resources to hire the personnel needed to educate future students.

Findings pertaining to the mental and physical health of cybersecurity professionals are also alarming. According to research conducted by Nominet, 25% of CISOs in the US and UK suffer from mental and/or physical stress, with 20% turning to alcohol or drugs as a coping mechanism. Stressors ranged from fear of compromise and insufficient budget to protect company assets to concerns about visibility and proactively spotting new threats within their organizations.

This fatigue is not being felt only in the executive suite. In a worldwide study of 267 cybersecurity professionals conducted by ISSA and ESG, 40% reported that their No. 1 stressor was keeping up with the needs of new IT initiatives. Coming in a close second, cited by 39%, was finding out about IT initiatives that other teams within their organizations had started with no security oversight.

Call to Arms
I am not the biggest fan of clichés, but as the saying goes, “This is an everyone problem.” Retaining and developing talent go hand in hand with creating a mature, robust security posture. Thwarting employee frustration and turnover starts with properly equipping security personnel with the tools to do their jobs, whether that means financial support for continuing education, a senior executive culture that treats security as critical, investment in more personnel, a better-defined onboarding process, clearer career paths, or mental health services for individuals who may need them. For many infosec professionals, work becomes a personal mission — often a very thankless mission, invisible to the companies they serve.

Now it is up to organizations to adopt a motto that we as cybersecurity professionals live by: “The goal is simple: Protect the human and their well-being at all costs.”

Note: These are the personal views of Kevin Coston and not necessarily those of his employer.



Kevin Coston is a cloud security architect at Akamai Technologies. He currently resides in Denver, Colorado, with his fiancé and three dogs. While not conducting security research or consulting with some of the world’s largest corporations, Coston enjoys spending his …

Article source: https://www.darkreading.com/edge/theedge/what-has-cybersecurity-pros-so-stressed----and-why-its-everyones-problem/b/d-id/1336146?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Report: 2020 Presidential Campaigns Still Vulnerable to Web Attacks

Nine out of 12 Democratic candidates have yet to enable DNSSEC, a simple set of extensions that stops most targeted domain-based attacks.

While the Mueller report showed that Russian threat actors hacked the DNC and attempted to influence the 2016 election, very few of the presidential candidates for 2020 seem to have taken the basic steps needed to defend against attacks like the ones that affected the Clinton campaign.

Based on our research, we found that many of the candidates are struggling with one particular issue many businesses face — securing their web properties from being hijacked by malicious actors.

After reading a CNN article on how a majority of candidates were failing to use DMARC to ensure the fidelity of email messages, we conducted research to analyze the web security of the 12 Democratic candidates that participated in the most recent debate and the Trump re-election campaign to see what steps they were taking against malicious attacks.

What we found was dismaying: Nine out of the 12 Democratic candidates and the Trump re-election campaign have yet to enable DNSSEC — a simple set of extensions on top of DNS that stops most targeted domain-based attacks (you can read the full report here).

Unfortunately, the candidates are not alone in failing at basic security for their web properties; we see the same thing all too often at enterprises as well.

Candidates’ Websites Put Staff and Supporters at Risk
Because of the failure to set up DNSSEC, attackers can easily exploit these candidates’ websites through DNS hijacking or DNS cache poisoning attacks, which can have serious consequences. Just as at a business, unprotected web properties put customers, partners, and employees at significant risk.

This is a tactic widely employed by attackers. So much so that earlier this year, DHS issued Emergency Directive 19-01 after DNS hijacking attacks from Iran affected six federal agencies. These same types of attacks have been used to intercept email from all major providers, including Gmail, Yahoo, and Office 365.

DNS cache poisoning allows an attacker to take legitimate traffic intended for one website, for example, https://peteforamerica.com/issues/, and redirect each of those visitors to malicious servers set up to spoof the site and steal information — usernames, passwords, credit card numbers, personal information, and more — or potentially plant fake information.

Here are the three most common ways attackers can weaponize this weakness and use a business’s — or presidential candidate’s — website to target visitors:

Hyper-targeted phishing campaigns: Spoofing the DNS means that you’ll never know if you’re visiting a website that is malicious or belongs to the candidate you support. This can result in malware being implanted on an endpoint, or simply having all of the information — usernames, passwords, credit information, and more — stolen. In our scenario above, the domain name in the victim’s browser will still be peteforamerica.com, not some typosquat, so you can’t really blame the victim.

As attackers gather this data, they can then launch even more targeted phishing campaigns at staff and supporters. As we saw with the Clinton campaign in 2016, it takes only one person clicking a bad link to put everyone’s cybersecurity at risk.

Stealing data from candidate donors and supporters: This is the most common outcome of DNS cache poisoning attacks: setting up a replica website that looks just like the candidate’s to trick website visitors. Cyberattackers do this for multiple reasons — but often to steal your data and information. For instance, Kamala Harris highlights how easy it is for supporters to donate to her campaign right on the front page — and every page: https://kamalaharris.org/

Without DNSSEC enabled, hackers can easily take advantage of this to build a replica site, “steal” all traffic, and identify donors and capture financial information.  

Disseminating fake information or implanting spyware, malware, and keyloggers on supporters’ devices: Another popular tactic of attackers is to use DNS cache poisoning attacks to create mirror websites that distort the position of the candidate or organization and/or implant all kinds of nasty things on the unsuspecting visitor’s device (phone, laptop, iPad, etc.). This could also be used to gain a foothold in one of the candidate’s networks.
 
Perhaps just as important, a collateral finding of this research concerned the candidates’ security response capabilities, or the lack thereof.
 
We reached out to the candidates and the parties multiple times about these vulnerabilities and the simple remedial steps they should take. Only Senator Warren’s campaign employed DNSSEC from the start, and only Beto O’Rourke’s campaign responded and changed the security setting after repeated disclosures.

Worse, none of the campaigns even had a dedicated address listed for reporting security issues and concerns.

This was surprising, because fixing this vulnerability is a rather simple process in the scheme of things. It takes only a few minutes to set up and configure with your domain host. For instance, Cloudflare makes this possible in a few simple clicks.
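
For readers who want to spot-check a domain of their own, one quick heuristic is to see whether the zone publishes DNSKEY records, which any DNSSEC-signed zone must do. The sketch below is a minimal illustration of that check, not the tooling behind this research; it assumes the third-party dnspython package and uses a placeholder domain. Presence of a DNSKEY record only suggests DNSSEC is configured; confirming a full chain of trust takes a validating resolver (dig's +dnssec flag is another quick way to look).

    # Minimal sketch (not the authors' tooling): check whether a domain publishes
    # DNSKEY records, a prerequisite for DNSSEC. Assumes dnspython 2.x
    # (pip install dnspython); the domain below is a placeholder.
    import dns.resolver

    def has_dnskey(domain: str) -> bool:
        """Return True if the zone publishes DNSKEY records."""
        try:
            return len(dns.resolver.resolve(domain, "DNSKEY")) > 0
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
            return False

    print("example.org publishes DNSKEY:", has_dnskey("example.org"))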

If we’re going to take a leading role in global cybersecurity, we should start by learning the lessons of 2016. As this research shows, right now we’re failing on both sides of the aisle. Don’t let your business follow suit.


Gary Golomb has nearly two decades of experience in threat analysis and has led investigations and containment efforts in a number of notable cases. With this experience — and a track record of researching and teaching state-of-the-art detection and response …

Article source: https://www.darkreading.com/endpoint/report-2020-presidential-campaigns-still-vulnerable-to-web-attacks/a/d-id/1336112?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

10% of Small Businesses Breached Shut Down in 2019

As a result of cybercrime, 69% of small organizations were forced offline for a limited time and 37% experienced financial loss.

Ten percent of small businesses hit with a cyberattack in 2019 were forced to shut down as a result, researchers found in a new survey focused on the consequences of cybercrime for small and midsize businesses.

To compile the report, commissioned by the National Cyber Security Alliance and conducted by Zogby Analytics, analysts polled 1,006 small business decision-makers on cybersecurity topics. They learned 88% consider themselves a “somewhat likely” target for attacks, including 46% who believe they are a “very likely” target. Nearly two-thirds (62%) say security is a top priority. One-third of respondents have an in-house IT department with 10 or more people, 30% have an IT department with fewer than 10, and 55% have an annually updated cybersecurity plan.

The numbers say small businesses are preparing for a future attack: Nearly half (46%) of respondents feel “very prepared” to quickly respond to a security incident and limit its impact, and 58% have a response plan they could immediately put into action. One-third say they would be able to fully operate their organization without computers. Bigger companies are better prepared: 73% of those with 251–500 employees have a response plan ready.

Still, cyberattacks can be devastating. Nearly 30% of businesses surveyed have experienced an official security breach within the past year, a number that ranges from 11% for businesses with 1–10 employees, to 44% among companies with 251–400 employees. Following a breach, 69% of these respondents were knocked offline for a limited time, 37% experienced financial loss, 25% filed for bankruptcy, and 10% went out of business, researchers report.



Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/operations/10--of-small-businesses-breached-shut-down-in-2019/d/d-id/1336156?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Planning a Zero-Trust Initiative? Here’s How to Prioritize

If you start by focusing on users, data, access, and managed devices, you will make major strides toward achieving better security.

My team and I have been on a journey toward implementing an identity-centric zero-trust approach over the last three years, leveraging existing technologies and fitting within existing budget and resources.

I was recently asked, for an organization planning a zero-trust initiative in 2020, where would I recommend prioritizing efforts when neither budget nor resources are unlimited? That is the key question for most companies considering a zero-trust initiative. Our journey will end up spanning four to five years, but by sharing our story and contributing to the Identity Defined Security Alliance (IDSA), we hope that others can move faster and achieve a stronger security posture with fewer resources.

My experience leads me to offer three key pieces of advice:

First, focus on the data. Understand where sensitive data lives and how that data flows between users, systems, and applications.

Next, direct your attention to user governance and device trust. These two items will provide you the most value, quickly.

Last, create a business plan outlining all the areas of return on investment. Include the reduction in IT spending on technologies that are no longer needed once your zero-trust implementation is complete, such as firewalls, VPNs, and Active Directory. Then detail the process optimizations and automations within IT that not only reduce the need to manage the legacy environment but also automate areas where IT spends the most time and resources. This is a wonderful way to show how your initial spending on zero trust is recouped.

The chart below maps out our progress and recommendation for how to prioritize the phases of your journey. The time frame for moving through each phase and the associated costs will depend on things such as size and complexity of the organization, available resources, and existing cybersecurity technologies. The graphic below depicts what it will cost LogRhythm — a 600-employee, software-as-a-service–driven, security product development company.

 

(Chart omitted: phased zero-trust road map and estimated costs. Source: LogRhythm)

Phase 1/Year 1: In the first phase, focus on security basics and shoring up your compliance program, if needed. In addition, the initial phase should identify potentially sensitive data and business-critical applications that store or have access to sensitive data. Then, map out the data flows and update application inventories. This will be the basis for the governance of your users, systems, applications, roles, and so forth as you move forward.

Phase 2/Year 2: Select a single source of truth, such as a human resource management system (HRMS), where you can provision roles, applications, entitlements, and access. In addition, implement a single sign-on (SSO) solution and multifactor authentication (MFA) for critical applications, if you have not already deployed them in your organization. Selecting a single source of truth provides opportunities to recoup costs associated with multiple directory technologies, and implementing SSO and its self-service capabilities (password reset, for example) can reduce help desk costs, as well as improve efficiencies in provisioning and deprovisioning (including move, add, and change requests).

Phase 3/Year 3: Implement and integrate mobile device management (or unified endpoint management) and privileged access management to only allow sensitive data to be accessed by trusted devices. The Identity Defined Security framework provides guidance on use cases and integrations needed to bring your existing identity and security technologies together.

Phases 4 and 5: These start to bring in more advanced use cases, such as a cloud access security broker (CASB) to protect sensitive data in the cloud and advanced user and entity behavior analytics (UEBA) capabilities to detect and respond to anomalous user behaviors. However, in our view, after implementing the first three phases you are already more than 50% of the way down the path to maturity.

In developing the business case for the first three phases, there are several opportunities to recoup costs. In our situation, 60% of our IT help desk tickets dealt with moves, additions, and changes associated with people. By building an integration between our identity and access management system and ADP (our single source of truth), we reduced our help desk volume by 60%. With zero trust, architectural components such as backup directories, on-premises firewalls, and VPN solutions — and even Active Directory — are no longer needed, providing an opportunity to shift money in the budget toward technologies that may not already be deployed, such as UEM, CASB, and UEBA.

While a zero-trust approach is not a security silver bullet, it is the best thing we have today. I jokingly compare it to the Titanic (obviously not from an execution perspective!). The Titanic was built around the concept that if a breach took place, it would flood one compartment and not the entire boat. When you treat each identity domain separately, authenticating and authorizing it on its own, it’s the same concept. A user may get compromised, or a system may get compromised, but that shouldn’t affect the rest of the organization — or it should buy you enough time to contain the incident before it does.

Bottom line: Zero trust is a phased approach, but if you start by focusing on users, data, access, and managed devices, you will make major strides toward better security. The business case can be a slam dunk when it includes all the elements: process optimization, efficiency gains, and recouped technology and infrastructure costs.


James Carder is the CISO and VP of Labs for LogRhythm and an IDSA Customer Advisory Board Member. He brings more than 22 years of experience working in corporate security and consulting for the Fortune 500 and US government. At LogRhythm, he oversees the company’s governance, …

Article source: https://www.darkreading.com/risk/planning-a-zero-trust-initiative-heres-how-to-prioritize/a/d-id/1336149?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Travel database exposed PII on US government employees

A property management company owned by hotel chain Best Western has exposed 179 GB of sensitive travel information on thousands of travelers, researchers said this week.

The breach, which exposed the users of many other travel services, also reportedly put sensitive US government employees at risk.

Researchers at vpnMentor, Noam Rotem and Ran Locar, were conducting a large web mapping project, port scanning IP blocks to find vulnerabilities. In a description of the breach, they explained how they stumbled upon an Elasticsearch database running on an AWS instance. The database was completely unsecured and unencrypted, they said.

After some digging, the researchers found that the database belonged to Autoclerk, which sells server- and cloud-based property management software. In August 2019, Best Western Hotel Resorts Group bought the company to add Autoclerk’s software to its own technology stack, making it easier for its property management systems to talk to the central reservation systems used by travel agents.

The database contained information from third-party travel and hospitality platforms that used Autoclerk to communicate with each other and exchange data. 

The researchers said:

The leak exposed sensitive personal data of users and hotel guests, along with a complete overview of their hotel and travel reservations. In some cases, this included their check-in time and room number. It affected 1,000s of people across the globe, with millions of new records being added daily.

Some of those travelers were employees of the US government, including military personnel and Department of Homeland Security (DHS) staff. They were exposed because one of the systems that connected to the database was operated by a contractor to the US government, military, and DHS. The researchers added:

Our team viewed logs for US army generals traveling to Moscow, Tel Aviv, and many more destinations. We also found their email address, phone numbers, and other sensitive personal data.

Attackers could use this data to phish hotel guests and extract more information from them, said the researchers, drawing comparisons with the Russian spearphishing campaign against the Democratic National Committee.

The data exposure could also have posed more immediate threats: 

This leak also endangered the safety of personnel by giving live information about their travel arrangements, right down to their hotel room number.

The researchers told US CERT about this on 13 September 2019, but it ignored them. So they told the US embassy in Tel Aviv (where the researchers are based) on 19 September. Finally, on 26 September, they were put in touch with the Pentagon, and the database was closed down on 2 October.

Configuring an Elasticsearch server so that anyone can access it from the public internet is an intentional step. The database only binds to local addresses by default, meaning that you have to deliberately configure it to listen for requests from public IP addresses. Even if you do that, AWS includes several security features out of the box, such as identity and access management, encryption of data at rest, and integration with its own security groups.

Elasticsearch also rolled several security features into the free version of its platform in May, including TLS for encrypted communications, and role-based access control.
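
To illustrate why an exposed node is so easy to stumble across, here is a minimal sketch of the kind of unauthenticated probe a scanner might send to Elasticsearch’s default REST port; it is not the vpnMentor researchers’ tooling. The host is a placeholder TEST-NET address, the requests package is assumed, and a check like this should only ever be pointed at systems you are authorized to test.

    # Minimal sketch: an unauthenticated request to the Elasticsearch REST API.
    # A loopback-bound or properly secured node never answers or returns 401;
    # a wide-open node returns 200 with cluster metadata.
    import requests  # third-party: pip install requests

    def es_openly_exposed(host: str, port: int = 9200) -> bool:
        try:
            resp = requests.get(f"http://{host}:{port}/", timeout=5)
        except requests.RequestException:
            return False  # unreachable, refused, or timed out
        return resp.status_code == 200 and "cluster_name" in resp.text

    print(es_openly_exposed("192.0.2.10"))  # placeholder address, not a real target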

Neither Autoclerk nor its owners responded to our requests for comment yesterday.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ya-mCNqAOjw/

Facebook pulls fake news networks linked to Russia and Iran

Facebook has yanked four networks of coordinated accounts that it linked to Iran, Russia and election meddling.

One of the networks that was targeting the 2020 US presidential elections appeared to be linked to the Russian troll agency known as the Internet Research Agency (IRA): the operation that concocted a slew of cardboard cutout accounts to churn out divisive blogs.

Nathaniel Gleicher, head of cybersecurity for Facebook, said in a post on Monday that the networks, made up of fake and hijacked accounts, were masquerading as local accounts so as to post political content in the run-up to the 2020 presidential election.

We’ve seen this type of inflammatory, partisan content before, in the 2016 US presidential election: posts about Israel demolishing Palestinian houses, a US Congresswoman calling President Trump racist, Black Lives Matter and other race relations hot-button topics in the US, Iranian foreign policy, and more.

Facebook said that three of the account networks originated in Iran and one in Russia. They targeted a number of different regions of the world: the US, North Africa and Latin America.

It’s not the content that Facebook is taking down, Gleicher stressed. Rather, the platform is taking action based on “inauthentic behavior.” Its policy on misrepresentation, which requires that people connect on Facebook using the name they go by in everyday life, is geared to “create a safe environment where people can trust and hold one another accountable.”

On Monday, it took down 93 Facebook accounts, 17 Pages and 4 Instagram accounts for violating its policy against coordinated inauthentic behavior (CIB). It also updated that policy on Monday. Some of its new tactics:

  • Persona non grata. If, in the course of a CIB investigation, Facebook determines that a particular organization is primarily organized to conduct manipulation campaigns, it will permanently remove that organization from its platforms in its entirety.
  • Foreign or Government Interference (FGI). Facebook said that it finds two types of CIB to be particularly egregious: Foreign-led efforts to manipulate public debate in another country, and operations run by a government to target its own citizens. “These can be particularly concerning when they combine deceptive techniques with the real-world power of a state,” the platform said.

Operations focused on the US and North Africa

Facebook traced the activity of the 93 accounts and associated Pages to Iran and said that the operation focused primarily on the US, with some activity targeting French-speaking audiences in North Africa.

Some of the accounts had already been automatically disabled. Using the compromised and fake accounts, the networks masqueraded as locals to manage their Pages, join Groups and drive people to off-platform domains connected to Facebook’s previous investigation into the Iran-linked “Liberty Front Press,” which it removed in August 2018.

The accounts had about 7,700 followers on their Facebook Pages, and around 145 people followed one or more of the Instagram accounts.

Operations focused on Latin America

On Monday, Facebook also removed an additional 38 Facebook accounts, 6 Pages, 4 Groups and 10 Instagram accounts that originated in Iran and which focused on countries in Latin America.

These Pages and accounts also posed as locals, used fake accounts to post in Groups, and managed Pages posing as news organizations. They also funneled traffic to off-platform domains, and in addition to frequently repurposing Iranian state media stories, the accounts posted content tailor-made for a particular country, including domestic news, geopolitics and public figures.

About 13,500 accounts followed one or more of the Pages, about 4,200 accounts joined at least one of the Groups, and around 60,000 people followed one or more of the Instagram accounts.

Again, Facebook discovered this coordinated network through its investigation of one of the Iran-linked networks it removed in August 2018.

Fake news entity BLMNews

Facebook also took down a small network – it had only four Facebook accounts and was followed by just 45 people – that purported to be the “source of African-American news all around the world,” including coverage of the Black Lives Matter movement.

The Iran-based network’s main purpose was apparently to drive traffic to an off-platform site – BLMNews – that masqueraded as a news outlet. The page typically posted about political issues, including topics such as race relations in the US, criticism of US and Israel’s policy on Iran, the Black Lives Matter movement, African-American culture and Iranian foreign policy.

The BLMNews network was yet another of the CIB networks discovered in the investigation that led to the takedowns in August 2018, as well as takedowns in May 2019.

Russia’s back with more IRA action

Facebook also removed 50 Instagram accounts and 1 Facebook account that originated in Russia and focused primarily on the US. The campaign showed some links to Russia’s IRA troll farm, with hallmarks of “a well-resourced operation that took consistent operational security steps to conceal their identity and location,” Facebook said.

Similar to IRA activity in the past, those steps made it tough for many of the accounts to build much of a following among authentic communities. The campaign included some fake accounts that had already been automatically detected and disabled because they were inauthentic and spammy.

The accounts mostly followed, liked or commented on others’ posts to increase engagement with their own content, reusing content already shared by others, such as screenshots of posts put up by news organizations and public figures. As well, a small number of the accounts dragged out and modified old memes that the IRA had posted in the past.

That recycling of past content is why the social network analysis company Graphika, which reviewed the campaign for Facebook, dubbed it IRACopyPasta.

Most of the posts weren’t directly related to elections, but instead focused on general news and political commentary (covering the political spectrum from ultra-right to uber-left), environmental issues, racial tensions, LGBTQ issues, or a host of other topics. That’s probably because the posts are being used to develop account personas and branding, Graphika says.

If the campaign really is coming from the IRA, it’s targeting the same issues and communities. From Graphika’s report, with the quoted material being borrowed from a Senate Intelligence Committee report on Russian interference in the 2016 election:

“Socially divisive issues, such as race, immigration, and Second Amendment rights,” which were the focus of the IRA’s previous campaigns, appear clearly throughout this set. The targeting, the language and content clues, and the recreation of original IRA content, all underscore the resemblance between the original IRA and IRACopyPasta.

Graphika notes a dearth of text in the posts, which could be a sign that Russia is trying to avoid linguistic glitches that marked some of its 2016 posts. Graphika says that this time around, the accounts are generally just pushing screenshots of other people’s tweets and memes without actually commenting on the posts.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kiW_2Z3IoYY/

Hacker breached servers used by NordVPN

Leading VPN provider NordVPN has been forced to admit that a hacker stole an expired TLS certificate key used to securely connect customers to the company’s web servers.

According to a statement, the attack happened in early 2018 at the Finnish data centre of a service provider used by the company, exploiting a vulnerability in a remote management interface which NordVPN wasn’t told about.

Not a good look for a company offering a VPN service which customers buy to boost the security and privacy of their internet connection. However, in a statement released earlier this week the company downplayed the risk of misuse:

The server itself did not contain any user activity logs; none of our applications send user-created credentials for authentication, so usernames and passwords couldn’t have been intercepted either.

There’s no evidence the stolen key was abused, nor that it could have been, given its expiration.

So that’s that? Unfortunately not. Indeed this is where the story of the NordVPN hack takes a confusing turn involving rival VPN companies.

The reason we know about this incident at all is thanks to Twitter user @hexdefined who tweeted about it at the weekend:

And how did @hexdefined know about it? Because the stolen key, and probably some others, have apparently been circulating in the dark corners of the internet for some time.

The plot thickens

Earlier this week, NordVPN came clean about the incident, saying it had decided not to mention it for 18 months in case the same vulnerability was present on some of its other 3,000 servers.

As to the possibility that other VPN providers were caught up in the same hack, TorGuard released a statement admitting that one of its servers had suffered the same fate:

The TLS certificate for *.torguardvpnaccess.com on the affected server is a squid proxy cert which has not been valid on the TorGuard network since 2017…

It’s a confusing mess.

A hack happened at a service provider used by NordVPN. Somehow, two rivals were caught up in it too. Whether this happened at the same time or in a separate incident revealed by that event isn’t clear – so far, the statements haven’t said.

Are these VPNs still secure?

If this had been exploited before discovery, an attacker could in principle have set up a bogus NordVPN server guaranteed by the stolen certificate and, potentially, used it for man-in-the-middle (MitM) attacks.

That risk was probably small and is no longer possible. But it’s a reminder that while VPNs offer security for network traffic in transit, and provide some degree of privacy by masking your IP address, they are still networks built out of servers, configured by people, running on infrastructure run by third-party suppliers.
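
As a practical aside, the reason an expired certificate is of limited use to an attacker is that standard TLS clients check the chain, the hostname, and the validity period on every connection. The sketch below, a minimal illustration using only Python’s standard library and a placeholder host, relies on exactly those default checks: a bogus server presenting an expired or mismatched certificate fails the handshake before any application data is exchanged.

    # Minimal sketch: connect with Python's default TLS validation and report the
    # expiry date of the certificate the server actually presents. The host is a
    # placeholder; an invalid certificate raises ssl.SSLCertVerificationError.
    import socket
    import ssl
    from datetime import datetime

    def certificate_not_after(host: str, port: int = 443) -> datetime:
        ctx = ssl.create_default_context()  # verifies chain, hostname, validity period
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")

    print(certificate_not_after("example.com"))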

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/f5mz8QmYkC8/

Alexa and Google Home phishing apps demonstrated by researchers

Amazon and Google have blocked spying and phishing apps that keep your smart speaker listening after you think it’s gone deaf, lie to you about an update you need to install, and then vish (voice-phish) away the password you purportedly need to speak to get that bogus install.

Long story short, don’t believe a smart speaker app that asks for your password. No regular app does that.

Eight of these so-called “Smart Spies” were built by Berlin-based Security Research Labs (SRL) and put into app stores under the guise of being horoscope or random-number generators.

SRL says that it managed to sneak in the spyware because third-party developers can extend the capabilities of Amazon Alexa – the voice assistant running in its Echo smart speakers – and Google Home through small voice apps, called Skills on Alexa and Actions on Google Home.

Those apps currently create privacy issues, SRL says, in that they can be abused to eavesdrop on users or to ask for their passwords.

Grabbing sensitive data

To capture sensitive data like passwords or credit card numbers, SRL used the following sequence:

  1. Put a seemingly innocent application through the Amazon or Google app review process.
  2. Change the app after the review so that its welcome message sounds like an error, such as “This skill is currently not available in your country”, making users think the app has quit.
  3. Reinforce the idea that the app has quit by adding a long pause after the welcome message (achieved by having the speaker “say” an unpronounceable character sequence).
  4. Have the app say a message that sounds like it’s coming from the device itself, such as “An important security update is available for your device. Please say start update followed by your password.”
  5. Capture the password as a slot value (a user input) and send it to the attackers.


Eavesdropping

To eavesdrop on users, SRL used a variation on the techniques used to grab passwords. On the Amazon Echo, the sequence looks like this:

  1. Put a seemingly innocent app through the app review process.
  2. The app has a function triggered by the word “stop”, and another function triggered by a commonly used word, or a word likely to precede something of interest to the attacker. Both functions capture what’s said immediately after they’re triggered.
  3. Change the app after the review so that the function triggered by “stop” responds with “goodbye” followed by a long pause, making users think the app has quit.
  4. Also after the review, change the second function so that it doesn’t respond when it’s triggered. If the user accidentally says the innocuous trigger word in conversation in the several seconds that elapse before the app quits, whatever follows it is sent to the attackers.

The sequence of events is similar on Google Home, but the result is far worse. On that platform, SRL was able to create an app that said Google Home’s bye sound before putting itself into a loop that captured voice data indefinitely.


Mop-up

The BBC reports that after SRL informed the companies of the vulnerabilities, Google said that it had removed SRL’s Actions and that it’s “putting additional mechanisms in place to prevent these issues from occurring in the future.”

Amazon said that it too moved fast to block the researchers’ apps and to block this type of exploit in the future:

Customer trust is important to us, and we conduct security reviews as part of the skill certification process.

We quickly blocked the Skill in question and put mitigations in place to prevent and detect this type of Skill behaviour and reject or take them down when identified.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2-76soYbixo/

Haxis of evil: Russia, China, Iran and North Korea are ‘continuous threat’ to UK, say spies

The UK’s National Cyber Security Centre (NCSC) has said in its annual review (here) that Russia, China, Iran and North Korea “continue to pose strategic national security threats to the UK”.

In the foreword, NCSC CEO Ciaran Martin said: “A significant proportion of our work has continued to take the form of defending against hostile state actors… but we can’t often talk about the operational successes and the full range of the NCSC, GCHQ and wider state capabilities that are deployed against them.”

It follows revelations from NCSC and the US National Security Agency on Monday that Russian hacker group Turla masqueraded as Iranian cybercriminals to launch attacks on government systems in the Middle East.

NCSC, the public-facing limb of the Government Communications Headquarters (GCHQ), was set up in 2016 as part of a £1.9bn strategy to oversee cybersecurity in the UK and advise businesses.

The body’s annual report, which details its efforts to combat cyber incidents in the UK, said it has handled more than 658 attacks on 900 organisations, including schools, airports and emergency services.

The report outlined some of the work to combat attacks, such as the Haulster programme, which automatically flagged fraudulent intentions against more than a million stolen credit cards, protecting hundreds of thousands of people from financial loss.


NCSC said it had also discovered that criminals were continuing to exploit the open-source e-commerce shopping platform Magento, flinging malicious card-slurping JavaScript code that skims all data entered into a page during a transaction and silently sends the results to domains controlled by the attackers.

“The NCSC conducted a successful trial to identify and mitigate vulnerable Magento carts via takedown to protect the public,” said the report. “The work now continues. To date, the NCSC has taken down 1,102 attacks running skimming code (with 19 per cent taken down within 24 hours of discovery). Without the NCSC’s Active Cyber Defence intervention, it is likely these attacks would have continued indefinitely.”

Readers who use Magento can make sure their systems are patched here.
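
Site owners who want a rough first-pass check of their own checkout pages can simply enumerate the domains a page loads JavaScript from and look for anything unexpected. The sketch below is a simple heuristic along those lines, not the NCSC’s detection tooling: the URL is a placeholder, the requests package is assumed, and it only catches statically referenced scripts, so inline or dynamically loaded skimmers need deeper inspection.

    # Rough heuristic sketch (not the NCSC's tooling): list the external domains a
    # page loads <script src=...> code from, so an unfamiliar domain stands out.
    import requests  # third-party: pip install requests
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class ScriptSources(HTMLParser):
        def __init__(self):
            super().__init__()
            self.domains = set()

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                src = dict(attrs).get("src") or ""
                if src.startswith(("http://", "https://")):
                    self.domains.add(urlparse(src).netloc)

    page = requests.get("https://shop.example.com/checkout", timeout=10)  # placeholder URL
    parser = ScriptSources()
    parser.feed(page.text)
    for domain in sorted(parser.domains):
        print(domain)  # review anything that is not your own site or a known provider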

Abuse of public-sector email domains in the UK has been another area of focus. “One such incident occurred when criminals tried to send in excess of 200,000 emails purporting to be from a UK airport, using a non-existent gov.uk address in a bid to defraud people.

“However, the emails never reached the intended recipients’ inboxes because the Active Cyber Defence system automatically detected the suspicious domain name and the recipients’ mail providers never delivered the spoof messages. The email account used by the criminals to communicate with victims was also taken down.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/23/russia_china_iran_north_korea_threat_to_uk_ncsc/

Deepfakes, quantum computing cracking codes, ransomware… Find out what’s really freaking out Uncle Sam

Vid The US House Committee on Homeland Security grilled a panel of experts to understand how foreign adversaries could weaponise emerging technologies like AI and quantum computing in cybersecurity.

“The rapid proliferation of new technology is changing the world,” Cedric Richmond (D-LA), chairman of the Cybersecurity, Infrastructure Protection, and Security Technologies subcommittee of the House Homeland Security Committee, said in his opening statement on Tuesday.

“Unfortunately, one man’s tool is another man’s weapon. Sophisticated nation-state actors like Russia, China, Iran, and North Korea have already weaponized new technologies to disrupt our democracy, compromise our national security, and undermine our economy. As technology improves, so will their ability to use it against us.”

Richmond led the hearing with Bennie Thompson (D-MS), chair of the House Homeland Security Committee. The pair were particularly concerned with Russian miscreants planting so-called deepfakes, a type of fake audio and/or visual content generated using machine-learning algorithms, to spread misinformation online to compromise the upcoming 2020 presidential election.

The Internet Research Agency, described as Russia’s troll farm, churned out thousands of bot accounts on social media platforms like Twitter to spread propaganda in the US during the 2016 election. Politicians have also recently fallen prey to deepfake attacks, where their likenesses have been manipulated to say and do things they haven’t actually done. So the fear that the next wave of fake social media accounts will be generating and spreading deepfakes in the not-too-distant future isn’t unreasonable.

Experts are scrambling to study the effect the Kremlin’s disinformation campaign had on voters during the 2016 White House race. As part of that research effort, Twitter last week released a data set containing more than ten million tweets from suspected puppet accounts.

Jim Langevin (D-RI) noted that Moscow’s election campaign interference was “very well planned.” Fake accounts were set up months before they were used. There was a main group that generated fake content, a second, larger, group responsible for retweeting the fake messages, and finally real people who believed and amplified the messages further by retweeting.

Ken Durbin, a senior strategist for global government affairs and cybersecurity at Symantec, who testified at the hearing, agreed. He also warned that deepfakes don’t just pose a threat to politicians; they’re potentially dangerous for enterprises, too.

“Fake content like videos, photos, audio recordings or emails represent a serious risk to individuals as well as the organization,” he said. “Imagine a deepfake of a CEO announcing a series of layoffs, or one directing an employee to wire out funds or intellectual property. It would hurt their stock price.”

The race is on for developers to come up with new strategies that can detect deepfakes. Facebook and Google have both compiled data sets made up of AI generated images and videos to help researchers train detection models, and some boffins are trying more esoteric methods.

Corporate espionage, and, erm, quantum computing?

Other threats, like quantum computing, were less tangible. Google and IBM are squabbling over alleged quantum supremacy at the moment, though the capabilities lawmakers discussed during the hearing feel light years away. Publicly known quantum computers just aren’t that useful right now. Above all, China is the enemy, Thompson said.

“We know that China has engaged in intelligence-gathering and economic espionage, and has successfully breached [government employees], navy contractors, and non-government entities from hotels to research institutions,” he said.

“We also know that China is investing heavily in developing quantum computing capabilities, which could undermine the security value of encryption within the next decade.”

Sensitive data is typically encrypted using algorithms that scramble the information, making it difficult for adversaries to intercept and recover the data without the necessary keys. Quantum computers could hypothetically crack these encryption algorithms to decrypt classified information, but machines capable of that don’t, to the best of our knowledge, exist, and won’t for some time. And in the meantime, boffins are already developing post-quantum algorithms anyway.

A more realistic threat, of course, is the good old-fashioned phishing attack, which has been used to ransack private contractor companies, steal military secrets, and interfere with power grids. The committee also considered ransomware raids that siphoned off millions in cryptocurrency, and said a lack of information sharing among agencies was an issue.

“There are very few cases where we know what happened,” Robert Knake, a senior research scientist at the Global Resilience Institute at Northeastern University, told the hearing. The culture of secrecy harms the ability of companies and the government to defend themselves against corporate espionage.


Knake called for “collaborative defense” partnerships between both business and government. “The ‘partnership’ that has been the central tenet of our national cybersecurity policy for two decades needs to evolve to real-time, operational collaboration,” he opined. “In order for that to happen, we need collaboration platforms where the members of this partnership can trust each other.

“Government needs to be able to trust that the intelligence it shares will be protected and only shared appropriately and securely. But private companies need the same degree of assurance when they share with the government and with each other.”

He also called for the government to make it harder for China to infiltrate private US companies in espionage attacks. For example, one important question we should ask is, after cutting China out completely, “can we maintain global supply chains?” Knake said. He warned that components sold in the US, whether networking equipment or smartphones, should be manufactured stateside or in allied countries.

Niloofar Razi Howe, a senior fellow at the Cybersecurity Initiative at New America, a US national security think tank, went further and added: “Tech companies that are co-conspirators with our adversaries must be regulated.” Ahem, take note, Tim Cook.

You can watch the 90-minute hearing in full online. ®



Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/23/homeland_security_ai/