
VTech fondleslabs for kids ‘still vulnerable’ despite sanctions

New InnoTab child learning devices still have the same security flaw first found by researchers at Pen Test Partners two years ago.

The issues persist even after manufacturer VTech was fined $650,000 by US watchdogs at the Federal Trade Commission (FTC) via a ruling published earlier this week. The settlement deal came after the FTC scolded the children’s toymaker for both unnecessarily collecting kids’ personal information and (worse) failing to protect this sensitive data before a massive breach in November 2015.

As well as paying the fine, VTech agreed to apply privacy and security requirements so that it complied with the Children’s Online Privacy Protection Act (COPPA) and the FTC Act, as previously reported.

The 2015 hack on VTech’s online services led to the theft of sensitive customer information about millions of children and parents.

Tests by UK security consultancy Pen Test Partners found it was possible to lift data from the InnoTab tablet, as El Reg reported at the time.

The same tests on a newly purchased InnoTab reveal that the same hack is still possible and that nothing has been done to address the problem, according to Pen Test Partners’ Ken Munro.

The FTC settlement resulted in VTech promising to improve its security. More specifically, the deal means that VTech is “required to implement a comprehensive data security program, which will be subject to independent audits for 20 years” and is barred from “misrepresenting its security and privacy practices”.

In response to queries from El Reg, VTech said it was working hard to fulfil its security obligations. It said that the “criminal cyber attack on VTech databases should not be compared with the physical dismantling of one of our products” since they are “fundamentally different acts” before stating that it takes security in general seriously.

“While it is not appropriate to share the details, we updated our data security policy and adopted rigorous measures to strengthen the protection of our customers’ data following the cyber attack in 2015.

“We can assure you that we take the commitment on cyber security we gave the FTC last week very seriously indeed. VTech is committed to and will progressively execute data security improvements so that customers of VTech products and services can rest assured the data they entrust with VTech is well protected.”

Munro wasn’t impressed by what he described as a “carefully caged non-answer”. “It doesn’t deal with the hardware security issues we raised,” he added. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/18/innotab_kid_tech_still_vulnerable/

Industrial systems scrambling to catch up with Meltdown, Spectre

Vendors of industrial systems have joined the long list of companies responding to the Meltdown and Spectre processor vulnerabilities.

So far, a dozen vendors have told ICS-CERT they use vulnerable processors, and The Register imagines there will be plenty more to come.

Gold stars go to just two vendors: Smiths Medical, which has determined that none of its products are vulnerable; and OSISoft, whose PI System is vulnerable, and whose advisory includes anticipated performance impacts.

Emerson Process and General Electric treat their responses as customer information only, and keep them hidden behind a regwall. So does Rockwell, for what it’s worth, but the latter company at least spoke to The Register about the impact on its systems.

Another seven vendors said they are “investigating” the impact – ABB, Abbott, Johnson & Johnson (added points for giving the advisory a 2017 timestamp), Philips, Schneider Electric, and Siemens.

As readers know, the bugs arose out of how processors implement speculative execution. Patches are a giant headache for vendors and users alike, causing both performance and stability issues. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/18/ics_cert_meltdown_responses/

North Korea’s finest spent 2017 distributing RATs, wipers, and phish

North Korea’s black hats launched at least six extensive malware campaigns mostly against South Korean targets during 2017.

That’s the conclusion of Cisco Talos researchers Warren Mercer and Paul Rascagneres (with contributions from Jungsoo An), who spent the year watching goings-on on the Korean peninsula.

The researchers focussed on one North Korean organisation, which they dub Group 123, and its continuing campaigns against the South.

Remote Access Trojans – RATs – are Group 123’s favourite approach, with three phishing campaigns (“Golden Time”, “Evil New Year” and “North Korean Human Rights”) working to deliver ROKRAT to targets.

At least two of those campaigns were published by Talos at the time, but without a firm attribution to North Korea.

The three campaigns tried to get users to infect themselves via malicious documents for the Hancom Hangul Office Suite, South Korea’s market leader, exploiting vulnerabilities such as the CVE-2013-0808 EPS viewer bug to pull down the RAT.

That’s a rather old vulnerability, so when CVE-2017-0199 (arbitrary code execution from a crafted file) landed, the Norks’ hackers got to work. In less than a month, Talos said, Group 123 launched the FreeMilk campaign against financial institutions beyond the Korean peninsula.

A binary called Freenki (sometimes called by another binary, PoohMilk) then hauled down a ROKRAT-like trojan.

Finally, the “Are You Happy” campaign [surely you didn’t really fall for that in the e-mail subject line? – Ed] was simply destructive: it deployed a module from ROKRAT to wipe the first sectors of the victim’s hard drive.

Oh, and happy 2018: on January 2 this year, Group 123 ushered in the new year with a redux of its Evil New Year campaign. This time, the Talos post noted, the malware-slingers are trying to evade detection with a fileless version of ROKRAT. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/18/north_korean_2017_hacking_campaign/

HTML5 may as well stand for Hey, Track Me Longtime 5. Ads can use it to fingerprint netizens

Usenix Enigma HTML5 is a boon for unscrupulous web advertising networks, which can use the markup language’s features to build up detailed fingerprints of individual netizens without their knowledge or consent.

In a presentation at Usenix’s Enigma 2018 conference in California this week, Arvind Narayanan, an assistant professor of computer science at Princeton, showed how some of the advanced features of HTML5 – such as audio playback – can be used to identify individual browser types and follow them around online to get an idea of what they’re into.

For example, different browsers process sound files in slightly different ways, allowing an ad network – or any website – to potentially work out which version of a browser is being used on which operating system. Couple this with other details – such as the battery level and WebRTC behaviour – and you can start to form a fingerprint for an individual user.

Of course, your browser typically reveals its version number and the underlying operating system’s details to web servers when fetching pages and other materials. However, from what Narayanan is saying, it is possible for ad networks and webmasters to bypass any attempts to suppress that information by probing the browser with HTML5 for traceable details. It also means that dumping JavaScript and cookies, and relying purely on HTML5, won’t mean you’re completely free from online tracking by advertisers.

“HTML5 browsers use a library to do audio processing, but different software stacks produce a unique fingerprint in combination with other data,” he explained. “Similar techniques also work on the battery and WebRTC functions.”
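The quirks Narayanan describes only become a tracking identifier once they are collected and hashed into one stable key. A rough sketch of that combination step (the attribute names and values below are illustrative, not any real tracker’s schema):

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Fold a set of browser quirks into one stable identifier.

    Canonical JSON (sorted keys, fixed separators) ensures the same
    quirks always hash to the same ID, regardless of collection order.
    """
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical values a tracker might collect via HTML5 APIs
quirks = {
    "audio_sum": 124.0434806260839,     # e.g. summed OfflineAudioContext samples
    "battery_level": 0.87,              # Battery Status API
    "webrtc_local_ip": "192.168.1.23",  # leaked via WebRTC candidates
    "platform": "Linux x86_64",
}
print(fingerprint(quirks))
```

The point of the sketch is that each attribute narrows the anonymity set: even with no cookies and a suppressed version string, enough small quirks combine into a near-unique key.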


Fingerprint … Each browser type has its own way of processing audio that makes it easy to track, according to this slide by Arvind Narayanan

Narayanan and his team have been monitoring the behavior of ad trackers for years. In 2014, they discovered that 5,000 of the world’s 100,000 most-visited websites were, in one way or another, using a canvas fingerprinting technique to identify and follow netizens around the internet, as they moved from page to page, site to site, without their knowledge.

Further research last year found that ad networks were using session replay scripts, which he described as “analytics on steroids,” to stalk people online. Narayanan said he and his team found ad trackers on 8,000 websites leaking visitors’ information in this way – including code on the website of American pharmacy chain Walgreens, which apparently handed confidential patient records to advertisers via forms, as well as the Gradescope assignment-grading software used by Princeton.

“This [session replay technique] left website owners and users pissed off,” he said. “Once we detailed the technique, the largest ad tracking providers stopped doing it. It seems sunlight is a great disinfectant.”

But this scrutiny only works up to a point, he warned. Netizen-tracking firms aren’t going to stop following people around the ‘net and working out what interests them so they can be served targeted adverts and special offers. Narayanan was one of the team overseeing the now-imploded Do Not Track browser feature, and the ad industry was adamant: if 15 per cent or more of internet users turned tracking off, the banner networks would refuse to play ball and track them anyway.

Technical workarounds by ad blockers, such as Privacy Badger and Ghostery, are of some use, he said. But they are usually playing catch up with ad trackers, not blocking them from the start.

The only way this is going to stop is if web browser programmers step up and build in measures to curb the ability to stalk users. But Narayanan said browser makers don’t want to get involved.

“Historically, web browsers consider it’s not their problem. Vendors are attempting to be neutral on this, and leave it to users to sort out,” he said. “To users that’s like an email provider saying that they are neutral on spam. Protection of privacy is a core reason for user choice.”

There have been some encouraging moves. The Brave browser has been developed specifically to neuter naughty advertising trackers, and both Firefox and Safari are making more of an effort in this area, he said. Chrome is also, we note, making noises in that direction.

But what’s needed is a fundamental rethink, with features that ensure tracking-free browsing, just as private browsing doesn’t record session data on a local workstation. Some kind of warning, similar to the HTTPS icon, would also be useful.

It’s important that these anti-surveillance techniques are implemented, he said, because privacy is vital to society – and there’s plenty of evidence showing a lack of privacy stifles debate. “Privacy is a lubricant that allows for social adaptability,” Narayanan opined. “If we move to a state of pervasive surveillance we lose that mobility.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/17/html5_online_tracking/

Who’s using 2FA? Sweet FA. Less than 1 in 10 Gmail users enable two-factor authentication

Usenix Enigma It has been nearly seven years since Google introduced two-factor authentication for Gmail accounts, but virtually no one is using it.

In a presentation at Usenix’s Enigma 2018 security conference in California, Google software engineer Grzegorz Milka today revealed that, right now, less than 10 per cent of active Google accounts use two-step authentication to lock down their services. He also said only about 12 per cent of Americans have a password manager to protect their accounts, according to a 2016 Pew study.

We polled El Reg readers on Twitter just before we published this piece, asking: “What percentage, rounded to nearest integer, of Gmail users do you think use two-factor authentication?” Out of 838 followers who responded within the hour, 82 per cent correctly selected less than 10 per cent. The rest picked more than 10 per cent.


Shameful … Milka’s stats at Enigma

The Register asked Milka why Google didn’t just make two-factor mandatory across all accounts, and the response was telling. “The answer is usability,” he replied. “It’s about how many people would we drive out if we force them to use additional security.”

Please, if you haven’t already done so, just enable two-step authentication. This means when you or someone else tries to log into your account, they need not only your password but also authorization from another device, such as your phone. So, simply stealing your password isn’t enough – they need your unlocked phone, or similar, to get in.
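That second factor is typically a short-lived code derived from a secret shared between the service and your phone. A minimal sketch of the standard TOTP scheme (RFC 6238, which authenticator apps implement) shows why a stolen password alone isn’t enough – the attacker would also need the shared secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Derive the current one-time code from a shared Base32 secret."""
    if timestamp is None:
        timestamp = int(time.time())
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor: number of 30-second steps since the Unix epoch
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238's test secret ("12345678901234567890" in Base32)
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Both sides compute the same code independently, so intercepting one code is useless half a minute later. (This is a teaching sketch – production systems should use a vetted library rather than hand-rolled crypto.)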

Google has tried to make the whole process easier to use, but it seems netizens just can’t handle it. More than 10 per cent of those trying to use the defense mechanism had problems just inputting an access code sent via SMS.

What if you don’t have two-step authentication, and someone hijacks your account? Well, Google is on the lookout for that, too.


Anatomy of a hack … An account hijacker’s actions

To spot criminals and other miscreants commandeering a victim’s webmail inbox, the Chocolate Factory has increased its use of heuristics to detect dodgy behavior. A typical attacker has a typical routine – once they manage to get into an account, they shut down notifications to the owner, ransack the inbox for immediately valuable material such as Bitcoin wallet credentials or intimate photos, copy the contacts list, and then install a filter to mask their actions from the owner.
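That fixed routine is what makes heuristic detection feasible: the defender watches for the tell-tale sequence rather than any single action. A toy illustration of the idea (the event names here are invented; Google’s real signals aren’t public):

```python
# The classic hijack playbook as an ordered sequence of account events
HIJACK_PATTERN = (
    "disable_notifications",
    "search_mailbox",
    "export_contacts",
    "create_filter",
)

def looks_like_hijack(events, pattern=HIJACK_PATTERN):
    """Return True if the pattern occurs as an ordered subsequence of events."""
    it = iter(events)
    # 'step in it' consumes the iterator, so each step must appear
    # after the previous one - i.e. in the attacker's typical order.
    return all(step in it for step in pattern)

session = ["login", "disable_notifications", "read_mail",
           "search_mailbox", "export_contacts", "create_filter"]
```

A real system would weight signals like these alongside location, device, and timing anomalies rather than requiring an exact sequence.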

By looking out for and alerting folks to these shenanigans, Google hopes to make account hijackings less commonplace. But, given netizens’ lack of interest in security, warnings about suspicious activity are unlikely to get people moving to protect their information. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/17/no_one_uses_two_factor_authentication/

Living with Risk: Where Organizations Fall Short

People tasked with protecting data are too often confused about what they need to do, even with a solid awareness of the threats they face.

I am the first to admit that I possess a robust naivety about the general public’s appetite for risk. How can people agree that there is a risk and then exhibit behaviors that would seem to indicate that they find the risk irrelevant or that they are immune? I eagerly consume any report or survey that might shed some light on “how” and “why” someone could justify living with (or even exacerbating) security risks.

While the news always seems to be filled with examples of companies being woefully underprepared for breaches, my discussions with the corporate security practitioners who attend IT industry conferences show me an impressively nuanced understanding of risk. This leads me to assumptions about the factors that are causing the increasingly grotesque breaches we read about. But perhaps my preconceptions need adjusting.  

The 2017 Ernst & Young Global Information Security Survey, for example, is a resource that asks a lot of questions, with answers that I find fascinating and sometimes unexpected. This survey covers many aspects of security incident preparedness, and it represents the responses of almost 1,200 C-suite leaders as well as information security and IT executives/managers. These participants come from companies of all sizes, revenue levels, and industry sectors.

Unsurprisingly (to me), the surveyors found that budget, skill, and executive support are items of concern; who among us doesn’t feel we could do a better job with fancier tools and unlimited funds? But the numbers in this case are less dire than I expected. Slightly more than half of respondents expressed these woes: 59% cite budget constraints and 58% lament a lack of skilled resources. I was even more surprised by how few people feel a lack of support from higher-ups; only 29% of respondents complain about a lack of executive awareness or support.

Despite these seemingly encouraging numbers, the survey results don’t translate into concrete action from a security perspective. Some 56% of respondents said either that they have made changes to their business strategies to account for the risks posed by cyber threats, or that they are about to review strategy in this context. Only a meager 4% of organizations are confident they have fully considered the information security implications of their current business strategies and that their risk landscape incorporates all relevant risks and threats. While this may speak to the complexity of the threatscape, it also indicates how many organizations feel completely overwhelmed by the task of addressing all the risks in their environments.

Low Grades on Data Protection, Vulnerability Identification
Most organizations don’t seem to know where to start in creating proactive security postures: 35% of the survey’s respondents describe their data protection policies as ad hoc or nonexistent. Consequently, it’s understandable that 75% of respondents rate the maturity of their vulnerability identification as very low to moderate. 

Most organizations do at least have reactive processes in place for determining whether they’ve been attacked; only 12% have no breach detection program in place. But the most worrying finding of the Ernst & Young survey is that some organizations may be confused about their legal responsibilities: 17% of respondents say they would not notify all customers, even if a breach affected customer information, and 10% would not even notify customers known to be affected.

What I take from all this is that the people who are tasked with protecting data within organizations are often deeply confused or misinformed about what they need to be doing, even when there’s adequate awareness of risk and support for correcting it. Rather than preparing in advance, most organizations are reacting to alarm bells only after the damage has been done. This bodes poorly for the industry when a diverse range of organizations are one unlucky day away from serious disruption.

Given the increasing complexity of technology, the persistent obscurity of digital security regulation, and the growing sophistication of threats, this problem is sure to increase. Rather than focusing on helping businesses assemble a collection of the fanciest widgets in all the land, we as security educators and professionals should instead focus on the everyday processes of security that are as banal and crucial as regular janitorial service. While counting machines and planning network structure may be less exciting than the blinky lights of advanced gadgetry, it would seem that this is precisely what would most benefit many organizations.


Lysa Myers began her tenure in malware research labs in the weeks before the Melissa virus outbreak in 1999. She has watched both the malware landscape and the security technologies used to prevent threats from growing and changing dramatically. Because keeping up with all … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/living-with-risk-where-organizations-fall-short/a/d-id/1330828?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Which CISO ‘Tribe’ Do You Belong To?

New research categorizes CISOs into four distinct groups based on factors related to workforce, governance, and security controls.

If you’re a CISO or another level of security manager, new research predicts you will fall squarely into one of four “tribes” depending on the nature of your role and how the overall organization approaches cybersecurity. Each tribe has a different approach to serving as a CISO.

This is the crux of the inaugural CISO Report published today by Synopsys. The research spanned two years and involved 25 interviews with CISOs at companies including ADP, Bank of America, Cisco, Facebook, Goldman Sachs, JPMorgan Chase, Starbucks, and US Bank.

The driving idea was to learn how individual CISOs perform compared with one another, what CISOs actually do all day, and how their work is organized and executed.

“The coolest thing was that CISOs were so eager to find out what we were going to find out,” says Gary McGraw, vice president of security technology at Synopsys. Most CISOs stay within their organizations and lack data to measure performance. This study aimed to collect data that would help CISOs learn where they stand and how they can improve.

There is no “universal blueprint” for the CISO but there are common factors researchers used as a basis for comparison among CISOs they interviewed. These included workforce (organization structure, management, staff), governance (metrics, budget, projects), and controls (framework, vulnerability management, vendors). The three domains helped organize results.

Based on the data collected, researchers identified four groups of CISOs. These include:

  • Tribe 1: Security as an Enabler
  • Tribe 2: Security as Technology
  • Tribe 3: Security as Compliance
  • Tribe 4: Security as a Cost Center

“The tribe is an assignment that’s not just for an individual,” McGraw notes. “It applies both to the CISO and the firm they’re in.” A CISO’s tribe is determined by 18 “discriminators,” or factors used to tease CISOs apart. These include “CISO-board relations” and “program management.”

What’s your tribe?

Tribe 1 is, in a sense, “the goal tribe,” says McGraw. “The board understands security, the firm as a whole knows security is important. Every business unit is aligned properly with security, because security is part of the way the firm does business.”

In these firms, the CISO is the highest-level executive under the CEO. Security is business-centric; every division thinks about computer security and security is part of everybody’s job. The enterprise focus and CISO role as a senior executive set this group apart, McGraw says.

Tribe 2, which treats security as technology, is similar in the sense they have advanced security practices. “These are firms that have moved well past compliance,” McGraw explains. “The firms in tribe 2 have great CISOs and are doing a great job with security.”

However, CISOs in tribe 2 lack the “senior executive gravitas” of CISOs in tribe 1. “They’re senior people, they have a lot of power and influence, but they’re not the alpha in the room,” he says. In a software firm or another tech-focused company, tribe 2 CISOs don’t need to aspire to move up because the business is already focused on tech and they don’t need the executive pull.

Tribe 3 CISOs struggle because they’re often strong leaders who know how to get things done – but their companies prioritize compliance above all else. McGraw says this often happens if a business has a data breach or gets in legal trouble. Further, historical underinvestment in cybersecurity means these firms continue to underinvest despite compliance requirements.

“Often compliance is the goal and they can’t get their firm to move past that goal,” he explains. “Compliance is a bare minimum; it’s a low bar. You have to get over that bar, for sure.”

Tribe 4 CISOs “are often overwhelmed and under-resourced,” McGraw says. “They don’t really create budgets, and sometimes they don’t request budgets. They just get given budgets.”

These are often middle-management professionals who are not called CISOs but perhaps “director of IT security” or a similar title. Their firms are relatively new to cybersecurity and haven’t yet begun to prioritize it. McGraw anticipates tribe 4 is the largest group overall, taking all businesses outside this study into consideration.

Improving the CISO’s Stance

Knowing your tribe can help change your tribe, a process that requires a shift in business strategy and leadership. The CISO Project report emphasizes the importance of identifying and managing risk, developing and retaining the right talent, and establishing middle management to serve as a gateway from entry-level security roles up to the C-Suite.

Troy Hunt, information security author and instructor at Pluralsight, explains how CISOs can create a security-focused culture within the enterprise. “The objectives of security are often not consistent with the objectives of the business and development teams,” he says. Many people want to know how they can make security concepts more pervasive.

One of his recommendations is to get different departments on the same page. If a business has separate security and development teams, there’s often tension between the two.

“I’ve seen a lot of trouble with security and dev teams just getting along and speaking the same language,” Hunt says. “There’s often a lot of friction when developers think the security team is there to get in their way and stop things from getting done.”

Skill development is another key component, he says, echoing the CISO Project report. Hunt recommends finding and focusing on “security champions,” or people who are particularly motivated to learn more about security. Find this talent and send them to workshops and conferences, he says, then have them come back and teach other people.

“There’s so much in the industry and so much changing that if you can find those people, that’s a really valuable thing,” he says.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/operations/which-ciso-tribe-do-you-belong-to/d/d-id/1330840?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Rolls Out Security Center for G Suite Enterprise

New dashboards give admins a look at data such as suspicious device activity and spam email delivery across the business.

Google is integrating a new security center into G Suite to give administrators a more granular view of security metrics for employees’ devices, and guidance for managing them.

The idea behind this update is to give admins a single place to see their enterprise security posture. In one dashboard, a series of individual windows presents data including suspicious device activity and how spam and malware emails are targeting users across the business.

“It basically helps administrators by providing them a single comprehensive view into the security posture of the organization,” says Chad Tyler, product manager for the Security Center.

Admins can click on individual graphs to learn more about specific types of data and act on them. If you want to learn more about phishing attacks, you can view what types of phishing emails users are seeing, and who receives the most malicious messages. If someone is often targeted, you know to ensure they have additional precautions like two-factor authentication.

In another example, admins can also view which files are triggering data loss prevention alerts. Based on this data, they can take action to see which users are sharing information. The data in Security Center is collected from devices logged into their corporate Google accounts.

“A lot of this information is based on usage logs we have around auditing within the administrator console,” says Tyler. “When a user is using Gmail, there are logs associated with the different things sent and received. This is the organization’s view of what’s going on.”

In a separate window, the Security Center has a list of security guidance recommendations. Admins can see their current settings and read up on Google’s recommended settings to reduce risk. Tyler points out that all best practices will look the same in each admin’s Security Center so it’s worth considering individual settings to determine which is best for your organization.

This component of Security Center is less of a notification system and more of a management tool, says Tyler. Google will update the recommendations based on new security information or new settings.

The Security Center is solely for admins and won’t present alerts or best practices to end users, he adds, noting that Gmail already has measures in place to prevent successful attacks.

“There’s already a lot of protection for the end users, to keep them from clicking what’s known to be spam,” Tyler says. “This is just giving information to administrators to better understand what’s going on and make those higher-level decisions.”

Security Center is part of G Suite Enterprise and will automatically appear in admin consoles over the next few days.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/cloud/google-rolls-out-security-center-for-g-suite-enterprise/d/d-id/1330835?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Where to Find Security Holes in Serverless Architecture

Serverless architectures take away business responsibility for server management, but security should still be top of mind.

Application security is getting a twist with the rise of serverless architectures, which introduce a new way of developing and managing applications – and a new wave of related security risks.

Serverless architectures, also known as Function as a Service (FaaS), let businesses build and deploy software without maintaining physical or virtual servers. That’s the job of providers like Amazon, Microsoft, Google, and IBM, which run popular serverless architectures AWS Lambda, Azure Functions, Google Cloud Functions, and IBM BlueMix Cloud Functions, respectively.

A common use case for serverless applications is altering media files. If someone uploads a file to an AWS S3 bucket, an application can invoke a function to automatically resize the image. If someone sends an SMS in a chatbot application, a separate function could send a return SMS.

Businesses are looking to serverless architectures to drive simplicity and reduce cost. Applications built on these platforms scale as cloud workloads grow, so developers can focus on product functionality without worrying about the operating system, application server, or software runtime environment, explains Ory Segal, PureSec CTO.

“You can stitch together applications that are event-driven, and at the same time you don’t have to manage any of the infrastructure – it automatically scales,” says Segal. “If there’s one event, one function will get invoked. If there’s [more], then the provider is responsible for [supporting] as many functions as you need events.”

Billing is based on CPU time, he says of the cost benefit. If there’s no computing being done, the organization doesn’t pay for anything. Vendors charge per 100 milliseconds of compute. “It’s very simple to develop in serverless, it’s very cheap to develop in serverless,” he adds.
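With billing in 100-millisecond increments, the “pay only for compute” claim reduces to simple arithmetic. A back-of-envelope sketch (the rate below is a placeholder, not any provider’s actual price list):

```python
import math

def invocation_cost(invocations, avg_duration_ms, price_per_100ms):
    """Cost of a batch of invocations, billed in 100ms units rounded up."""
    units_per_call = math.ceil(avg_duration_ms / 100)
    return invocations * units_per_call * price_per_100ms

# Hypothetical: a million 250ms invocations at $0.000002 per 100ms unit
monthly = invocation_cost(1_000_000, 250, 0.000002)  # 3 units per call
# Idle functions cost nothing:
idle = invocation_cost(0, 250, 0.000002)
```

The rounding-up per invocation is why a 1ms function and a 100ms function cost the same per call, while zero invocations cost exactly zero.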

The Security Risks of Serverless

However simple and cost-effective, this architecture has its security issues. Serverless applications are still at risk for breaches and traditional security solutions are not relevant in this space, says Segal. Users hand over the responsibility of security patches to providers.

PureSec today published its “Serverless Architectures Security Top 10,” a list of security risks in these services. Researchers compiled the list from scans of more than 5,000 serverless projects on GitHub, performed with algorithms created by PureSec, along with partner data and insights.

“There’s a big chunk of IT security that is now the responsibility of the cloud provider,” he explains, adding that security admins can’t install tools like antivirus, firewalls, and IDS. “You don’t control the environment. You don’t control the network, you don’t control the servers.”

Major security issues include a larger attack surface. Serverless functions pull data from a broad range of event sources (HTTP APIs, cloud storage, IoT device communications), which increases the attack surface when messages can’t be scanned by Web application firewalls. Given the newness of serverless architecture, the attack surface can also be complex to understand.

PureSec’s Top 10 list digs into specific risks. The first, and most critical, is Function Event-Data injection. Injection flaws are a common risk, but in serverless architecture they’re not limited to direct user input. Serverless functions can take input from any type of event source (cloud storage, SQL database) and each input could be controlled by an attacker.
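The injection risk looks the same in serverless as anywhere else once the event payload reaches a query; what changes is how many sources the payload can arrive from. A minimal sketch, using SQLite for the database (the function and table names are invented for illustration):

```python
import sqlite3

def lookup_user_unsafe(db, name):
    # VULNERABLE: event data is concatenated straight into SQL.
    # A value like "x' OR '1'='1" changes the meaning of the query.
    return db.execute("SELECT id FROM users WHERE name = '%s'" % name).fetchall()

def lookup_user_safe(db, name):
    # Parameterized query: the driver treats the event value strictly as data.
    return db.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

In a serverless function, `name` might come from an HTTP request, a storage notification, or a queue message; the parameterized form is safe regardless of the event source.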

The second most-critical risk is Broken Authentication. Serverless applications can pack dozens to hundreds of different functions. Some may glue processes together; others may consume events of different source types. Applying robust authentication is necessary and complicated. Users must secure the serverless function and the applications with which it interacts.

“A weak authentication implementation might enable an attacker to bypass application logic and manipulate its flow,” the report explains. This could let an attacker execute functions and perform actions that weren’t supposed to be exposed to unauthenticated users. PureSec advises businesses to use the authentication tools provided by their serverless environment.
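PureSec's advice is to lean on the platform's own authentication facilities rather than roll your own. As a minimal illustration of what explicit per-function verification involves (a shared-secret HMAC check, a common pattern for webhook-style event sources; this is a sketch, not PureSec's recommended mechanism):

```python
import hmac
import hashlib

def verify_signature(secret, body, signature):
    # Recompute the HMAC of the raw event body and compare in constant time,
    # so timing differences don't leak information about the expected value.
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Every function that consumes the event source has to perform a check like this; miss one, and that function becomes the unauthenticated path into the application's logic.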

The growth of serverless architecture is introducing a “paradigm shift” in security, Segal says. “If we used to secure the infrastructure, the perimeter, the network, we now have to secure the serverless execution itself.” Developers are responsible for designing robust applications and ensuring their code doesn’t introduce any vulnerabilities to the application layer.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/cloud/where-to-find-security-holes-in-serverless-architecture/d/d-id/1330842?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Threats from Russia, North Korea Loom as Geopolitics Spills into Cyber Realm

Threat actors from both nations ramped up their activities sharply in 2017, Flashpoint says in a new threat intelligence report.

Cyberthreat activity from Russia and North Korea ramped up last year in response to several geopolitical factors, while that from China — long a source of problems for US organizations — tapered off a bit, a new business risk intelligence report from Flashpoint shows.

Flashpoint’s report provides an assessment of how cybercriminals and nation-state actors evolved their tactics, techniques, and procedures over the past year and what enterprises can expect from them in the short term. Such threat intelligence can often help organizations acquire a better awareness of the threats surrounding them so they can prepare for them better.

The Flashpoint report shows that ransomware continued to be a major driver for profit-motivated attacks and will likely remain that way in 2018 as well. But also emerging as a threat to organizations were geopolitical conflicts spilling over into cyberspace.

Threat activity by state-sponsored actors in North Korea, for instance, ramped up sharply in response to the tightening international sanctions against the country over its controversial nuclear missile program. “North Korea really does seem to be engaged in a large-scale effort to steal funds to support the regime,” says Jon Condra, author of the intelligence report and Flashpoint’s director of Asia Pacific Research.

North Korean attacks on cryptocurrency exchanges and the SWIFT financial network and the growing use of ransomware attacks by threat actors in the country suggest that the government there is feeling the crunch from the sanctions, Condra says. A lot of the activity stemming from North Korea these days is the sort typically associated with financially motivated cybercriminals, not nation-state actors. “North Korea is notoriously unpredictable. We see them as a continuing threat to almost any organization,” he says.

The threat from Russia is somewhat different. Recently, threat actors from the country appear to have ramped up cyber espionage and disinformation campaigns aimed at Western governments. Russia’s suspected meddling in the 2016 US presidential election and the 2017 French elections and the leaking of classified NSA cyberattack tools by the Russian-speaking Shadow Brokers group in 2016 are some examples of likely nation-state sponsored activities from the country. “Russia has embraced cyber espionage and cyber-enabled disinformation as a core component of its international strategy,” Condra says.

Moves by the US and European Union to tighten or extend some existing sanctions against Russia could trigger more such cyber threat activity from the country, Flashpoint said.

In Flashpoint’s assessment, nation-state-sponsored threat actors in Russia have the ability to do catastrophic damage to critical systems and infrastructure resulting in destruction of property and possible loss of life. China, though less active last year, has the same ability, as do the so-called Five Eyes nations: the United States, UK, Canada, Australia, and New Zealand.

Flashpoint has currently pegged North Korea as a Tier 4 threat with the ability to cause moderate damage like temporarily disrupting core business functions and critical assets. But the country’s ability to marshal state resources as necessary to meet its objectives makes it a more dangerous player. “North Korea in particular is likely capable of using destructive and highly disruptive attacks in kinetic conflict scenarios to support military objectives,” the Flashpoint report said.

In addition to nation-state threats, expect to see more activity from hacktivists, hate groups, and jihadists, according to the security vendor. The Turkish Aslan Neferler Tim (ANT) has been one of the most active hacktivist outfits since the start of 2017 and has carried out a string of distributed denial-of-service attacks using attack infrastructure based in the US, Austria, and Turkey. While its targets are primarily Turkish, ANT has attacked airports, banks, and government organizations in the US, Greece, Denmark, Germany, and several other countries.

The continuing political polarization in the US has also resulted in a resurgence of cyber activity by hate groups and non-jihadist threat actors. Many of them used the Internet, social media platforms, and messaging services such as Discord to disseminate propaganda and to publicize protests such as the deadly Unite the Right rally in Charlottesville last August, Flashpoint said. Groups like Antifa and the Resist Trump movement, too, used these channels to maintain their visibility among supporters, the report said.

To organizations struggling with daily attacks by common cybercriminals, the danger from sophisticated nation-state foes can sometimes seem remote. But as the report from Flashpoint highlights, geopolitical conflicts, hacktivist actions, and other seemingly unrelated developments have been increasingly spilling over into the cyber realm.

The trend has driven growing interest in threat intelligence services among organizations. Many want to build context around their internal telemetry by combining it with external threat data. The use of such services is especially prevalent in large organizations with established security operations centers, says John Pescatore, director of emerging security trends at the SANS Institute.

“Mature SOC processes can make good use of threat data. It can help them more quickly adjust filters and shields for protecting against threats” that might still only be developing, Pescatore says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/threat-intelligence/threats-from-russia-north-korea-loom-as-geopolitics-spills-into-cyber-realm/d/d-id/1330841?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple