Citrix Breach Underscores Password Perils

Attackers used a short list of passwords to knock on every digital door to find vulnerable systems in the vendor’s network.

The recent cyberattack on enterprise technology provider Citrix Systems using a technique known as password spraying highlights a major problem that passwords pose for companies: Users who select weak passwords or reuse their login credentials on different sites expose their organizations to compromise.

On March 8, Citrix posted a statement confirming that the company’s internal network had been breached by hackers who used password spraying, trying a short list of common passwords against a wide swath of systems until one of those digital keys worked. The company began investigating after being contacted by the FBI on March 6 and confirmed that the attackers appeared to have downloaded business documents.

Password spraying and credential stuffing have become increasingly popular, so companies must focus more on defending against these types of attacks, according to Daniel Smith, head of threat research at Radware.

“Actors will use password spraying [instead of] brute force attacks to avoid being timed out and possibly alerting admins,” Smith said in a statement. “Once the user is compromised the actors will then employ advanced techniques to deploy and spread malware to gain persistence in the network.”

While brute-force attacks against a single system are easy to detect, password spraying spreads the attack out over many systems and over time. By spreading the attempts, the brute-force password attack can escape notice if companies don’t connect related security events across the network. 
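
One practical countermeasure is to correlate failed logins across accounts and systems rather than watching each account in isolation: a single source that tries one or two passwords against many usernames is the signature of a spray. The Python sketch below illustrates that correlation over a batch of recent authentication events; the event format, field names, and thresholds are illustrative assumptions, not any particular product’s schema.

```python
from collections import defaultdict

# Illustrative event format: (timestamp, source_ip, username, success).
def detect_password_spray(events, min_accounts=20, max_attempts_per_account=3):
    """Flag sources that fail logins against many distinct accounts, each only
    a few times -- the low-and-slow signature of password spraying. The events
    are assumed to already cover the time window of interest (say, 24 hours)."""
    failures = defaultdict(lambda: defaultdict(int))
    for _ts, source, username, success in events:
        if not success:
            failures[source][username] += 1

    suspects = set()
    for source, per_account in failures.items():
        if (len(per_account) >= min_accounts and
                max(per_account.values()) <= max_attempts_per_account):
            suspects.add(source)
    return suspects
```

Per-account lockout counters alone miss this pattern, which is precisely why the technique works against organizations that don’t connect events across systems.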

Authentication Attacks

Attacks based on authentication have become a top concern for companies. In its recent report, security firm Rapid7 found that four of the top five security events detected in its clients’ networks in 2018 involved authentication. Often the attacker is caught only because they attempt to log in from a computer unrelated to the company.

While password spraying typically uses a list of common passwords, in many cases attackers will use passwords leaked from other breaches, hoping that employees reuse their passwords at work. A list of the 1,000 most common passwords is effective 75% of the time, according to the U.K.’s National Cyber Security Centre.

The agency recommends that companies deploy technologies that have proven effective against password-spraying attacks, use multi-factor authentication, and regularly audit employees’ passwords against a list of the 1,000 or 10,000 most popular passwords.
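
A minimal version of that audit can also be applied whenever a password is set: reject any candidate that appears on a banned-password list. The sketch below assumes a plain-text file of the most common passwords, one per line; the file name and length threshold are illustrative.

```python
def load_banned_passwords(path="top-10000-passwords.txt"):
    """Load a banned-password list (one password per line), lowercased."""
    with open(path, encoding="utf-8") as handle:
        return {line.strip().lower() for line in handle if line.strip()}

def is_acceptable(candidate, banned, min_length=12):
    """Reject short candidates and anything appearing on the banned list."""
    if len(candidate) < min_length:
        return False
    return candidate.lower() not in banned
```

The NCSC’s recommendation is essentially to run this comparison both at password-change time and as a periodic audit of existing accounts.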

For companies that base their security on workers’ credentials, detecting an attacker is difficult because the rogue user has successfully logged in.

“It is a non-trivial problem,” says Troy Hunt, an independent expert on cyber- and password security. “We are asking companies to detect a legitimate username-password credential as an attack.”

Because Citrix provides a variety of services to companies, including a popular remote access service, the attackers could use it as a step into other companies, Radware’s Smith said.

“Nation states actors typically target MSP (managed service providers) and companies like Citrix due to their client base and intellectual property,” he said. “Other than espionage or financial profit, MSPs can also be targeted and leveraged in supply chain attacks that are used as a staging point to distribute additional malware.”

Citrix has committed to updating its customers on the breach. In addition to its forensics investigation in conjunction with a third-party firm, the company has further secured its internal network and is cooperating with the FBI, Citrix chief security and information officer Stan Black said in his post last week.

Resecurity Emerges With Details

Citrix and the FBI are not the only two organizations that appear to have details on the breach. Boutique security firm Resecurity claims that the attack against Citrix began on October 15, 2018, used a list of nearly 32,000 user accounts, and is connected to Iranian interests. 

“The incident has been identified as a part of a sophisticated cyber-espionage campaign supported by nation-state (sic) due to strong targeting on government, military-industrial complex, energy companies, financial institutions and large enterprises involved in critical areas of economy,” the company stated in an analysis posted on March 8, the same day as Citrix’s statement. 

“Based our recent analysis, the threat actors leveraged a combination of tools, techniques and procedures (TTPs) allowing them to conduct targeted network intrusion to access at least 6 terabytes of sensitive data stored in the Citrix enterprise network, including e-mail correspondence, files in network shares and other services used for project management and procurement,” the firm stated.

Resecurity claimed that its researchers notified Citrix of the breach in December, linking the attack to a known Iranian-linked group, IRIDIUM. Citrix, however, declined to comment further on its investigation into the breach or whether it was contacted by Resecurity.

“We are focused on the comprehensive forensic investigation into the incident that we are conducting with leading third-party experts and have no comment on Resecurity’s report or claims,” a company spokesperson said. 

While claiming to be trusted by Amazon, Microsoft, Eurosport, and JP Morgan, among others, Resecurity appears to have only recently emerged from stealth. The Los Angeles, Calif., startup has only two analyses posted, including the one on the Citrix breach, and press releases dating back only to September 2018. The company has not responded to a request for comment from Dark Reading.

Article source: https://www.darkreading.com/application-security/citrix-breach-underscores-password-perils/d/d-id/1334139?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Web Apps Are Becoming Less Secure

Critical vulnerabilities in Web applications tripled in 2018, according to a new study.

Buggy Web applications continue to be one of the biggest security weaknesses for a majority of organizations. A new report shows the problem actually appears to be getting worse.

Positive Technologies analyzed data from Web application security assessments that the company conducted for clients throughout 2018. The analysis showed a three-fold increase in the number of critical vulnerabilities present in Web applications compared to 2017.

On average, each Web application that Positive Technologies inspected contained 33 vulnerabilities. Of those, six were high-severity flaws, compared to just two the prior year.

More than two-thirds of the apps (67%) contained critical vulnerabilities such as insufficient authorization errors, arbitrary file upload, path traversal, and SQL injection flaws. That number was higher than the 52% of applications that contained such flaws in 2017 and the 58% in 2016.
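
SQL injection, the last item on that list, is a useful illustration of why such flaws are coding errors rather than configuration problems. The hypothetical snippet below shows the vulnerable pattern and the parameterized fix, using Python’s standard sqlite3 module purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name):
    # BAD: attacker-controlled input is concatenated into the query, so
    # name = "' OR '1'='1" returns every row instead of one user.
    query = "SELECT * FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # GOOD: the driver binds the value, so it cannot alter the query's structure.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```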

Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies, says the company’s analysis showed Web applications were consistently buggy regardless of industry or whether the app was homegrown or commercially purchased. “Most Web applications have a low level of security,” she says, and that is putting user and business data at risk.

The cause is not easy to pinpoint. “But 83% of vulnerabilities are code vulnerabilities, and critically dangerous ones as well. This suggests that during development, not enough attention is paid to safety,” Galloway says.

The security vendor’s analysis is consistent with that of others in recent months. In an October 2018 report, WhiteHat Security described the number of high-severity security vulnerabilities in Web applications as increasing at a rate that is making remediation nearly impossible for organizations using traditional methods. Microservices in particular are riddled with more serious vulnerabilities per line of code than traditional applications, WhiteHat said.

The WhiteHat report identified the growing use of insecure third-party components as one reason for the high and increasing prevalence of vulnerabilities in modern Web applications. The accelerating adoption of agile DevOps processes and the resulting emphasis on speedy application delivery is another factor. “The quicker applications are released, particularly those that are comprised of reusable components, the faster more vulnerabilities are introduced,” WhiteHat said in its report.

The trend portends major trouble for enterprise organizations. Seventy-two percent of the Web applications in the Positive Technologies study had vulnerabilities that enabled unauthorized access, and 19% had flaws that would give an attacker complete control of the application and the underlying server. “If such a server is on the network perimeter, the attacker can penetrate the internal corporate network,” the security vendor said.

Seventy-nine percent of Web applications contained weaknesses that enabled access to debug and configuration information as well as source code, session identifiers, and other sensitive data. That’s the second year that the number of applications with such vulnerabilities has increased—in 2016 just 60% of applications had such issues and in 2017 that number was 70%.

Most Common Vulnerabilities

What are the most common vulnerabilities in Web applications? Positive Technologies’ analysis unearthed some 70 different types of vulnerabilities in total in Web apps. Security configuration errors—such as default settings, common passwords, full path disclosure, and other information-leak errors—were present in four out of five apps, making this class of vulnerability the most common. Cross-site scripting errors were present in 77% of applications; 74% had authentication-related issues; and more than half (53%) had access control flaws. In most cases, the vulnerabilities stemmed from coding errors and could only be fixed by coding changes.
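
Cross-site scripting, the second most common issue in that list, follows the same pattern: user-supplied data echoed into a page without encoding. A minimal, framework-free illustration in Python (real applications would normally rely on their template engine’s auto-escaping):

```python
import html

def greeting_vulnerable(name):
    # BAD: name = "<script>alert(1)</script>" executes in the visitor's browser.
    return "<p>Hello, %s!</p>" % name

def greeting_safe(name):
    # GOOD: special characters are encoded, so the input renders as plain text.
    return "<p>Hello, %s!</p>" % html.escape(name)
```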

“Vulnerabilities associated with information leaks have become extremely widespread,” Galloway says. “Moreover, many applications do not protect against unauthorized access, which allows a hacker to get privileges and act more freely within the system.”

Galloway says it’s hard to say with certainty what impact Agile and DevOps practices have had on application security. “Unfortunately, not every company has a correct idea of these practices,” she says. Many organizations have reinforced the view that security is hindering the development of applications and are postponing cyber defense issues in pursuit of new functionality, Galloway notes.

The reality is that code security analysis is required at all stages of application development, she notes. Using a Web application firewall is a must as well, since attackers upgrade their methods much faster than companies are able to build protection. “For example, it can take weeks and months to fix code errors, and new exploits can be used by attackers a few hours or days after the appearance of vulnerability or [proof of concept] information.”

Article source: https://www.darkreading.com/vulnerabilities---threats/web-apps-are-becoming-less-secure/d/d-id/1334143?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

There May Be a Ceiling on Vulnerability Remediation

Most organizations are doing all they can to keep up with the release of vulnerabilities, new research shows.

Security has no shortage of metrics — everything from the number of vulnerabilities and attacks to the number of bytes per second in a denial-of-service attack. Now a new report focuses on how long it takes organizations to remediate vulnerabilities in their systems — and just how many of the vulnerabilities they face they’re actually able to fix.

The report, “Prioritization to Prediction Volume 3: Winning the Remediation Race,” by Kenna Security and the Cyentia Institute, contains both discouraging and surprising findings.

Among the discouraging findings are statistics that show companies have the capacity to close only about 10% of all the vulnerabilities on their networks. This percentage doesn’t change much by company size.

“Whether it was a small business that had, on average, 10 to 100 open vulnerabilities at any given time, they had roughly the same percentage of vulnerabilities they could remediate as the large enterprises, where they had 10 million-plus open vulnerabilities per month,” says Ed Bellis, CTO and co-founder of Kenna Security.

In other words, Bellis said, the capacity to remediate seems to increase at approximately the same rate as the need to remediate. “The size thing tipped us off that there might be some upper threshold on what organizations were able to fix in a given time frame,” says Wade Baker, partner and co-founder of the Cyentia Institute.
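
That upper threshold is easiest to see as a simple ratio of vulnerabilities closed to vulnerabilities opened per period. The snippet below is only a back-of-the-envelope illustration of the metric the researchers describe; the numbers are made up to show that the ratio, not the absolute count, is what stays roughly constant across company sizes.

```python
def remediation_capacity(opened, closed):
    """Fraction of vulnerabilities opened in a period that were also closed
    in that period -- the ratio the report pegs at roughly 10%."""
    return closed / opened if opened else 1.0

# Illustrative numbers only: a small shop and a large enterprise with
# proportionally similar throughput land on the same ratio.
print(remediation_capacity(100, 10))                # 0.1
print(remediation_capacity(10_000_000, 1_000_000))  # 0.1
```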

The time frame for remediating vulnerabilities differs depending on the software’s publisher. Vulnerabilities in Microsoft and Google software tend to be remediated most quickly by organizations both large and small, Bellis says. The software with the longest remediation time? Legacy software and code developed in-house.

There are also dramatic differences in time to remediate between companies in different industries. Investment, transportation, and oil/gas/energy led the way in the shortest time to close 75% of exploited vulnerabilities, hitting that mark in as few as 112 days. Healthcare, insurance, and retail/trade took the longest for remediation, needing as much as 447 days to hit the milestone.

Data suggests that some organizations are able to do better than the averages — in some cases, remediating more vulnerabilities than were discovered and actually getting ahead of the problem. What the researchers don’t yet know is precisely what those high-performing companies are doing that is different. That, they say, is the subject of the next volume of their research.

Article source: https://www.darkreading.com/vulnerabilities---threats/there-may-be-a-ceiling-on-vulnerability-remediation/d/d-id/1334142?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook sues developers over data-scraping quizzes

Facebook on Friday sued two Ukrainian men, Andrey Gorbachov and Gleb Sluchevsky, for allegedly scraping private user data through malicious browser extensions that masqueraded as quizzes.

The company also alleges that the deceptive extensions injected unauthorized ads into Facebook users’ News Feeds when victims visited the site through the compromised browsers.

From Facebook’s civil complaint:

As a result of installing the malicious extensions, the app users effectively compromised their own browsers because, unbeknownst to the app users, the malicious extensions were designed to scrape information and inject unauthorized advertisements when the app users visited Facebook or other social networking site as part of their online browsing.

According to the complaint, from 2016 to 2018, Sluchevsky and Gorbachov allegedly ran at least four web apps: “Supertest,” “FQuiz,” “Megatest,” and “Pechenka.”

The apps ran quizzes promising answers to questions such as “Do you have royal blood?”, “You are yin. Who is your yang?” and “What kind of dog are you according to your zodiac sign?”, among many others.

The apps were advertised and shared on Facebook, but they were available on public websites associated with several domains, including megatest.online, supertest.name, testsuper.su, testsuper.net, fquiz.com, and funnytest.pro.

Both of the defendants are based out of Kiev and work for a company called the Web Sun Group. Sluchevsky presents himself as the company’s founder.

Scraped social profiles

Facebook says that the extensions enabled the two men to illegally scrape users’ publicly viewable profile information, such as name, gender, age range, and profile picture, when infected users visited social networking sites – including Facebook.

Facebook didn’t name the other social networking sites that the apps allegedly scraped.

It did say, however, that the alleged scraping is akin to illegally trespassing on its own servers:

Defendants used the compromised app users as a proxy to access Facebook computers without authorization.

The apps also allegedly got at private information such as Facebook users’ friend lists.

Facebook discovered and shut down the malicious apps while investigating malicious extensions in 2018. The company says that the two men compromised the browsers of approximately 63,000 Facebook users and caused the company over $75,000 in damages.

The platform is seeking an injunction and restraining order against the two developers, to keep them from creating any more apps targeting Facebook users.

Facebook is also requesting financial relief for the costs of investigating the defendants’ operation and restitution of any funds the two might have made off the use of Facebook users’ data.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kRHYh2GB4wk/

That marketing email database that exposed 809 million contact records? Maybe make that two-BILLION-plus?

Updated An unprotected MongoDB database belonging to a marketing tech company exposed up to 809 million email addresses, phone numbers, business leads, and bits of personal information to the public internet, it emerged yesterday.

Today, however, it appears the scope of that security snafu may have been underestimated.

According to cyber security biz DynaRisk, there were four databases exposed to the internet – rather than just the one previously reported – bringing the total to potentially more than two billion records, weighing in at 196GB rather than 150GB.

Anyone knowing where to look on the ‘net would have been able to spot and siphon off all that data, without any authentication.

“There was one server that was exposed to the web,” Andrew Martin, CEO and founder of DynaRisk, told The Register on Friday. “On this server were four databases. The original discovery analysed records from mainEmailDatabase. The additional three databases were hosted on the same server, which is no longer accessible.

“Our analysis was conducted over all four databases and extracted over two billion email addresses which is more than the 809 million first discussed.”

The databases were operated by Verifications.io, which provides enterprise email validation – a way for marketers to check that email addresses on their mailing lists are valid and active before firing off pitches. The Verifications.io website is currently inaccessible.

The database first reported included the following data fields, some of which, such as date of birth, qualify as personal information under various data laws:

  • Email Records (emailrecords): a JSON object with the keys id, zip, visit_date, phone, city, site_url, state, gender, email, user_ip, dob, firstname, lastname, done, and email_lower_sha256.
  • Email With Phone (emailWithPhone): No example provided but presumably a JSON object with the two named attributes.
  • Business Leads (businessLeads): a JSON object with the keys id, email, sic_code, naics_code, company_name, title, address, city, state, country, phone, fax, company_website, revenue, employees, industry, desc, sic_code_description, firstname, lastname, and email_lower_sha256.
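
One of those fields is worth a note: email_lower_sha256 is presumably just a SHA-256 digest of the lowercased address (the field name is the only evidence, so treat this as an assumption). If so, it provides no meaningful anonymisation, since anyone holding an address can recompute the hash, as the sketch below shows:

```python
import hashlib

def email_lower_sha256(email):
    """Presumed derivation of the email_lower_sha256 field: a SHA-256
    digest of the lowercased, trimmed address."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Recomputing the digest for a known address lets anyone match it against
# the leaked records, so the hashed column is effectively re-identifiable.
print(email_lower_sha256("Jane.Doe@example.com"))
```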

[Image: Verifications.io’s four MongoDB databases exposed to the internet, as identified by DynaRisk]

Martin said the severity of the security blunder is less than some may fear because there are no credit card numbers, medical records, nor any other bits of super-sensitive information involved.

“The issue here is this is a gigantic amalgamation of data all in one place,” he explained. “The leaking of this information may breach data protection regulations in various countries. The leak may also violate the privacy and security provisions between Verifications.io and their clients within their contracts.”

Bob Diachenko, a security researcher for consultancy Security Discovery, found the first Verifications.io database online, and said the marketing tech biz, based in Tallinn, Estonia, acknowledged the gaffe and hid the data silos from public view after he flagged it up.

Verifications.io told Diachenko that its company database was “built with public information, not client data.” This suggests at least some of the email addresses and other details in the company’s databases were downloaded or scraped from the internet.

Diachenko didn’t immediately respond to a request for comment.

Security researcher Troy Hunt, who maintains the HaveIBeenPwned database of email accounts that have been exposed in online data dumps, said about a third of the email addresses in the Verifications.io database are new to HaveIBeenPwned. The other two thirds presumably were culled from the same online sources that supplied Hunt’s archives.

Martin said Verifications.io’s claim that its data came from public sources is open to interpretation. “These data sources might have been public at one time in the past and then not public at a later time,” he said. “It would be interesting to know if the company had a process of continuous compliance where they would validate if they were still allowed to store the data over time.”

Dtex, a security biz that focuses on the dangers of rogue or slipshod employees within businesses, said in its recent 2019 Insider Threat Intelligence Report that 98 per cent of incidents involving data left exposed in the cloud can be attributed to human error.

MongoDB versions prior to 2.6.0, released in 2014, were network accessible by default. Reversing that default setting hasn’t persuaded people to securely configure their MongoDB installations, though. Out of the box, MongoDB requires no authentication to access, a detail a lot of folks appear to overlook. ®
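
For the curious, the sketch below shows how a researcher (with authorization) might check whether a MongoDB instance answers without credentials, using the pymongo driver; the host name is a placeholder.

```python
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def check_open_mongodb(host, port=27017):
    """Return database names if the server answers without credentials,
    otherwise None. Only probe systems you are authorized to test."""
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        return client.list_database_names()
    except (OperationFailure, ServerSelectionTimeoutError):
        # Authentication is required, or the host is unreachable/filtered.
        return None

# Example (placeholder host): print(check_open_mongodb("db.example.internal"))
```

The server-side fix is to enable access control (security.authorization: enabled in mongod.conf) and to bind the service only to internal interfaces.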

Updated to add

Vinny Troia, who stumbled upon the exposed Verifications.io data along with Diachenko, maintains roughly 810 million netizens were exposed by the misconfigured MongoDB installation.

DynaRisk, meanwhile, told us it counted more than two billion records across all the databases and, after further analysis, identified a total of 999 million unique email addresses.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/08/verificationio_database_hole/

Hey Insiders! DTrace can now run riot in Windows 10, if you really want it to

Windows 10 has been tweaked to let devs enjoy the delights of DTrace while chasing down pesky bugs.

Microsoft’s Hari Pulapaka took to Twitter to share the news, though he swiftly followed it up with a blog post explaining that when he said “Windows 10”, he actually meant “Insider Builds from 18342” onwards.

The move is the latest to demonstrate that Microsoft is far from the anti-open-source beast of old.

The next release of Windows 10 also has a change aimed specifically at getting the thing up and running on Linux Kernel-based Virtual Machines (KVM).

To make things work, the Windows team added a new kernel extension driver, Traceext.sys, to expose the functionality required by DTrace. Pulapaka explained: “The Windows kernel provides callouts during stackwalk or memory accesses which are then implemented by the trace extension.”

At this point, security fans will be stroking their chins thoughtfully. Allowing DTrace to run riot in the kernel stomps on some of Windows’ built-in security. As DTrace can effectively make changes in functions being analysed, Microsoft’s PatchGuard must be disabled, which Pulapaka confirmed on Twitter.

PatchGuard, formerly known as Kernel Patch Protection (KPP), is designed to stop miscreants from tinkering with the Windows kernel and will also stop DTrace from doing its thing.

Pulapaka remarked that the team knew what needed to be done to make the two co-exist, but that it was “a lot of work” and they were keen for developers to get their hands on the new toys.

As it stands, it is important to understand that booting with a kernel debugger attached will leave PatchGuard disabled. SecureBoot also needs to be disabled to actually set the necessary options.

DTrace has its roots in Sun Microsystems’ Solaris operating system, allowing developers to troubleshoot problems in real time and see what processes are doing in the guts of the system, either in user or kernel mode. It also allows devs to dynamically add tracepoints, detect deadlocks and so on.

The journey to Windows from Solaris was a bumpy one. After Oracle acquired Sun, the tool floundered somewhat until Big Red eventually open-sourced the thing. At its Ignite event last year, Microsoft announced that it had ported DTrace to Windows.

“DTrace on Windows” lurks under OpenDTrace on GitHub, and Microsoft plans to merge its changes over the coming months. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/12/dtrace_windows_10/

Reg webinar: Tune in for some knowledge on how to become an effective leader in IT security

Promo With companies of all sizes anxious to protect themselves from the growing danger of cyberattacks, what does it take to reach a leading role in the security field?

Tune in to our webinar on Thursday 21 March at 17:00 UTC to hear Scott King, an experienced systems engineer and former chief information security officer at Boston-based security firm Rapid7, share the wide-ranging knowledge he has gained over his long career in IT security.

People can take different paths to a top position in security: some go directly from analyst to leadership, others have a more technical background in general IT, or excellent tactical skills acquired in a consultancy or vendor role.

Communicating with business leaders about security risks and incident handling requires a more pragmatic approach than the one you might take with technical teams. How do you make the mental shift?

Scott offers valuable hints and tips on how to balance strategy and tactics, how to deliver at every stage, and how to understand the benefits of pragmatism in your security role.

Sign up for the webinar here.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/12/webinar_how_to_become_an_effective_leader_in_it_security/

Raiding party! UK’s ICO drops in unannounced on couple of dodgy-dialling dirtbag outfits

The UK’s data protection watchdog today raided two businesses suspected of making millions of nuisance calls.

The Information Commissioner’s Office has been investigating the companies, based in Brighton and Birmingham, for a year after receiving roughly 600 complaints about them.

The calls – said to involve road traffic accidents, personal injury claims and household insurance – did not identify the firms or allow people to opt out of receiving them.

This is a breach of direct marketing rules, the Privacy and Electronic Communications Regulations (PECR), so the ICO today sent in enforcement officers to seize computer equipment and documents for analysis.

“Today’s searches will fire a clear warning shot to business owners who operate outside the law by making nuisance marketing calls to people who have no wish to receive them,” said Andy Curry, head of the anti-nuisance call team.

“The evidence seized will help us identify any illegal business activities and assist us to take enforcement action, which may include action against the directors, on behalf of the victims who have turned to us for help.”

The ICO’s ability (or lack thereof) to raid businesses suspected of breaking data protection or direct marketing laws was made famous during the Cambridge Analytica saga, when it was forced to go to court to gain a warrant.

Since then, the watchdog has been given greater powers to conduct no-notice inspections, as well as receiving streamlined warrants, which the commissioner recently said was a direct result of that probe.

A secondary effect of the late-night raid on the Cambridge Analytica offices was that it made the enforcement officers’ kit the envy of the data protection Twitterati.

The ICO was also last year granted the power to levy personal fines of up to £500,000 on the directors of dodgy-dialling companies in a bid to prevent execs from simply liquidating their companies to dodge penalties handed to them under PECR. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/12/ico_raid_nuisance_callers/

ProtonMail back up in Russia after regime chokes access over ‘terrorist activity’

ProtonMail is “back to running normally in Russia now” after the country blocked access to the encrypted email service, claiming that students at a sports competition were using it to spread anti-regime propaganda.

The Russian-language Habr news aggregator reported yesterday that Russian telcos MTS and Rostelecom were sinkholing locals’ inbound requests to ProtonMail’s SMTP servers, discovering the issue after users started asking why the service’s email newsletters weren’t arriving. Habr uses ProtonMail to send its bulletins.

Habr author Pas posted in Russian: “We began to rake out the mail logs and found that the connections of our servers to ProtonMail MX servers (185.70.40.101, 185.70.40.102) end with network timeouts. It looked strange for a number of reasons and was similar to the use of the blocking mechanism practiced in Russia.”

Pas was also able to obtain and publish a letter from Russia’s FSB spy agency dated 25 February 2019 ordering one of the ISPs to block ProtonMail. As part of a reasonably organised police state, it is plausible the FSB knew about the protests in advance. The FSB letter said, in part:

We have seen more frequent cases of false reports of terrorist activity aimed at objects of social and critical infrastructure. In January 2019, Russian cities saw mass evacuations of schools, administrative buildings and shopping centers. According to the Prosecutor General’s Office of the Russian Federation, there were 1,300 court cases started in 2018 related to the Criminal Code chapter 207 – false notification about an upcoming act of terrorism. According to experts at the Interior Ministry, material damages from mass evacuations in January 2019 alone totaled around 500 million roubles.

In its work, the Center [of Information Security, an FSB unit] detected internet resources used for mass dissemination of intentionally false information about terrorist acts.

It then went on to list internet resources that must be blocked by 20 February 2020, in order to “ensure security during the XXIX World University Winter Games” (the Universiade) in Krasnoyarsk.

“Allegedly, the reason for the block is because of criminals using ProtonMail to send threats,” chief exec Andy Yen told The Register, “but the method of the block (preventing messages from being sent to ProtonMail, as opposed to blocking delivery of messages from ProtonMail) seems inconsistent with that claim.”

Yen said his firm had restored Russian users’ access (“We don’t want to share the technical details for reasons that you can probably understand”), adding: “Users in Russia suspect (and the timing seems to confirm) that it might have more to do with the massive protests which took place yesterday.”

The Russian authorities recently stepped up their plans to seize control of the World Wide Web within their borders, which they refer to as Runet (Russian internet).

This is not the first time ProtonMail has fallen foul of authoritarian governments. A year ago the Turkish regime of Recep Tayyip Erdoğan ordered ProtonMail to be blocked – which was easily worked around with a VPN. ®

Reg reporter Max Smolaks carried out some of the translations for this article.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/12/protonmail_blocked_russia_fsb/

The 12 Worst Serverless Security Risks

A new guide from the Cloud Security Alliance offers mitigations, best practices, and a comparison between traditional applications and their serverless counterparts.

Serverless computing has seen tremendous growth in recent years. This growth has been accompanied by a flourishing ecosystem of new solutions that offer observability, real-time tracing, deployment frameworks, and application security.

As serverless security risks started to gain attention, scoffers and cynics fell into the age-old habit of crying “FUD” (fear, uncertainty, and doubt) at any attempt to point out that while serverless offers tremendous value in the form of rapid software development and a huge reduction in TCO, it also brings new security challenges.

The Evolving Serverless Ecosystem
One of the key indicators for a mature technology is the ecosystem that evolves around it. Having a thriving community, extensive documentation, best-practices guides, and tooling is what will drive organizations to trust new technologies and adopt them.

Recently, the Cloud Security Alliance (CSA) joined forces with PureSec, where I am CTO and co-founder, to develop an extensive serverless security guide. The guide draws much of its content from last year’s effort, but with the addition of two important risk classes.

The guide, titled “The 12 Most Critical Risks for Serverless Applications,” was written for both security and development audiences dealing with serverless applications, but it goes well beyond pointing out the risks: it also provides best practices for all major platforms. The risk categories are defined as follows:

Risk 1: Function Event-Data Injection
Serverless functions can consume input from different types of event sources, and each event source has its own message format and encoding schemes. Various parts of these event messages may contain attacker-controlled or untrusted inputs that should be carefully inspected.
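
As one hedged illustration of the input inspection the guide calls for, the hypothetical AWS Lambda-style handler below validates an attacker-controllable field before using it; the event shape, field names, and whitelist pattern are assumptions made for this example, not part of the guide.

```python
import json
import re

ORDER_ID = re.compile(r"^[A-Za-z0-9-]{1,36}$")  # assumed whitelist format

def handler(event, context):
    """Treat every field of the incoming event as untrusted input."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "malformed JSON"}

    order_id = body.get("order_id", "")
    if not isinstance(order_id, str) or not ORDER_ID.match(order_id):
        # Reject rather than sanitize: the value never reaches a query,
        # shell command, or downstream service in a dangerous form.
        return {"statusCode": 400, "body": "invalid order_id"}

    # ...order_id is now safe to pass to parameterized queries or API calls...
    return {"statusCode": 200, "body": json.dumps({"order_id": order_id})}
```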

Risk 2: Broken Authentication
Since serverless promotes a microservices-oriented system design, applications may contain dozens or even hundreds of functions. Applying robust authentication can easily go awry if not executed carefully.

Risk 3: Insecure Serverless Deployment Configuration
Cloud providers offer many configuration settings to adapt services for specific needs. Out-of-the-box settings are not necessarily always the most secure. As more organizations are migrating to the cloud, cloud configuration flaws will become more prevalent.

Risk 4: Overprivileged Function Permissions and Roles
Managing function permissions and roles is one of the most daunting security tasks organizations are facing when deploying applications to the cloud. It is quite common to see developers cut corners and apply a “wildcard” (catch-all) permission model.

Risk 5: Inadequate Function Monitoring and Logging
While most cloud vendors provide extremely capable logging facilities, these logs are not always suitable for the purpose of providing a full security event audit trail at the application layer.

Risk 6: Insecure Third-Party Dependencies
While the problem of insecure third-party libraries is not specific to serverless, being able to detect malicious packages is more complex in serverless environments given the lack of ability to apply network and behavioral security controls.

Risk 7: Insecure Application Secrets Storage
One of the most frequently recurring mistakes related to application secrets storage is to simply store these secrets in a plain-text configuration file that is part of the software project. Another common mistake is to store these secrets in plain text as environment variables.
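
A hedged sketch of the difference, using AWS Secrets Manager via boto3 as one possible backing store; the secret name is a placeholder, and other clouds offer equivalent services.

```python
import json
import boto3

# Anti-pattern the guide warns about: a plain-text secret baked into a config
# file or an environment variable, e.g. DB_PASSWORD = os.environ["DB_PASSWORD"].

_secrets = boto3.client("secretsmanager")

def get_db_password(secret_id="prod/orders/db"):  # placeholder secret name
    """Fetch the credential at invocation time from a managed secrets store,
    so it never sits in plain text alongside the function's code or config."""
    response = _secrets.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])["password"]
```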

Risk 8: Denial-of-Service and Financial Resource Exhaustion
Serverless architectures bring promises of automated scalability and high availability; however, as with any other type of application, it is critical to apply best practices and good design in order to avoid bottlenecks.

Risk 9: Serverless Business Logic Manipulation
Business logic manipulation is a common problem in many types of software. However, serverless applications are unique, as they often follow the microservices design and contain numerous functions chained together to form the overall logic. Without proper enforcement, attackers may be able to tamper with the intended logic. 

Risk 10: Improper Exception Handling and Verbose Error Messages
Line-by-line debugging options for serverless-based applications are limited (and more complex) when compared with debugging capabilities for standard applications. As a result, developers frequently adopt the use of verbose error messages, which may leak sensitive data.

Risk 11: Legacy/Unused Functions and Cloud Resources
Over time, serverless functions and related cloud resources may become obsolete and should be decommissioned. The reason behind pruning obsolete components is to reduce unnecessary costs and eliminate avoidable attack surfaces. Obsolete serverless application components may include deprecated serverless function versions, unused cloud resources, unnecessary event sources, unused roles or identities, and unused dependencies.

Risk 12: Cross-Execution Data Persistency
Serverless platforms offer application developers local disk storage, environment variables, and memory to perform their tasks. To make serverless platforms efficient in handling new invocations, cloud providers might reuse the execution environment for subsequent invocations. If the execution environment is reused for invocations belonging to different users or sessions, sensitive data may be left behind and exposed.

It’s important to note that the purpose of the guide is to raise awareness and help organizations innovate with serverless securely, not to spread fear. Security risks exist in any type of platform, and serverless is no different. CSA’s goal in raising these issues is to encourage organizations to adopt new technologies while avoiding risks and common mistakes.

Editors’ note: Ory Segal is a CSA Israel chapter board member. He led the effort behind “The 12 Most Critical Risks for Serverless Applications” guide.

Article source: https://www.darkreading.com/cloud/the-12-worst-serverless-security-risks/a/d-id/1334079?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple