STE WILLIAMS

9 Years After: From Operation Aurora to Zero Trust

How the first documented nation-state cyberattack is changing security today.

It’s January 12, 2010. In a blog post, Google publicly discloses that it has been the victim of a targeted attack originating in China. The attack resulted in the theft of intellectual property, but the attackers didn’t stop with Google — they targeted at least 20 other organizations across the globe in an attack that would later become known as Operation Aurora.

Operation Aurora was a shock for many organizations because it made everyone face a new kind of threat, one that previously was only whispered about around the watercooler. A government-backed adversary, with near-unlimited resources and time, had struck the world’s largest Internet company — and almost got away with it.

No one wanted to be the first to call Operation Aurora a nation-state attack. The possibility was certainly there, but the fear was that rushing to attribution and getting it wrong would leave the first person to speak viewed as Chicken Little for the rest of his or her career.

Later, leaked diplomatic cables would show this attack was “part of a coordinated campaign of computer sabotage carried out by government operatives, private security experts and Internet outlaws recruited by the Chinese government.” With confirmation from several sources all reaching the same conclusion — that Aurora was in fact government sanctioned and sponsored — our beliefs about what constituted reasonable choices for the state of security within the enterprise would never be the same.

Here at Akamai, one of the companies targeted by Aurora, the attacks became a primary driver for change.

Target: Domain Admins
Akamai was affected by the Aurora attacks because a domain administrator account was compromised. From there, the attackers were able to enter any system they wanted, including the system they targeted. Fortunately, while systems were compromised, the specific data the attackers were seeking didn’t exist. So, in a way, Akamai was lucky. Still, there was an incident, and the underlying hazards needed to be addressed.

All across the industry, we talk about trust, but the systems and processes used to establish trust have been broken or abused time and time again. Our journey to address this trust challenge began by examining how Akamai managed systems administration.

We started by replacing accounts that could log in to anything with narrow, tailored accounts that were not the user’s principal account. Doing this meant that a single error could no longer lead to the fall of the company; instead, a series of errors and failures would be required before that could happen.

When people think of the blocking and tackling of security, getting domain administration right and implementing the right tools and policies are where you start. But — without minimizing that task and the sweeping change it required — this was only the beginning. Over the next several years, we migrated further and further away from passwords toward a single point of authentication. This was essentially an in-house SSO, but even that evolved to focus on X.509 certificates and, later still, push-based authentication.

Lessons Learned
It’s been a nine-year journey. Nine years since Aurora, and we’re still not done changing. We went from a model in which anyone on the network had access to everything to one in which being on the network, by itself, gets you nothing. Today, services and applications are available only to those who need access to them. It’s no longer about trusting where you are; it’s about trusting that you’re you. So if you’re compromised, the adversary can access only the tools and services available to you, and nothing else.
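To make the contrast concrete, here is a minimal sketch in TypeScript of the two trust models. It is purely illustrative, with hypothetical service names and entitlements; it is not Akamai’s implementation.

// Hypothetical illustration of location-based vs. identity-based access checks.
// None of these names reflect Akamai's real systems.

interface AccessRequest {
  sourceIp: string;        // where the request comes from
  userId: string | null;   // authenticated identity (e.g., via a certificate or push challenge)
  service: string;         // the service the caller wants to reach
}

// Old model: anything on the corporate network is trusted.
const CORP_PREFIX = "10.";
function legacyAllow(req: AccessRequest): boolean {
  return req.sourceIp.startsWith(CORP_PREFIX); // location is the credential
}

// Zero-trust model: location is irrelevant; only the authenticated
// identity and its explicit entitlements matter.
const entitlements: Record<string, Set<string>> = {
  alice: new Set(["ticketing", "wiki"]),
  bob: new Set(["wiki"]),
};
function zeroTrustAllow(req: AccessRequest): boolean {
  if (req.userId === null) return false; // you must prove who you are
  const allowed = entitlements[req.userId];
  return allowed !== undefined && allowed.has(req.service);
}

// In the old model, simply being on-net was enough:
console.log(legacyAllow({ sourceIp: "10.1.2.3", userId: null, service: "wiki" }));             // true
// A compromised identity now reaches only what that identity needs:
console.log(zeroTrustAllow({ sourceIp: "203.0.113.7", userId: "bob", service: "ticketing" })); // false
console.log(zeroTrustAllow({ sourceIp: "203.0.113.7", userId: "bob", service: "wiki" }));      // true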

Over the last decade, a new concept has started to take hold in the security industry. We call it a number of things — zero trust, BeyondCorp, nano-segmentation, micro-segmentation — but the goal of this idea is to move away from location-based trust on the network. We followed a parallel path, breaking new ground along the way.

We got it right in a lot of places, but there were plenty of lessons to learn. Don’t be afraid to admit that you’ve chased down the wrong path. 802.1X for our corporate network was, in the grand scheme of things, the wrong path. We learned a lot by doing it, and if we hadn’t, we’d be in a worse place today. But we’re going to throw out essentially all of that hard work over the next few years as we move to an ISP-like model for our physical buildings, and that’s OK.

Change is a constant in the security industry, and being willing to change as needed is one of the key growth factors in any business — large or small. It’s taken nine years to figure out what we wanted and to get to where we are. And we’ve taken this journey so that others can do it more seamlessly going forward.


Andy Ellis is Akamai’s chief security officer and his mission is “making the Internet suck less.” Governing security, compliance, and safety for the planetary-scale cloud platform since 2000, he has designed many of its security products. Andy has also guided Akamai’s IT …

Article source: https://www.darkreading.com/threat-intelligence/9-years-after-from-operation-aurora-to-zero-trust/a/d-id/1333901?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mastercard, GCA Create Small Business Cybersecurity Toolkit

A new toolkit developed by the Global Cyber Alliance aims to give small businesses a cookbook for better cybersecurity.

Small and mid-sized businesses have most of the same cybersecurity concerns as larger enterprises. What they don’t have are the resources to deal with them. A new initiative, the Cybersecurity Toolkit, is intended to bridge that gulf and give small companies the ability to keep themselves safer in an online environment that is increasingly dangerous.

The Toolkit, a joint initiative of the Global Cyber Alliance (GCA) and Mastercard, is intended to give small business owners basic, usable security controls and guidance. It’s not, says Alexander Niejelow, senior vice president for cybersecurity coordination and advocacy at Mastercard, that there’s no information available to small business owners. He points out that government agencies in the U.S. and the U.K. provide a lot of information on cybersecurity for businesses.

It’s just that, “It’s very hard for small businesses to consume that. What we wanted to do was remove the barriers to effective action,” he says, and to go beyond broad guidance by giving them very specific instructions presented, “…if at all possible in a video format and clear easy to use tools that they could use right now to go in and significantly reduce their cyber risk so they could be more secure and more economically stable in both the short and long term.”

Improving security for small businesses can have an enormous international impact, Niejelow says. “Around the world, small businesses are critical to people’s economic success and survival. At the same time we as an industry and a group of countries have left small businesses behind when it comes to cybersecurity.”

The GCA has partnered with several organizations, with Mastercard’s sponsorship, to create the GCA Cybersecurity Toolkit. The partners include the Center for Internet Security, the Cyber Readiness Institute, the City of London, and the City of New York. According to the announcement of the initiative, the Cybersecurity Toolkit includes a number of specific sections, including:

  • Operational tools that help them take inventory of their cyber-related assets, create and maintain strong passwords, use multi-factor authentication, perform backups of critical data, and prevent phishing and viruses;
  • How-to materials, such as template policies and forms, training videos, and other foundational documents they can customize for their organizations;
  • Recognized best practices from leading organizations in the industry including the Center for Internet Security Controls, the UK’s National Cyber Security Centre Cyber Essentials, the Australian Cyber Security Centre’s Essential Eight, and Mastercard.

Phil Reitinger, president and CEO of GCA, says they hope to see a dramatic uptake of information from the toolkit in a very short period of time. “Our stated goal here is to have a broad effect, and the stated goal is we want to reach a million businesses in 1,000 days,” he says.

As for how those businesses should use the information, “We’ve tried to put a bunch of tools together that small businesses can actually use,” Reitinger explains, continuing, “If we make it so simple that the family dry cleaner with a mom, a dad, and two kids can do what they need to do, then the rest will flow from that.”

“Small business individuals are not dumb,” Reitinger says. “They are exceedingly smart people, but a truck driver is good at driving a truck; he’s not so good necessarily at securing his own computer.” And Niejelow says that business owners shouldn’t need to be cybersecurity professionals. He explains, “It’s time we reduced the complexity of this issue and start making it more approachable so that our businesses can get back to doing what they do extremely well.”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/threat-intelligence/mastercard-gca-create-small-business-cybersecurity-toolkit/d/d-id/1333914?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

POS Vendor Announces January Data Breach

More than 120 restaurants were affected by an incident that exposed customer credit card information.

North Country Business Products, a point-of-sale terminal and network provider based in Bemidji, Minnesota, has announced a “data security incident” that impacted more than 120 of its restaurant customers located across the Midwest and West regions last month.

The company said it learned of suspicious activity on Jan. 3 and, after an investigation, found that a third party had access to restaurant customer information between Jan. 3 and Jan. 24. The information disclosed included credit card holder names and numbers, expiration dates, and CVVs.

North Country says it has corrected the issue and is in the process of notifying customers. 

Read more here.

 

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/pos-vendor-announces-january-data-breach/d/d-id/1333921?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

As Businesses Move Critical Data to Cloud, Security Risks Abound

Companies think their data is safer in the public cloud than in on-prem data centers, but the transition is driving security issues.

More business-critical data is finding a new home in the public cloud, which 72% of organizations believe is more secure than their on-prem data centers. But the cloud is fraught with security challenges: Shadow IT, shared responsibility, and poor visibility put data at risk.

These insights come from the second annual “Oracle and KPMG Cloud Threat Report 2019,” a deep dive into enterprise cloud security trends. Between 2018 and 2020, researchers predict, the number of organizations with more than half of their data in the cloud will increase by a factor of 3.5.

“We’re seeing, by and large, respondents are having a high degree of trust in the cloud,” says Greg Jensen, senior principal director of security at Oracle. “From last year to this year, we saw an increase in this trust.”

It’s a definite shift from a time in the not-so-distant past when businesses felt the cloud was less secure than their on-prem data centers. Cloud services are no longer nice-to-have elements of IT; they handle core functions related to all aspects of business operations. Software-as-a-service (SaaS) applications, in use among 84% of respondents, help remove cost and complexity of on-prem infrastructure.

Organizations have begun to test business-critical services in the cloud in recent years, Jensen says. Within the past couple of years there has been a “tipping point” at which a large percentage of businesses are diving in. More than 70% of survey respondents say the majority of their cloud-based data is sensitive, an increase from 50% who said the same last year.

The rise of automation has contributed to a change in mindset and businesses’ sense of safety, Jensen continues. And while the cloud brings several benefits, the expectation that it solves all problems is flawed. “Cloud does still take work,” he adds, and does require human effort.

CISOs on the Cloud Security Sidelines
Most respondents polled (82%) have experienced security events due to confusion over the shared responsibility model. It’s not for lack of effort: Ninety-one percent have formal methodologies for cloud use; however, 71% think employees violate those policies, leading to malware and data compromise.

While many cloud security providers offer native security controls, it’s up to the organization to apply and manage those controls or ones offered by third parties. Researchers found the less customers are responsible for, the more confused they are about security responsibilities. For instance, 54% of respondents expressed confusion about how they should be securing SaaS, even though their responsibility is limited to two things: data, and user access and identity.

People who should know about this responsibility are in the dark. Only 10% of CISOs polled fully understand the shared responsibility model, compared with 26% of CIOs who reported no confusion. Researchers attribute the gap to CISOs’ lack of involvement in cloud services.

“CISOs are really one of the newer C-level roles of the cyber enterprise today, and they’ve struggled attaching themselves in more of a collaborative way,” Jensen says. And while CISOs, CIOs, data privacy officers, and other executives should share responsibility to protect data, it’s typically the person in charge of security who takes the fall when there’s a major cyberattack.

Of course, it doesn’t help when different cloud providers have different models. Eighty-nine percent of respondents say the varying models have been a “significant challenge,” and 46% have had to dedicate one or more resources to it; 43% are managing with existing resources.

The Problems with Poor Visibility
Visibility remains the top cloud security challenge, report 38% of respondents. Thirty percent say they are challenged by the inability of existing network security controls to provide visibility into public cloud workloads. Jensen says this finding is consistent with 2017 findings.

“What we’re seeing is this issue, very similar to last year, the No. 1 security challenge cloud organizations are dealing with is detecting and reacting to what we call security event telemetry in the cloud,” he adds. Security teams’ inability to detect and respond to events has been at the center of several high-profile data breaches, researchers note in the full report.

Only 12% of respondents can see more than 75% of security event data. Nineteen percent can analyze 61% to 75% of security data, and 27% (the highest percentage) can view 41% to 60%.

Third parties that have access to an organization’s cloud data can drive risk. Business partners, supply chain partners, contractors, auditors, part-time employees, customers, and other individuals all use different devices and operate under different policies than full-time workers. Enterprise file sync-and-share (EFSS) services, one of the most common types of shadow IT applications, are often used to share data inside and outside organizations.

“There are challenges around how companies are losing control of their intellectual property,” which increases their exposure to data breaches, says Jensen. About half (49%) of businesses were hit with malware due to third-party compromise; 46% reported unauthorized data access.

Shadow IT is a key driver of cloud security challenges. Most organizations report having a formal policy to review and approve cloud applications; however, 92% of this year’s respondents are concerned those policies are being violated. Nearly 70% are aware of a “moderate or significant” amount of shadow IT apps in use, and 50% say the use of unsanctioned cloud apps has led to unauthorized access to corporate data.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/cloud/as-businesses-move-critical-data-to-cloud-security-risks-abound/d/d-id/1333924?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Virus attack! Hackers unleash social media worm after bug report ignored

What happens when you report a vulnerability to a website and it completely ignores your request, in spite of running a bug bounty program that’s supposed to pay for disclosures?

Some hackers might just walk away, but a group of app developers in Russia chose another approach. They used the vulnerability to spam thousands of users on Russia’s largest social network.

The group, called Bagosi, develops apps that run on St Petersburg-based VKontakte (VK), a social network with over 500m users owned by Russian Internet company Mail.ru.

According to ZDNet, the group discovered a vulnerability in the social network and alerted developers there a year ago.

In a post on VKontakte, Bagosi explained that the social network ignored the bug report and didn’t pay the person who discovered it for their submission or acknowledge it in any way. This is in spite of the fact that VKontakte runs a bug bounty program with HackerOne. VK told Naked Security that the program has been running since 2015 and has paid out $250,000 in bounties. However, HackerOne also told us that the VK program is self-managed, meaning that the social network handles bug reports using its own internal teams rather than relying on HackerOne’s employees.

Bagosi decided to bring the vulnerability to users’ attention in a spectacular way. It wrote a VK post containing a script that would activate when viewed. The script posted a link to the post on any group or page that the victim managed.

Bagosi used some obfuscating tactics, according to explanatory posts that it made on VK. It accessed random reviews from the Google Play store and also randomised headlines to help dodge anti-spam filters, it said.

Clearly, VK can move quickly when it wants to. The app developers launched the attack on 14 February, and the social network shut it down quickly. A VK spokesperson told Naked Security:

Within the first minute of the vulnerability being discovered, we began deleting the undesirable posts, and within 20 minutes, the vulnerability was completely fixed.

Still, the page spread quickly before VK blocked the vulnerability. Bagosi explained in a VK post:

The page has accumulated more than 100k views. Since VK takes into account only unique views, it can be concluded that ~140k people have become “victims” of the worm.

VK had banned the group’s account from the website after detecting the spam, only reversing the ban after realising that the worm didn’t steal any user data.

Bagosi said it had done its best to report the error, but it was ignored. This raises the question: Is it OK to launch a benign proof of concept that you know will go wide, to bring a flaw to people’s attention, or should you stay quiet?

We asked Dan Kaminsky what he thought. Kaminsky is arguably the king of responsible disclosure, best known for managing to keep a major DNS flaw under wraps for months while he worked with major internet companies to introduce a fix. He said:

Benign proof-of-concepts tend not to actually manipulate production systems. This one did. That doesn’t make it malicious, but if the goal is to protect users, researchers can be friendlier.

There is a middle ground that doesn’t involve spamming thousands of people to make a point.

He added:

At the end of the day, these sorts of spats between vendors and researchers are not in the interests of user safety.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yiFhAoTNEz8/

Ep. 020 – Leaky containers, careless coders and risky USB cables [PODCAST]

The Naked Security podcast explains the recent security hole in Linux products such as Docker and Kubernetes, ponders whether Apple’s insistence on 2FA for developers will bring rogue apps under control, and tells you whether to worry about booby-trapped USB cables.

With Anna Brading, Paul Ducklin and special guest Greg Iddon.

This week’s stories:

If you enjoy the podcast, please share it with other people interested in cybersecurity, and give us a vote on iTunes and other podcasting directories.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yHave-9zQ6Q/

Who will stand up for European democracy? Us! says US software giant Microsoft

American tech giant Microsoft revealed this morning it has detected a wave of attacks against European democratic institutions as miscreants continue malware insertion attempts.

Helpfully coinciding with the announcement that Microsoft, staunch defender of all things democratic, would be rolling out its AccountGuard service, the team at Redmond said it had spotted attacks against 104 accounts of users working in European institutions.

The Microsoft Threat Intelligence Center (MSTIC) said the attacks targeted employees of the German Council on Foreign Relations, European offices of The Aspen Institute and The German Marshall Fund. The accounts were spread over Belgium, France, Germany, Poland, Romania, and Serbia.

The attacks are depressingly similar to those carried out against US institutions, with the usual suspects present and correct in the spear-phishing campaign: legit-looking email addresses and malicious URLs all aimed at slurping credentials and delivering malware.

European lawmakers warned last week that “malign actors” would be a factor in the upcoming European Parliament election, in a reference to meddling by the likes of Russia. Prominent German figures have also had their run-ins with hackers as personal data was spat out for all to see earlier this year.

The threat intel unit reckoned many of the attacks, which occurred between September and December 2018, came from a group Redmond called “Strontium”. We’d like to think that’s in reference to the long-running 2000 AD comic strip “Strontium Dog”, created by John Wagner and the much-missed Carlos Ezquerra, and featuring a mind-reading mutant named Johnny Alpha. The reality is likely a little more prosaic.

Cult comics aside, Microsoft swiftly notified the affected organisations so steps could be taken to secure systems.

Extending AccountGuard

While hand-wringing continued over the potential for miscreants meddling in the electoral system, Microsoft also extended the AccountGuard technology to France, Germany, Sweden, Denmark, the Netherlands, Finland, Estonia, Latvia, Lithuania, Portugal, Slovakia, and Spain. The US has enjoyed the service since 2018, and the UK was on the receiving end of the software giant’s largesse in November.

More countries will be added to the programme over the coming months.

AccountGuard is a service Microsoft extends to pretty much anyone in supported regions who is involved in the democratic process, from candidates through to think-tanks and non-profit outfits. Enrolled users enjoy notification of attacks over personal and organisational systems as well as assistance from Microsoft in getting things secured and keeping them that way.

Assuming, of course, the organisations are signed up for Office 365. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/20/accountguard_microsoft/

Prep for The Next Cybersecurity Arms Race at Black Hat Asia

Don’t miss out on some of the world-class Briefings and Trainings on offer for cybersecurity professionals concerned about the most pressing threats of 2019.

As you get ready for Black Hat Asia in Singapore next month, organizers want to ensure you don’t miss out on some of the Briefings and Trainings for cybersecurity professionals concerned about the most pressing threats of 2019.

Most notably, veteran cybersecurity expert Mikko Hyppönen will be at the show this year to present the keynote Briefing on “The Next Arms Race.” You’ll want to see Hyppönen examine how many countries are investing in offensive cyberpower; he’ll give you an eye-opening look at the very beginning of a new arms race.

For more intense hands-on training to prepare for today’s cybersecurity challenges, check out the “Tactical OSINT For Pentesters” 2-Day Training. You’ll learn how to use OSINT (open-source intelligence collected from publicly available sources) data, why that data matters, and how it can be enriched and used offensively for attacking and compromising modern digital infrastructures.

Those who sign up for this Training will be given a framework to manage and prioritize all the data collected during the course along with private lab access good for one month so you can practice skills learned during the training.

“Pentesting Industrial Control Systems” is another fantastic 2-Day Training that offers a similar degree of practical, hands-on training in how to find and exploit vulnerabilities in industrial control systems. Sign up for this Training and you’ll learn everything you need to start ICS pentesting, from the basics of ICS vulnerabilities to the ins and outs of exploiting Windows Active Directory weaknesses to take control of ICS systems, most of which rely on Windows.

The Training will end with a challenging hands-on capture the flag exercise: the first CTF in which you capture a real flag! Using your newly-acquired skills, you’ll try to compromise a Windows Active Directory, pivot to an ICS setup, and take control of a model train and robotic arms.

Finally, close out your time at Black Hat Asia 2019 with “Locknote: Conclusions and Key Takeaways from Black Hat Asia 2019,” the perennially popular end-of-show Briefing that Black Hat founder Jeff Moss uses to close out the event. It features a candid discussion between Moss and members of the Black Hat Review Board, which is a great opportunity to catch up on all the key takeaways of the show and get a condensed overview of what to expect in the year ahead. Don’t miss it!

Black Hat Asia returns to the Marina Bay Sands in Singapore March 26-29, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/prep-for-the-next-cybersecurity-arms-race-at-black-hat-asia/d/d-id/1333909?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Anatomy of a Lazy Phish

A security engineer breaks down how easy it is for unskilled attackers to trick an unsuspecting user to submit credentials to a phishing site.

Phishing is one of the most effective ways hackers can compromise a network. Instead of requiring the skills and time to target specific organizations, perform reconnaissance, discover vulnerabilities, and select attack vectors, attackers can indiscriminately blast out phishing emails and wait for users to be tricked into submitting their credentials to a phishing site. I recently came across a phishing page that shows the inner workings of a phish and just how easy it is for unskilled and lazy attackers to host a credential-harvesting page.

Like any phish, this starts with an email that appears to be from a reputable source and sends the recipient to a malicious site. In this case, the link directs to a page that is designed to look exactly like Microsoft’s online login page, where the user is asked to enter his or her username and password. After all, this is exactly what the attacker wants — the user’s credentials.

So far, this is a standard phishing attack. The attacker sends a link in an email to trick the end user into visiting a phishing site that aims to steal the user’s credentials. But I didn’t stop there. I wanted to see what else could be found on this website, so I navigated to the homepage of the site and discovered the following:

Credential Harvester Directories

Above, we see the directories and contents of the credential harvester left by the attacker on their publicly accessible home page. Drilling into the “new” folder within this directory, I discovered that the attacker left their entire exploit source code in a zip file titled “bless.zip.” Fully extracted, this zip file holds various .php files that contain instructions for the login process on the phishing site and for blocking certain clients from accessing the webpage. Further examination of this source code shows exactly how the attacker siphons user information, and who they’re trying to prevent from viewing their site.

In the action.php file below, we see what happens when a victim submits credentials to this phishing site.

The .php code records the user’s IP address; performs a geolocation lookup on the IP address to determine its country of origin; and records the date and time of access, the user’s browser type, and the username (or phone number) and password that the user submits to the phishing page. The $sent variable reveals the email address where the attacker sends credentials, tailored to this specific phishing campaign to hide the attacker’s personal identity. The email $headers variable contains the sending email address for this credential harvester: wirez[@]googledocs[.]org. A Duo Labs report that analyzes phishing kits at scale suggests that this sender address appears in more than 115 unique phishing kits.
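This kind of kit review is easy to automate for defenders. The sketch below is a hypothetical TypeScript (Node) helper, not anything taken from the article, that scans an extracted kit directory for email addresses so an analyst can quickly spot where a harvester sends stolen credentials.

import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

// Simple email-address pattern; good enough for triaging kit files.
const EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;

// Walk an extracted phishing-kit directory and map every email address
// found in .php files to the files it appears in.
function findExfilAddresses(root: string): Map<string, string[]> {
  const hits = new Map<string, string[]>();
  const walk = (dir: string): void => {
    for (const name of readdirSync(dir)) {
      const path = join(dir, name);
      if (statSync(path).isDirectory()) {
        walk(path);
      } else if (name.endsWith(".php")) {
        const text = readFileSync(path, "utf8");
        for (const addr of text.match(EMAIL_RE) ?? []) {
          hits.set(addr, [...(hits.get(addr) ?? []), path]);
        }
      }
    }
  };
  walk(root);
  return hits;
}

// Usage: point it at the unzipped kit directory (a hypothetical "./bless" here,
// standing in for the extracted contents of a file like "bless.zip").
for (const [addr, files] of findExfilAddresses("./bless")) {
  console.log(`${addr} appears in: ${files.join(", ")}`);
}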

Examination of the other .php files shows additional information about the exploit kit. In the file block.php, the kit specifically checks for keywords in the hostname of clients visiting the site. Terms such as “phishtank,” “google,” “trendmicro,” and “sucuri.net” in the client hostname will result in the exploit kit sending the client to a 404 Not Found page rather than the impersonated Microsoft login site. This code aims to prevent security-oriented organizations from accessing the exploit page and identifying it as a phishing site, and to keep clients connecting from cloud-based services from reaching it. The file also includes 568 IP addresses that are blocked from viewing the login page.

The content of the examined .php files, and the fact that they were publicly accessible on the homepage of the phishing site, demonstrate that this attacker either was not technically savvy or felt that controlling access to their exploit source code and hiding the email account receiving victim credentials was not worth their time. In either case, it’s a great example of why phishing is so dangerous: It takes minimal effort and skill on the attacker’s end, and only one user has to fall victim for the attack to effectively compromise an organization.

There’s no one technical solution that can prevent all phishing attacks from being successful. What’s needed are layers of security structured to prevent the delivery of a phish, detect phishing emails that do make it into an organization, alert security personnel when a phish is delivered, and prevent users from visiting malicious phishing sites.

Most importantly, end users need to be aware of the threat that phishing poses to their organization and empowered with the knowledge to determine whether an email is legitimate. When an organization is targeted by an attacker, it will be the layers of security and users’ knowledge that ultimately determine whether a phishing email leads to a breach or is simply discarded by technical controls or an informed end user.


Jordan Shakhsheer is an information security engineer at Bluestone Analytics. She has extensive experience conducting incident response and digital forensic investigations. Jordan’s work includes eradicating threat actors from critical infrastructure, and producing actionable …

Article source: https://www.darkreading.com/application-security/the-anatomy-of-a-lazy-phish/a/d-id/1333879?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google’s working on stopping sites from blocking Incognito mode

Google Chrome’s Incognito mode hasn’t been an impenetrable privacy shield: For years, it’s been a snap for web developers to detect when Chrome users are browsing in private mode and to block site visitors who use it.

Google’s known all about it. And finally, 9to5Google reports, it looks like the company plans to close the loophole that’s enabled sites to detect when you’re using Incognito mode.

That loophole: websites have detected Incognito mode by trying to use an API that the mode turns off.

There are many ways to detect Incognito mode: as 9to5Google suggests, if you search for “how to detect Incognito mode,” you’ll find that developers have contributed ways to do so on Stack Overflow.

One easy way has been to sniff out that API: a developer can simply try to use Chrome’s FileSystem API, which is disabled in Incognito mode. That API is used by apps to store files, be it temporarily or more permanently. Incognito shuts it off entirely so that the API won’t create permanent files that could jeopardize somebody’s privacy.
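For reference, the trick described above boils down to a few lines of client-side code. This is a hedged sketch in TypeScript of the widely published approach, reflecting Chrome’s historical behaviour before the fix discussed below; webkitRequestFileSystem is a non-standard Chrome API, so it is accessed through a cast.

// Sketch of the FileSystem API probe used to guess at Incognito mode.
// Historically, normal Chrome grants the request and Incognito rejects it.
function detectIncognito(): Promise<boolean> {
  return new Promise((resolve) => {
    const requestFs = (window as any).webkitRequestFileSystem;
    if (!requestFs) {
      resolve(false); // API absent (non-Chrome browser): the probe tells us nothing
      return;
    }
    requestFs(
      (window as any).TEMPORARY ?? 0, // temporary storage
      1,                              // ask for a single byte
      () => resolve(false),           // success: filesystem available, likely not Incognito
      () => resolve(true)             // failure: request refused, historically Incognito
    );
  });
}

// A paywalled site might gate content on the result:
detectIncognito().then((isPrivate) => {
  if (isPrivate) {
    console.log("Private browsing detected; hiding the article.");
  }
});

Google’s planned in-RAM virtual file system would make the success path fire even in Incognito, which is why this particular probe stops working.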

This is what some websites do, particularly if they’ve got content behind a paywall, as does the Boston Globe: they detect and block Incognito users, since such users can’t be tracked and have used the mode to bypass paid subscription requirements.

From a Stack Overflow commenter:

[The] site could detract value by detecting incognito. Boston Globe’s site won’t display its articles in incognito, preventing the user from circumventing their free articles quota.

“This is brilliant!” one dev said after the method was posted in January 2015. “Clean and elegant,” said another in October of that year.

Well, get ready to kiss it goodbye, said yet another developer on Saturday, pointing to a series of recent commits to Chromium’s Gerrit source code management.

The commits show that Google’s working on implementing a virtual file system for Chrome to present when it’s in Incognito mode and a site asks for one. The virtual file system will be created in RAM, to ensure it will be deleted once a user leaves Incognito. 9to5Google’s Kyle Bradshaw:

This should easily shut down all current methods for detecting if Chrome is Incognito.

The developer who’s handling the detection prevention feature said he’s hoping it will launch in Chrome 74 behind a flag. It should be enabled by default in Chrome 76.

According to Chromium Dash, Chrome 74’s stable release is scheduled for April 23. The stable release for Chrome 76 is slated for July 30.

This could all be just a stopgap, though, given that Google would eventually like to ditch the FileSystem API altogether. According to an internal design document obtained by 9to5Google, once the virtual file system is in place, Google is going to suss out “how many legitimate uses of it remain once the Incognito detection abusers move on.”

Bradshaw quoted from the internal document:

Since there’s no adoption of the FileSystem API by other browser vendors, it appears to be only used by sites to detect incognito mode. By making this harder, hopefully the overall usage of the API goes down to the point that we can deprecate and remove it.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YdfLnLS4Zr0/