
Dead retailer’s ‘customer data’ turns up on seized kit, unencrypted and very much for sale

Servers that once belonged to defunct Canadian gadget retailer NCIX turned up on the second-hand market without being wiped – and their customer data sold overseas – it is claimed.

Those boxes, allegedly, stored plaintext credit card data for approximately 260,000 people, and purchase records for 385,000 shoppers.

Travis Doering, of infosec shop Privacy Fly, claimed he discovered the security cockup in the simplest way possible: he spotted the machines advertised on Craigslist, answered the ad, and inspected what was on offer.

According to the security consultant’s writeup this week, the hardware haul turned out to be 18 Dell PowerEdge boxes from NCIX’s server farm, plus storage kit and 300 desktop machines. The gear was seized by the retailer’s landlords after NCIX failed to pay CA$150,000 in rent, and sold off at auction to another party, who then apparently hawked it to interested buyers via Craigslist last month.

The chain’s database files, dating back to 2007, were unencrypted on the machines, and covered all aspects of the business, according to Doering:

The nciwww database contained a thousand records from affiliates listing plain text passwords, addresses, names, and some financial data. In another table of information, I found customer service inquiries including messages and contact information. There were also three hundred eighty-five thousand names, serial numbers with dates of purchase, addresses, company names, email addresses, phone numbers, IP addresses and unsalted MD5 hashed passwords. The database also contained full credit card payment details in plain text for two hundred and fifty-eight thousand users between various tables.
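For context, here is a quick Python sketch – illustrative only, and not anything from NCIX’s codebase – of why unsalted MD5 password hashes are considered trivially crackable compared with a salted, deliberately slow key-derivation function:

```python
import hashlib
import os

# Unsalted MD5, as reportedly found in the NCIX tables: fast to compute,
# identical passwords produce identical digests, and precomputed lookup
# tables crack common passwords almost instantly.
weak = hashlib.md5("hunter2".encode()).hexdigest()

# A safer pattern: a random per-user salt plus a deliberately slow KDF.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", "hunter2".encode(), salt, 600_000)

print(weak)                       # same output for every user with this password
print(salt.hex(), strong.hex())   # unique per user, expensive to brute-force
```

Because every account sharing a password hashes to the same unsalted value, a single cracked digest – or a lookup table – exposes all of those accounts at once.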

Other database tables contained millions of records created through the entire life of NCIX, we’re told. Customers in the databases lived in the US as well as Canada, it was claimed.

Infosec world figure and US Army veteran Jake Williams described it as “one of the most egregious data breaches ever”…

The contact offering the kit for sale, known only as “Jeff,” also explained that he’d sold NCIX data to more than one overseas customer: $15,000 got each buyer “thirteen terabytes of SQL databases and various VHD and Xen server backup files,” it was alleged.

“I cringed at the thought of that data being sold once, as it was dangerous enough. Then during further conversation, Jeff mentioned at least five other buyers,” Doering claimed. “Jeff described one as a competing retailer, while the other three Jeff claimed to ‘not want to know’ their intentions or business.”

Doering noted that the straightforward measure of turning on full-disk encryption would have sufficed to prevent any leak. Alternatively, destroying the storage beyond salvage would have been a good move, in our view.
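Full-disk encryption is an operating-system control (BitLocker, LUKS, and the like) rather than application code, but the principle Doering is pointing at – ciphertext on a resold drive is worthless without a key held elsewhere – can be sketched in a few lines of Python. This is a hypothetical illustration using the third-party cryptography package, not anything NCIX actually ran:

```python
from cryptography.fernet import Fernet

# The key lives somewhere other than the drive being resold
# (a TPM, a key server, or an operator-supplied passphrase).
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"4111 1111 1111 1111,12/22,John Doe"   # made-up card data
ciphertext = fernet.encrypt(record)

print(ciphertext)                  # all a buyer of the disk would see
print(fernet.decrypt(ciphertext))  # only the key holder recovers the plaintext
```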

Since NCIX is nothing but a corpse now, those whose privacy has been breached – any customer or employee – have little chance for any redress, we fear. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/21/ncix_servers_sold/

Sealed with an XSS: IT pros urge Lloyds Group to avoid web cross talk

A pair of IT workers have criticised banks within the Lloyds Banking Group (LBG) for substandard security. The group denies anything is amiss, maintaining it follows industry best practice on cyber-security.

Each of the three LBG banks – Lloyds, Halifax, and Bank of Scotland – has implemented transport layer security, serving its online banking over HTTPS so that transactions run over an encrypted connection to a secure server. But the three financial institutions are nonetheless vulnerable to a common class of web security vulnerability often exploited by phishing fraudsters: cross-site scripting (XSS) flaws.

A software developer and an infosec researcher have separately said that websites maintained by Lloyds, Halifax, and Bank of Scotland all have an XSS vuln, allowing attackers to potentially read and modify the contents of the login form, as well as subsequent pages such as account information in secure banking sessions.

The issues at the three Lloyds Banking Group subsidiaries were uncovered by software developer Jim Ley and reported to each bank. A lack of response prompted him to approach The Register.

Ley developed a live proof-of-concept, seen by The Reg, for each bank showing how the unresolved web flaws could be leveraged to run login-harvesting phishing attacks.

Unless the flaw is resolved, he warned, convincing phishing scams that leverage the web security shortcoming could well be developed.

Independent security researcher Paul Moore confirmed our tipster’s warning, adding that Halifax Bank is vulnerable to a somewhat related problem.

“[Halifax Bank’s] lack of adequate security headers allows the injection of malicious scripts to both collect and alter anything the user enters, regardless of TLS,” Moore told El Reg.

“Banks should deploy the correct security headers before third party dependencies go rogue… many sites are vulnerable if they don’t deploy security headers correctly,” he added.

Halifax Bank’s security header rating scores a “B” – in other words, it needs some improvement

Halifax Bank rates a “B” on Sophos’s Security Headers benchmark, which may seem like a passing grade on the surface, but the devil lies in the detail, according to Moore.

“A ‘B’ isn’t bad, but the difference between an ‘A’ and ‘B’ here is the existence of a CSP [Content Security Policy]1 header. If they disallowed inline scripts, they’d get an ‘A’ and wouldn’t be vulnerable to this attack,” Moore said.
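To illustrate the point – a minimal sketch, not the banks’ actual stack – here is a toy Python web server that sends a Content-Security-Policy header omitting ‘unsafe-inline’, which is what causes browsers to refuse injected inline scripts of the kind an XSS-based phishing page relies on:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical policy of the kind Moore describes: scripts may only load from
# the site's own origin, and inline <script> blocks are refused by the browser
# because 'unsafe-inline' is absent.
CSP = "default-src 'self'; script-src 'self'; object-src 'none'"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Security-Policy", CSP)
        self.end_headers()
        self.wfile.write(b"<html><body>Login form would go here</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

With a policy like this in place, script injected into the page body simply does not execute – which, per Moore, is the difference between a “B” and an “A”.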

Moore’s (benign) proof-of-concept demo against Halifax Bank can be found here; he flagged it up to the infosec community via Twitter.

El Reg relayed these criticisms to reps at LBG, alongside a request for comment. The bank said it welcomed the reports while downplaying their significance:

We employ multi-layered security controls across our systems. We take responsible disclosures seriously and always follow up to ensure that the best methods are followed.

Both techies were unimpressed with this reply. Each independently stated they had found it difficult to report problems to LBG. “If they made the reporting process easier, I’d be happier,” Moore commented.

The Reg has seen an email from LBG’s digital security team stating they were “aware of this issue”, adding that its techies “are already working on it”. ®

Bootnote

1Content Security Policy is a security technology designed in large part to minimise XSS problems.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/lloyds_banking_insecure/

No, that Sunspot Solar Observatory didn’t see aliens. It’s far more grim

On September 6, the Sunspot Solar Observatory in New Mexico, USA, was evacuated and sealed off without explanation, sparking wild conspiracy theories as to why.

Since it’s an observatory, the favorite theory was that it had spotted aliens, and the lockdown was part of a coverup to prevent public panic.

No, there weren’t any green-skinned people. On Sunday this week, the Association of Universities for Research in Astronomy shed more light on its decision to temporarily shut the observatory, in concert with the National Science Foundation, and said the closure was due to an unspecified criminal investigation. The good news is that the boffinry center was reopened this week.


The association, which operates the Sunspot facility, went on, though, to say: “We became concerned that a suspect in the investigation potentially posed a threat to the safety of local staff and residents. For this reason, AURA temporarily vacated the facility and ceased science activities at this location.” The eggheads added that the small number of staff at the remote location made protection difficult.

More has now emerged on what happened, with Reuters reporting on Wednesday that the FBI is investigating a janitor who allegedly used the facility’s Wi-Fi network “to send and receive child pornography.”

Telly station KRQE added that investigators linked uploaded and downloaded child sex abuse material to the observatory’s IP address, sparking the probe.

The FBI also obtained a warrant to search the suspect’s home, Reuters said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/sunspot_solar_observatory_fbi/

NSS Labs sues antivirus toolmakers, claims they quietly conspire to evade performance tests

NSS Labs has thrown a hand grenade into the always fractious but slightly obscure world of security product testing – by suing multiple vendors as well as an industry standards organisation.

Its lawsuit, filed in California this week against CrowdStrike, Symantec, ESET, and the Anti-Malware Testing Standards Organization (AMTSO), has alleged no less than a conspiracy to cover up deficiencies in security tools.

These vendors not only knew of bugs in their code and failed to act, but they were “actively conspiring to prevent independent testing that uncovers those product deficiencies,” NSS Labs claimed. The lawsuit hopes to illuminate bad practices that harm consumers, Vikram Phatak, chief exec of NSS Labs, claimed in a statement.

At the heart of the matter, NSS Labs has accused the named security vendors of forging a pact to collectively boycott NSS – an independent test lab. Why? Well, if one of them avoided a test that all the others participated in, it would look bad; if there’s a collective “no thanks,” any opprobrium is avoided.

The charge is serious: vendors have come up with a scheme to avoid tests that may expose vulnerabilities they’d rather not have to invest in repairing, never mind the negative PR backlash from poor results. AMTSO – which aims to establish standards for fair testing – is allegedly “actively preventing unbiased testing” and facilitating this bad practice. In addition, CrowdStrike and other unnamed vendors have clauses in their user contracts that prohibit testing without permission, NSS Labs alleged.

“If it is good enough to sell, it is good enough to test,” Phatak argued.

This isn’t the first time NSS Labs and CrowdStrike have locked horns: last year CrowdStrike sought an injunction against NSS Labs to prevent the release of test results during the RSA Conference. That legal effort failed.

In a statement, CrowdStrike dismissed NSS’s legal offensive as baseless:

NSS is a for-profit, pay-to-play testing organization that obtains products through fraudulent means and is desperate to defend its business model from open and transparent testing. We believe their lawsuit is baseless.

CrowdStrike supports independent and standards-based testing — including public testing — for our products and for the industry. We have undergone independent testing with AV-Comparatives, SE Labs, and MITRE and you can find information on that testing here. We applaud AMTSO’s efforts to promote clear, consistent, and transparent testing standards.

El Reg also asked the other named parties in the lawsuit to comment. We’ll update this story as more information comes to hand. ®

Bootnote

Other security testing labs are available, including AV-Comparatives, AV-TEST, and SE Labs, among others. For what it’s worth: the anti-malware market is split between consumer and corporate sales, with enterprise revenues forming the largest part of the market, even for the likes of Symantec.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/security_testing_contratemps/

Developer goes rogue, shoots four colleagues at ERP code maker

Cops have named the programmer who went on a gun rampage at WTS Paradigm – a US maker of enterprise resource planning software – this week. He shot four colleagues, leaving one in a critical condition.

At around 10.20am on Wednesday, Anthony Tong, who had worked at the company in Middleton, Wisconsin, for a little over a year, showed up at the office, pulled out a concealed gun, and opened fire. Staff fled the building and took shelter in nearby businesses.

The scumbag then shot at police officers who showed up within minutes of the attack on WTS Paradigm. The cops returned fire, taking down their suspect. He was pronounced dead on arrival at hospital.


“The entire WTS Paradigm team is shocked and heartbroken by the incident that occurred today at our Middleton office,” the ERP company said in a statement yesterday.

“Our deepest thoughts are with all of our staff and their loved ones. In a situation like this, you learn how great a community really is. We cannot thank the Middleton Police Department, the Dane County Sheriff’s Office, and other emergency personnel enough for their amazing response.”

In a press conference on Thursday, police said the dead man had worked at WTS since April of last year, did not have a criminal history, and was acting alone at the time of the shooting. There is no indication at this time as to what caused the murderous outburst, and officers have appealed for witnesses to get in contact.

One victim is still in a critical condition, while two others have serious injuries. A fourth worker received a graze from a bullet. The victims have not been identified. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/developer_work_shooting/

Microsoft’s Jet crash: Zero-day flaw drops after deadline passes

The Zero Day Initiative has gone public with an unpatched remote-code execution bug in Microsoft’s Jet database engine, after giving Redmond 120 days to fix it. The Windows giant did not address the security blunder in time, so now everyone knows about the flaw, and no official patch is available.

The bug, reported to Microsoft on May 8 with a 120-day deadline before full disclosure, was described on Thursday by ZDI, here. It was discovered by Lucas Leong of Trend Micro Security Research.

The bad news: it’s a remote-code execution vulnerability, specifically an out-of-bounds memory write. The good news is that an attacker can only trigger the bug by tricking the victim into opening a specially crafted Jet file, and any arbitrary malicious code smuggled in the document is executed only with the user’s privileges (we’ve all made sure that users don’t have admin privileges, right?). The booby-trapped Jet file can also be opened via JavaScript, so someone could be fooled into viewing a webpage that uses JS to open the file, causing the smuggled code to run when the database engine parses it.

The other good news is that the Jet database engine is not terribly widely deployed: it’s mostly associated with Microsoft Access and Visual Basic. However, if you are using it, you probably will want to stop users from opening any maliciously rigged files.

In its formal advisory, ZDI said the problem is in Jet’s index manager. A crafted file in the Jet format triggers “a write past the end of an allocated buffer” when opened by the software. ZDI’s proof-of-concept exploit code is on GitHub.

This thread from 0patch cofounder Mitja Kolsek provides useful details about the conditions that the PoC will and won’t work under. Kolsek confirmed that the bug will work on a “local click” in Windows 7, and while exploitation of the bug requires a 32-bit environment, “even on 64-bit Windows, IE rendering processes are 32-bit – and can use Jet.”

ZDI said it believes “all supported Windows version[s] are impacted by this bug, including server editions.” Microsoft, we’re told, has confirmed it’s working on a patch. Since it wasn’t included in September’s Patch Tuesday, it may arrive in the October cycle.

0patch, meanwhile, promised in a tweet that its own micropatch will land soon.

ZDI emphasized that this issue is not related to CVE-2018-8392, which Fortinet disclosed last week after it got the Patch Tuesday treatment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/20/microsofts_jet_database_zero_day/

Turn the NIST Cybersecurity Framework into Reality: 5 Steps

Actionable advice for tailoring the National Institute of Standards and Technology’s security road map to your company’s business needs.

The first version of the National Institute of Standards and Technology’s Cybersecurity Framework (NIST CSF) was published in 2014 to provide guidance for organizations looking to bolster their cybersecurity defenses, and has more recently been updated as Version 1.1. It was created by cybersecurity professionals from government, academia, and various industries at the behest of President Barack Obama and later made into federal government policy by the new administration.   

While the vast majority of organizations recognize the value in such a universally recommended, collaborative effort to improve cybersecurity in businesses of all sizes, adapting and implementing the framework is easier said than done. The content of the NIST CSF is freely available for all, so we’re not going to discuss it in great depth here. Instead, we’re going to set out five steps to help you turn the NIST CSF into a reality for your organization.

Step 1: Set your target goals.
Before you begin to think about implementing the NIST CSF, your organization must set its target goals. The first hurdle typically is establishing agreement throughout the organization about risk-tolerance levels. There is often a disconnect between upper management and IT about what constitutes an acceptable level of risk.

To begin, draft a definitive agreement on governance that clarifies precisely what level of risk is acceptable. Everybody must be on the same page before you proceed. It’s also important to work out your budget, set high-level priorities for the implementation, and establish which departments you want to focus on.

It makes a lot of sense to start with a single department or a subset of departments within your organization. Run a pilot program so that you can learn what does and doesn’t work, and identify the right tools and best practices for wider deployment. This will help you to craft further implementations and accurately estimate the cost.

Step 2: Create a detailed profile.
The next step is to drill deeper and tailor the framework to your specific business needs. NIST’s Framework Implementation Tiers will help you understand your current position and where you need to be. They’re divided into three areas:

  • Risk Management Process
  • Integrated Risk Management Program
  • External Participation

Like most of the NIST CSF, these should not be taken as set in stone. They can be adapted for your organization. You may prefer to categorize them as people, process, and tools, or add your own categories to the framework.

Each one runs from Tier 1 to Tier 4.

  • Tier 1 – Partial generally denotes an inconsistent and reactive cybersecurity stance.
  • Tier 2 – Risk Informed allows for some risk awareness, but planning is inconsistent.
  • Tier 3 – Repeatable indicates organization-wide CSF standards and consistent policy.
  • Tier 4 – Adaptive refers to proactive threat detection and prediction.

Higher levels are considered a more complete implementation of CSF standards, but it’s a good idea to customize these tiers to ensure they’re aligned with your goals. Use your customized tiers to set target scores and ensure that all key stakeholders agree before you proceed. The most effective implementations will be closely tailored for specific businesses.
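As a purely hypothetical sketch of what a customized target profile looks like once it leaves the whiteboard, the tiers reduce to simple data that later steps can score against; the categories and target values below are examples, not NIST-mandated numbers:

```python
# Hypothetical target profile for one business unit. Tier values follow the
# NIST CSF scale of 1 (Partial) to 4 (Adaptive); the targets are invented.
TIERS = {1: "Partial", 2: "Risk Informed", 3: "Repeatable", 4: "Adaptive"}

target_profile = {
    "Risk Management Process": 3,
    "Integrated Risk Management Program": 3,
    "External Participation": 2,
}

for area, tier in target_profile.items():
    print(f"{area}: target Tier {tier} ({TIERS[tier]})")
```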

Step 3: Assess your current position.
Now it’s time to conduct a detailed risk assessment to establish your current status. It’s a good idea to conduct an assessment both from within the specific functional area and independently across the organization. Identify open source and commercial software tools capable of scoring your target areas – vulnerability scanners, CIS benchmark testing, phishing tests, behavioral analytics, and so on – and train staff to use them, or hire a third party to run your risk assessment. It’s crucial that the people performing the risk assessment have no knowledge of your target scores.

The team implementing the CSF now aggregates and checks the final scores before they’re presented to the key stakeholders. The goal at the end of this process is to give your organization a clear understanding of the security risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals. Vulnerabilities and threats should be identified and fully documented.

For example, in the diagram below, the organization has identified three functional areas: Policy, Networks, and Applications. These could span the hybrid cloud, or could be broken into different environments so the organization can track them at a more detailed level, in which case an additional consideration is whether different functional leads will be responsible for on-premises and cloud deployments.

Along the left, the heat map lists the different CSF functions and can be expanded to any level of detail. Using a four-point scale, green designates that all is OK, yellow indicates the area needs work, and red warrants close analysis and correction. Here, the “Identify” core function is broken out for the purpose of comparing the assessed scores against a cross-business-unit core group. The SME and core scores are averaged, compared to the organization’s target, and a risk gap is then calculated. A higher gap warrants quicker remediation. Looking at the table, the organization’s “Protect” and “Respond” areas are the most vulnerable.
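The gap arithmetic described above is simple enough to sketch in Python. In this hypothetical example – the scores, targets, and traffic-light thresholds are all invented for illustration – the SME and core scores are averaged, subtracted from the target, and bucketed into heat-map colors:

```python
# Illustrative scores on a four-point scale for each CSF core function.
# "sme" is the functional-area assessment, "core" the cross-business-unit one.
assessments = {
    "Identify": {"sme": 3.0, "core": 3.2, "target": 3.5},
    "Protect":  {"sme": 2.0, "core": 2.4, "target": 3.5},
    "Detect":   {"sme": 3.1, "core": 2.9, "target": 3.0},
    "Respond":  {"sme": 1.8, "core": 2.2, "target": 3.0},
    "Recover":  {"sme": 2.8, "core": 2.6, "target": 3.0},
}

def rating(gap: float) -> str:
    # Hypothetical thresholds: the bigger the gap, the quicker the remediation.
    if gap <= 0.25:
        return "green"
    if gap <= 0.75:
        return "yellow"
    return "red"

for function, s in assessments.items():
    assessed = (s["sme"] + s["core"]) / 2   # average the two views
    gap = s["target"] - assessed            # risk gap versus the target
    print(f"{function:8s} assessed={assessed:.2f} gap={gap:+.2f} {rating(gap)}")
```

Run against these made-up numbers, “Protect” and “Respond” come out red, mirroring the worked example described above.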

Step 4: Gap analysis action plans
Armed with a deeper knowledge of risks and potential business impacts, you can move on to a gap analysis. The idea is to compare your actual scores with your target scores. You may want to create a heat map to illustrate the results in an accessible and digestible way. Any significant differences immediately highlight areas that you’ll want to focus on.

Work out what you need to do to close the gaps between your current scores and your target scores. Identify a series of actions that you can take to improve your scores and prioritize them through discussion with all key stakeholders. Specific project requirements, budgetary considerations, and staffing levels may all influence your plan.

Step 5: Implement action plan
With a clear picture of the current health of your defenses, a set of organizationally aligned target goals, a comprehensive gap analysis, and a set of remediation actions, you are finally ready to implement the NIST CSF. Use your first implementation as an opportunity to document processes and create training materials for wider implementation down the line.

The implementation of your action plan is not the end. You will need to set up metrics to test its efficacy and continuously reassess the framework to ensure that it’s meeting expectations. This should include an ongoing process of iteration and validation with key decision makers. To get the maximum benefit, you will need to hone the implementation process and further customize the NIST CSF to fit your business needs.


Mukul Kumar is Cavirin’s CISO and vice president of Cyber Practice, bringing to Cavirin over 18 years of IT and security experience, including his previous role as CISO and VP of Cyber Practice at Balbix. Prior to this position, Kumar served as the chief security officer at …

Article source: https://www.darkreading.com/analytics/turn-the-nist-cybersecurity-framework-into-reality-5-steps/a/d-id/1332796?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Account Takeover Attacks Become a Phishing Fave

More than three-quarters of ATOs resulted in a phishing email, a new report shows.

Why spoof an email address for phishing messages when you can hijack an account and send them from the real one? That’s the theory behind account takeover (ATO) attacks, and it’s one being put into practice in a growing number of criminal cases.

According to a new report from Barracuda, which draws on a study that looked at 50 randomly selected organizations, nearly 40% of respondents reported at least one ATO attack in the second quarter of 2018.

“On average, when a company got compromised, the compromise resulted in at least 3 separate account takeover incidents,” according to the report. Of the incidents, 78% resulted in phishing email being sent.

“Cybercriminals are able to professionally customize emails to trick even the most discerning eye all the way up to the CEO level,” says Ryan Wilk, vice president of customer success at NuData Security. “These phishing emails trick victims into clicking on links or on documents that appear legitimate, only to automatically download key loggers or other malware tools used to harvest credentials.” 

The report’s authors noted that their results could have underreported the actual incidence of ATO attacks since they relied on incidents reported by companies. Many organizations either aren’t aware that they’ve been the victim of such an attack or are reluctant to admit to having been victimized.

Read more here.


Article source: https://www.darkreading.com/attacks-breaches/account-takeover-attacks-become-a-phishing-fave/d/d-id/1332859?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Retail Sector Second-Worst Performer on Application Security

A “point-in-time” approach to PCI compliance could be one reason why so many retailers appear to be having a hard time.

The retail industry’s cybersecurity preparedness continues to lag behind almost every other sector despite efforts by the major credit card associations to bolster retail security via the Payment Card Industry Data Security Standard (PCI DSS).

Third-party risk management firm SecurityScorecard recently analyzed a total of 1,444 domains in the retail industry, each with an IP footprint of at least 100 addresses. Researchers from the firm passively monitored the externally facing IPs of the retail domains for about five months to see what vulnerabilities they could find.

The exercise showed the retail industry had the second-lowest application security performance among major sectors. In a list of 18 industries, the retail sector ranked 17th, just above the entertainment industry, in terms of having the most vulnerable applications. Last year, the retail industry was the fourth-lowest performer, meaning its application security performance dropped over the preceding 12 months rather than improved.

Retailers also ranked dead last in terms of their ability to protect against social engineering attacks. SecurityScorecard’s analysis showed that criminals employing phishing and other social engineering methods to steal data and commit fraud were likely to have more success with retailers than organizations in any other industry.

The findings are important because criminals target retailers more than almost any other sector apart from healthcare, banking, and finance. In recent years, numerous retailers have experienced spectacular data breaches that have compromised tens and sometimes even hundreds of millions of payment cards.

Visa, Mastercard, American Express, and other major card associations have required retailers to implement an evolving set of security controls for protecting card data at rest, in use, and during transactions. The PCI security standard has been in place for well over a decade.

Yet many retailers are not fully compliant with it, even though they can face stiff financial penalties in the event of a breach. In fact, SecurityScorecard found that nearly 91% of the retail domains analyzed had issues that likely put them in noncompliance with four or more PCI DSS requirements.

Retailers fared especially poorly with respect to PCI DSS Requirement 6, pertaining to application security. Ninety-eight percent of the domains that SecurityScorecard analyzed had issues that likely put them in noncompliance. Ninety-one percent had problems with a subsection of Requirement 6, pertaining to the need for promptly patching software and systems against known security vulnerabilities.

Fouad Khalil, head of compliance at SecurityScorecard, says his company considered a variety of issues related to application security when assigning performance rankings to various industries.

Security issues that were identified during SecurityScorecard’s passive monitoring of the retail domains were weighted to account for differences in severity, Khalil says. When available, SecurityScorecard used industry-accepted standards, such as the Common Vulnerability Scoring System (CVSS) v2 used by NIST’s National Vulnerability Database, to assign severity rankings. When an identified issue did not have a formal severity ranking available, SecurityScorecard used recognized authorities and its own internal resources to determine severity.

“These weighted issue types are then rolled up into a factor score for application security,” he says. “We repeated this same process for every major US industry, and when we compared the retail industry’s factor score to the rest, it came second-lowest,” Khalil explains. To determine compliance or noncompliance with PCI DSS requirements for app security, SecurityScorecard flagged vulnerabilities that were “litmus test indicators of noncompliance” with a particular PCI requirement, he notes.
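SecurityScorecard has not published its exact model, but the roll-up Khalil describes can be approximated in a few lines of Python; the findings, weights, and averaging scheme below are invented for illustration and are not the firm’s proprietary scoring:

```python
# Invented example findings for one retail domain; "severity" approximates a
# CVSS v2-style base score from 0 to 10.
findings = [
    {"issue": "outdated TLS version",        "severity": 4.3},
    {"issue": "unpatched CMS plugin",        "severity": 7.5},
    {"issue": "reflected XSS on login page", "severity": 6.1},
]

def factor_score(findings, worst=10.0):
    """Roll weighted issues up into a 0-100 application security factor score
    (higher is better). The averaging scheme is a stand-in, not
    SecurityScorecard's model."""
    if not findings:
        return 100.0
    penalty = sum(f["severity"] for f in findings) / (len(findings) * worst)
    return round(100.0 * (1.0 - penalty), 1)

print(factor_score(findings))   # 40.3 for the sample findings above
```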

A “point-in-time” approach to PCI compliance could be one reason why so many retailers appear to be having a hard time with the application security requirement and several of the other requirements, SecurityScorecard said in its report. It is not enough just to implement PCI-mandated security controls; they must also be maintained on an ongoing basis, especially with regard to issues like patching and applying software updates.

SecurityScorecard used a somewhat similar process to arrive at its ranking for social engineering threats. In this case, the company looked at issues including retail employees using their corporate account information to sign up for services, such as social networks, personal finance accounts, and marketing lists, that can be exploited. In addition, SecurityScorecard monitored employee dissatisfaction levels using publicly available data, Khalil says. As with application security, the retail industry fared badly in comparison with other industries on this front, too.

In this instance, the retail industry’s generally younger workforce may be a factor, according to SecurityScorecard. Many retail sector employees who are targets of phishing and social engineering scams don’t know enough about the threat to be able to recognize it.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/application-security/retail-sector-second-worst-performer-on-application-security/d/d-id/1332860?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Think Like An Attacker: How a Red Team Operates

Seasoned red teamers explain the value-add of a red team, how it operates, and how to maximize its effectiveness.

If you want to stop an attacker, you have to think like an attacker.

That’s the general mindset of someone on the red team, a group of people within an organization responsible for, well, attacking it. Their goal is to act like the adversary and figure out different ways to break into a company so it can strengthen its defenses.

“The whole idea is, the red team is designed to make the blue team better,” explains John Sawyer, associate director of services and red team leader at IOActive. It’s the devil’s advocate within an organization; the group responsible for finding gaps the business may not notice.

Red teaming is markedly different from penetration testing, though the two are often confused, he continues. In the early days of pen testing, it resembled modern-day red teaming.

“When we talk about ethical hacking and pen testing in the late 90s, it was no-holds-barred kind of penetration testing,” says Sawyer. As pen testing became mainstream, it also became commoditized. Now, instead of testing the system as a whole, one-off pen tests target specific parts of the ecosystem: Web application tests, social engineering tests, network tests.

“At its core, pen testing is trying to find as many vulnerabilities as you can, usually within a specific timeline,” says Josh Schwartz, director of offensive engineering at Oath. Pen testers are given a target system, product, or source code, and try to find as many bugs as they can. While pen tests are still useful, they don’t test the business to the extent that a red team does.

A red team considers the full ecosystem, Sawyer says, and its ultimate goal is to figure out how a determined threat actor would break in. Instead of solely trying to breach a Web application, red teamers might combine multiple attack vectors – a combination of external attacks, maybe a social engineering phone call, trying to gain access to a physical office.

“The main function of red teaming is adversary simulation,” says Schwartz. “You are simulating, as realistically as possible, a dedicated adversary that would be trying to accomplish some goal. It’s always going to be unique to the target. If you’re going to get the maximum value out of having a red teaming function, you probably want to go for maximum impact.”

Smooth Operators

Red team operations start with gathering open-source intelligence, or OSINT, on the organization and building a threat profile, says Tyler Robinson, head of offensive services and managing senior security analyst with InGuardians. The team considers every aspect of their target company: its industry, monetary value, its risk factors, its worst-case scenario.

As Schwartz puts it: “What does the apocalypse look like for your company?”

A chat with the organization can unearth valuable intel: insight into the business, where its crown jewels are, what it values. For a financial institution this might be reputation and money; for a healthcare firm it might be patient health records and other sensitive data.

It’s worth noting an organization using a red team likely has a mature security posture, says Sawyer. The red team assumes security controls are in place, a SOC is monitoring these controls, and an incident response plan exists in the event of a breach. If the company has never done a penetration test, he adds, it’s likely not ready to get hit with a red team.

When it’s time to plot the offensive, Robinson considers the ways someone could physically break in. This could involve a Google Maps scan to scope out entrances, or YouTube and Instagram to check for employee badges. Red teams will also investigate Web applications and do password sprays to see if the company is vulnerable. “All we need is one foothold,” he says.

Red teams will also scour the Dark Web to learn the latest hacker tools and tactics, how they work, and what’s new and being used in the wild, then mirror those techniques in their own operations. “We try to maintain that edge,” he says. “Constant retooling, constant battling.”

The team ends up chaining together a small series of attacks – low-level vulnerabilities, misconfigurations – and using those to own the entire domain without the business knowing it was there, he says. Typically, few employees know when a red team is live.

Sawyer’s team recently worked with a financial trading organization. They combined a variety of social engineering and physical attacks, along with external network testing, to break in. Red teamers went on site, dressed like the employees, and arrived with badges similar to theirs in order to bypass physical security controls, he explains.

Once inside, they could gain access to offices and connect to their machines and networks. “That was in coordination with other activities we had,” he says, noting that they also leveraged phishing and phone calls to break the target’s defenses.

Robinson’s red team was recently able to take over the network of a major organization by breaking into a printer. “We owned a very large financial organization through a single printer,” he emphasizes, adding that this illustrates the need for organizations to focus on the basics of security, including securing all networked devices. There’s a lot of money going toward next-gen tools, he says, but the real value is in the fundamentals of proper configuration.

Red and blue teams may work together in some engagements to provide visibility into the red team’s actions. For example, if the red team launches a phishing attack, the blue team could view whether someone opened a malicious attachment, and whether it was blocked. After a test, the two can discuss which actions led to which consequences.

“We want to ultimately say that while we found these ways to get in, we really think by improving these places we were able to get in, you’ll have more complete protection,” Sawyer says.

Red Team Recruitment: How to Hire

“Our rule of thumb is there’s always three operators” in a red team, says Robinson. Sawyer says a red team needs at least two people to be effective, though many range from two to five. While a large company might have 12-25 people, says Schwartz, only three or four will work on a single operation.

Each red team is made up of different skill sets to maximize the group’s effectiveness.

It helps to have at least one person knowledgeable in physical security; someone who can understand the safeguards around the business, pick locks, bypass door codes and security cameras. You might also have social engineers who can send phishing emails, call up the organization, or appear on-site pretending to be an employee or delivery person.

And, of course, you need technical chops. Sawyer points to a range of valuable skills to have on a red team: Web exploitation, hardware expertise, reverse engineering, understanding of Windows and Active Directory, post-exploitation, and gaining access to sensitive data.

It’s also interesting to pull in subject matter experts based on the target, Schwartz says. If you’re outsourcing a red team, it could help to bring an employee onto their project and make them part of the attack group. “People generally want to be part of those types of activities because they’re educational,” he adds.

In-House vs. Outsourcing

More and more companies are starting to realize if they limit themselves to the core fundamentals of security, they’re waiting for something bad to happen in order to know whether their steps are effective, says Schwartz. Red teaming can help them get ahead of that.

“Security is one of those areas it’s tough to get funding for,” says Sawyer. “It’s seen as a sinkhole … it’s hard because unless you have a breach or something is attacking you, how do you know that the stuff you’re investing in is doing a good job?”

How your company acquires red teaming capabilities depends on its size and budget. Many companies are building red teams in-house to improve security; some hire outside help.

“There are some ways to outsource red teaming and red teaming activities,” says Schwartz. “It’s a good way to start,” he notes, and smaller businesses can buy these skills from various consulting companies and in doing so, make a case for hiring an internal red team.

The main argument for building a red team internally is that it grows and improves along with the defenses. As security improves, so do the skills of red teamers. Offensive experts and defenders can attack one another, playing a cat-and-mouse game that improves enterprise security, he continues. Internal teams are also easier to justify from a privacy perspective.

Overall, the pros argue, a full red team can help you prepare for modern attackers who will scour your business for vulnerabilities and exploit them – and, unlike real adversaries, the red team will help you stop them.

“The difference between a red team and an adversary is, the red team tells you what they did after they did it,” Schwartz says.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/threat-intelligence/think-like-an-attacker-how-a-red-team-operates/d/d-id/1332861?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple