
Club Penguin Rewritten breach caused by rogue admin backdoor

Last Friday, the hugely popular gaming site Club Penguin Rewritten (CPRewritten) suffered a data breach that exposed four million user accounts.

The theft of account data including email addresses, usernames, IP addresses and passwords would be bad enough on its own, but this breach was made worse by the fact that it followed a separate breach in January 2018 affecting 1.7 million accounts, which was only made public more than a year later.

The cause of the latest breach? This, it seems, is where the story enters even darker territory.

According to someone connected to CPRewritten who contacted news site Bleeping Computer this week, the hack happened after hackers accessed a hidden PHP database back door put there by a former site admin last year.

Defending against breaches caused by vulnerabilities or misconfigurations is hard enough, but stopping hackers from abusing a weakness planted there deliberately is all but impossible unless the back door is detected first.

Identified only as ‘Codey’, this individual is said to have departed in February 2018 in strained circumstances that included alleged harassment of other staff.

July breach

CPRewritten launched in 2017 to continue the original Club Penguin (CP), which its owner Disney shut down in the same year.

A year later it was announced that CPRewritten, too, would be closing, a decision that was reversed a month later after extra funding was found.

It is claimed that the rogue admin wanted the site to close at that time for reasons that aren’t explained.

It’s not known who exfiltrated the data of four million accounts last week, but they clearly knew what they’d come for.

The breach is believed to have begun at around 11pm BST last Friday; roughly an hour later, an admin noticed that the server’s resources were under unusually heavy load.

CPRewritten only realised that this was connected to a breach the next day. By the time it took defensive measures, the hackers had already tried to “damage records and steal valuable accounts with rare virtual items [exchangeable for money] collected from the game”.

What to do

The first task is to change the account password, something the site will presumably require users to do anyway when they next log in (as far as we can tell, the ‘Padlock’ two-factor authentication is not yet available to turn on).

The fact that the stored passwords were hashed using bcrypt will be seen as good news. However, bcrypt isn’t a magic shield: weaker passwords might still be cracked by attackers with enough time on their hands.
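
For readers wondering what bcrypt actually buys, here is a minimal Python sketch using the widely available bcrypt package (the cost factor and sample passwords are illustrative assumptions, not details CPRewritten has published). It shows why a stolen hash still forces an attacker to grind through guesses one slow computation at a time.

    import bcrypt

    # At sign-up: gensalt() embeds a random salt and a work factor (rounds=12),
    # which makes every hash computation deliberately slow.
    password = b"penguin123"
    stored_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    # At login: checkpw() re-derives the hash using the salt and work factor
    # embedded in the stored value and compares the result.
    print(bcrypt.checkpw(b"penguin123", stored_hash))   # True
    print(bcrypt.checkpw(b"wrong-guess", stored_hash))  # False

An attacker holding the database must repeat that slow computation for every candidate password, which is why strong, unique passwords remain the user’s best protection.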

A bigger concern might be communication.

Both breaches suffered by the site were made public by the Have I Been Pwned? (HIBP) breach notification site that can also now deliver alerts of new incidents in Mozilla Firefox.

Or, to put it another way, the first breach took over a year to become public knowledge via a third party, and it’s still not clear what steps, if any, CPRewritten has taken to publicise last week’s incident beyond sending an email.

What users might value more is a clear explanation of what was compromised and how it happened from the horse’s mouth – not to mention more information on the steps being taken to stop it happening again.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/x6kzQPgIdNM/

Space agency uses Raspberry Pi to solve satellite encryption puzzle

How does the European Space Agency (ESA) communicate securely with satellites and space missions?

Surprisingly, until relatively recently it often didn’t – something which is still true for smaller, cheaper satellites such as CubeSats.

Now ESA hopes that an experiment consisting of a small module built around a tiny Raspberry Pi Zero board, controlled from a laptop on the ground, will close this security gap at very low cost.

It’s called the Cryptography ICE Cube (or CryptIC), measures only 10x10x10cm, and is the brainchild of a special ESA department called the International Commercial Experiments service, or ICE Cubes for short.

Currently installed on the Cygnus NG-11, launched in April 2019, the CryptIC box is a small unit shielded from the high radiation levels in space using a plastic coating.

However, while the coating protects the electronics from the worst of the radiation, it isn’t enough to stop interference with the microprocessors used to make encryption possible. ESA software product assurance engineer, Emmanuel Lesser, explains:

In orbit the problem has been that space radiation effects can compromise the key within computer memory causing ‘bit-flips’.

This is enough to disrupt communication as keys used on the ground and in space no longer match up.
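
To see why a single flipped bit breaks the link between ground and spacecraft, consider this minimal Python sketch (standard library only; the key, message, and use of HMAC are illustrative assumptions rather than ESA’s actual protocol):

    import hmac, hashlib

    ground_key = bytes(32)               # shared 256-bit key held on the ground
    space_key = bytearray(ground_key)
    space_key[0] ^= 0x01                 # radiation flips one bit of the copy in orbit

    message = b"telecommand: adjust attitude"
    ground_tag = hmac.new(ground_key, message, hashlib.sha256).digest()
    space_tag = hmac.new(bytes(space_key), message, hashlib.sha256).digest()

    # The two sides now compute different authentication tags for the same
    # message, so the satellite rejects perfectly legitimate commands.
    print(hmac.compare_digest(ground_tag, space_tag))   # False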

The traditional solution to this is to use radiation-hardened equipment, but this is expensive.

Instead, the CryptIC is testing the feasibility of using microprocessor cores based around customisable, field-programmable gate arrays (FPGAs), which in effect offer redundancy should one core be affected by radiation.

If one core fails then another can step in, while the faulty core reloads its configuration, thereby repairing itself.
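
In software terms, that failover is essentially majority voting across redundant cores. The Python sketch below models the idea in a few lines (the triple-redundant layout and function names are assumptions for illustration; CryptIC’s real FPGA logic isn’t described in this article):

    def vote(results):
        """Return the majority output from redundant cores, plus the indices
        of any cores that disagreed and need their configuration reloaded."""
        majority = max(set(results), key=results.count)
        faulty = [i for i, r in enumerate(results) if r != majority]
        return majority, faulty

    # Three redundant cores compute the same authentication tag; radiation
    # corrupts the output of core 1.
    core_outputs = ["a3f1", "ffff", "a3f1"]
    result, faulty_cores = vote(core_outputs)
    print(result)         # 'a3f1' - the system keeps working
    print(faulty_cores)   # [1]    - this core reloads its FPGA configuration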

A second track is looking at using a backup key built into the CryptIC itself, which can’t be compromised from the ground.

The module also integrates a small ‘floating gate’ CERN-designed dosimeter which measures radiation levels. Meanwhile, the team is even testing different flash memory chips to see which performs the best.

The team will begin testing the module’s defence against encryption bit-flipping within weeks, after which it will be left to run for a year to make sure it’s reliable enough to be used on live missions.

Evidently, the project still has some proof-of-concept work ahead of it. But given the recent dramatic growth of low-cost satellites, it’s perhaps surprising that nobody had got around to solving this complex problem until now.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tN4YYmqMBwE/

Convince your users to obey the cybersecurity rules: Tune in live online and find out how

Webcast Security professionals like you have a tough job. You can bang on about risks, threats, attack types and other scary stuff, explain the ins and outs of compliance, issue dire warnings about what might happen if your listeners don’t do the right thing – and they remain supremely unperturbed.

And all the time you know precisely who will be carrying the can when things go wrong. If you want to learn what you can do to ease your frustration and make yourself heard, the answer is at hand.

In our upcoming webinar, titled Lions and Tigers and Hackers, Oh My!, you will hear security blogger Graham Cluley, Robert O’Brien, CEO of cybersecurity software developer MetaCompliance, and The Reg’s own Jon Collins share expert advice that will help you sharpen your ability to cut through the knotty tangle of intransigence and avoidance.

As well as covering the usual security topics such as passwords, phishing and the like, the discussion will focus on a useful set of techniques, tricks and tips that can drive security awareness into the minds of the users you need to convince.

If you want the power to enable the best possible security practice, tune in to the live webinar, brought to you by MetaCompliance, on 12 September.

Sign up for the event right here.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/02/cybersecurity_webcast/

Why Every Organization Needs an Incident Response Plan

OK, perhaps that’s obvious. The question is, how come so many organizations still wait for something to happen to trigger their planning?

It’s human nature to procrastinate, especially when people aren’t quite sure of the right way to approach a task.

But when it comes to an incident response (IR) plan, the time to develop one is before a security breach occurs. Unfortunately, far too often it takes an incident to trigger planning.

And that, all security pros know, is far from ideal.

Why Do I Need an Incident Response Plan?
Having an IR plan in place is a critical part of a successful security program. Its purpose is to establish and test clear measures that an organization could and likely should take to reduce the impact of a breach from external and internal threats.

While not every attack can be prevented, an organization’s IR stance should emphasize anticipation, agility, and adaptation, says Chris Morales, head of security analytics at Vectra.

“With a successful incident response program, damage can be mitigated or avoided altogether,” Morales says. “Enterprise architecture and systems engineering must be based on the assumption that systems or components have either been compromised or contain undiscovered vulnerabilities that could lead to undetected compromises. Additionally, missions and business functions must continue to operate in the presence of compromise.”

The capabilities of an IR program are often measured by an organization’s level of maturity, which reflects how proactive the organization is. Companies that are able to map policies to the level of risk appropriate to the business are better prepared in the event of a security incident.

By way of example, Morales explains that the goal for a small business should be to reach a level of repeatable process, which includes having a maintained plan, concrete roles and responsibilities, lines of communication, and established response procedures. These are the necessary stepping stones that would allow it to appropriately address the bulk of incidents it would likely see.

“However, for organizations with highly valuable information with a high-risk level, a formal plan is not enough, and they need to be much more intelligence-driven and proactive in threat-hunting capabilities,” Morales says.

Starting from Scratch
Many companies find themselves in the position of having to start writing their IR plans from scratch, as was once the case for Trish Dixon, vice president of IronNet’s Cyber Operations Center (CyOC).

“It was an interesting dynamic to think that you can just jump right in and start writing an incident response plan when you haven’t really taken into account the rest of your company’s policies,” Dixon says.

Without knowing a company’s continuity plan, failovers, or its most critical systems, it’s impossible to write an IR plan that accounts for the impact an incident will have.

If, for example, the most critical part of the business is its infrastructure, you can’t have an effective IR plan without knowing how long it can be down before it starts costing the company money, Dixon says.

“From doing a business-impact analysis, it’s a lot easier to start mapping out and designing your incident response framework around that,” she says.

While there is no “right way” to design a plan, there are best practices, such as those set forth in the NIST framework, for creating and testing an optimal IR plan that will allow organizations to be more resilient in the event of a cyberattack.

At the very least, every organization should have a framework or concept down to understand the critical steps to take in the event of an incident.

“As you continue to evaluate your policies when you audit them, make sure the IR plan and policy are updated as well,” Dixon advises. Reviewing a policy once annually is absolutely not enough.

Auditing the IR policy quarterly is in line with best practices, but Dixon says organizations have to test it almost on a daily basis.

“You may have an event come in that’s not necessarily categorized as an incident,” she says, “but you should always refer back to your incident response plan to be able to say, ‘Had this event been this type of incident, what would we have done?'”

Measuring IR Success
Testing IR daily creates a necessary and inquisitive mindset that habitually asks “if this had been X” in order to determine whether an incident is escalated and who to contact. Companies need to gather as much information as possible so they can act on the presence of attackers.
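
One lightweight way to make that habit concrete is to encode the escalation decision itself, so the “what would we have done?” question always has a checkable answer. The Python sketch below is entirely hypothetical (the incident types, severities, and contacts are invented for illustration and not drawn from any specific IR plan):

    # Hypothetical escalation matrix: (incident type, severity) -> who to notify.
    ESCALATION = {
        ("phishing", "low"): ["soc-on-call"],
        ("phishing", "high"): ["soc-on-call", "ciso"],
        ("ransomware", "high"): ["soc-on-call", "ciso", "legal", "comms"],
    }

    def who_to_contact(incident_type, severity):
        """Answer: had this event been this type of incident, whom would we
        have notified? Unmapped combinations default to the SOC on-call."""
        return ESCALATION.get((incident_type, severity), ["soc-on-call"])

    print(who_to_contact("ransomware", "high"))
    # ['soc-on-call', 'ciso', 'legal', 'comms']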

Being proactive allows organizations to better react with a deeper understanding of the threat actor’s intentions and how the organization’s defenses relate to potential threats. That’s why threat awareness is one of the core metrics used to assess an organization’s maturity and capabilities for IR success, Morales says.

Every detail and every event that happens can help defenders decide what to do in response to an incident so they are better positioned to quickly and sufficiently isolate, adapt, and return to normal business operations should they ever encounter a worst-case scenario.

A lot of organizations begin with an incident response framework, such as NIST’s “Computer Security Incident Handling Guide,” and use that as a guide for developing a unique IR plan specific to the company. But understanding who all of the players are is one of the most critical starting points when developing or updating an IR plan.

Indeed, people can get tunnel vision within their operations centers and forget they may need to involve the business section, sales, and IT, so those people are not written into the plan, Dixon says.

What’s most important for organizations to keep in mind is that the IR plan needs to be applicable to their business.

“A framework is a framework. It’s a recommendation for best practices. It’s not meant to suggest that every situation is applicable to all organizations across the world,” Dixon says. “People need to be comfortable with adjusting the frameworks to apply to their organization.”

Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibition’s security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM’s Security Intelligence. She has also contributed to several publications, … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/why-every-organization-needs-an-incident-response-plan/b/d-id/1335395?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Black Hat: A Summer Break from the Mundane and Uncontrollable

Enjoy the respite from the security tasks that await you back at home. Then prepare yourself for the uphill battles to come. Here’s how.

Next week, security practitioners from across the globe will make their summer pilgrimage to Las Vegas for Black Hat, DEF CON, and other security gatherings. As in years past, there will be no shortage of surprises:

  • Attendees, press, vendors, and analysts will clamor for insight on a tactic or technique that will break what was once thought unbreakable.
  • A geopolitical event will cast a shadow over the week like the Edward Snowden and DIRNSA keynote did in 2014.
  • A vendor will have the most over-the-top party (my bet, Rapid7).
  • The funniest T-shirt will capture the spirit of this year’s get-together.
  • Attendees will be mesmerized by the latest hacking demo or “drop the mic” vulnerability announcement.

What’s more — and most important — attendees for one week can forget the less exciting, mundane, and more challenging tasks that await them back at home. Tasks such as patch management, identity management, and other basics that most affect the security health of an organization and over which security leaders have the most influence.

Why is focusing on the external and sensational far more compelling than the internal and controllable? The answer is what I describe as “breach fixation.” Here are four examples:

In Search of the EZ Button
The EZ button is what I call a popular trend in the corporate world in which executives attempt to solve a business problem in one fell swoop by implementing a technology solution or outsourcing the entire problem to a third-party provider. Instead of trying to make substantial progress on your own, you chuck the whole thing over to someone else and make it their problem. On the corporate side, think of business process outsourcing, where a company hands off a huge problem (IT and billing, say) and expects … “voilà!”, problem solved. Perhaps this reflects a relentless pursuit of the instant gratification derived from US fast food. Perhaps …

Internal Resistance
Security might be your job, but it’s just one more additional thing for laypeople in your organization to worry about. Aside from clear mandates on the topic, compliance-driven requirements, or a recent “near-death” experience, most organizations are still balancing security needs against the day-to-day pressures of winning more customers and increasing revenue. This is a good thing. Security is asking other people to improve the organization above and beyond what individual workers are held accountable for on a daily basis. It’s important to understand that this is the natural order and that security leaders are likely to encounter pushback on additional security controls.

Bias for Products over Processes
I get it. Product equals scalability. To make substantial progress on a security problem in a large 20,000-seat corporate environment you need technology. However, when the underlying risk decisions, business processes, and operations have not been addressed in a meaningful way, products only solve part of the problem and give security leaders a false sense of security. One example I come across in the application security world involves web application firewalls (WAFs). When the PCI DSS first mandated the implementation of WAFs to protect web applications, organizations went out and bought WAFs, implemented them, and in large numbers did not implement any semblance of blocking. WAFs without blocking are really glorified Layer 7 logging devices. Worse, they provide a false sense of security.
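
To make the “glorified Layer 7 logging device” point concrete, here is a deliberately simplified Python sketch of a WSGI-style request filter with blocking switched off; it stands in for any WAF left in detection-only mode (the rule list, the blocking flag, and the middleware itself are illustrative assumptions, not any vendor’s configuration):

    import logging

    SQLI_HINTS = ("' or 1=1", "union select")
    BLOCKING_ENABLED = False   # detection-only: exactly the anti-pattern described above

    def waf_middleware(app):
        def wrapped(environ, start_response):
            query = environ.get("QUERY_STRING", "").lower()
            if any(hint in query for hint in SQLI_HINTS):
                logging.warning("Suspicious request: %s", query)
                if BLOCKING_ENABLED:
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"Blocked"]
                # Blocking is off, so the attack is merely logged and passed through.
            return app(environ, start_response)
        return wrapped

Until that flag is flipped on and the rules are tuned, the “firewall” is doing nothing but writing log entries.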

Fixing Processes Is Hard
Let’s face it: Reengineering existing business processes to improve security is hard. Doing so requires a deep understanding of existing security processes, an understanding that most organizations don’t have outside of the security team itself. The expanding consulting ecosystem focused on providing clients feedback on NIST security processes reflects that. The different levels of the Capability Maturity Model Integration (CMMI) Scale show just how challenging process improvement can be:

  • Level 1, Initial: Processes are unpredictable, poorly controlled and reactive.
  • Level 2, Managed: Processes are characterized for projects and are often reactive.
  • Level 3, Defined: Processes are characterized for the organization and are proactive.
  • Level 4, Quantitatively Managed: Processes are managed and controlled.

As security practitioners privately know, most organizations are fortunate to achieve Level 2 and rarely are their security processes quantitatively managed and controlled. That’s because improving security processes is an uphill battle, though well worth the effort, especially after a welcome respite at Black Hat.

John Dickson is an internationally recognized security leader, entrepreneur, and Principal at Denim Group Ltd. He has nearly 20 years of hands-on experience in intrusion detection, network security, and application security in the commercial, public, and military sectors. As … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/black-hat-a-summer-break-from-the-mundane-and-uncontrollable/a/d-id/1335397?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Our hero returns home £500 richer thanks to senior dev’s appalling security hygiene

On Call Welcome back to On Call, a special corner of The Register where readers can share tales of their cries for help and the deaf ears on which they fall.

Today’s yarn, which includes a non-Linux-based solution to Active Directory woes, comes from a reader we shall call “Clive”, who was struck with a run of bad luck at the hands of the capricious employment gods.

Having taken the P45 walk of disappointment from his first post-university job, Clive was luxuriating in a new position of IT manager. Sadly, it was not to last.

“I was in the job barely three months when the managing director came to me and advised me that the company was struggling and that he needed to let me go.”

Cue an escorted second walk, accompanied by a box of belongings, directly out of the building.

If you’re expecting Clive, as a scorned IT manager, to indulge in nefarious and vengeful activities, you’d be wrong. He instead began looking for a new employer.

“About a week later,” said Clive, “I received a telephone call from the asset recovery agents of a well-known accountancy firm.”

It seems that the company had gone bust since his departure, Clive’s name and number had turned up, “and how would I like to earn £500 doing half a day’s work?”

Faster than you can say Homes Under The Hammer, Clive accepted: “All I had to do was get the administrators of the company back into the network, back up all the files to a supplied hard drive and submit an invoice.”

Simple stuff for the ex-IT manager, surely?

Clive met the administrator at his old workplace and found his old computer at his old desk, ready to go. Alas, whoever was in charge of security was better than whoever had been actually running the business and, unsurprisingly, Clive found his account password had been changed.

Anticipating a problem, Clive told us: “I had brought with me a password reset CD, that I had used several times before. I inserted it into the PC, booted the software and reset my password.”

Sadly, that approach had been thought of, and Clive discovered that his Active Directory account had, quite correctly, been disabled.

What to do?

Fearing that £500 might be about to slip through his fingers, Clive was trying to remember the login details of the backup account when he noted something strange.

“The senior web developer’s PC was switched on, odd as every other machine on the premises had been powered off.”

He sauntered over and gave the mouse a tentative jiggle. Would the screen turn on? Would there be another login box to taunt him?

No. “It was a beautiful page of Visual Studio code.”

Better still: “Right there in the middle of the code was this guy’s username and password in the clear.”

As is too often the case, the senior developer (or “code monkey”, as Clive described him) had an account festooned with admin rights. Certainly enough for Clive to log in as him, re-enable his old account and then retrieve the data required.

“A job well done,” he observed. Although probably not in reference to the practice of slapping cleartext credentials into a source file.
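
Clive’s windfall is a reminder of why credentials belong in configuration or a secrets store, not in source code. Here is a minimal before-and-after sketch in Python (the variable and environment-variable names are hypothetical; the original offending code was in Visual Studio, but the lesson is language-agnostic):

    import os

    # Bad: credentials baked into source, visible to anyone who can read the file.
    # USERNAME = "seniordev"
    # PASSWORD = "hunter2"

    # Better: pull secrets from the environment (or a secrets manager) at runtime,
    # so the source file never contains anything worth stealing.
    username = os.environ["APP_ADMIN_USER"]
    password = os.environ["APP_ADMIN_PASSWORD"]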

As for what happened next? Clive left the defunct company behind and went on to become a contractor. He even completed some of the projects left unfinished for grateful customers (for, we’d wager, a good deal more than £500).

However, “the events of the day,” concluded Clive, “left an impression on me.”

Ever saved the day thanks to a co-worker’s terrible working practices? Or perhaps, their final, vengeful act? Of course you have, and you should tell On Call all about it. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/02/on_call/

DARPA to Bring its Smart Ballot Boxes to DEF CON for Hacking

The agency this week will share the source code and hardware specifications for the secure voting system prototypes.

US Defense Advanced Research Projects Agency (DARPA) researchers will set up three new smart electronic ballot-box prototypes at DEF CON’s famed Voting Village next week in Las Vegas, but they won’t be challenging hackers at the convention to crack them: They’ll be helping them do so.

“We are providing the source code, specifications, tests, and actually even providing participants at DEF CON with an easy way of actually putting their own malicious software into [the devices],” explains Daniel Zimmerman, principal researcher with Galois, a DARPA contractor working on the project. “We’re not daring them but actually helping them break this.”

DARPA’s smart ballot box is the Defense Department agency’s prototype, featuring a secure, open source hardware platform that could be used not only in voting platforms, but also in military systems. It’s part of a broader DARPA project called System Security Integrated Through Hardware and Firmware (SSITH), which is developing hardware security architectures and tools that are better protected from hardware vulnerabilities exploited in software. DARPA ultimately hopes to build secure chip-level processors that thwart hardware hacks as well as software-borne attacks.

Zimmerman, whose team is developing methods and tools to measure the security of the processors, says the smart ballot box prototypes at DEF CON are a way for DARPA to get a broader evaluation of just how secure the processors really are. “This goes beyond ‘yes, it’s secure, or no, it’s not,'” he explains. The project is aimed at getting as comprehensive a security analysis of the technology as possible, meaning “a wider range of people being able to hammer on these systems to try to find flaws,” Zimmerman adds.

The DEF CON demonstrations are the start of a two-year public evaluation of the processors, he says. The team will release the source code and hardware specifications this week. “The source code will be out, the hardware specs will be out there,” he says, and by the end of the year, a “low-cost version of [the ballot box prototype] you can buy and hack at home.”

The smart ballot box, which is about the size of a two-drawer filing cabinet with a letter-sized printer lid on top, runs on a small embedded RISC-V processor with a FreeRTOS-based custom software app. There’s a separate touch screen where “voters” mark their votes, and a connected printer spits out the ballots. The touch screen and printer aren’t part of the hacking experiment: just the ballot box.

The smart ballot box reads the barcoded ballots to determine whether they are valid for the “election.” It allows voters to confirm their votes and either cast or ditch (aka “spoil”) them. “We’re not doing an end-to-end verifiability crypto system this year,” notes Zimmerman, but instead, a more visible verification process so participants can see the operation. DARPA instead is employing basic cryptography for the system to accept ballots.

He says hackers at DEF CON could, for example, try to compromise the ballot box to accept duplicate ballots or spoiled ballots. Or they could fool the box into reading a different result than the actual one on the ballot. “We will have a reporting system that takes the output from the ballot box and uses it to compute the election results so they then can be compared with pieces of paper in the ballot box,” he says.
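
The article doesn’t spell out the “basic cryptography” involved, but a hedged sketch of how a ballot box might validate barcoded ballots could look like the Python below (the barcode layout, key handling, and duplicate check are all assumptions made purely for illustration):

    import hmac, hashlib

    ELECTION_KEY = b"demo-election-key"   # provisioned into the ballot box before the election
    seen_ballots = set()

    def accept_ballot(barcode):
        """Assumed barcode format: '<ballot_id>:<hex HMAC over ballot_id>'."""
        ballot_id, _, tag = barcode.partition(":")
        expected = hmac.new(ELECTION_KEY, ballot_id.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False      # forged or unreadable ballot: reject
        if ballot_id in seen_ballots:
            return False      # duplicate ballot: reject
        seen_ballots.add(ballot_id)
        return True

A real system would also need to handle spoiled ballots and publish enough information for the electronic tally to be reconciled with the paper in the box, which is what the reporting system described above is meant to allow.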

But the DARPA smart ballot box is not anything close to a real prototype product or system. It’s all about providing an interesting system to hack and find holes. “This was never intended to be a viable product; we’re trying to be very clear about that,” he says. And each of three ballot boxes will be based on a different SSITH processor that DARPA has built.

Election systems are in the hot seat now, so putting out prototypes for that area is likely to attract more researchers than a less familiar military system might, he notes.

It Took a Village
DEF CON’s wildly popular Voting Village debuted in 2017, a year after the 2016 US presidential election was rocked by Russia’s online meddling campaign, raising concerns over how a nation-state or other threat actor could disrupt or tamper with election systems and voting machines. The Voting Village has served as a hands-on workshop, of sorts, for hackers or burgeoning hackers to take a crack at decommissioned voting systems, equipment, and simulated election websites. In the very first year, participants found two zero-day flaws within the first 90 minutes of the event.

There were 30 pieces of voting equipment in the room, including Sequoia AVC Edge, ESS iVotronic, Diebold TSX, WinVote, and Diebold Expresspoll 4000 voting machines. In 2018, there was even more voting machine equipment – and successful hacks – as well as a replica database that housed the real, publicly available state of Ohio voter registration roll. One attendee was able to break through two layers of firewalls in front of the server but ultimately couldn’t pull the data.

DARPA’s open source hardware, not surprisingly, is expected to be the hot feature of the Village this year. While the SSITH processors are unlikely to see the light of day in today’s commercial – and mostly proprietary – voting machines and election equipment in the foreseeable future, the project has security experts calling for more open voting system architectures.

“As far as open source hardware, I think it probably has a long way to go before we see it” in elections or other computing environments, notes Zimmerman.

Carsten Schuermann, an election security expert who famously hacked a WinVote voting machine at the very first DEF CON Voting Village, says open source is key to ensuring transparency of voting systems. He says he isn’t sold that open source systems necessarily mean better security, but they would provide election and government officials with better insight into how secure the voting and election equipment they buy and use really are.

“I believe voting machine vendors usually are trying to do their best [with security] within the budget they have, and they also do the minimum thing to satisfy the requirements the government gave them,” says Schuermann, who is an associate professor at the IT University of Copenhagen.

Like other experts, he worries about public confidence in election systems and their outcomes, especially in the wake of the 2016 US election. If vendors are keeping experts in the dark on their security, it can cause mistrust among the electorate, according to Schuermann.

Microsoft, meantime, has built an open-source election system software development tool called ElectionGuard, which employs vote verification via encryption methods so voters can confirm their votes were counted and election officials can verify results. The vendor demonstrated a prototype voting system last month and already has inked partnerships with voting system vendors such as Smartmatic and Clear Ballot. It also said Dominion Voting Systems is “exploring” using ElectionGuard in its products. 

ElectionGuard is not scheduled or expected to be part of the DEF CON Voting Village as of this posting. The full Voting Village schedule has not yet been released.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/darpa-to-bring-its-smart-ballot-boxes-to-def-con-for-hacking/d/d-id/1335418?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

PCI Security Council, Retail ISAC Warn Retailers on Magecart Attacks

Online card-skimming activities grew sharply this summer fueled by the availability of attack kits and other factors, Malwarebytes says.

The Retail and Hospitality ISAC (RH-ISAC) and the PCI Security Standards Council (PCI SSC) Thursday issued a joint bulletin warning e-commerce sites about the growing threat to payment security from online card-skimming activity.

The alert came out the same day as a report from Malwarebytes that noted a sharp increase this summer in activities by Magecart operators — an umbrella term for groups behind card-skimming attacks.

According to Malwarebytes, in July its security controls blocked some 65,000 attempts to steal payment card data via card-skimmers on compromised online stores. US-based shoppers represented 54% of those targeted in the Magecart attacks followed by shoppers in Canada (16%) and Germany (7%).

In addition to an increase in the number of compromised e-commerce sites, Malwarebytes also observed a steady increase in what it described as “spray and pray” attacks on e-commerce sites hosting code on Amazon S3 buckets.

Troy Leach, CTO at the PCI SSC, says this week’s bulletin with RH-ISAC stemmed from growing concern among stakeholders over the threat. “At our most recent PCI SSC Board of Advisors meeting, retail representatives identified this as an ongoing challenge to identify and monitor,” Leach says. “When we contacted the RH-ISAC and Payment Processors via FS-ISAC, they confirmed an increase in these attacks as well,” he says.

Online card skimming is not new. Magecart attacks have been happening since at least 2015. NuData Security, a Mastercard company, has estimated that Magecart groups have successfully compromised over 17,000 domains so far. Others have pegged the number much higher.

Magecart victims include numerous large organizations including British Airways, which was recently fined $229 million under GDPR over the incident, as well as Ticketmaster and Newegg.

Carlos Kizzee, vice president of intelligence at the RH-ISAC, says his organization does not have any numbers yet on the financial impact these attacks are having on online merchants. But breaches like the ones at British Airways and Newegg highlight just how significant it can be. “With trillions of dollars flowing through the retail and hospitality sector every year, it comes as no surprise that financial gain is the primary motivation for the majority of threat actors targeting this sector,” Kizzee says.

JavaScript Sniffers

In online card-skimming attacks, threat actors insert what’s often little more than a few lines of JavaScript code directly into an e-commerce website or into a third-party application or service that a site might be using. Some examples of third-party applications and components in which attackers typically conceal their JavaScript card sniffers include advertising scripts, visitor tracking utilities, live support features, and content management tools.

Magecart actors and other card-skimming outfits use a variety of methods to try to infect a website or third-party service, including exploiting vulnerable plugins, brute-force login attempts, phishing, and other social engineering techniques, the PCI SSC and RH-ISAC said in their bulletin.

The sniffers are typically designed to check which Web page the user is on, and are triggered when a victim submits card information during the checkout process. Attackers use the sniffers to collect credit-card data and associated information such as the cardholder’s name, billing address, phone number, and password. The stolen data is then either stored on the compromised server or sent to an attacker-controlled system, they noted.

The JavaScript sniffers can be very hard to detect and often the card-skimming activity takes place without the merchant knowing about it. The sniffers can also be very persistent: one in five Magecart-infected sites got re-infected in days, the two organizations said, quoting a third-party report.

Jerome Segura, director of threat intelligence at Malwarebytes, says multiple factors are driving the increase in online card skimming. Among them is the growing availability of skimmer kits for launching attacks, he says.

A kit called Inter for sale in underground markets has been especially popular among attack groups in recent months, he says. In a report earlier this year, Fortinet described Inter as a highly customizable, easily configurable skimmer available in underground forums for $1,300 per license.

“Most skimming attacks we see are a result of a breach of the e-commerce platform itself,” Segura says. Often, these are sites that haven’t been patched, or are vulnerable to brute-force attacks and other exploits. “Supply-chain attacks, where a third-party script has been compromised, are more dangerous, although not as common.”

The PCI Council and retail ISAC offered several best practices that online merchants can use to mitigate their exposure to the threat. To detect card sniffers, for instance, organizations should consider using file-integrity monitoring or change-detection tools, performing internal and external vulnerability scans, and conducting periodic penetration tests.
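
As a deliberately simplified illustration of that first recommendation, the Python sketch below hashes a site’s JavaScript assets and flags anything that differs from a known-good baseline captured at deployment time (the paths and baseline file are assumptions; a production deployment would use a dedicated file-integrity monitoring or tamper-detection product):

    import hashlib, json, pathlib

    BASELINE_FILE = "js_baseline.json"      # known-good hashes recorded at deploy time
    ASSET_DIR = pathlib.Path("webroot/js")

    def hash_file(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def check_integrity():
        baseline = json.loads(pathlib.Path(BASELINE_FILE).read_text())
        changed = []
        for path in ASSET_DIR.glob("**/*.js"):
            if baseline.get(str(path)) != hash_file(path):
                changed.append(str(path))   # new or modified script: possible skimmer
        return changed

    print(check_integrity())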

To prevent infection, organizations should patch security vulnerabilities, implement updated malware detection tools, limit access to critical data, and use strong authentication for access to key system components, they said.

“We want to note that a great amount of our emphasis is on the risks presented from beyond known third-party integrations,” RH-ISAC’s Kizzee says.

These may be an extension of third-party integrations that are generally not known by the companies that own and maintain the e-commerce websites. “They are thus a source of risk that companies are neither aware of, nor actively managing, in their risk management activities,” Kizzee says.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/pci-security-council-retail-isac-warn-retailers-on-magecart-attacks/d/d-id/1335420?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

47% of Android Anti-Malware Apps Are Flawed

Protection failures come at a time when malicious Android software is becoming more of a problem.

Mobile platforms are not free from malware. That’s why experts tend to recommend anti-malware protection for all mobile device users and platforms, assuming, of course, that the anti-malware software works. But new research on 21 Android anti-malware apps indicates this may be a very bad assumption.

Comparitech, which performs product reviews and comparisons, tested 21 separate anti-malware packages for Android and found 47% of them failed in some way. The protection software came from companies both large and small, with roughly a quarter coming from companies including AVL, ESET, Webroot, and Malwarebytes that also have desktop anti-malware products.

“We basically put test viruses on a bunch of Android phones and then ran them all through the various antivirus programs. Most of [the products that failed] just didn’t see the virus,” says Paul Bischoff, editor at Comparitech. A total of eight products missed the Metasploit payload used to test the anti-malware software: AEGISLAB Antivirus Free, Antiy AVL Pro Antivirus Security, Brainiacs Antivirus System, Fotoable Super Cleaner, MalwareFox Anti-Malware, NQ Mobile Security Antivirus Free, Tap Technology Antivirus Mobile, and Zemana Antivirus Security.

The failures come at a time when malicious Android software is becoming more of a problem. Lukas Stefanko, a malware researcher at ESET who compiled data on malware found on the Google Play store, found that in the month of July, Google hosted 205 harmful apps that were downloaded more than 32 million times. The most common malware, and downloaded most often, was in the form of hidden advertising malware, with subscription scams downloaded next most frequently, followed by stalkerware.

As bad as the missed malware may be, Comparitech’s report found three anti-malware apps that had more serious flaws — flaws that could actively endanger the privacy or security of the user. VIPRE Mobile, AegisLab, and BullGuard were each found to have critical issues.

VIPRE Mobile, for example, could leak users’ address books to attackers because of poorly implemented access control. The other apps had critical issues, as well. Comparitech reports it disclosed these vulnerabilities to the vendors during testing and that all have been patched, with the patches verified through additional testing.

According to Bischoff, one critical lesson from Comparitech’s testing is that organizations should perform their own tests before deploying any mobile anti-malware in the field. In addition to testing for basic efficacy, “You also need to take into account whether the apps are tracking you,” he warns.

He points out that some antivirus companies have been known to track user devices and be very aggressive in refusing to cancel subscriptions or change licensing terms. “There are a lot of things that an enterprise should take into consideration, whether it’s performance or whether they want their employees to be tracked by a third-party company through the app,” Bischoff says.

Privacy and security are, after all, two different things, he points out. “Even though these apps protect you from malicious attacks, they don’t protect you from themselves,” Bischoff says.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/mobile/47--of-android-anti-malware-apps-are-flawed/d/d-id/1335422?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cisco Pays $8.6M in First False Claims Suit for Vulnerabilities in Security Product

A security consultant reported vulnerabilities in Cisco’s Video Surveillance Manager in 2009 – but the company ignored the issues and fired the consultant.

Cisco Systems agreed on July 31 to pay $8.6 million to settle allegations that the enterprise-technology giant violated the US False Claims Act for years when it failed to patch vulnerabilities in its family of video-surveillance products while continuing to sell the devices and software to the US government.

The whistleblower lawsuit, brought by law firm Constantine Cannon LLP, alleged that Cisco violated the US act when it sold vulnerable video-surveillance systems to federal agencies, state governments, and the District of Columbia. The lawsuit originated from issues in Cisco’s Video Surveillance Manager that were reported in 2009 by a security consultant, James Glenn, while working for a Cisco partner. After initially listening to Glenn, the company fired him and continued selling the product, the lawsuit maintained.

“Cisco markets the product as particularly suited for government customers, and knows that the product is routinely sold to government customers, even though Cisco knows that these critical security flaws render the product largely ineligible for purchase by government entities,” the court filing states.

The settlement is part of a trend of companies having to increasingly pay fines for their lapses in cybersecurity. In July, a UK regulator notified British Airways that the airline will face a $229 million fine for a 2018 data breach, the largest fine to date under the European Union’s General Data Protection Regulation (GDPR).

For security companies, the settlement should be seen as a warning that they could be held liable for vulnerabilities in their products if the issues are not handled in a timely manner.

“The False Claims Act exists to protect the government from being sold products that fail to comply with their standards and specifications,” says Anne Hartman, a partner at Constantine Cannon. “That part is not novel, but with regard to the cybersecurity standards, this is the first time that the government has recovered money on that theory.”

Because the vulnerable product exposed government agencies to greater cyber-risk, the lawyers brought the whistleblower lawsuit under the US False Claims Act in federal court in the Western District of New York. Under the settlement, which is not a court ruling, Cisco agreed to partially refund the government. The $8.6 million penalty includes 20% paid to Glenn for his role in reporting the issue to the government.

“It will inspire other people to step forward and speak up when they have inside information about vulnerabilities and breaches,” says Mary Inman, a partner at Constantine Cannon. “We are now in an era when we are increasingly reliant on tech companies, and we need — more than ever — the voices of whistleblowers like James.” 

While Cisco accepted responsibility for the vulnerable product, the company partially blamed the issue on changing perceptions of the acceptability of vulnerable software.

“We intend to stay ahead of what the world is willing to accept,” said Mark Chandler, chief legal officer for Cisco, in a blog post published by the company. “Nothing illustrates better the way standards are changing than our engagement in resolving a dispute involving video security software products sold by us in Cisco’s fiscal years 2008 through 2013. In short, what seemed reasonable at one point no longer meets the needs of our stakeholders today.”

The fine is a relatively modest sum to pay to shut down an 8-year-old lawsuit. Cisco underscored that “the total sales at issue were well under one one-hundredth of one percent of Cisco’s total sales.”

Glenn, the consultant who brought the lawsuit, argues that companies need to be ready to work with anyone who reports a vulnerability in order to make their products better.

“Companies often see security researchers as troublemakers and disrupters, but the only way [that is true is] if their objectives are not the same as the researchers — to find these problems and get them fixed,” he says.

In 2013, Cisco published information on three vulnerabilities that would allow an attacker to gain access to the video information and “create, modify and remove camera feeds, archives, logs and users.” The vulnerabilities are the same as those originally reported in 2009, Glenn says.

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/cisco-pays-$86m-in-first-false-claims-suit-for-vulnerabilities-in-security-product/d/d-id/1335423?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple