Pawnbroker pwnd: Cash Converters says hacker slurped customer data

Pawnbroking and secondhand goods outlet Cash Converters has suffered a data breach.

Customers were notified of the leak on Thursday by email, samples of which have been posted on social media.

Cash Converters said it had discovered that a third party gained unauthorised access to customer data within the company’s UK webshop.

Credit card data was not stored. However, hackers may have accessed user records including personal details, passwords, and purchase history from a website that was run by a third party and decommissioned back in September. The current webshop site is not affected, the firm said.

Cash Converters said it has reported the incident to authorities in the UK and Australia alongside launching an urgent investigation itself and taking steps to safeguard its site.

Troy Hunt, the researcher behind the popular haveibeenpwned breach notification site, said on Thursday that he wasn’t previously aware of the problem but the incident did bear the hallmarks of a breach at Cash Converters.

El Reg asked Cash Converters to comment but we’re yet to hear back. We’ll update this story as and when we hear more.

A spokesman for the ICO confirmed it was looking into the reported breach. “We’re aware of an incident at Cash Converters UK and will be making enquiries,” he said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/16/cash_converters_breach/

‘Reaper’: The Professional Bot Herder’s Thingbot

Is it malicious? So far it’s hard to tell. For now it’s a giant blinking red light in security researchers’ faces, warning us that we’d better figure out how to secure the Internet of Things.

Justin Shattuck contributed to this article. 

This isn’t your mama’s botnet. This is a proper botnet. If you were the world’s best Internet of Things botnet builder and you wanted to show the world how well-crafted an IoT botnet could be, Reaper is what you’d build. It hasn’t been seen attacking anyone yet, and that is part of its charm.

The interesting aspect of Reaper is not its current size but its engineering, and therefore its potential. From a pure research perspective, we’re most interested in how Reaper spreads: instead of targeting weak authentication like a common thingbot, Reaper weaponizes nine (and counting) different IoT vulnerabilities. Consequently, we think the current media focus on “the numbers,” instead of the method, is a tad myopic.

Size and Position
Brian Krebs puts the current size of Reaper at over 1M IoT devices. We have data suggesting it could grow to more than 3.5M devices, adding 85,000 per day. The reason Reaper could get so big is that, unlike its predecessors Mirai and Persirai, Reaper uses multiple attack vectors. Mirai used default passwords. Persirai used the blank username+password combo, which, frankly, we think is such a doofus security error on the part of the manufacturer that it barely deserves a CVE.

Reaper is almost showing off by not even trying password cracking, instead just exploiting different vulnerabilities (RCEs, web shells, etc.) in devices from nine different IoT vendors.

Reports on the “size” of Reaper vary. We’ve scanned 750,000 unique devices that match the nine vulnerabilities currently exploited by Reaper, and our scans regularly turn up 85,000 new “Reaper-compatible” devices per day. We don’t know which of them are actually infected, but there’s no reason Reaper itself couldn’t infect them, unless its authors didn’t want it to.

The nine vulnerabilities currently used by Reaper are fairly rudimentary, as vulnerabilities go. If the thingbot’s authors were to include a few dozen existing vulnerabilities that fit Reaper’s device-targeting profile, we think they could grow the thingbot by an additional 2.75M nodes, if they wanted to. Adding that 2.75M to the 750,000 that are currently “Reaper-compatible” gives the 3.5M figure. (Note: We will not be disclosing the additional CVEs, as that would simply expedite the authors’ exploits.)

The actual size of Reaper is probably limited to whatever size its authors want it to be. Right now it feels like its authors are experimenting. Building and testing. Maybe Reaper is pure research. We don’t know, and that’s kind of why we respect it.

Is It Malicious?
So far, Reaper hasn’t been seen attacking anyone with massive volumetric DDoS attacks. Yes, that’s a good thing. At least one of us thinks it might never be seen attacking anyone. If Reaper were to start being used as the ultimate Death Star weapon, that would cheapen its value. It would also result in active takedown campaigns.

Remember how at least two strike-back bots were created to combat Mirai after it attacked Krebs, OVH, and Dyn? Brickerbot actively wiped the filesystems of infected IoT devices (in many cases, turning them into little more than bricks). Hajime was more polite and merely blocked ports and left a cute little note informing the device owner that their device was participating in attacks and please stahp!

If Reaper starts attacking people with DDoS, it will turn from a marvel of thingbot infrastructure engineering into (yawn) another volumetric attack tool. The bot herders would be hunted down by law enforcement (à la the Mirai case), and the bot would be disassembled.

What Is It Doing?
Right now, Reaper is an object lesson for IoT manufacturers and security researchers. It’s like a giant blinking red light in our faces every day, warning us that we’d better figure out how to secure IoT.

[Image: F5’s depiction of Persirai, the mother of Reaper?]

We’ve been monitoring the Persirai botnet for the last six months. We regularly measured Persirai at 750,000 IP cameras. Persirai was never seen attacking anyone, either, and we speculated about what it could be doing. For example, besides DDoSing victims, there are about a dozen different ways that a bot herder could monetize a botnet of this size. Off the top of our heads, in no particular order:

  • Spam relays (each bot could send 250 emails a day)
  • Digital currency mining (increasingly unlikely, though)
  • Tor-like anonymous proxies, which can be rented
  • Crypto ransom
  • Clickjacking
  • Ad fraud
  • Fake ad and SEO injection
  • Fake AV fraud
  • Malware hosting

Reaper’s mission could be any one, or even several, of those.

Since Reaper is also composed of many digital video devices, we could speculate this: What if both Persirai and Reaper are actually surveillance networks? Think of the intel you could gather with access to millions of video cameras. Nation-states with active intelligence programs would be drooling all over themselves to get access to that data. The US, China, Russia, and North Korea are all obvious suspects because who else but a nation-state could process or store all the input?

Is There a Lesson Yet?
We expect to see more thingbots arise as IoT becomes the attack infrastructure of the future. Just because Reaper is the latest doesn’t mean it will be the last. If Reaper doesn’t attack anyone or give away its intentions, it may enter the same mythical space occupied by the Conficker worm of 2008. At its peak, Conficker infected over 10 million Windows computers and caused great concern because it could have done an insane amount of damage. But it was never activated, and it remains a study in bot construction.

The obvious lesson is that the state of IoT security is still incredibly poor, and we need to do a better job of threat modeling the Internet of Things.

Get the latest application threat intelligence from F5 Labs.

David Holmes is the world-wide security evangelist for F5 Networks. He writes and speaks about hackers, cryptography, fraud, malware and many other InfoSec topics. He has spoken at over 30 conferences on all six developed continents, including RSA … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/f5/reaper-the-professional-bot-herders-thingbot/a/d-id/1330439?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Death of the Tier 1 SOC Analyst

Say goodbye to the entry-level security operations center (SOC) analyst as we know it.

It’s one of the least glamorous and most tedious information security gigs: sitting all day in front of a computer screen, manually clicking through the thousands of raw alerts generated by firewalls, IDS/IPS, SIEM, and endpoint protection tools, and either ignoring or escalating them. There’s also the constant, gnawing fear of mistakenly dismissing that one alert tied to an actual attack.

But the job of the so-called Tier 1 or Level 1 security operations center (SOC) analyst is on track for extinction. A combination of emerging technologies, alert overload, and fallout from the cybersecurity talent shortage is starting to gradually squeeze out the entry-level SOC position.

Technology breakthroughs like security automation, analytics, and orchestration, and a wave of SOC outsourcing service options, will ultimately morph the traditionally manual front-line role into a more automated and streamlined process.

That doesn’t mean the Tier 1 SOC analyst, who makes anywhere from $40,000 to $70,000 a year and whose job responsibilities can in some cases include running vulnerability scans and configuring security monitoring tools, will become obsolete. Rather, the job description as we know it today will.

“The [existing] role is going away,” says Forrester principal analyst Jeff Pollard, of the SOC Tier 1 analyst job. “It will exist in a different form.”

Gone will be the mostly manual and mechanical process of the Tier 1 SOC analyst, an inefficient and error-prone method to triage increasingly massive volumes of alerts and threats flooding organizations today. Waiting for and clicking on alerts, using a scripted process, and then forwarding possible threats to a Tier 2 analyst to confirm them and gather further data just isn’t a sustainable model, experts say.

“I’ve never been a fan of the term ‘Tier 1 SOC analyst.’ The term itself is a symptom of a larger problem,” says Justin Bajko, co-founder and a vice president of new SOC-as-a-service startup Expel and the former head of Mandiant’s CERT. “There’s a lot of manual crank-turning, and I’m [the analyst] awash in a sea of alerts. My ability to do real analysis and add value to the business with clickthrough work … is pretty minimal.

“That’s where we are right now” with the Tier 1 SOC analyst, he says.

Bajko believes this manual role has actually contributed to the cybersecurity talent gap. “It’s not a great use of talent that’s out there,” he notes.

The Tier 1 SOC job not surprisingly has a relatively high burnout and turnover rate. Once analysts get enough in-the-trenches experience, they often leave for higher-paying positions elsewhere. Some quit out of boredom and opt for more lucrative and interesting developer opportunities.

Large organizations meanwhile are scrambling to keep their SOC seats filled while they begin rolling out orchestration and automation technologies, for instance, to better streamline operations.

“The majority of a Tier 1 SOC analyst’s job is just getting through the noise as best you can looking for a signal,” says Bajko. “It starts feeling like a losing battle with a bunch of raw and uncurated alerts” to go through, and sometimes multiple consoles that aren’t integrated, he says.

With the use of analytics, orchestration, and automation technologies as well as new SOC services that perform much of the triaging of alerts before they reach the analyst’s screen, the Tier 1 analyst can become more of an actual analyst, according to Bajko. “Instead of a sea of alerts, they can spend time being thoughtful about things they are looking at and make better decisions and apply more context.”
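
None of these vendors publish their pipelines, but the shape of the idea is easy to sketch. Below is a minimal, hypothetical Python illustration of that kind of pre-triage: enrich raw alerts with context, auto-close known noise, and escalate only curated records to a human. Every name, threshold, and IP value here is invented for the example; real orchestration platforms expose far richer enrichment APIs.

```python
# Hypothetical pre-triage sketch: enrich, suppress noise, escalate.
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str            # e.g. "firewall", "ids", "endpoint"
    severity: int          # 1 (low) .. 10 (critical), as scored by the tool
    src_ip: str
    context: dict = field(default_factory=dict)

# Benign internal scanners (example values only)
KNOWN_SCANNERS = {"203.0.113.7", "198.51.100.22"}

def enrich(alert: Alert, threat_intel: set) -> Alert:
    """Attach context so the analyst sees one curated record, not a raw event."""
    alert.context["known_scanner"] = alert.src_ip in KNOWN_SCANNERS
    alert.context["on_intel_list"] = alert.src_ip in threat_intel
    return alert

def triage(alerts: list, threat_intel: set) -> tuple:
    """Auto-close obvious noise, fast-track obvious badness, queue the rest."""
    escalate, queue = [], []
    for alert in alerts:
        enrich(alert, threat_intel)
        if alert.context["known_scanner"] and alert.severity < 5:
            continue                    # auto-close: routine internal scanning
        if alert.context["on_intel_list"] or alert.severity >= 8:
            escalate.append(alert)      # straight to a human, context attached
        else:
            queue.append(alert)         # batched for periodic review
    return escalate, queue
```

The point isn’t the toy rules; it’s that the mechanical clicking moves into code, and the human only sees alerts that already carry context.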

Greg Martin, founder of startup JASK, which offers an artificial intelligence-based SOC platform, says the Tier 1 analyst role is basically the data-entry job of cybersecurity. “We created it out of necessity because we had no other way to do it,” he says. But he envisions Tier 1 analysts ultimately taking on more specialized tasks such as assisting in investigations using intel they gather from an incident.

The Tier 1 SOC analyst will become more like the Tier 2 analyst, who actually analyzes an alert flagged by a Tier 1 and decides whether it should get escalated to the highly skilled Tier 3 SOC analyst for a deeper inspection and possible incident response or forensics investigation. Tier 2 analysts, who often kick off the official incident response process, also would get more responsibility in that scenario, and Tier 3 could spend more time on proactive and advanced tasks such as threat hunting, or rooting out potential threats.

“So Tier 1 would be able to figure out if [an alert is] real and Tier 2 would make decisions like we should isolate that machine,” for example, Forrester’s Pollard says. “Tier 1 won’t go away; it must move up to more advanced tasks.”

Today’s Tier 1 analyst drowning in alerts is at risk of alert fatigue. That could result in a real security incident getting missed altogether if it’s misidentified as a false positive (think Target’s mega-breach). “My big worry in the SOC is a Tier 1 analyst is under pressure to get through as many alerts as they can, and they make some bad decisions,” says Expel’s Bajko, who has built and managed several SOCs during his career. “I’m much more worried about calling a thing a false positive” when it’s not, he says.

Aggies in the SOC

Some SOC managers are already re-architecting their teams and incorporating new technologies. Take Dan Basile, executive director of Texas A&M University’s SOC, which supports the 11 universities under the A&M system as well as a half-dozen state government agencies on its network. Basile had to create a whole new level of SOC analyst to staff up: he calls it the “Tier .5” SOC analyst.

“We initially have Tier 1s, 2s, etc. But we have had a hard time even hiring full-time employees, much less hanging onto them for more than a year. We fully expect them to leave and go to industry and make three times what” a university can pay, Basile says.

So Basile got creative. The Texas A&M SOC partnered with several groups on campus to identify undergraduate students who might be a good fit for part-time SOC positions. The student Tier .5 SOC analysts work closely with Tier 1 SOC analysts, who oversee and perform back-checks on the students’ alert-vetting decisions. The students look at the alerts and then grab external information to put context around the alert. “They pivot and hand it up to a higher grade student or an official Tier 1 employee,” Basile says. “They’re doing that first false-positive removal.”

The Texas A&M Tier 1 SOC analyst then verifies the Tier .5’s work. “They send it on up if it’s okay,” he says.

Hiring undergrads helps fill open slots in more remote campus locations, for example, he says. There are some 250,000 users on the university’s massive network at any time, so there are a lot of moving parts to track. “Due to the location of some of these universities [in the A&M system], it’s just hard as heck to hire anyone in cybersecurity right now.”

Texas A&M recently added an artificial intelligence-based tool from Vectra to the SOC to help cut the time it takes to vet alerts, a process that often took hours to reach the action phase. The AI technology now provides context to alerts as well, and it takes only 15 to 20 minutes to triage them, Basile says.

The Tier 1 SOC analysts at Texas A&M are viewing results from the AI-driven tools, next-generation endpoint protection, and SIEM tools, he says. “They’re doing that first rundown: Is this really bad? Do I need to escalate it? Is this garbage? Or do I need to scream at the top of my lungs because it’s that bad?”

Basile says even with newer technologies that streamline the process, you still need person power. “I don’t see people moving away because of AI,” he says. You need people to verify and dig deeper on the intel the tools are generating. “AI is just providing you more information,” he says. “You will always need someone sitting there behind the screen and saying yes or no.”

It’s not about automating the SOC itself. “I don’t think you’ll ever automate away the job of SOC analysts. You need humans to do critical thinking,” Expel’s Bajko says.

Meantime, it’s still more difficult to fill the higher-level, more skilled Level 2 and 3 SOC analyst positions. “I’ve been looking for a good forensics person for a year now. I don’t even have the job posted anymore” after being unable to fill it, Texas A&M’s Basile says. The result: the university’s Tier 3 analysts have a heavier workload, he notes.

Meanwhile, the student SOC staffers get to acquire deeper technical experience. “Now they can dig into packet capture,” for instance, he says. “This gives entry-level people the opportunity to learn, and to find more bad things.”

That’s good news for entry-level security talent. While SOC Tier 1 jobs today are relatively low-tech, the positions often call for a few years’ experience in security, including analysis of security alerts from various security tools. Such qualification requirements make it even harder for SOC managers to fill the slots since most newcomers to security just don’t have the hands-on experience.

SOCs Without Tiers

Not all SOCs operate in tiers or levels of analysts. Mischel Kwon, co-founder of MKACyber and former director of the US-CERT, says she doesn’t believe in designating SOC analysts by level. “I don’t see my SOC in tiers, and a lot of people are not looking at tiers anymore,” says Kwon, whose company offers SOC managed services and consulting.

Placing analysts by tiers – 1, 2, and the most advanced, 3 – only made the job tedious for lower-level analysts, she notes. “It puts the more junior people into boring and pigeonholed activity. We really find that that exacerbates the turnover problem.”

Kwon says a SOC analyst should understand all things SOC. Her firm “pools” SOC analysts into groups, she says, rather than tiers. Pooling is not new, though:  “It’s been in sophisticated SOCs for at least [the past] 10 years,” she says.

MKACyber’s SOC strategy is similar to that of Texas A&M’s: pair up the junior analysts with more senior ones so they can learn skills from them. “No one wants to be Tier 1 and it’s hard to be Tier 3. But if you put them into pools working together, the junior [analysts] become midlevel very quickly, versus in a very stovepiped SOC,” Kwon says.

See Dan Basile, executive director of Texas A&M University’s SOC, present “Maximizing the Productivity and Value of Your IT Security Team” at this month’s INsecurity conference.

 

Related Content:

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/analytics/death-of-the-tier-1-soc-analyst/d/d-id/1330446?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Forget APTs: Let’s Talk about Advanced Persistent Infrastructure

Understanding how bad guys reuse infrastructure will show you the areas of your network to target when investigating new threats and reiterations of old malware.

Security staff put a lot of emphasis on advanced persistent threats, or APTs, and rightly so. They are extremely difficult to defend against if a hacker is specifically targeting an organization. But with everyone’s focus on APTs, we may be missing a different type of attack vector: advanced persistent infrastructure.

We tend to view threats in a silo, often ignoring correlating histories. By doing that, we miss vital information about attacks. In this case, intruders are using patterns that weren’t readily picked up in the past. They aren’t looking to buy a new server for every new attack. Instead, threat actors will reuse IPs and domain names across multiple campaigns.

The evolution of the Apache Struts vulnerability is a good example of how threat actors use advanced persistent infrastructure as an attack vector. In 2014, there were initial reports of exploits against the Struts vulnerability. In early 2017, new exploits were discovered in a Struts 2 vulnerability. We noticed the two exploits followed a very distinct pattern.

According to data submitted by qualified companies without attribution on TruSTAR’s threat intelligence platform, we can now see threats trending across major industry sectors like retail, financial services, cloud, and healthcare. For the past four weeks, indicators of compromise (IOCs) associated with Apache Struts 2 have been trending across all of the users who submit data to our platform. Looking back at historical report data on the Struts 1 and Struts 2 vulnerabilities, we found that the IP addresses used with the original Struts exploits are now being used with the new ones.

This led to some interesting observations:

  • Tactics May Change But IPs Don’t. Unless they are members of a big crime organization, most bad guys don’t have the money to buy new IP addresses and domains over and over again. Hence, when an IP address comes online, we should know exactly what it is tied to and its history.
  • Hackers Feed on the Lazy. The connections between Struts 1 and Struts 2 created a new reality: as is often the case, when a new zero-day exploit is reported, organizations are slow to move on patching these things. The bad guys know they have to act quickly to make use of the exploit. What they do is simply retool their favorite form of malware, and then use the infrastructure access they have in place, like IPs and domains, to launch the new attacks.

Recognizing how these IP addresses and domains are reused allows you to predict what may be coming down the pike. Look at your activity history; that will give you an idea of what to be on the lookout for. When you see a new version or variant of a known malware, monitor the old IPs and domains that directly correlate with it for new activity.
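
As a rough illustration of that advice (and not TruSTAR’s actual tooling), here is a minimal Python sketch that cross-references the indicators from a new campaign against a historical archive; any overlap flags reused infrastructure worth monitoring. The file names and the “indicator” column are assumptions made for the example.

```python
import csv

def load_iocs(path: str) -> set:
    """Load one indicator (IP or domain) per row from a CSV export.
    The 'indicator' column name is an assumption for this example."""
    with open(path, newline="") as f:
        return {row["indicator"].strip().lower() for row in csv.DictReader(f)}

# Historical indicators, e.g. from the original Struts exploitation reports
historical = load_iocs("struts1_iocs_2014.csv")

# Fresh indicators, e.g. from the new Struts 2 campaign
current = load_iocs("struts2_iocs_2017.csv")

reused = historical & current
if reused:
    print(f"{len(reused)} indicators reused across campaigns:")
    for ioc in sorted(reused):
        print(" ", ioc)  # candidates for priority monitoring and blocking
```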

By understanding how bad guys reuse infrastructure, you’ll have a better idea of the areas of your network to target when investigating a new threat, especially when it is a reiteration of old malware.

This research was provided by the TruSTAR Data Science Unit. Click here to download the IOCs currently associated with the Apache Struts 2 attacks.

Related Content:

 

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Curtis Jordan is TruSTAR’s lead security engineer where he manages engagement with the TruSTAR network of security operators from Fortune 100 companies and leads security research and intelligence analysis. Prior to working with TruSTAR, Jordan worked at CyberPoint … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/forget-apts-lets-talk-about-advanced-persistent-infrastructure/a/d-id/1330427?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Q: Why are you running in the office? A: This is my password for El Reg

A trio of Indian boffins have studied the use of smartphone accelerometers as biometric sensors and concluded they could be a handy way to identify users.

Unlike the collaboration between American and Hong Kong researchers who wanted to answer “who are you?” for ad-tracking purposes, the National Institute of Technology, Karnataka boffins’ research recommended using the accelerometer-ID to secure your mobe.

The researchers collected accelerometer data from ten users carrying a Galaxy J1 in their pockets, with a focus on how the test subjects walk, and sampled the sensor 50 times a second.

That yielded data whose physical inputs – how frequently your feet hit the ground, for example, and how hard – depend on who’s doing the walking.

From that, the researchers conducted the usual processes of feature extraction, modelling, and analysis to work out how well the phone’s understanding of gait can be turned into a useful identifier.

A final sample of 31 features, with 3,600 examples in the dataset, suggested this method could be an effective way to identify users: across the ten individuals in the test, the researchers’ model accuracy ranged from a worst performance of 93.85 per cent up to “subject 6”, which the model got right 99.7 per cent of the time.

The best-performing of the four modelling algorithms the researchers tested was the Random Forest Classifier, which averaged 96.79 per cent accurate identifications.
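
For readers curious what that modelling step looks like in practice, here is a minimal sketch using scikit-learn’s RandomForestClassifier. It assumes you already have the 31-column feature matrix (3,600 rows) and per-sample subject labels described above; the file names are placeholders, not the researchers’ artefacts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder inputs: X is the 3600x31 feature matrix extracted from the
# 50 Hz accelerometer traces; y holds the subject ID (1..10) for each row.
X = np.load("gait_features.npy")
y = np.load("subject_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

print(f"identification accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```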

Their study, published at arXiv, is under consideration for next year’s IEEE National Conference on Communications. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/16/smartphone_accelerometer_biometric_security/

Does UK high street banks’ crappy crypto actually matter?

The Register‘s recent story about the failure of most UK high street banks to follow web security best practices has provoked a lively debate among security experts.

Tests of six banks revealed sketchy support for HTTP Strict Transport Security (HSTS), a web security mechanism introduced in October 2012 and designed to protect websites against protocol downgrade attacks and cookie hijacking.
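
Checking whether a site actually sends the header is trivial, which is part of why researchers call the omission lazy. The sketch below uses Python’s requests library against a placeholder URL; a site with proper HSTS returns a Strict-Transport-Security response header with a long max-age (audits such as Helme’s also look for includeSubDomains and preload).

```python
import requests

def check_hsts(url: str) -> None:
    """Report whether an HTTPS site sends the HSTS header."""
    resp = requests.get(url, timeout=10)
    hsts = resp.headers.get("Strict-Transport-Security")
    if hsts is None:
        print(f"{url}: no HSTS header - downgrade attacks remain possible")
    else:
        print(f"{url}: {hsts}")

check_hsts("https://www.example-bank.co.uk/")  # placeholder URL
```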

Security researcher Scott Helme and encryption expert Professor Alan Woodward were both adamant that this was a serious failing, not least because updating to support the technology would be straightforward, but other experts aren’t so sure.

Martijn Grooten, security researcher and editor of industry journal Virus Bulletin, argued that lack of support for HSTS by banks has “little to no practical impact”. By contrast, excluding customers with insecure set-ups would be commercially damaging.

“Customers not being able to access online banking because the bank stubbornly insists on strong crypto is a far bigger concern than the crypto being broken,” Grooten said. “And rightly so.”


Per Thorsheim, an infosec researcher who founded the PasswordsCon conference, said that most banking fraud relies on planting malware on users’ computers or phishing rather than exploiting server-side encryption weaknesses. “Lack of HSTS is laziness, but not really a threat today,” he said.

El Reg invited RBS/NatWest (the worst performer) to comment on the poor security ratings of its websites and criticism over the failure to support HSTS. We’re yet to receive a substantive response. Independent security experts were, however, able to offer rationales for banks’ apparent tardiness in adopting up-to-date cryptographic technologies.

Software engineer Chris McKee‏ commented: “Change requests probably take about 200 meetings, five levels of management that don’t even understand the change and an audit trail nine miles long.”

Security consultant Kevin Beaumont said: “Lots use OSes [that] don’t support modern standards. PCI delayed implementation of standards for several years for this reason. Even the TLS 1.1 requirement [has been] delayed indefinitely now.

“The TLS 1.1 requirement is currently June 2018, however that has been delayed many times. The ironic thing is that POS [Point of Sale] devices are ‘exempt’ – but the biggest (and probably only) risk.”

Professor Alan Woodward, a computer scientist at the University of Surrey, stuck by his assessment in our original article.

“I think you’re all violently agreeing: HSTS is not a panacea but not to do it makes little sense, especially for sensitive data such as a bank,” he concluded. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/16/bank_security_crypto_reloaded/

Crouching cyber Hidden Cobra: US warns Nork hackers are at it again with new software nasty

The FBI and US Homeland Security have issued an alert about a new strain of malware infecting American corporate systems and stealing sensitive data.

The remote access trojan (RAT), dubbed Fallchill, is the work of a North Korean hacking group called Hidden Cobra, which some at US-CERT believe was responsible for the WannaCry ransomware outbreak. Businesses are urged to remove Fallchill as “the highest priority.” The Feds have published a list of IP addresses of public-facing machines infected by the software nasty, and sets of network intrusion detection rules, so IT admins can quickly find out if they’ve been hit.

Fallchill essentially opens a backdoor into infiltrated corporations, allowing its masterminds – likely to be Kim Jong-un’s North Korean government – to extract highly confidential blueprints and other documents.

“According to trusted third-party reporting, HIDDEN COBRA actors have likely been using FALLCHILL malware since 2016 to target the aerospace, telecommunications, and finance industries,” the Feds’ warning states. “The malware is a fully functional RAT with multiple commands that the actors can issue from a command and control (C2) server to a victim’s system via dual proxies.”

Fallchill gets onto Microsoft Windows computers via malware already in place on the machines, or via drive-by downloads against poorly patched browsers. Once on a system, the code opens faked TLS connections for communications with the outside world, ciphering the data with RC4 encryption using the following key: 0d 06 09 2a 86 48 86 f7 0d 01 01 01 05 00 03 82.
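
RC4 is a simple stream cipher, which is partly why defenders can replicate it to inspect captured traffic. Below is a minimal, textbook RC4 implementation in Python seeded with the key bytes quoted above. Treat it as an illustration of the cipher, not a ready-made Fallchill decoder: the advisory doesn’t spell out whether those bytes are used verbatim as the RC4 key or processed further.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream into data.
    # RC4 is symmetric, so the same call encrypts and decrypts.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Key bytes as listed in the US-CERT alert
KEY = bytes.fromhex("0d06092a864886f70d01010105000382")

captured = bytes.fromhex("deadbeef")  # stand-in for a captured C2 payload
print(rc4(KEY, captured).hex())
```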

It then communicates with its command-and-control center via a couple of proxies for obfuscation, and reports the type of computer infected, its IP and MAC addresses, system name, and processor type. It can then steal data, start up processes, fiddle with timestamps, and upgrade itself with new capabilities.

To lock down systems, the advisory recommends whitelisting applications and only letting admin-level staff install new software. Keeping up to date with patching and virus definition updates is also key, especially since Fallchill’s fingerprints have been defined and sent out to antimalware toolmakers.

For a country with low PC penetration and a pitiful internet architecture, North Korea punches above its weight when it comes to hacking, if attribution claims turn out to be correct. Students who show an aptitude for computer security are reportedly put into special school classes to hone their skills further and advance the ends of the cruel hermit regime.

With this latest outbreak, the FBI and DHS are asking IT admins who spot an infection to get in touch immediately and to take full forensic data to help in further investigations.

Finally, the Feds have emitted an advisory on another Hidden Cobra effort: Volgmer, which opens a remote-control backdoor on infected Windows PCs. It targets people working in the government, financial, automotive, and media industries using spear-phishing, and concentrates on infiltrating networks in India, Iran, Saudi Arabia, Taiwan, and so on. As with Fallchill, Uncle Sam has published guides on detecting the malware. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/15/hidden_cobra_north_korea_malware_fallchill/

The four problems with the US government’s latest rulebook on security bug disclosures

Analysis The United States government has published its new policy for publicly disclosing vulnerabilities and security holes.

The new rulebook [PDF] – and the decision to make it public – comes following a tumultuous 12 months in which Uncle Sam’s chief spy agency, the NSA, was devastated to discover that part of its secret cache of hacking tools and exploits had been stolen, offered for sale, and later leaked all over the internet for free.

The shockwaves from that cyber-weapon dump were felt across the world, with hospitals in the UK among organizations knackered by the WannaCry malware, which wielded the leaked NSA exploit code to infect vulnerable Windows computers.

There is no mention in the US government’s new “Vulnerabilities Equities Policy and Process” of the dangers of the NSA stockpiling security bugs: when the agency gets its hands on an exploitable vulnerability in a product, it may keep the details private so it can leverage the bug to quietly infect and spy on targets. For example, the aforementioned leaked cyber-weapons exploited one such stockpiled flaw in the Windows network file system code, and thus when the NSA toolkit was leaked online and into the hands of WannaCry’s developers, there was no patch available to protect users.

The very existence of the policy document is sufficient proof, though, that the widespread criticism of Uncle Sam’s approach to computer security was heard, and acted upon.

The most important part of the new policy is that it makes disclosure the default: when the US government discovers a new security hole, it should be reported to the relevant software companies. The policy states: “In the vast majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.”

The other good news is that the many arms of the US government have recognized the importance of speed in disclosing vulnerabilities and have written that into their disclosure rules.

If information on a specific vulnerability is released, the policy gives the US government seven days to react – which is extremely quick when you consider the size of the administration and the number of departments that need to be consulted: 10, according to the policy.

If someone within the US government discovers a hole, the process for reviewing and disclosing it is also notably swift:

  • A one-day notification period after the new “VEP executive secretariat” is informed, during which all the other departments are asked to react and respond.
  • Five days for a department to respond and raise any concerns about disclosing the security hole.
  • Seven days to reach consensus if someone does raise an objection.
  • Resolution within a “timely fashion”.
  • A goal of consensus but, failing that, a vote before disclosure.

That is all surprisingly efficient and pragmatic. But, of course, there are problems. And so far we have spotted four of them.

1. There is a massive NDA loophole

The policy notes: “The USG’s decision to disclose or restrict vulnerability information could be subject to restrictions by foreign or private sector partners of the USG, such as Non-Disclosure Agreements, Memoranda of Understanding, or other agreements that constrain USG options for disclosing vulnerability information.”

While it is important to note that there may be restrictions, legal and otherwise, on disclosing vulnerabilities, this part of the policy potentially allows organizations seeking to sell technical details on security holes to block disclosure by concocting an NDA.

2. There is no rating of risk

Typically software vulnerabilities are rated according to how potentially dangerous they are. Microsoft, for example, has four ratings of severity: low, moderate, important and critical.

This is useful to sysadmins, who then know whether to focus on a patch immediately or leave it to a more convenient time, although it must be said that Microsoft tends to label stuff as important when really anyone else would call it critical – but you get the point. A rating system also allows a broader assessment of what and how many vulnerabilities are being disclosed.

For example, 100 holes with a low severity rating aren’t worth as much as a single critical vulnerability because of the added risk and exposure that a critical hole brings with it.

But there is no mention of ratings in the VEP policy. As such, others will have to assess how significant a bug is – which seems like an unnecessary additional delay, especially since there is no way that the US government does not apply its own internal severity rating.

It is going to be hard to assess whether this new policy is actually achieving much without ratings: the NSA could publicly disclose 999 low and medium risk holes, and still keep five critical ones classified.

With ratings, it would be possible to compare what the US government is revealing with what commercial entities reveal – and so gain an approximation of how many zero-day flaws the USG is sitting on. Which is, presumably, why they have avoided ratings.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/15/us_governments_vulnerability_disclosure_policy/

Stealthy Android Malware Found in Google Play

Eight apps found infected with a new Trojan family that ups the ante in obfuscation with four payload stages.

A sneaky new Android Trojan family employs four payload stages in its attack rather than the more typical two stages, researchers say.

The Android/TrojanDropper.Agent.BKY family was found in at least eight apps in Google Play so far, but the damage has been limited. Each of the apps only had a few hundred downloads before Google pulled them from the store, according to ESET, which discovered the malware family and notified Google.

The attack’s use of four payload stages before delivering its final nastiness, a mobile banking Trojan, is rare. “Two-stage payloads are really common in the Android ecosystem. Four-stage malware on Google Play isn’t so common,” observes Lukas Stefanko, an ESET malware researcher.

With more payload stages, attackers are able to deeply hide the true intent of their payload.

“This one added some extra obfuscating layers – dropper and decryptor, plus a downloader – to hide its malicious purpose,” Stefanko explains.

The Attack

Once a user launches the app, it initially behaves like a legitimate app by mimicking its advertised functions and withholding suspicious permission requests. The first stage calls for the malicious app to decrypt and execute the second-stage payload. Both steps are invisible to users.

Inside the second-stage payload is a hardcoded URL, from which it downloads another malicious app, the third payload. App users are prompted to install this bogus but legitimate-looking app. In some cases, it’s disguised as Adobe Flash Player or an Android update.

“Once they see a request to install the third stage payload, it should become a bit suspicious for users,” Stefanko notes.

After the third payload, or app, has all its requested permissions granted, it decrypts and executes the fourth and final payload – a mobile banking Trojan. The malicious app takes users to a bogus login form to steal their credentials or credit card details, according to ESET’s report.
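
One generic way analysts hunt for this kind of nested, encrypted payload is to look for high-entropy blobs packaged inside an APK (which is just a ZIP archive). The sketch below is a hypothetical heuristic, not ESET’s method: it computes the Shannon entropy of each packaged file and flags anything near the ~8 bits/byte ceiling typical of encrypted data. Already-compressed assets such as PNGs also score high, so this surfaces candidates for inspection, not verdicts.

```python
import math
import sys
import zipfile
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; encrypted data tends to sit near 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def flag_suspicious(apk_path: str, threshold: float = 7.9) -> None:
    """Print packaged files whose entropy suggests an encrypted payload."""
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            ent = shannon_entropy(data)
            if len(data) > 4096 and ent >= threshold:
                print(f"{name}: {ent:.2f} bits/byte - possible encrypted payload")

flag_suspicious(sys.argv[1])  # usage: python entropy_check.py sample.apk
```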

One of the malicious apps ESET reviewed had been downloaded about 3,000 times, with the vast majority of the victims coming from the Netherlands.

Although the Android/TrojanDropper.Agent.BKY samples ESET came across delivered banking Trojans or spyware, the downloader could deliver any nefarious piece of code the attacker wants, the researchers say.

ESET came across the Android/TrojanDropper.Agent.BKY family in late September when its systems detected the apps dropping payloads in an unusual way. For now, it is not clear who is behind these attacks, Stefanko says.

Related Content:

 

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s … View Full Bio

Article source: https://www.darkreading.com/mobile/stealthy-android-malware-found-in-google-play/d/d-id/1330440?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fred Kwong: The Psychology of Being a CISO

Security Pro File: Fred Kwong learned people skills in the classroom and technical skills on the job. The former psychology major, now CISO at Delta Dental, shares his path to cybersecurity and how he applies his liberal arts background to his current role.

When Fred Kwong’s friends had Nintendo game systems, his home had a PC. The household computer sparked an early interest in technology, which persisted throughout the long, winding, sometimes blocked road that eventually led to his role as CISO of Delta Dental.

“My educational background and my IT background are completely separate,” Kwong notes. While he wanted to explore technology, finding an educational path was difficult. At the University of Wisconsin–Madison he encountered a choice between two majors: computer science and computer engineering. “Neither was what I actually wanted to do,” he adds.

As a student, Kwong learned programming languages like C++ and Fortran before deciding he was on the wrong track. “It drove me nuts,” he says. “I did not want to spend the next 30 years of my life programming.” He decided to take his tech education outside the classroom.

“All my IT learning has pretty much been the ‘school of hard knocks,’ or learning in the workplace,” he explains, and he continued to take part-time classes at a technical college while supplementing them with various tech-focused roles.

Kwong got his start in IT at Cibol, a help desk outsourcing company where he answered about 80 calls per day for the AOL help desk. There, he learned about modems and discovered he enjoyed helping people get online. But after a couple of years, he once again felt he was in the wrong place. His self-guided education continued at Zurich Insurance, where he worked as a “cable monkey,” learning networking and routing as part of the network team.

Zurich continued to be Kwong’s main source of IT education as he resumed full-time classes at Roosevelt University, where his studies fell far outside the technology field.

An Unconventional Path
“I went back to school for things that interested me,” says Kwong of his decision to double major in psychology and professional communications, partly inspired by his time in congressional debate as a high school student. “I wanted to learn about people — and what better way to learn about people than to study psychology?”

Kwong’s first foray into technical education was an MBA with a concentration in MIS. It didn’t take long for him to switch gears back into the psychology field. As he was finishing his MBA, a class in executive leadership inspired him to pursue his PhD in organizational development, where he found himself surrounded by a non-technical crowd.

“I was, quite honestly, a little bit intimidated at the time because I was in a room full of COOs and VPs of human resources, people who have pretty established careers,” he recalls. “And there’s me, this network engineer, in the PhD program, in a field that’s completely unrelated to my work.”

Kwong, sticking with the belief that effective communication would prove handy in any role, went on to complete his doctorate. A role as the network manager at Benedictine University introduced him to security. In addition to working on servers and telecommunications, he learned the ins and outs of firewalls and access control.

He worked his way up the security ladder through roles at CSC, where he was a network and data center manager, then US Cellular, where he was the senior infrastructure manager, and Farmers Insurance, where he built a privileged access management program and insider threat program. It was his last role before he had the opportunity to build security at Delta Dental.

Team Player
Kwong’s psychology background has, as expected, proven handy in his security roles.

“I would say that I have a heightened sense of awareness of folks I deal with,” he says. “A lot of times in the CISO role, it really is about building relationships and ensuring how to shift the culture of the organization from one that’s not necessarily security-minded to one that becomes security-minded.”

This is especially difficult at Delta, which has 39 member organizations and a large board of directors. Kwong says getting everyone on board with security can be a challenge; after all, security isn’t viewed as a revenue generator but as a cost model. All members have their own agenda, and he has to ensure security is part of each person’s mission and objective.

It’s a mindset he emphasizes across the company. Most breaches initially involve the human factor, he points out, and he has to change the mindset of employees to be security conscious.

“We do that via phishing campaigns, lunch and learns, having direct messaging that appeals to employees to secure themselves not only in the business but also at home,” Kwong explains. “It’s an emotional tie. We tie [security] to something that’s tangible to them, not just in the business but for personal use … that really shifts the change in the culture.”

When there is space open on his team, Kwong looks within the business. He built an internal program at Delta to help aspiring security professionals starting in low-level tech roles.

“We built a program where — and this is near and dear to my heart — help desk and desktop folks can intern with security folks to learn about security and see if it’s a good career path for them,” he explains, adding that many successful security pros come from different parts of IT.

For a month, interns learn about security tools and complete projects. If they are still interested in security at the end of the program, they can continue learning about it. When there is an opening in security, Kwong says, he can pull from an internal group of employees he knows has an interest in joining the team.

The internship program has since grown outside security to educate future employees for high-level IT roles in database management and networking, he adds.

Off the Clock

It’s hard to believe Kwong has any free time outside his roles as CISO and adjunct professor at Roosevelt University, where he now teaches organizational behavior and organizational development. But when he does, he uses it for volunteer work — and occasional photo shoots.

“There are a couple of organizations I really like to work with,” he says. Feed My Starving Children, which ships nutritional food to parts of the world without it, is one of them. Kwong says he puts together bundles of food, donates, and recruits people to help out.

Habitat for Humanity is another: Kwong enjoys volunteering with the organization and building homes in the Chicago area. “I like working with my hands,” he continues. “Plumbing, dry walling, all that fun stuff.”

Wedding photography is another favorite hobby and he enjoys snapping photos at occasional events for family and friends. Photography is fun, he says, but not always simple. It’s easy to take pictures of stuff when you have time to set it up. It’s harder at a wedding, when things are moving and you need to snap the right shot at the right time.

Kwong is modest about his work — “I don’t consider myself that good, quite honestly, and I feel like it’s a really hard craft,” he says — but his subjects seem to be big fans.

“I guess the best compliment I’ve gotten is, there have been times when people said ‘I wish we just hired you to be our photographer!'” he says. “It’s nice to hear.”

Personality Bytes

Worst day ever at work: 9/11/01 — my parents were both on separate planes that day, unsure of their fate.

First hack: Turned an old office chair into a swiveling TV stand

What your coworkers don’t know about you that would surprise them: Used to be an avid poker player

Security must-haves: Security awareness training, access control governance, vulnerability management

Business hours: Don’t apply in security

What keeps you up at night: Becoming the fall guy for a breach

Fun fact: Birds don’t urinate

Favorite hangout: Home

Comfort food: Ground beef and rice bowl

What’s in your music playlist right now: Billy Joel

What kind of car do you drive: Lexus RX 350

Favorite thing to do after a long day: Netflix binge watching

Actor who would play you in a film: Stephen Chow

Next career after security: Professor

 

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Related Content:

 

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/careers-and-people/fred-kwong-the-psychology-of-being-a-ciso/d/d-id/1330442?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple