STE WILLIAMS

20 Questions to Ask During a Real (or Manufactured) Security Crisis

There are important lessons to be learned from a crisis, even the ones that are more fiction than fact.

I’ve heard the statement “society doesn’t deal with problems until they become a crisis” many times. Unfortunately, this is often the case in information security, but it doesn’t need to be this way. As security practitioners, we can’t fix the ills of society. We can, however, learn how to distinguish a real security crisis from a manufactured one. Furthermore, from each crisis (real or manufactured) that we go through, we can learn how to avert them altogether.

In this spirit, I offer 20 questions to ask during a real or manufactured security crisis.

Image Credit: DuMont Television/Rosen Studios. Public domain, via Wikimedia

1. What is the threat that the issue at hand poses? Regardless of the noise surrounding a given situation, you need to understand the actual threat you’re dealing with. Conjecture and hype won’t help. Rather, you need to objectively understand how the threat could manifest itself as a risk to the organization.

2. What is the organization’s exposure to the threat? Once you understand the threat, you can evaluate your exposure to that threat. This needs to be done in order to fully understand the gravity of the situation.

3. What risk does this threat pose to the organization? Once you understand the organization’s exposure, you can assess the risk posed to the organization. This is where you really begin to understand how seriously to consider the threat and how aggressively to respond.

4. Is the hype surrounding this threat justified? Separating fact from fiction is important. If the facts support the hype surrounding a given threat, then it needs to be dealt with as such. However, if the facts tell a different story, it’s time to spin this one down.

5. Does the hype surrounding the threat translate to a real risk for the organization? If the risk is real, then it’s time to respond appropriately. That includes the communication necessary to keep the right stakeholders informed.

6. When did we first become aware of the issue? Were you just made aware of this, or have you been aware of it for quite some time? The difference is important. If you knew about a significant risk to the organization and didn’t act on it or escalate appropriately, that’s a fairly significant lapse in security.

7. Why wasn’t this raised earlier? If there is a reason, it can be addressed as part of continual process improvement. If there is no reason, it’s important to understand why.

8. Could we have avoided this issue? In many cases, an issue could have been avoided if risk assessment had been done more proactively, or if the attack surface had been reduced significantly. Not in all cases, of course, but it’s good to ask the question.

9. Why didn’t we avoid this issue? Once you understand how you could have avoided an issue, you need to ask why you didn’t.

10. Has any damage to the organization occurred? This is, of course, the quintessential question. If no damage occurred, you need to remediate the risk, learn from your mistakes, and be thankful. If damage has occurred, then you still need to remediate the risk, learn from your mistakes, and, of course, perform incident response.

11. What are the steps required to remediate the issue? If you need to respond and remediate, the first step is to map out the steps required to do so properly. Taking a few moments to get organized and ensure all bases are covered yields a higher-quality result and saves time down the line.

12. What are the lessons learned from this issue? After any issue is dealt with, lessons need to be extracted and studied. This allows the security organization to improve and mature.

13. Can we apply those lessons to avoid a similar situation in the future? Obviously, crisis mode is a last resort. If you can apply lessons learned, you can avoid making the same mistake.

14. What other potential crises might we encounter? Post-crisis is a great time to think outside of the box and do some analysis. Understanding what other pitfalls you may encounter allows you to mitigate those risks ahead of time and improve the security posture of the organization.

15. What else can we tighten up to avoid future issues? You may have patched, tightened controls, or improved monitoring after the crisis, but what else can you do to keep from having to relive this or a similar experience?

16. How can we ensure that our remediation of the issue will be effective? Your plan may sound good on paper, but to be more certain, map the technologies and applications the issue affects, then conduct a sanity check to see whether it will achieve your desired goals.

17. Have we verified that remediation was effective? If you’ve already remediated, have you tested to ensure that the remediation was effective? If not, you could be exposed to a recurrence.

18. What steps have we taken to avoid a similar situation in the future? You need to ensure that whatever remediation you’ve done, whatever lessons you’ve learned, and whatever improvements you’ve made are lasting and not a one-time fix.

19. Have we precisely and effectively communicated actions to management and executives? Regardless of whether or not you had a real crisis, whether or not you handled it appropriately, and whether or not you’ve made improvements to the security organization, your actions need to be documented and communicated to management and executives. This builds confidence in the security team’s ability and avoids excessive spin-up when the next issue arises.

20. Have we taken steps to avoid future damage? In the end, it all comes down to whether or not you avoid or minimize damage to the organization. This is perhaps the hardest question to answer. But it is likely the most important.

Related Content:

Black Hat USA returns to Las Vegas with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register. 

Josh (Twitter: @ananalytical) is an experienced information security leader who works with enterprises to mature and improve their enterprise security programs. Previously, Josh served as VP, CTO – Emerging Technologies at FireEye and as Chief Security Officer for …

Article source: https://www.darkreading.com/vulnerabilities---threats/20-questions-to-ask-during-a-real-(or-manufactured)-security-crisis/a/d-id/1335079?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

More Than Half of SMB Devices Run Outdated Operating Systems

66% of devices in small to midsized businesses run Microsoft OS versions that are expired or about to expire, an Alert Logic study found.

New research underscores security weaknesses in small to midsized businesses, including a dependence on antiquated Microsoft operating systems, encryption misconfigurations, poor patching regimes, and reliance on outdated Exchange 2000 email servers.

The findings, published this week by Alert Logic, demonstrate how resource-strapped SMBs increasingly are vulnerable in the face of today’s cyber threats.

Some 66% of SMB devices surveyed run Microsoft OS versions that are expired or will expire in the next six months. The majority of devices scanned by Alert Logic for the study currently run Windows versions that are more than 10 years old. Microsoft will discontinue support for Windows 7 and Windows 2008 Server on January 14, 2020.

“What we suggest is for [SMB] security pros to read the report, understand it, and then take the findings to their management so business executives can better understand why it’s important to make an investment in security,” says Jack Danahy, senior vice president for security at Alert Logic. “If they even do one thing, focusing on patching will make a big difference. They should also put a mitigation control in for better monitoring.”

Alert Logic also found other weak security practices by SMBs:

Encryption misconfigurations

According to the Alert Logic research, 42% of SMB security issues are related to encryption. While automated patching has helped to reduce the frequency of vulnerabilities, configuration remains a major issue. This includes misconfiguring SSL encryption, not configuring Amazon S3 buckets properly, and providing improper access credentials to employees.
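One quick way to catch the SSL side of such misconfigurations is to check which protocol version a server actually negotiates. Below is a minimal, standard-library Python sketch (the host name is a placeholder, and a real audit would also examine certificates and cipher suites):

```python
import socket
import ssl

# Protocol versions widely considered obsolete; a server negotiating one
# of these is a sign of TLS misconfiguration.
DEPRECATED = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.1"}

def is_deprecated(version: str) -> bool:
    """Classify a negotiated protocol version string (e.g. 'TLSv1.2')."""
    return version in DEPRECATED

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to a server and return the TLS version it negotiates."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Example (requires network access):
# print(is_deprecated(negotiated_tls_version("example.com")))
```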

Poor patching practices

75% of unpatched vulnerabilities among SMBs are more than one year old, according to the research. While automated updates have improved software patching, organizations are still having difficulty keeping up with all the updates.

Reliance on antiquated email servers

More than 30% of SMB email servers operate on unsupported software, according to the research. Despite email being the lifeblood of most companies, almost one-third of the top email servers detected were running Exchange 2000, which Microsoft stopped supporting nearly 10 years ago. 

Frank Dickson, research vice president at IDC who focuses on security, adds that there are four practical steps that SMBs can take to avoid security mishaps: make sure the company’s operating systems and applications are current; patch regularly; download all the updates (new versions of software); and use some form of multifactor authentication, whether it’s a finger scan, facial recognition, or an iris scan.

“So many of the problems can be solved by taking some common sense steps,” he says.

Alert Logic’s Danahy adds that many of the same problems existed 20 years ago, but people were less familiar with security issues.

“While I do think people underappreciate the complexity of an organization changing their operating system, I think we’re at a point where people are starting to look at security differently,” Danahy says. “The SMB folks recognize that security has become a serious challenge.”


Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/endpoint/more-than-half-of-smb-devices-run-outdated-operating-systems/d/d-id/1335142?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Patch Android! July 2019 update fixes 9 critical flaws

Depending on when users receive it, this week’s Android July 2019 patch update will fix 33 security vulnerabilities: 9 marked critical and 24 marked high.

If you own a Google Pixel device, the update should arrive within a day or two. Everybody else on the 2019-07-01 and 2019-07-05 patch levels (what these dates mean is explained here) running Android 7, 8 or 9 can expect to wait anything from weeks to months to catch up.

As usual, July’s batch of fixes covers flaws in significant parts of Android, including system, framework, library, and Qualcomm’s numerous components, including closed-source software.

However, as has been the case for some months, it’s the media framework that provides a disproportionate amount of the patching action in the form of three remote code execution (RCE) bugs marked critical.

These are CVE-2019-2107, CVE-2019-2106 (affecting Android 7 and 8), and CVE-2019-2109 (which only affects Android 9).

Another critical RCE flaw is CVE-2019-2111 in the Android system, with the remaining critical flaws all connected to Qualcomm’s closed-source components.

In contrast to Microsoft’s Patch Tuesday, Google rarely offers much detail on individual flaws during the initial patch release, restricting itself to the following generalisation:

The most severe vulnerability in this section could enable a remote attacker using a specially crafted file to execute arbitrary code within the context of a privileged process.

Google is able to be this vague primarily because:

We have had no reports of active customer exploitation or abuse of these newly reported issues.

Anyone interested in knowing a bit more about these should check the flaw CVEs on the US National Vulnerability Database (NVD) in a week or two when more information is added on each vulnerability.

Alternatively, vendors publish their own advisories which often feature more device-specific information – see the July 2019 update advisories for Samsung, Nokia, Motorola, LG, and Huawei.

Huawei

If you own a Huawei device, it should receive this month’s update without issue. As for updates after August’s, the company is due to make an announcement soon (users can find more information on Huawei’s website).

Depending on the version of Android, a device’s patch level (2019-07-01 or 2019-07-05) can be found under Settings > About phone > Android security patch level. For Android 9 it’s Settings > System > Advanced > System updates.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VcD8Nl6BnQM/

Miami police body cam videos up for sale on the darkweb

This can’t be a good day for Miami police.

We’ve known for a while that many webcams are a security train wreck, and that doesn’t change just because a police officer straps one on.

Now, unsurprisingly, police body cam footage has been found sloshing around online.

It’s not just that about a terabyte of videos from Miami Police Department body cams was leaked and stored in unprotected, internet-facing databases, according to the security outfit that found them. It’s that they were leaked and then sold, according to Jason Tate, CEO of Black Alchemy Solutions Group, who told The Register that his team had found the footage listed for sale on the darkweb.

Tate first tweeted about the discovery on Saturday, including a sample video, which has since been removed.

Tate said that the data is coming from five different cloud service providers. Besides Miami Police, there’s video leaking from city police departments “all over the US”, he said.

It seems these 5 providers have city contracts all over.

Known security SNAFUs

Last August, a security researcher – Josh Mitchell, a consultant at security firm Nuix – analyzed bodycams from five vendors that sell to US law enforcement agencies. He spotted vulnerabilities in several popular brands that could place an attacker in control of a camera and tamper with its video.

Mitchell found that the lack of security in the police bodycams included broadcasting of unencrypted, sensitive information about the device that could enable an attacker with a high-powered directional antenna to snoop on devices and gather information including their make, model, and unique ID. That information could lead to police getting stalked, since an attacker could track an officer’s location or even suss out when multiple police officers are coordinating a raid, Mitchell told a DefCon audience at the time.

Mitchell also found that some cameras include their own Wi-Fi access points but don’t secure them properly. An intruder could connect to one of these devices, view its files and even download them, he warned. In many cases, the cameras relied on default login credentials that an attacker could easily guess. This could lead to attackers tampering with evidence by replacing it with convincing deepfake footage. (That’s just one example of why the US Defense Advanced Research Projects Agency (DARPA) has been studying the problem of detecting deepfakes.)

Tate is well aware of the potential for evidence tampering. When somebody on Twitter pointed out that the footage and its associated metadata are “largely public records,” he said he knows that. That doesn’t mean it won’t lead to problems in evidence integrity, though, he said:

Miami Police Department must have felt the same way, since it looks like the department’s admins removed the videos from public access after Tate notified them about his findings. But the footage was publicly accessible for at least several days, he told The Register, giving hackers ample opportunity to copy videos from the databases and potentially sell them.

A spokesperson for Miami PD told The Register that the department is still looking into the claims and wouldn’t comment until it completed its review.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LKehV6k9BhE/

Georgia’s court system hit by ransomware

Georgia’s court system has been hit with what may be the fourth Ryuk ransomware strike against state and local agencies in the past month and a half.

At the time of publishing this article, the court system’s website was still down.

According to Atlanta’s Channel 11 News, officials confirmed on Monday that at least part of the court system’s network had been knocked offline by a ransomware attack.

Details about the extent of the damage haven’t been publicly disclosed, but officials say it’s much less severe than last year’s attack against Atlanta, which destroyed years of police dashcam video and froze systems. Six days after it was hit, Atlanta was still rescheduling court dates, police and other employees were still writing out reports by hand, and residents couldn’t go online to pay their water bills or parking tickets.

The earlier attack against Atlanta involved SamSam ransomware – a high-profile ransomware that was typically used in targeted attacks where attackers break into a victim’s network and launch ransomware manually, to cause maximum damage and disruption.

The crooks demanded what was then roughly $52,000 worth of bitcoin. That paled in comparison to the $2.6 million worth of emergency contracts the city initiated to claw back its systems, and to the six-figure ransoms demanded in similar targeted attacks by other gangs.

The nature of this latest attack on Georgia’s court system hasn’t yet been determined. Authorities said the extortionists’ note didn’t specify a ransom amount or particular demands. Although the attack doesn’t appear to be as crippling as the SamSam one from last year, authorities took the court network offline to stay on the safe side.

While few details were available as of Tuesday afternoon, there’s a hint that the Georgia assault might involve Ryuk ransomware.

On Tuesday afternoon, Ars Technica’s Sean Gallagher tweeted a followup to his writeup of the Georgia attack, saying that he’d heard back from the Georgia Administrative Office of Courts. He was told that while the malware hasn’t yet been identified, it left a message with contact information for ransom operators, which is “consistent with Ryuk and other targeted ransomware,” Gallagher said.

As Naked Security’s Mark Stockley detailed back in December, Ryuk – a relatively new strain of targeted ransomware – ascended just as SamSam’s influence began to diminish in August 2018.

If so, it might be the fourth Ryuk attack against state and local agencies since May. The first three were against Florida cities, though it’s not entirely clear whether Ryuk was involved in the attack against Riviera Beach. At any rate, the cities that have fallen prey to some sort of ransomware in the past few weeks are:

  • Riviera Beach, Florida, which agreed to pay attackers over $600,000 three weeks after its systems were crippled.
  • Lake City, Florida, which was hit on 10 June by Ryuk ransomware, apparently delivered via Emotet. Lake City officials agreed to pay a ransom of about $490,000 in Bitcoin.
  • Key Biscayne, Florida, which last week also got clobbered by an Emotet-delivered Ryuk attack. The city reportedly hasn’t yet decided if it’s going to pay the ransom.

On Monday, after its insurer had agreed to pay most of that $490K ransom, Lake City’s Joe Helfenberg confirmed that the city had fired its IT director, Brian Hawkins.

What to do?

For information about how targeted ransomware attacks work and how to defeat them, check out the SophosLabs 2019 Threat Report.

The bottom line is: if all else fails, you’ll wish you had comprehensive backups, and that they aren’t accessible to attackers who’ve compromised your network. Modern ransomware attacks don’t just encrypt data, they encrypt parts of the computer operating system too, so your backup plan needs to account for how you will restore entire machines, not just data.

For more on dealing with ransomware, listen to our Techknow podcast (available on Soundcloud or via iTunes).

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zWQw3zObpWU/

IoT vendor Orvibo gives away treasure trove of user and device data

Two billion items of log data from devices sold by China-based smart IoT device manufacturer Orvibo were found by researchers at web privacy review service vpnMentor, who discovered the data in an exposed ElasticSearch server online.

Orvibo has been selling products for smart homes, businesses, and hotels since 2011, ranging from HVAC systems through to home security, energy management, and entertainment systems. The back-end database appears to have been logging system events from lots of them.

Researchers Noam Rotem and Ran Locar found logs from Orvibo devices in China, Japan, Thailand, the US, the UK, Mexico, France, Australia, and Brazil, vpnMentor said in its report.

This data provides insights into the lives of Orvibo’s customers, creating potential security risks, it warned.

With over 2 billion records to search through, there was enough information to put together several threads and create a full picture of a user’s identity.

The logs discovered by the vpnMentor team contained various pieces of personal information, including email addresses, usernames, user IDs, and passwords. Orvibo’s developers had used the notoriously insecure MD5 hashing mechanism to protect the passwords. It had also failed to use a salt, which is a random string combined with the password that makes hashed passwords far more difficult to recover.
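To see why unsalted MD5 is considered so weak, consider this Python sketch (the passwords and wordlist are invented for illustration): an unsalted hash can be matched instantly against a precomputed table of common passwords, while a random per-user salt makes every stored hash unique. Production systems should go further still and use a slow key-derivation function such as PBKDF2 or bcrypt.

```python
import hashlib
import os
from typing import Optional, Tuple

# Unsalted MD5: identical passwords always hash to the same value, so a
# precomputed table of common passwords cracks leaked hashes instantly.
wordlist = ["123456", "password", "letmein", "qwerty"]
table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

leaked = hashlib.md5(b"letmein").hexdigest()
print(table.get(leaked))  # -> letmein

# Salted hashing: a random per-user salt makes every stored hash unique,
# so the precomputed table above is useless against it.
def salted_hash(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, str]:
    salt = os.urandom(16) if salt is None else salt
    return salt, hashlib.sha256(salt + password.encode()).hexdigest()

salt_a, hash_a = salted_hash("letmein")
salt_b, hash_b = salted_hash("letmein")
print(hash_a != hash_b)  # -> True: same password, different stored hashes
```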

The log data also included codes required for users to reset their accounts. vpnMentor said:

With this code accessible in the data, you could easily lock a user out of their account, since you don’t need access to their email to reset the password.

The code enables people to reset their email addresses too, meaning that an attacker could deny a user any chance of regaining their passwords.

Other information in the open database included family names, IP addresses, and the precise geolocation codes of the devices generating the logs. The logs displayed this data as latitude and longitude coordinates, vpnMentor said, adding:

This also demonstrates that their products track location in their own right, rather than determining location based on an IP address.

The researchers warned that attackers could use the logged data to disrupt a person’s home. For example, they could take control of security cameras, turn off electrical sockets and light switches, and even control smart locks.

Orvibo ignored repeated attempts by both vpnMentor and journalists at ZDNet to notify it of the breach over several weeks. As of Monday, the database was still publicly accessible, ZDNet reported.

Elastic turned off remote access to the free version of its software by default, binding it only to local addresses. However, users can change that configuration, potentially exposing their data to everyone if they make those servers public-facing. Elastic executives have denied responsibility for the many online data leaks, pointing instead to inexperienced users.

The company recently seems to have caved to user concern, making changes to the default, free version of its ElasticSearch database by introducing security features that users previously had to pay for. These included TLS for encrypted communications, and native authentication (meaning that there’s finally an easy way to put password protection on public-facing ElasticSearch servers out of the box).
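For those running a version that ships the free security tier, turning these features on is a configuration change. A sketch of the relevant elasticsearch.yml settings follows (setting names per Elastic's documentation; verify against your specific version):

```yaml
# elasticsearch.yml - enable the free security features
xpack.security.enabled: true                # native authentication (passwords)
xpack.security.transport.ssl.enabled: true  # TLS between cluster nodes
network.host: 127.0.0.1                     # bind locally unless exposure is deliberate
```

Once security is enabled, passwords for the built-in users can be set with the bundled elasticsearch-setup-passwords tool.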

Nevertheless, persuading users to update the software and then configure the new, free features will be slow. Expect to see a lot more exposed ElasticSearch records like these in the meantime.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6pRlaixtVsM/

$30/month email upstart Superhuman brought low with a blast of privacy Kryptonite

Superhuman, an email startup betting people who deal with a lot of messages will pay $30 a month for a more organized inbox, has come under fire for not providing privacy by default.

“Superhuman is a surveillance tool that intentionally violates privacy by notifying senders every time their emails have been viewed by recipients,” said Mike Davidson, VP at InVision and former design VP at Twitter, in a tweet last week.

“I would never trust this company. Only way to make sure your own privacy isn’t violated is to disable images in your own email app.”

He elaborated on his concerns over the weekend in a blog post, a few days after the New York Times published a mostly positive review of Superhuman with passing mention of privacy concerns. In the pre-July 4 US news drought, Davidson’s concerns stirred up discussion of the latest Silicon Valley attempt to turn commodity technology into a respectable revenue stream.

The Superhuman app is currently available on a limited basis – you have to request access and spend time on a waiting list. Beyond its aesthetic polish, it includes features like “AI triage,” “Undo Send,” social network analytics, reminders, scheduled messages and the ability to determine whether its messages have been opened – and therein lies the problem.

In the scheme of surveillance tools, Superhuman barely rates a mention. It’s not a stingray (IMSI-catcher) that poses as a cell tower to hoover up phone data; it’s not a mobile app cynically designed to gather personal info through a social media company or marketing SDK; it’s not a website littered with trackers or any of the apps offered for free from Google or Facebook; it’s not a Chrome extension designed to spirit passwords away.

It’s not actual surveillance software installed by Chinese authorities on the phones of travelers or one of many real-time video systems around the globe that captures public activity for potential law enforcement review.

The company’s alleged sin is that the Superhuman email client inserts a tracking pixel in outgoing messages by default. If you were following privacy issues in 1999, you might recall such things were once called web bugs.

“Superhuman calls this feature ‘Read Receipts’ and turns it on by default for its customers, without the consent of its recipients,” said Davidson.

Read Receipts are available to email users through a variety of mail client and browser extensions. But they’re generally not activated by default. The standard defense against tracking pixels is to load all messages as text rather than HTML and to tell your email client not to load images by default – provided that privacy function is available.
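The underlying mechanism is simple enough to sketch in a few lines. In this illustrative Python snippet (the tracker host and URL scheme are invented, not any vendor's actual implementation), the sender embeds an invisible 1x1 image whose URL encodes a per-message ID; if the recipient's client fetches remote images, the tracker's server logs the open. Refusing to load remote images prevents the fetch, which is exactly why that setting matters:

```python
import uuid

TRACKER_HOST = "https://tracker.example.com"  # hypothetical logging endpoint

def add_tracking_pixel(html_body: str) -> tuple:
    """Append an invisible 1x1 image whose URL identifies this message."""
    message_id = uuid.uuid4().hex
    pixel = (
        f'<img src="{TRACKER_HOST}/open/{message_id}.gif" '
        'width="1" height="1" alt="" style="display:none">'
    )
    return html_body + pixel, message_id

body, msg_id = add_tracking_pixel("<p>Hello!</p>")
print(msg_id in body)  # -> True
```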

In Google’s now discontinued Inbox app and in its current Gmail for iOS app, for example, there was no way to disable images from loading – which is exactly what you’d expect from a company in the advertising business that wants to make inboxes more amenable to marketers and indulges its data fetish as often as possible.

There are also a variety of privacy extensions designed to counter read receipts and related tracking technologies. The fact that these represent countermeasures against the status quo should give some sense of the pervasiveness of privacy abuses.

Davidson in his post acknowledges there are other services and tools that violate privacy. His point is that email clients should not do this by default and should give control over tracking to message recipients. Apple and Microsoft, not to mention LinkedIn, Signal and Twitter, he argues, have designed read receipts in an ethical way.

Complaining about privacy in this way has a certain charm, like Don Quixote tilting at windmills. It recalls all the privacy complaints that have been brushed aside by marketers since web browsing became a thing in the early 1990s. But it also has value in that it prompts tech investors to defend the privacy-trampling business models they bet on.

There’s venture capitalist Gary Sheynkman puzzling over why anyone conducting business via email wouldn’t want read receipts and then suggesting people can protect themselves (rather than have the company not do something requiring protection). And Nick Abouzeid, an investor in Superhuman, telling people to turn off images, “otherwise, it’s part of the platform and you made your own bed.” Then there’s Zak Kukoff, an investor at Emergence Capital, declaring this is a phony controversy because consumers will always trade privacy for “advancements in tech.”

The tone-deafness is astounding, particularly as legislators in Washington and in Europe appear to be fed up with the tech industry’s inability to regulate itself. Silicon Valley’s answer to pretty much everything has been “Can we do it?” regardless of the ethical implications. Governments and civil society increasingly are asking “Should we do it?”, while confronted with examples of why they should not.

Davidson argues Superhuman should do the right thing and protect privacy rather than contributing to its further erosion.

The Register asked Superhuman to discuss Davidson’s observations but we’ve not heard back. Perhaps the company is choosing to preserve its privacy. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/03/superhuman_email_biz/

Here’s a great idea: Why don’t we hardcode the same private key into all our smart home hubs?

Smart home company Zipato hardcoded the same private SSH key into every one of its hubs, leaving its system open to hacking, researchers revealed this week.

The eggheads at security shop Black Marble demonstrated in a blog post how that flaw, combined with two related vulnerabilities, allows them to access the hub and devices connected to it. The upshot: they can open your front door with a laptop.

Smart home hubs are a relatively popular way to manage a range of otherwise incompatible smart home products, giving people a simple, single way of controlling everything. But that same approach can be a security nightmare if the hub itself isn’t secure. And in this case, it was not.

Zipato’s controller, which used the Z-Wave wireless standard, had two security holes in its API – one local, one remote – that the researchers were able to exploit. They rated both as critical. Combine those with the somewhat baffling decision to hardcode the same private SSH key – one that provides root access to the device – into every hub, and you have a recipe for disaster.

The key was extracted by simply imaging the hub’s SD card: it appeared in the ‘/etc/dropbear/’ folder and was called ‘dropbear_rsa_host_key.’ The folder was password protected but easily cracked with some readily available software.

With that private key, the researchers were then able to delve into the hub’s inner workings and grab the device’s scrambled passwords. They then discovered that the hub’s API would accept the scrambled/hashed password, rather than requiring the actual username and password (this was the API vulnerability), and so it was relatively easy to pose as the owner of the hub and then command it to do what it is designed to do: turn things on and off.
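The API flaw here is a classic pass-the-hash design error, easy to show in miniature. In this Python sketch (the hash scheme and function names are invented for illustration, not Zipato's actual code), an endpoint that compares a submitted value directly against the stored hash turns the hash itself into a password equivalent:

```python
import hashlib

# Server-side credential store: username -> stored password hash.
users = {"owner": hashlib.sha1(b"hunter2").hexdigest()}

def login_broken(username: str, submitted_hash: str) -> bool:
    """Flawed check: accepts the stored hash itself as the credential."""
    return users.get(username) == submitted_hash

def login_correct(username: str, password: str) -> bool:
    """Proper check: hashes the submitted *password* on the server side."""
    return users.get(username) == hashlib.sha1(password.encode()).hexdigest()

stolen = users["owner"]                # attacker images the SD card
print(login_broken("owner", stolen))   # -> True: the hash alone opens the door
print(login_correct("owner", stolen))  # -> False: the hash is not the password
```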

Which, in the case of a smart lock, opens the door. A few lines of code and you’re in. Due to the root access, even if the hub is set up for multiple users, a hacker would have access to all user accounts. Or, in other words, be able to open every door.

Welcome internet users

The hack works remotely due to the same flaw, so if the hub is connected to the internet, anyone in the world can theoretically open your front door. Which is less than ideal. If the hub is local only, you’d need to be on the same Wi-Fi network to exploit it.

There are an estimated 100,000 Zipato devices in around 20,000 residences, often installed by third party providers.


The researchers did the responsible thing and waited until the issue was patched before publishing exploit details. The company has put out a software update that should fix the API holes and has scrapped the single hardcoded SSH private key.

From now on, every new hub will have a unique key. And Zipato has ditched its ZipaMicro hub in favor of an updated product. Which is all good, but you have to wonder why on earth the company used the same key for every device in the first place. That should be smart home product manufacturing 101.
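The per-device key fix is conceptually simple: generate a fresh secret for each unit at manufacture or first boot, rather than baking one constant into the firmware image. A hedged sketch, with a random token standing in for the private host key a real hub would generate with something like ssh-keygen (the serial numbers are made up):

```python
import secrets

def provision_device(serial):
    """Generate a fresh per-unit secret at manufacture or first boot.

    A real hub would generate an SSH host keypair here; a random
    256-bit token stands in for the private key in this sketch.
    """
    return {"serial": serial, "host_key": secrets.token_hex(32)}
```

Because the secret is produced per unit, extracting it from one hub's SD card tells an attacker nothing about any other hub – which is exactly the property the shared hardcoded key destroyed.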

Even big companies are vulnerable to these sorts of security flaws. In December, Logitech infuriated many of its customers when a third-party researcher discovered security holes in its API and the company decided the best solution was to disable its external software interfaces altogether, cutting off its customers’ meticulously built smart home setups. Logitech backed down after an outcry and developed patches for the holes.

The danger of hardcoded and default passwords is so significant that California passed a law last year that legally requires smart home manufacturers selling products in the state to include “a preprogrammed password unique to each device manufactured” or “a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.” It will come into force in January 2020. ®
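The law’s second option – forcing the user to set their own credential before the device grants any access – can be sketched as a simple state machine. This is a hypothetical device model for illustration, not any vendor’s actual API:

```python
class SmartDevice:
    """Sketch of a first-use credential requirement: the device grants
    no access until the user replaces the (absent) factory default."""

    def __init__(self):
        self._password = None  # no usable factory-default credential

    def set_password(self, new_pw):
        if not new_pw:
            raise ValueError("password must be non-empty")
        self._password = new_pw

    def login(self, pw):
        # Denied until a user-chosen password exists.
        return self._password is not None and pw == self._password
```

The point is that there is no window in which a shipped default (or no password at all) gets a remote attacker in: every login fails until the owner has chosen a credential.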

Sponsored:
Balancing consumerization and corporate control

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/03/zipato_hardcoded_key/

Russian ‘Silence’ hacking crew turns up the volume – with $3m-plus cyber-raid on bank’s cash machines

A prominent Russian hacker crew is seemingly expanding its reach – having just pulled off a multi-million dollar cyber-heist in Bangladesh, we’re told.

Moscow-based security outfit Group-IB told The Reg it believes the crooks, dubbed Silence, stole at least $3m (£2.4m) from Bangladesh-based Dutch-Bangla Bank via a string of cash-machine withdrawals over a span of several days.

The cyber-gang made a name for itself last year by breaking into various bank networks using purpose-built exploits and tools. The group is extremely small, possibly made up of as few as two people, though it appears to be highly skilled and armed with a considerable arsenal of malicious code written by its members.

In this latest caper, according to the authorities, the group was able to infiltrate the Dutch-Bangla Bank’s network, install malware on its PCs, and seize control of its card processing system, allowing them to, apparently, order individual ATMs to dispense cash without alerting the rest of the bank’s network.


With the card system under their control, the hackers then sent people from Ukraine – possibly group members, possibly hired money mules – to various ATM locations in Bangladesh to make fraudulent withdrawals, which the compromised backend duly approved. Group-IB said the mules were on their phones before each withdrawal, likely coordinating with the person remotely allowing the machines to dispense cash.

When all was said and done, Group-IB said, the criminals made off with at least $3m from Dutch-Bangla Bank alone.

The researchers believe the attack is the start of a larger campaign from Silence as the hacking operation looks to expand from regional attacks in Eastern Europe and move further into Asia in order to go after higher-value targets.

“Having tested their tools and techniques in Russia, Silence has gained the confidence and skill necessary to be an international threat to international banks and corporations. Asia particularly draws cybercriminals’ attention,” noted Group-IB head of dynamic analysis of malicious code Rustam Mirkasymov.

“Dutch Bangla Bank is not the first Silence victim in the region. In total, we are aware of at least four targets Silence attacked in Asia recently.”

By the time you read this, there should be more details over on the Group-IB website. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/03/silence_hacking_bangla/

We are shocked to learn oppressive authoritarian surveillance state China injects spyware into foreigners’ smartphones

Authorities in a tumultuous region of China are ordering tourists and other visitors to install spyware on their smartphones, it is claimed.

The New York Times reported today that guards working the border with Kyrgyzstan, in China’s Xinjiang region, have insisted visitors put an app called Fengcai on their Android devices – and that includes tourists, journalists, and other foreigners.

The Android app is said to harvest details from the handset ranging from text messages and call records to contacts and calendar entries. It also apparently checks to see if the device contains any of 73,000 proscribed documents, including terrorist-group material such as ISIS recruitment fliers and bomb-making instructions. China being China, it also looks for information on the Dalai Lama and – bizarrely – mentions of a Japanese grindcore band.

Visitors using iPhones had their mobes connected to a different, hardware-based device that is believed to install similar spyware.

hongkong

No Telegram today, protestors: Chinese boxes DDoS chat app amid Hong Kong protest

READ MORE

This is not the first report of Chinese authorities using spyware to keep tabs on people in the Xinjiang region, though it is the first time tourists are believed to have been the primary target. The app doesn’t appear to be used at any other border crossings into the Middle Kingdom.

In May, researchers with German security company Cure53 described a similar app, known as BXAQ, that was not only collecting data from Android phones but also sending the harvested information over an insecure HTTP connection, putting visitors at further risk from third parties who might be eavesdropping.

The remote region in northwest China has for decades seen conflict between the government and local Muslim and ethnic Uighur communities, with reports of massive reeducation camps being set up in the area. Beijing has also become increasingly reliant on digital surveillance tools to maintain control over its population, and the use of intrusive software in Xinjiang to monitor the locals has become more common.

Human Rights Watch also reported that those living in the region sometimes had their phones spied on by a police-installed app called IJOP, while in 2018 word emerged that a mandatory spyware tool called Jing Wang was being pushed to citizens in the region. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/02/china_snooping_app/