Minigame: Celebrate Firefox 70’s release by finding a website with 70+ trackers blocked

Firefox turned 70 today, at least in terms of version, with an update focused on – surprise, surprise – security and privacy.

In an attempt to hammer home just how much Mozilla is looking after users, the company has added a Privacy Protections Report to illustrate how slurpy some parts of the world wide web can be.

The report follows September’s on-by-default Enhanced Tracking Protection (ETP) and will show a user just how many cross-site and social media trackers, fingerprinters and cryptominers the browser has blocked.

Firefox Monitor is also present, for keeping track of data breaches and flagging up compromised accounts and passwords. Users can additionally get a summary of the number of passwords stashed in the company’s login and password shack, Firefox Lockwise, as well as a view of all the devices Lockwise is syncing with.

Mozilla is keen to plug Lockwise, which requires a Firefox account to use. The app – which is available to desktop, iOS and Android users – now permits users to generate new secure passwords (“rather than the typical 123456”, which, astonishingly, continued to top the charts in 2018, ahead of old favourite “password”) and features an improved management dashboard.

The gang is also stripping path information from the HTTP Referer header sent with requests, to hinder site-to-site tracking. The feature first appeared in Private Browsing mode in January 2018.
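
Path stripping here simply means trimming the Referer value down to its origin before it leaves the browser. As a rough illustration of the idea (a minimal Python sketch, not Firefox's actual implementation):

    from urllib.parse import urlsplit

    def strip_referrer_path(referrer: str) -> str:
        # Keep only the scheme and host; drop the path and query string,
        # which could reveal exactly which page the visitor came from.
        parts = urlsplit(referrer)
        return f"{parts.scheme}://{parts.netloc}/"

    print(strip_referrer_path("https://example.com/account/orders?id=12345"))
    # -> https://example.com/

The destination site still learns which site you arrived from, just not which page on it.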

Lockwise faces stiff competition from LastPass and its ilk, while competing browsers such as Microsoft’s upcoming Chromium-based Edge will also cheerfully block trackers (and warn that going too far might break stuff).

That said, the Privacy Protections report is a very nice thing to look at, although we have to wonder how many users it will tempt from the competition. A quick glance at stat-wrangler NetMarketShare showed the browser had slipped from nearly 10 per cent of the desktop market to below 9 per cent over the last year (although it had at least managed to leapfrog Internet Explorer into second place behind the might of Chrome).

Taking mobile devices into account, the browser slumps to a sad-faced 4 per cent.

Protections aside, there is little in the release to reverse that trend. An improved JavaScript Baseline Interpreter will see page load performance improve “by as much as 8 per cent”, and compositor improvements in the macOS version have resulted in power efficiencies and page loads sped up by 22 per cent.

Firefox accounts for a hair under 5 per cent of macOS browser share.

Overall, this heavily privacy-focused update is handy, but not earth-shattering and unlikely to lure many over the fence. For some, it not being Chromium-based is reason enough.

There was also a certain joy to be had in seeing just how high we could get those tracker numbers just by visiting some day-to-day sites.

No, we aren’t going to name names. Glass houses and all that. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/22/firefox_70/

Messed Western: Vuln hunters say hotel giant’s Autoclerk code exposed US soldiers’ info, travel plans, passwords…

A security team for review site vpnMentor, led by Israeli researchers Noam Rotem and Ran Locar, recently found a publicly accessible AWS-hosted database owned by Autoclerk, a reservation system recently acquired by Best Western Hotels and Resorts Group.

The exposed database contained sensitive personal data for thousands of people around the globe, according to vpnMentor, including their hotel and travel reservations. Among those affected were US government and military personnel.

“Our team viewed highly sensitive data exposing the personal details of government and military personnel, and their travel arrangements to locations around the world, both past and future,” vpnMentor’s pseudonymous author “Guy Fawkes” said in a blog post on Monday. “This represented a massive breach of security for the government agencies and departments impacted.”

The researchers claim to have viewed logs for US army generals traveling to Moscow, Tel Aviv, and other destinations, among other sensitive details. And they also say they encountered unencrypted login details for connected services during their probes of the system.

Exposed reservations revealed customers’ full names, dates of birth, home addresses, phone numbers, dates and costs of travel, and masked credit card details. On some reservations, this included hotel guest check-in times and room numbers.

A spokesperson for the US Department of Defense told The Register that the DoD is looking into the company’s claims but had no information to provide at the time this story was filed.

The researchers say they discovered the database on September 13 and notified CERT the same day. After receiving no response, they contacted the US Embassy in Tel Aviv on September 19 about the lack of CERT response, managed to reach someone at the Pentagon on September 26, and finally on October 2 they saw the database closed.

Autoclerk provides reservation services for multiple travel-oriented companies, including hotels and travel agencies, and travel platforms such as HAPI Cloud, OpenTravel, and Synxis. One of the affected platforms belongs to a contractor who handles travel arrangements for US military personnel, claims Fawkes.

The researchers say the database they explored contained more than 179GB of data and speculate that much of it came from external travel and hospitality platforms – the exposed Autoclerk database connected these external systems and allowed them to interact with one another.

“Whoever owns the database in question uses an Elasticsearch database, which is ordinarily not designed for URL use,” said Fawkes. “However, we were able to access it via browser and manipulate the URL search criteria into exposing schemata from a single index at any time.”
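
For readers unfamiliar with what “manipulate the URL search criteria” means in practice: Elasticsearch exposes a REST API, so an instance left open to the internet can be enumerated and searched with nothing more than crafted URLs. A hedged sketch of that kind of probing, where the host, port and index name are placeholders rather than Autoclerk's actual setup:

    import requests

    # Hypothetical exposed Elasticsearch instance -- placeholder address only.
    BASE = "http://exposed-db.example.com:9200"

    # Ask the server to list its indices; an open instance answers anyone who asks.
    print(requests.get(f"{BASE}/_cat/indices?v").text)

    # Search a single index purely via URL query parameters.
    resp = requests.get(f"{BASE}/reservations/_search",
                        params={"q": "*", "size": 5})
    print(resp.json())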

vpnMentor did not immediately respond to a request to clarify whether the Autoclerk database is an Elasticsearch database or whether it interfaces with a separate Elasticsearch database. We’d also like to know why Fawkes claimed only “1000s” of people had personal details exposed when the database is said to have contained “100,000s” of booking reservations amounting to 179GB of data.

In any event, it’s not uncommon for security pros to manage to probe improperly secured databases by crafting data queries using URL parameters.

Fawkes said the researchers identified the database with port scans conducted for a web mapping project.

Neither Autoclerk nor Best Western responded to requests for comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/22/autoclerk_army_data/

FIDO-Based Authentication Arrives for Smartwatches

The Nok Nok App SDK for Smart Watch is designed to let businesses implement FIDO-based authentication on smartwatches.

Nok Nok Labs today debuted the Nok Nok App SDK for Smart Watch, bringing FIDO-based authentication to smartwatches at a time when global smartwatch shipments are spiking.

The move will help organizations provide consumers with a flexible, secure way to log in through a unified back-end infrastructure that now includes smartwatches in addition to mobile applications, mobile Web, and desktop Web, says Nok Nok Labs, a founding member of the FIDO Alliance. The company started with authentication for native mobile apps on Android and iOS, then went on to Chrome and Firefox on mobile, followed by Safari, Edge, Chrome, and Firefox on the desktop. Now, Nok Nok Labs is bringing authentication support to smartwatch apps.

Its announcement arrives following a 44% increase in global smartwatch shipments, which reached 12 million units in the second quarter of 2019, Strategy Analytics reports. Once mostly used for fitness, smartwatches now have several use cases: banking, e-commerce, productivity, and home security apps among them. One in 10 Americans is expected to own one this year.

Nearly 40% of people are planning to buy a smartwatch so they don’t have to pull out their phones to view information or notifications, Nok Nok Labs reports. In newer smartwatches, like the Apple Watch Series 3, LTE support means users won’t need to be tethered to their phones at all.

The growth of smartwatches, particularly among young consumers, drove Nok Nok Labs to launch the Nok Nok App SDK for Smart Watch, says Dr. Rolf Lindemann, vice president of products at Nok Nok Labs. Its new standards-based controls will tackle the issue of authentication on smartwatches with a solution that governs access control directly on the device, regardless of whether it’s directly attached to the network or tethered to a smartphone.

Nok Nok Labs already supports use of the smartwatch as an authenticator for a phone client. Now smartwatch owners will be able to use the watch both as the client and the authenticator.

“When we first looked into the smartwatch, in general, we saw the smartwatch primarily [as a] replacement for the dedicated security key,” Lindemann says. “As opposed to a security key, a smartwatch is something you already have with you.” People would prefer to keep their smartphone in their pockets and more heavily rely on the functionality in a smartwatch, he adds.

Further, Lindemann continues, most smartwatch wearers don’t want to purchase a security key to use as an authenticator. It’s simply easier for them to authenticate using a watch they already have.

Most technologies that let users access sensitive data via smartwatch store OAuth tokens or other bearer tokens in their smartwatch applications, Nok Nok Labs said in a blog post on today’s announcement. These tokens provide “relatively weak” authentication and must be renewed often because they lack strong binding to the device. The new SDK lets developers standardize on FIDO-based authentication infrastructure for smartwatch applications, eliminating the need for weaker bearer tokens and the requirement to expire and renew them.
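
For context, a bearer token is exactly what the name implies: whoever presents it is let in, with no cryptographic tie to the device that originally obtained it. A minimal sketch of the pattern being replaced, with a hypothetical endpoint and token:

    import requests

    # Anyone holding this string can replay the request from any device --
    # which is why bearer tokens are "relatively weak" and expire quickly.
    token = "eyJhbGciOi...example"

    resp = requests.get(
        "https://api.example-bank.com/v1/balance",
        headers={"Authorization": f"Bearer {token}"},
    )
    print(resp.status_code)

A FIDO credential, by contrast, signs each login challenge with a private key that never leaves the device's hardware, so a stolen blob of data cannot simply be replayed elsewhere.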

With standards-based controls that govern access control on the watch, employees can use enterprise applications and view sensitive data that is centrally managed with policies designed to limit access to certain users, noted Steve Brasen, research director with EMA, in a statement.

The Nok Nok App SDK for Smart Watch will first be available on Apple Watch, which has consistently been the most popular smartwatch on the market. Between 2015 and 2019, 98.5 million Apple Watches were sold, compared with 93.1 million units of all other models combined.

Article source: https://www.darkreading.com/iot/fido-based-authentication-arrives-for-smartwatches/d/d-id/1336148?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Alliance Forms to Focus on Securing Operational Technology

While mainly made up of vendors, the Operational Technology Cyber Security Alliance aims to offer security best practices for infrastructure operators and industrial partners.

Citing the increased risks to industrial control networks and operational technology from cyberattacks, 12 cybersecurity software makers and service providers launched the Operational Technology Cyber Security Alliance (OTCSA) on October 22.

The alliance is the fruit of more than a year of work by corporate members and aims to create best-practice and implementation guidance for companies that monitor and control industrial and critical infrastructure using networked devices. The group hopes that the focus on cybersecurity will help operators harden their networks and make their operations more resilient to cyber-physical risk.

“The alliance plans to go beyond the ‘what’ and move to the ‘how,'” says Kevin Dunn, senior vice president at the NCC Group, one of the founding members. “Initial deliverables released at launch will help to define the problem space…. The alliance will move to providing implementation guidance that helps operators with very specific guidance in the near future.”

Cyberattacks on operational technology — and the real-world impact of such attacks — increasingly worry security professionals. Almost a quarter of operational networks — 22% — show signs of compromised systems, according to a report based on data from more than 1,800 operational networks and produced by critical-infrastructure security firm CyberX. Such networks are highly vulnerable: Windows systems that are not up to date are running in 62% of them, the company found in the report, published on October 22.

“The data clearly illustrates that [Internet of Things and industrial-control system] networks continue to be soft targets for adversaries,” the report stated.

Other studies have found similar patterns. While more than half of companies have suffered a cyberattack or outage in the past year, only 42% of utility professionals believe their company is ready for a cyberattack, and 35% have failed to even put a plan in place to respond to an operational technology (OT) attack, a Ponemon Institute report found.

Earlier this month, cloud security firm Netskope discovered that an unknown attacker targeted companies in the US petroleum industry using a remote access Trojan that had been used against retailers in previous attacks.

The alliance currently consists of 12 companies: ABB, Check Point Software, BlackBerry Cylance, Forescout, Fortinet, Microsoft, Mocana, NCC Group, Qualys, SCADAfence, Splunk, and Wärtsilä.

While the organization is entirely made up of OT and IT vendors at launch, critical infrastructure and industrial-control operators have been involved in the founding of the organization, and many are expected to join, NCC Group’s Dunn says.

“Operator members have been involved in the foundation and architecture of the OTCSA, but indeed many more are needed to actively participate in the alliance going forward,” Dunn adds. “The vendor members that are participating all represent vital parts of the OT and IT landscape from an asset and security perspective.”

The alliance expects to quickly increase its membership and publish new materials in the next six months. The alliance hopes the launch will help consolidate interest, adds Satish Gannu, chief security officer for robotics and industrial equipment maker ABB, also a member of the OTCSA. 

“It is a constant conversation that we are having with multiple IT and OT partners,” he says. “We would like to get as many customers as possible on this, so we are letting people know we exist.”

Article source: https://www.darkreading.com/alliance-forms-to-focus-on-securing-operational-technology/d/d-id/1336150?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

About 50% of Apps Are Accruing Unaddressed Vulnerabilities

In the rush to fix newly discovered security issues, developers are neglecting to address older ones, a Veracode study finds.

The latest edition of Veracode’s annual “State of Software Security” study released this week shows that many enterprise organizations are at increased breach risk because of aging, unaddressed application security flaws.

Veracode recently analyzed data from application security tests on more than 85,000 applications and found that, on average, companies fix just 56% of all software security issues they discover between initial and final scans. Most of the flaws that are fixed tend to be newly discovered ones, while older, previously discovered issues are neglected and allowed to accumulate dangerously.

The resulting “security debt,” as Veracode calls it, is increasing breach risks at many organizations. “Security debt — defined as aging and accumulating flaws in software — is emerging as a significant pain point for organizations across industries,” says Chris Wysopal, founder and CTO at Veracode. “Just as with credit card debt, if you start out with a big balance and only pay for each month’s new spending, you’ll never eliminate the balance.”

Veracode’s new report marks the 10th time the company has released an annual assessment on the state of application software security. The 85,000-plus applications tested for the latest study make up a sample more than 50 times larger than the 1,591 applications tested for the first edition.

The study showed that over half of all applications accrue debt in the form of unfixed security vulnerabilities between the initial security scan and the last, because developers are more focused on new issues.

The median time to fix newly discovered vulnerabilities is 59 days, which is about the same as 10 years ago. But the average number of days to fix flaws jumped from 59 days in the first report to 171 days in the latest. The data shows that while typical median fix times haven’t gotten worse in 10 years, security debt is getting much deeper, Wysopal says.

Eighty-three percent of the applications in Veracode’s study had at least one security flaw in them on initial scan, up from 72% in the first study. Sixty-six percent of applications failed initial tests based on OWASP’s Top 10 and the SANS Top 25 standards.

At least some of the increase in the number of flaws discovered during the initial scan had to do with the broader set of applications tested for the latest study. The vulnerability scanning capabilities that exist today are also better than they were a decade ago, resulting in more vulnerabilities being discovered.

“With a 50-fold increase in sample size, we did see the overall prevalence of flaws rise 11%,” Wysopal says. The good news, however, is that the proportion of those flaws assessed to be of high severity dropped 14% over the same period, he says. Only 20% of applications in Veracode’s study had high-severity flaws at initial scan, down from 34% in the first report.

Not very surprisingly — and consistent with results in Veracode’s previous reports — the flaws that get the most attention are also the most severe ones. Veracode found that developers fix 76% of the most critical vulnerabilities and 69% of the slightly less critical but still severe flaws.

“This tells us developers are getting better at figuring out which flaws are necessary to fix first,” Wysopal notes.

The data suggests that finding and fixing vulnerabilities has become as much a part of the process as improving functionality, Wysopal says. “As developers become more responsible for securing their software, this shift in mindset is critically important.”

The Same Old Same Old
Veracode found that many of the most common security flaws that are present in application software these days are the same as those from 10 years ago. Among them are cryptographic errors and information leakage flaws, input validation issues, and weak credential management.

At the same time, a few other vulnerabilities types — such as buffer overflow and buffer management errors — have become less prevalent because less code is written in languages, such as C and C++, that are susceptible to these flaws.  

“The other flaw categories remain prevalent as developers aren’t educated about cryptographic issues, information leakage, CRLF, and input validation errors, so they keep making the same mistakes over and over again,” Wysopal says.

The frequency and the cadence of software security tests have a direct impact on response times, according to Veracode. Organizations in the security vendor’s study that scanned applications less than once a month, for instance, required a median time of 68 days to remediate security issues, while those that scanned daily required just 19 days.

Most organizations, however, appeared nowhere close to doing scans that frequently. Only a third of the applications in the Veracode study, for example, were scanned between two and six times per year, while another 36% were scanned just once a year. Less than 1% of the applications were scanned 260 times or more per year.

Significantly, Veracode found that applications with the highest scan frequency also had five times fewer unaddressed security issues on average compared with the least scanned apps. The data suggests that implementing DevOps and DevSecOps models can have a huge impact on securing application software, according to Veracode.

Article source: https://www.darkreading.com/application-security/about-50--of-apps-are-accruing-unaddressed-vulnerabilities/d/d-id/1336151?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Vatican launches smart rosary – complete with brute-force flaw

At some point, most software developers have probably hit ‘run’, crossed their fingers and prayed, but last week the Vatican took it to a whole new level. It released its new digital rosary – complete with show-stopping logic bug.

Deciding that the 21st century might be a nice place to visit, the Vatican started by testing out this whole wearable technology thing with an electronic rosary. It’s called the Click to Pray eRosary and it targets “the peripheral frontiers of the digital world where the young people dwell.” (The Vatican News actually talks like this.)

Traditional rosaries are meditative beads that you use to count off multiple prayers, and they’ve been around since at least the 12th century, according to scholars. Wearable as a bracelet, the new electronic version, released on 15 October, springs into life when users activate it by stroking its touch-sensitive cross.

The $110 device syncs with Click to Pray, which is the official prayer app of the Pope’s Worldwide Prayer Network. It tracks the user’s progress as they work through different sets of themed prayers. Oh, it tracks your steps, too, for those who want to exercise both body and soul.

Unfortunately, it seems that holy software developers are as fallible as the rest of us. Two researchers noticed flaws with Click to Pray that divulged sensitive information.

In a blog post last Friday, Fidus Information Security exposed a brute-force flaw in the app’s authentication mechanism. It lets you log in via Google and Facebook – no problem there – but it’s the alternative that caused the issue: access with a four-digit PIN.

When a user resets their account using Click to Pray’s app, it uses an application programming interface (API) to make the request to the server, which then sends the PIN to the user’s email. The server also returns the PIN in its response to the API request, meaning that someone accessing the API directly could get the user’s PIN without having access to their email.

Fidus said:

Armed with this, we can simply log in to the application with the provided pin compromising the account with minimal effort. The account contained: Avatars, phone numbers, height, weight, gender and DOB’s.

There’s also another problem with the system, explained the company: The API doesn’t limit the number of attempts that you can make to log in with the PIN. Because we’re talking numerical digits rather than alphanumerics here, that’s 10^4, or 10,000 attempts. A simple Python script could rattle through those in short order.
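
To make the scale concrete, the loop an attacker would need is trivial. This is a hedged sketch only, with a hypothetical endpoint and parameter names rather than the real Click to Pray API:

    import requests

    LOGIN_URL = "https://api.example.com/v1/login"   # hypothetical endpoint

    def brute_force_pin(email: str):
        # 10**4 = 10,000 candidates: 0000 through 9999.
        for pin in range(10_000):
            resp = requests.post(LOGIN_URL,
                                 json={"email": email, "pin": f"{pin:04d}"})
            if resp.status_code == 200:
                return f"{pin:04d}"   # server accepted this PIN
        return None

Rate limiting or an account lockout after a handful of wrong guesses is the standard defence here, which is exactly what the API lacked.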

Security researcher Baptiste Robert (aka Elliot Alderson) also discovered and reported the bug, tweeting about it responsibly after the Vatican had issued a fix:

Vatican priest Father Robert R. Ballecer thanked him publicly:

The Vatican and its developers moved pretty quickly to fix the issues when Fidus contacted them, although they moved in mysterious ways. Rather than taking the PIN out of the API response altogether, they simply made it longer, doubling the number of digits to eight.

Fidus responded:

There doesn’t seem to be any direct correlation between the new 8 digit PIN and the correct 4 digit PIN which is sent via e-mail. It is likely the data returned is not random but rather is obfuscated although it has not been possible to reverse engineer the algorithm used… yet.

A Vatican spokesperson also reportedly said that the brute-forcing issue has been solved too.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/U2BKBl-hm44/

Storing your stuff securely in the cloud

How much of your stuff goes into the cloud? Probably a lot more than you realize.

Not just your files, photos, videos, but also your app settings, notes, reminders, and if you use a password manager, possibly your password vault too.

If you work in any kind of collaborative organization – from corporate life to family life – you probably do a lot of work in shared online documents that you pass around, maybe even sharing the credentials. I’m not here to wag a finger at you; this is just reality for many of us. What’s important is to understand the risks in what we’re doing and what we can do to mitigate them.

As the saying goes, the cloud is just someone else’s computer. Storing things in the cloud therefore means giving up local control over your files, and with that comes a risk, however small, that someone else can access them, maliciously or accidentally.

Some examples of unauthorized entrants can include:

  • An attacker who hacks their way into the cloud server where your files are stored
  • An employee of that cloud company who has more access to customers’ files than they should
  • A colleague who has since left your organization, but still has access to your files

Maybe that former colleague doesn’t care about being able to access their old files, or perhaps they’ve gone on to work for a competitor. Maybe that attacker is only able to gain access to a bunch of old Word documents you’ve forgotten about, or perhaps they’ve found an unencrypted collection of all your financial passwords.

Either way, storing your stuff – whatever it is – securely in the cloud means keeping unwanted folks out, even though you can’t physically access it. That means using a combination of a few security measures.

ALWAYS: opt for services that use strong encryption

You wouldn’t store your vital files just anywhere, would you? You don’t want to hand your files over to any old haphazard service that says it provides cloud storage.

Is the file transfer process from your computer to their servers secure? When the data is on their servers, are they encrypting and securing it there as well, and if so, how?

This information is bare-minimum stuff. If you can’t find details about it easily, go elsewhere.

IF YOU CAN: encrypt locally first

If you are storing files locally and then backing them up in the cloud via a service like Dropbox, S3, or Google Drive, your best bet to secure them is to encrypt them locally – meaning on your own computer, hard drive, or other device – before they head off to the cloud. That way, if someone does manage to break into your cloud storage, your encrypted files will be nothing but useless bits of unreadable data to them.
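
As a concrete example of encrypting locally first, here's a minimal sketch using the widely available Python cryptography library (Fernet); the filenames are illustrative only:

    from cryptography.fernet import Fernet

    # Generate a key ONCE and keep it safe on your own device --
    # if it leaks the data is exposed; if it's lost the backup is unrecoverable.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("tax-records.pdf", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    # Only this encrypted copy should ever land in the cloud sync folder.
    with open("tax-records.pdf.enc", "wb") as f:
        f.write(ciphertext)

Anyone who breaks into the cloud account gets ciphertext; without the key that stayed on your machine, it's just noise.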

Of course, you don’t always use the cloud like a giant hard drive full of files, so it isn’t always in your power to encrypt files locally – sometimes you have to rely on the services you use to do it for you.

It’s worth checking that they do.

Take password managers, for example. Good ones will encrypt your data locally before backing it up online – and equally if not more importantly, will keep the key needed to encrypt and decrypt this data on your device and NEVER in the cloud. It takes a bit of reading and research to verify this for yourself, but any service worth using will make it clear where and how it handles this information.
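
A rough sketch of that pattern, and emphatically not any particular vendor's scheme: derive the encryption key from the master password on the device, encrypt the vault, and sync only the ciphertext and the salt.

    import base64, hashlib, os
    from cryptography.fernet import Fernet

    def vault_key(master_password: str, salt: bytes) -> bytes:
        # PBKDF2 turns the master password into a 32-byte key locally;
        # neither the password nor the derived key is ever uploaded.
        raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
        return base64.urlsafe_b64encode(raw)

    salt = os.urandom(16)
    key = vault_key("correct horse battery staple", salt)
    ciphertext = Fernet(key).encrypt(b'{"example.com": "hunter2"}')
    # Only ciphertext and salt go to the cloud; the key is recomputed on the
    # device from the master password each time it's needed.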

ALWAYS: use robust passcodes and MFA

A lot of cloud services – like Google Drive or iCloud – offer an option to access files and information online, via a web portal. If you expect to access your stuff this way, then any attacker who successfully guesses or phishes your password for that service can, too.

For that reason, you should always use a strong, unique password, and enable multi-factor authentication (MFA) wherever it’s available.

Your phone and computer are portals to your cloud-based life too, so make sure they’re protected by strong PINs and passcodes as well.

ALWAYS: follow the principle of least privilege

The principle of least privilege is the idea of giving people only the access they need to do what they need to do, and nothing more.

If possible, have users create their own accounts so that you don’t need to share credentials. This way, users can be given the access they need, rather than the access that everybody else needs. It also means that if somebody leaves your organisation you don’t have to reset the one-and-only password for everyone, or, as often happens, if no one gets around to revoking the leaver’s access, they don’t take access to everything with them.

ALWAYS: have a backup for vital data

If there are files, photos, or other bits of data that you can’t live your life without, you owe it to yourself to make sure you have backups of your cloud data.

Personally, I have several physical hard drive backups for my must-have files, in addition to cloud backups. Backups are only useful if they are regularly tested, so I make sure to check in on them from time to time to ensure everything’s still where it should be and working. If my cloud backups are one day compromised, or the cloud service I use breaks or goes bust (remember, it’s just somebody else’s computer) it’s likely my physical hard drive backups will be able to save the day.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/27shf2RZHS8/

US nuclear weapons command finally ditches 8-inch floppies

Imagine a computer system based on the 1970s-era IBM Series/1 and 8-inch floppy drives, and most people would assume you’re describing a museum piece kept alive by enthusiasts.

And yet, such a computer system ranks as one of the most important in the world – so critical in fact that nobody has wanted to change or upgrade it since it was built nearly half a century ago.

It sits in bunkers across the US, part of the Strategic Automated Command and Control System (SACCS) that runs the country’s nuclear missile deterrent.

Surprised? You shouldn’t be. But what matters is that SACCS is finally in line for a hardware upgrade as part of a $400 billion, 10-year programme to modernise the US’s military nuclear technology.

This programme has been public knowledge for a while but a detail that might have escaped public attention is the recently reported intention to ditch 8-inch floppies in favour of a contemporary, presumably encrypted, storage equivalent.

Strangelove

They’ve been using 8-inch floppy disks all this time?

Trying to visualise this might be hard for some readers who hail from an era after such portable storage existed.

According to the format’s Wikipedia entry, these appeared in 1973 with the later examples reaching the then impressive data capacity of 1.2MB.

Then the more convenient 5.25-inch floppy disk appeared in the 1980s with the advent of the PC and the 8-inch floppy slowly disappeared.

It’s easy to gloat at the apparent backwardness of this but systems of such importance are designed primarily to work rather than to be up to date.

Do the newest technologies work better than the old ones? In some cases, no. In a sector built on certainty, the 8-incher worked because it was compatible with the hardware.

Said Lieutenant Colonel Jason Rossi, 595th Strategic Communications Squadron commander, quoted by C4isrnet:

You can’t hack something that doesn’t have an IP address. It’s the age that provides that security.

Not, in fact, strictly true – you can hack computers that don’t have an IP address.

Doing that involves either gaining direct access to the computer or working out how to reach it across an air gap.

But he has a point all the same – hacking this would require the attacker to bridge that gap using another 8-inch disk, something that few nation-states would be able to sneak into a locked console room.

Ironically, one reason they’re getting rid of them is not their age but that the systems are becoming challenging to repair, mainly because the expertise is no longer there.

Add the extra training time required to use a system unlike anything else in the 21st century, and the time for change has arrived.

According to the report, one thing that has changed over time is the software, but even that must be a challenge on such an old computer deck.

The US military is coy about what will replace the equipment, but it will presumably be something faster and more powerful.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/v_9AgmOao9c/

Japanese hotel chain sorry that hackers may have watched guests through bedside robots

Japanese hotel chain HIS Group has apologised for ignoring warnings that its in-room robots were hackable to allow pervs to remotely view video footage from the devices.

The Henn na Hotel is staffed by robots: guests can be checked in by humanoid or dinosaur reception bots before proceeding to their room.

Facial recognition tech will let customers into their room and then a bedside robot will assist with other requirements. However, several weeks ago a security researcher revealed on Twitter that he had warned HIS Group in July that the bed-bots were easily accessible, noting they sported “unsigned code” allowing a user to tap an NFC tag to the back of a robot’s head and gain access via the streaming app of their choice.

Having heard nothing, the researcher made the hack public on 13 October. The vulnerability allows guests to gain remote access to the robot’s cameras and microphones, so they could watch and listen to anyone in the room in the future.

The hotel is one of a chain of 10 in Japan which use a variety of robots instead of meat-based staff.

So far the issue has only been reported in the Tapia robots at one hotel, although it is not clear if the rest of the chain uses different devices.

The HIS Group tweeted: “We apologize for any uneasiness caused,” according to the Tokyo Reporter.

The paper was told that the company had decided the risks of unauthorised access were low; however, the robots have now been updated.

The chain has suffered a bunch of other issues with the robots, including problems with voice recognition systems reacting to guests snoring and a failure of the reception dinosaurs to understand guests’ names. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/22/japanese_hotel_chain_sorry_that_bedside_robots_may_have_watched_guests/

In a Crowded Endpoint Security Market, Consolidation Is Underway

Experts examine the drivers pushing today’s endpoint security market to consolidate as its many players compete to meet organizations’ changing demands and transition to the cloud.

The overcrowded endpoint security market is rife with activity as its many players compete to meet new enterprise demands and large companies buy small ones in hopes of staying afloat.

Gartner listed 20 companies in its “2019 Magic Quadrant for Endpoint Protection Products,” says Peter Firstbrook, research vice president with the company and one of the report’s authors, but he could have easily invited another 10. “There’s far too many,” he points out. “This market is overdue for consolidation.”

What made it so crowded? There are two types of companies in the endpoint security market, which, in general, provides centrally managed technology to lock down the endpoint. The traditional giants, including McAfee, Symantec, and Kaspersky, were early players in the market and historically provided antivirus tools and firewalls to defend machines against cyberattacks.

“Then someone would come up with a new way to attack endpoints, and someone else would come up with a way to block those attacks,” says John Pescatore, SANS’ director of emerging security trends, of how the market evolved – until a new wave of companies introduced the idea that protection is never perfect. Businesses must be able to detect and respond to threats.

The shift to endpoint detection and response (EDR), and the consequent proliferation of endpoint-focused companies, began when ransomware started to become a major enterprise problem, Firstbrook explains. Incumbent providers were complacent in their roles and “caught flat-footed” when ransomware hit. It wasn’t necessarily the vendor’s fault, he adds, noting that customers didn’t always upgrade their systems as needed. Still, the problem demanded a change in how organizations approached security and kept their security software up-to-date. 

“Ransomware was a big wake-up call, costing serious amounts of money, and companies were going out of business,” Firstbrook says. Incoming EDR companies, including CrowdStrike, Carbon Black, SentinelOne, and Endgame, took an approach to security the older players hadn’t, with behavioral-based detection instead of seeking indicators of compromise. It’s much more efficient to watch for strange behavior than to watch for every version of malicious software.

“It’s really hard for [attackers] to completely rearchitect a program,” Firstbrook says. “Behavioral-based detection forces them to rewrite it. EDR and behavioral detection are becoming primary components of endpoint detection solutions.” EDR companies brought several new advantages — for example, the ability to run on top of more traditional platforms.

These startups, with their new behavioral-based approach and “assumed breach” mindset, generated venture capital money, Firstbrook explains, and the market grew. Both old and new endpoint security businesses have their strengths. Now, there are simply too many of them.

Redefining the Endpoint
One of the biggest trends in today’s endpoint security market is the move of product management, and much of the decision-making for security products, to the cloud. Traditional endpoint companies sold on-premises systems that communicate with a central cloud server providing indicator-of-compromise (IOC) data. That made it tough to keep users updated; however, moving management servers to the cloud eliminates this requirement and gives users the most current protection.

Cloud and virtualization are changing the definition of the endpoint and companies’ approach to securing it, SANS’ Pescatore explains. As the attack surface grows to include firmware and supply chain attacks, organizations are investing more in cloud-native products to protect themselves.

The promise of a cloud-based platform is that, as threats change, companies can detect and react to changes without having to install any new management software. They don’t have to maintain the management server, it’s easy to get up and running, and it’s easy to pull data from clients outside the network. While “cloud native” is hard to define, Firstbrook points to CrowdStrike as the best example, citing its lightweight architecture and its role as both a rules enforcement engine and a data collection engine. If a company has an idea for how to create a rule, it can do it.

Amid such a disruptive period, it can be difficult for bigger firms to keep up. Firstbrook points to Symantec: It offers a cloud-based management console, but there is not a lot of integration between protective technology and EDR technology. He says it may be a little more clunky, and a little less efficient, until the company converges to fully cloud-native architecture.

“They see the changes, and they’re addressing them, but I think at this point it’s such a big change they may not make the changes in time to really capture it,” Firstbrook adds.

On top of the move to cloud, there is a greater demand for simplicity, says Hank Thomas, partner at Strategic Cyber Ventures. Security buyers in the enterprise are tired of dealing with complex systems and multiple point products for narrowly focused needs. “They want to focus on security tools that they can remotely maintain and are consolidated in one place,” he says.

Endpoint security products are becoming harder to use, Firstbrook points out. People want them to be more sensitive, but they’re not always qualified to review the data and say whether it’s a false positive or actual threat. As a result, vendors are starting to provide more operational services, from installation, to configuration, to light management, to full management. IT teams don’t have time to swap out their vendors, learn a new tool, and continuously monitor it.

“Endpoint is something everyone has to do, but not every company has to be an expert in,” he adds. Going forward, it will be important for endpoint security tools to adapt to different detection technologies or new machine learning techniques without the client needing to act.

Too Many Cooks in the Kitchen?
The endpoint security market has grown packed with companies old and young attempting to meet these new enterprise demands. Several recent acquisitions underscore the growing importance of new technologies among older companies struggling to innovate, experts say.

Article source: https://www.darkreading.com/endpoint/in-a-crowded-endpoint-security-market-consolidation-is-underway/d/d-id/1336125?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple