STE WILLIAMS

Biggest vuln bombshell in forever and storage industry still umms and errs over patches

Analysis A growing consensus among storage hardware appliance vendors is that, since they don’t run external software on their hardware, they don’t need to stick performance-hindering patches into their operating systems.

Software-defined storage (SDS) and hyperconverged systems vendors do, consultants claim, because they can run external, customer-supplied software on the same hardware as their software.

For example, a user viewpoint from Martin Glassborow: “If I can get access to the CLI and upload code, I suspect I can exploit. Many systems are running x86 and ‘embedded’ Linux, BSD, Windows and allow ssh/CLI access.

“And a huge number of Storage arrays have control/management software often running on embedded x86 servers… that also need patching. What I’m reading at the moment is nonsense from many.”
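Glassborow's point is checkable from that very CLI: on Linux-based controllers, recent kernels report their own mitigation state through sysfs. A minimal sketch, assuming a kernel new enough to expose `/sys/devices/system/cpu/vulnerabilities` (older kernels simply lack the directory, and the function returns an empty report):

```python
import os

def mitigation_status(base="/sys/devices/system/cpu/vulnerabilities"):
    """Read the kernel's per-vulnerability mitigation report, if present.

    Returns a dict like {"meltdown": "Mitigation: PTI", ...}; empty if the
    kernel predates the sysfs interface.
    """
    status = {}
    if os.path.isdir(base):
        for name in sorted(os.listdir(base)):
            with open(os.path.join(base, name)) as fh:
                status[name] = fh.read().strip()
    return status

if __name__ == "__main__":
    report = mitigation_status()
    if not report:
        print("kernel does not expose mitigation status")
    for vuln, state in report.items():
        print(f"{vuln}: {state}")
```

A line reading "Vulnerable" rather than "Mitigation: …" is the quickest evidence that an appliance's embedded OS has not been patched.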

Two SDS companies taking different approaches are Nexenta and Scality. The former admits vulnerability in certain cases, the latter does not.

Dell, HPE, Microsoft and HCI vendor Scale Computing all say their software needs patching. For example, Scale said: “Scale HC3 systems will likely take a small performance hit, but we will be far less impacted than most.”

The ones that have said “no” to patching include DataCore, IBM, Infinidat, NetApp and Tintri.

HPE

HPE has said patch impact on performance would vary by workload, and has identified vulnerable products in a thorough list:

  • StoreVirtual – not vulnerable – product doesn’t allow third-party code execution
  • StoreVirtual 3000 file controller – vulnerable – further information forthcoming
  • 3PAR StoreServ File Controller V3 – vulnerable – further information forthcoming
  • StoreEasy 1450, 1550, 1650, 1650E, 1850, 3850 – vulnerable – further information forthcoming
  • 3PAR StoreServ 7xxx, 8xxx, 9xxx, 10xxx, 20xxx – not vulnerable – product doesn’t allow third-party code execution
  • 3PAR StoreServ Service Processors – not vulnerable – product doesn’t allow third-party code execution
  • XP7 Gen1 and Gen2 SVP and MP – not vulnerable – product doesn’t allow third-party code execution
  • StoreOnce products – not vulnerable – product doesn’t allow third-party code execution
  • MSA products – not vulnerable – product doesn’t allow third-party code execution
  • SimpliVity – fix under investigation
  • HyperConverged 250 and 380 – fix under investigation

Nexenta

Nexenta software running in virtual machines or containers is vulnerable.

Software-defined – indeed, software-only – storage supplier Nexenta’s VP for marketing and channels, Don Lopes, said: “We’re indeed aware of the Spectre and Meltdown bug. As you know, to exploit these hardware vulnerabilities an attacker must have the ability to run software directly on the target system. Contrary to compute platforms or hyperconverged solutions, our products are delivered as closed software appliances and do not allow third-party software to run on them.

“Due to this, they aren’t exposed to exploits. In the cases where our software is run as a VM or a Docker container, we do recommend that customers patch the underlying OS and hypervisors.

“We provided these details and have spoken to a number of our customers and partners who have accepted and are happy with our communication.”

Scality

Object storage supplier Scality’s chief product officer Paul Speciale said: “Scality has not taken a public position on this microprocessor and potential virus issue. We also haven’t, to date, had any inquiries about it from our customer base. Our understanding is that while there haven’t been any exploits from such viruses as of yet, the threat is mainly to mobile and laptop devices that are directly connected to the public internet, and have access to the root user of the operating system.

“In contrast, the Scality RING software is in nearly all cases deployed in our customers’ secure data centres. This means our software is behind the multiple layers of firewalls and security devices, and accessed only by customer applications versus direct access on the internet. This insulation means we are much less susceptible to outside malicious access.

“While we do deploy on standard x86-based servers, we don’t present a general-purpose server to the outside world. For example, the RING restricts access to only the necessary network ports needed for operations. We also have our own software stack virtualizing the underlying hardware, and our own management software stack.”

Pure Storage Purity Run

Purity Run is a facility on Pure Storage arrays to enable customers to run their own code on them. Does that require the Purity OS to be patched?

A spokesperson told us: “We’re exploring this. Keep in mind that our storage arrays are not directly susceptible, and only a small subset of customers are running third-party code within Purity Run, which we released last year.

“We know who all of these customers are and have already contacted each of them personally. We would expect no impact on non-Purity Run storage performance and minimal impact on Purity Run performance for Spectre mitigations.

“We’ll be working to help users assess Meltdown mitigations for any guest environments or apps they are running within Purity Run, though the same updates would be necessary regardless of whether they are running on a virtualized platform or bare metal.”

Reg Comment

What should you do about your software-defined storage and Spectre/Meltdown? The security folks will say that, unless it is ring-fenced absolutely from running external code, apply the Spectre and Meltdown patches. That seems good advice. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/17/swdefined_storage_needs_spectre_meltdown_patching_say_sds_vendors/

BIND comes apart thanks to ancient denial-of-service vuln

Back in 2000, a bug crept into the Internet Systems Consortium’s BIND server, and it lay unnoticed until now.

The result: if you’re running a vulnerable version of BIND and using DNSSEC, you need to patch the server against a denial-of-service vulnerability.

The venerable BIND is the world’s most-used Domain Name System (DNS) software.

The vulnerability, disclosed on January 16th, is in named (the name daemon): “Improper sequencing during cleanup can lead to a use-after-free error, triggering an assertion failure and crash in named”, the advisory states.

The error is in the netaddr.c library in the daemon.

Disabling DNSSEC validation provides a workaround, but the advisory says all versions since BIND 9.0.0 (released in 2000) need to be patched.
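For resolvers that cannot be patched immediately, the advisory's workaround amounts to one line in named.conf; note that it sacrifices DNSSEC validation entirely for the resolver's clients, so treat it strictly as a stopgap:

```conf
options {
    // Workaround only: disables DNSSEC validation entirely.
    // Prefer upgrading to a patched BIND release.
    dnssec-validation no;
};
```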

The issue is most serious for “versions 9.9.9-P8 to 9.9.11, 9.10.4-P8 to 9.10.6, 9.11.0-P5 to 9.11.2, 9.9.9-S10 to 9.9.11-S1, 9.10.5-S1 to 9.10.6-S1, and 9.12.0a1 to 9.12.0rc1”.

“No known active exploits but crashes due to this bug have been reported by multiple parties”, the advisory continues.

Jayachandran Palanisamy of Cygate identified the bug. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/17/bind_patch_catches_crashes/

Researchers Offer ‘a VirusTotal’ for ICS

Free online sandbox, honeypot tool simulates a real-world industrial network environment.

S4x18 CONFERENCE – Miami – A team of researchers plans to release an open source online tool for capturing and vetting ICS malware samples that operates as a sandbox with honeypot features.

David Atch, vice president of research for CyberX, here today outlined details of the free, Web-based sandbox tool he and his team initially developed for research purposes. “It’s like a VirusTotal for ICS,” he explained in an interview.

VirusTotal is the wildly popular online tool that uses multiple antivirus and scan engines to analyze suspicious files and URLs for malware.

The goal was to create a sandbox that simulates real-world industrial networks. The sandbox tool allows ICS malware to execute and unpack, and then detects telltale malicious activities such as OPC (Open Platform Communications) scanning or overwriting PLC configuration files, and provides quick offline detection, according to CyberX Labs, which plans to roll out the tool in the next couple of months.

Atch says existing network sandbox technology for non-ICS, or IT environments, often misses ICS-specific malware because it doesn’t account for OT protocols and devices, for example, and doesn’t simulate OT components. “There are not enough tools for the ICS community,” Atch says. And VirusTotal isn’t ideal for ICS-specific malware, either, he says.

Take Stuxnet. The first Stuxnet variant was sent to VirusTotal in 2007, notes Ralph Langner, founder and CEO of Langner Communications, but Stuxnet wasn’t detected until 2012, he says. “I strongly support the idea” of a VirusTotal for ICS malware, he says.

Langner, a top Stuxnet expert, says ICS malware analysis is time-consuming. “It took me three years to analyze Stuxnet,” he says.

The ICS malware sandbox tool is aimed at more efficiently spotting ICS-specific malware, and can simulate the types of traffic to and from a PLC, for example, as its honeypot function. That allows the malware to execute in a safe space while unpacking and uncovering its functions and matching them with other known variants. The tool includes OT software, virtualized ICS processes and files, and a low-interaction ICS network (the honeypot element).

The concept of an ICS sandbox isn’t new: researchers at Trend Micro in 2013 stood up two honeypot-based architectures that posed as a typical ICS/SCADA environment at a water utility, including one that included a Web-based application for a water pressure station. There were 39 attacks from 14 different nations over a 28-day period, with most attacks on the ICS/SCADA systems appearing to come from China (35%), followed by the US (19%) and Laos (12%).


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/threat-intelligence/researchers-offer-a-virustotal-for-ics/d/d-id/1330833?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

It’s raining fake missiles: Japan follows Hawaii with mistaken alert

No sooner had we written up that fake missile alert in Hawaii than another fake missile alert was sent out, this time in Japan.

Japan’s national broadcaster, NHK, published an apology:

NHK is apologizing after issuing a false alert that said North Korea had probably launched a missile and warned people in Japan to take cover.

The false message was sent in Japanese shortly before 7 PM local time on Tuesday. It went out through the public broadcaster’s Japanese apps and website.

A few minutes later, NHK corrected the wrong information. There are no reports of problems caused by the mistake. NHK says a switching error is to blame.

The incident comes just days after officials in the US state of Hawaii issued a false missile alarm and caused panic.

In the Hawaii incident at the weekend, a public servant who was supposed to perform a routine test of the state’s missile warning system apparently selected the “send real alert” option instead.

Despite the dreadful implications of a real alert, and the unlikelihood of a real alert compared to the regularity of a test alert, there was apparently no additional oversight needed – no supervisor approval or peer review requiring confirmation from a second person.

However, there was a precaution in place in Hawaii to prevent the inadvertent cancellation of warnings.

Ironically, therefore, the person who was trusted to broadcast a state-wide missile alert to phones, TVs and radios wasn’t authorised to cancel it, even after realising the error.

The result was a 38-minute delay before Hawaiians who had reacted to the shocking news were able to relax – assuming they hadn’t hidden themselves in an underground bunker with no mobile phone coverage, of course.

According to CBS News, the Japanese mistake was “an error by a staff member who was operating the alert system for online news”.

That’s all we know at the moment, but it sounds very much like the nightmare of every sales engineer who was ever asked to show off a new product to a prospective customer: a live demonstration gone wrong.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ENpL7eiIlO0/

Hospital injects $60,000 into crims’ coffers to cure malware infection

A US hospital paid extortionists roughly $60,000 to end a ransomware outbreak that forced staff to use pencil-and-paper records.

The crooks had infected the network of Hancock Health, in Indiana, with the Samsam software nasty, which scrambled files and demanded payment to recover the documents. The criminals broke in around 9.30pm on January 11 after finding a box with an exploitable Remote Desktop Protocol (RDP) server, and injected their ransomware into connected computers.

Medical IT teams were alerted in early 2016 that hospitals were being targeted by Samsam, although it appears the warnings weren’t heeded in this case.
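A first-pass check for the kind of exposure that let the crooks in, an RDP service reachable from outside, needs nothing more than a TCP connect attempt. A minimal sketch (the host list is a placeholder for your own inventory, and a successful connect only shows the port answers, not that the service is exploitable):

```python
import socket

def rdp_reachable(host, port=3389, timeout=2.0):
    """Return True if a TCP connection to the (default) RDP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical inventory; replace with hosts you are authorised to scan.
    for host in ["127.0.0.1"]:
        state = "RDP port open" if rdp_reachable(host) else "RDP port closed"
        print(f"{host}: {state}")
```

Only scan machines you own or are authorised to test.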

According to the hospital, the malware spread over the network and was able to encrypt “a number of the hospital’s information systems,” reducing staff to scratching out patient notes on pieces of dead tree.

With flu season well underway in the US state, Hancock Health administrators called in the FBI’s cyber-crime task force, and a third-party IT specialist, to quickly restore the ciphered filesystems – but the files could not be recovered in time. So the hospital did what too many other businesses are doing, and paid the ransom a day later on January 12 – in this case four Bitcoins.

“We were in a very precarious situation at the time of the attack,” Hancock’s CEO Steve Long said in a statement to The Register today.

“With the ice and snow storm at hand, coupled with one of the worst flu seasons in memory, we wanted to recover our systems in the quickest way possible and avoid extending the burden toward other hospitals of diverting patients. Restoring from backup was considered, though we made the deliberate decision to pay the ransom to expedite our return to full operations.”

Resumed

The ransomware’s masters accepted the payment, and sent over the decryption keys to unlock the data. As of Monday this week, the hospital said critical systems were up and running and normal services have been resumed.

This doesn’t appear to be a data heist. The hospital claimed no digital patient records were taken from its computers, just made inaccessible. “The life-sustaining and support systems of the hospital remained unaffected during the ordeal, and patient safety was never at risk,” the healthcare provider argued.

Taking hospital bosses at their word – and assuming “oh, we had to deal with the flu” isn’t a cover-up for failed tape drives – the IT department did generate backups but these were not immediately available.

It’s one thing to keep an offline store of sensitive data to prevent ransomware on the network from attacking it. It’s another to keep those backups somewhere so out of reach, they can’t be recovered during a crisis, effectively rendering them useless.
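Time-to-restoration is worth estimating before the crisis, not during it; a rough back-of-the-envelope sketch (the data size and throughput figures here are invented for illustration, plug in your own):

```python
def restore_hours(data_gb, throughput_mb_s):
    """Estimate hours to restore a backup at a sustained throughput."""
    seconds = (data_gb * 1024) / throughput_mb_s  # GB -> MB, then MB / (MB/s)
    return seconds / 3600

if __name__ == "__main__":
    # Hypothetical: 20 TB of records restored over a 100 MB/s link.
    hours = restore_hours(20 * 1024, 100)
    print(f"~{hours:.0f} hours (~{hours / 24:.1f} days)")
```

If the answer comes out in days, as it does for tens of terabytes over a single 100 MB/s link, the backups exist but the recovery plan does not.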

It just proves that when planning disaster recovery, you must consider time-to-restoration as well as the provisioning of backup hardware. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/16/us_hospital_ransomware_bitcoin/

In Security & Life, Busy Is Not a Badge of Honor

All security teams are busy, but not all security teams are productive. The difference between the two is huge.

Busy, busy, busy. Everyone is busy. No time for anything. Being busy has become a badge of honor of sorts in modern society. I’m not one who shies away from going against conventional wisdom, so I’ll come right out and say that I see this as something that is rather unfortunate. Further, I see the idolization of busyness as something detrimental to security as a profession. 

I often come across those I call busy people. People who feel the need to constantly tweet about how busy they are or how much work they have to get through. People who feel the need to tell you that they are buried in emails and can’t keep their in-box clean. People who don’t have time to respond to your emails and will tell you as much when you happen to run into them in person. People who tell you that they have one hour free over the next three months during which they can meet with you. People who can’t spare five minutes for a phone call when you have a question for them. The list of such behaviors goes on and on.

For some reason, modern society encourages and even champions such behavior. But what do I see when I encounter this type of behavior? Failure. Sound provocative? While I am not an expert in human behavior, a few things seem to cause this obsession with busyness:

  • Insecurity: “I don’t believe enough in myself and the importance of what I’m doing, so I feel a need to make sure everyone knows I am busy.”
  • Disorganization: Often, busyness results from wasting a tremendous amount of time on looking for things, working in an interruption-driven manner, and/or trying to remember what needs to be done.
  • Inability to separate the wheat from the chaff: Every decision we make in life necessitates evaluating certain data points. Sometimes it seems like life is more about filtering out what is irrelevant than it is about paying attention to what is relevant. Those who can quickly isolate the important factors of a decision and filter out the noise are able to come to a decision and move forward much more quickly than those who cannot.
  • Inability to prioritize: No one has time to do everything that crosses his or her mind. That’s why prioritization is key. People make time for what is important to them. If someone told you that if you sat on a park bench from 11:00 a.m. to noon tomorrow he would give you $10 million, I’m sure you would find the time to be there.

If you still have any doubt, it should be fairly clear from the points above that being busy is quite different from being productive. There are many productive people who still find time for what is most important to them in life, whatever that may be. So, what lesson can we take from this in security?

Unfortunately, I would describe the state of many security programs as “busy” but not “productive.” The difference between those two words is enormous. Many security organizations are geared toward measuring, rewarding, and even priding themselves on busyness rather than productivity. The end result of this approach, sadly, is that it weakens their overall security posture. Let’s take a look at a few examples of this:

  • Ticket obsession: Many organizations pride themselves on how many tickets they open and close in a given day, week, or month. It’s a meaningless metric that many organizations use to show how hard they are working. But is this really something to take pride in? It is certainly true that people in these organizations are working hard, but are they working smart? The only way to know the answer to that question is to understand how the tickets that are being opened and closed contribute toward mitigating and reducing risk. If they directly contribute toward that end, this is a productive activity. If they don’t, it’s a busy one.
  • Alert fatigue: I’ve heard far too many people proudly and bombastically tout the number of alerts they “handle” on a daily, weekly, or monthly basis. But how many of those alerts were false positives? How many were relevant to threats the organization is concerned about? Did the volume of alerts create a noise level so high that the organization missed events that it should have paid attention to? If you’re plowing through thousands of alerts on a daily basis, you are busy. Only when you improve the signal-to-noise ratio, enrich alerts with the necessary contextual information, and prioritize appropriately can you overcome alert fatigue and move from alerts making you busy to alerts making you productive.
  • Seeing the forest for the trees: Sometimes the fact that people are too busy to come up for air is precisely the reason that they need to come up for air. Time-consuming duties can serve as an indication that specific areas of a process need to be re-examined. Perhaps the hours spent on a given task don’t add any value to the security program? Perhaps leveraging technology could greatly reduce the time spent on certain duties? Maybe automating certain manual processes could also save time? Not every activity that takes time is worth that time, which is a concept that is key to moving from busy to productive.
  • Root cause: Maybe the reason the security team is so caught up with playing whack-a-mole is because there are certain root causes that have not been identified and addressed appropriately. Productive organizations identify and address root cause, which saves them time later in the process. Busy organizations let root cause remain unaddressed and then sink a tremendous amount of time (and money) into dealing with the mess that results from that.

I’ve never come across a security organization that has idle time. All security teams are busy. But not all security teams are productive. The difference between the two is huge. Aim to be a productive security organization. Leave the busyness for those organizations that just don’t get it.


Josh is an experienced information security leader with broad experience building and running Security Operations Centers (SOCs). Josh is currently co-founder and chief product officer at IDRRA. Prior to joining IDRRA, Josh served as vice president, chief technology officer, …

Article source: https://www.darkreading.com/vulnerabilities---threats/in-security-and-life-busy-is-not-a-badge-of-honor/a/d-id/1330818?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

1 in 9 Online Accounts Created in 2017 Was Fraudulent

Account takeovers hot, stolen credit cards not.

More than one in nine of all online accounts created in 2017 was fraudulent, according to a report released today by ThreatMetrix.

According to “Cybercrime Report in 2017: A Year in Review,” attackers continue to move away from the quick-buck business of credit card theft and are moving toward attacks that provide longer-term profits — for example, using stolen identity data to open new accounts. Between 2015 and 2017, attackers attempted to open 83 million fraudulent new accounts. Emerging industries, including ride-sharing and gift card-sharing, are particularly susceptible to fraud, according to the report. 

Account takeover attacks also increased by 170%; an account takeover attack occurs every 10 seconds, according to ThreatMetrix.

Overall, ThreatMetrix detected a 100% increase in attack volume over the past two years, including “unprecedented spikes” of irregular behavior immediately after the Equifax breach.   
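The headline rates are easy to sanity-check with back-of-the-envelope arithmetic, taking ThreatMetrix's "every 10 seconds" figure at face value:

```python
# One account-takeover attempt every 10 seconds, per ThreatMetrix.
per_day = 24 * 60 * 60 // 10   # 8,640 attempts a day
per_year = per_day * 365       # ~3.15 million a year
fraud_share = 1 / 9            # "more than one in nine" new accounts

print(per_day, per_year, round(fraud_share * 100, 1))  # 8640 3153600 11.1
```

So "one every 10 seconds" implies millions of takeover attempts a year, and "one in nine" means upwards of 11 per cent of new accounts were fraudulent.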


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/1-in-9-online-accounts-created-in-2017-was-fraudulent-/d/d-id/1330831?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Kaspersky Lab Warns of Extremely Sophisticated Android Spyware Tool

Skygofree appears to have been developed for lawful intercept, offensive surveillance purposes.

An Italian IT company has been using spoofed web pages to quietly distribute an extremely sophisticated Android spyware tool for conducting surveillance on targeted individuals since sometime in 2015.

In an advisory Tuesday, security vendor Kaspersky Lab described the tool, named Skygofree, as containing location-based audio recording capabilities and other functionality never before seen in the wild.

Available telemetry suggests the multi-stage spyware was first developed in 2014 and has been in continuous development since then. The Android implant gives attackers the ability to take complete administrative control of infected devices and to snoop in on conversations and nearby noises when the device enters specific locations, Kaspersky Lab said.

Skygofree is also designed to steal WhatsApp messages via Android’s Accessibility Services and to connect infected devices to attacker-controlled Wi-Fi networks. Its other capabilities include the ability to surreptitiously take videos and pictures, steal call records and SMS messages, and grab geolocation data, calendar events, and other information from infected devices.

Interestingly, the spyware tool has the ability to add itself to the list of protected Android apps on an infected device so it doesn’t get automatically shut down when the screen is turned off.

In total, Skygofree supports 48 different commands that attackers can use to execute various malicious actions on an infected device. Attackers can control the malware using HTTP, binary SMS messages, the Extensible Messaging and Presence Protocol (XMPP), and Firebase Cloud Messaging services, according to Kaspersky Lab.

The same IT firm that developed the malware also appears to be distributing it, says Alexey Firsh, malware analyst at Kaspersky Lab. The firm has been using web pages spoofed to appear like they belong to leading mobile network providers to deliver the malware on Android devices.

The first spoofed landing pages were registered in 2015. The most recent domain was registered last October, suggesting the distribution campaign is still active. “Based on the infrastructure analysis we believe that it was set up by the same commercial entity which is believed to be behind the malware itself,” Firsh says.

Following the Kaspersky Lab advisory, the domain’s Whois record was edited, suggesting the Italian firm is now trying to cover its tracks, he noted.

Available information shows that the targets of the attacks so far have all been Italian-speaking individuals. What remains unclear is how exactly victims arrive at the spoofed landing pages from where the malware is being distributed.

“It could be some kind of malicious redirect or targeted phishing with a link,” Firsh says. “We don’t know exactly, but these phishing sites were not public-forced and [a] user that is reading news or watching funny videos could not just get to these pages,” by accident, he says.

Identifying and blocking high-end mobile malware such as Skygofree can be extremely challenging given their complex payload structure and native code binaries, Firsh says. Another big challenge is the relatively small number of people that get targeted with this kind of tool, making it hard for security researchers to get their hands on them.

Kaspersky Lab has not identified the developer of Skygofree by name. But the IT firm behind the spyware appears to be similar to other providers of so-called lawful intercept software such as the Milan-based HackingTeam, FinFisher of Munich, and RCS Lab of Milan. Law enforcement and spy outfits from around the world use software from companies such as these to conduct surveillance and pursue investigations.

Research firm MarketsandMarkets last year estimated that worldwide demand for lawful intercept tools would top $1.3 billion by 2019.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/mobile/kaspersky-lab-warns-of-extremely-sophisticated-android-spyware-tool/d/d-id/1330832?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hawaii missile alert triggered by one wrong click

What amounts to a bad graphical user interface (GUI) – one that makes it too easy to click the “send the state’s population an emergency alert” option when you mean to click “test the emergency alert that sends people running for their lives” – terrified the population of Hawaii on Saturday morning.

The mistakenly sent emergency alert about an incoming ballistic missile was the first, adrenaline-gushing glitch. The second was that nobody at the state’s Emergency Management Agency (HI-EMA) corrected the error for a full 38 minutes.

According to the Washington Post, this tweet, from Rep. Tulsi Gabbard (D-Hawaii), was the first indication many received about the alarm being a glitch. She sent it out within about 15 minutes of the false alarm.

During the 38-minute delay between the emergency alert system sending the alarm and its subsequent alert that the alarm had been false, the emergency message showed on phones and TVs and played on radio stations across the state.

As CNN reported, people sought shelter by crawling under tables in cafes, were ushered into military hangars, and huddled around TVs to watch the news for the latest developments. Some put their kids into the bathtub, others sought shelter in tunnels, while some tried to get to the airport to clear out before the heavens rained down ruin.

Apologies for the false alarm have come from HI-EMA and from Hawaii Gov. David Ige, who explained that the mistake was made “during a standard procedure at the changeover of a shift [when] an employee pushed the wrong button.”

The state has released a timeline (PDF) of the incident.

It shows that officials knew within 3 minutes of the alert going out that there had been no missile launch. They didn’t post notifications about the error until 8:20 a.m., when they published alert cancellations on their Facebook and Twitter accounts. It wasn’t until 8:45 a.m. that the emergency alert system issued the “false alarm” notification.

In the aftermath, Federal Communications Commission (FCC) boss Ajit Pai initiated an investigation, saying that the false alarm was “absolutely unacceptable”. Pai blamed Hawaii government officials, saying that they didn’t have “reasonable safeguards or process controls” that could have stopped the alert’s transmission.

HI-EMA says it has indeed started a review of cancellation procedures to “inform the public immediately if a cancellation is warranted.” Otherwise, as both the agency and Pai acknowledged, it risks a reputation as the EMA that cried wolf. From HI-EMA:

We understand that false alarms such as this can erode public confidence in our emergency notification systems. We understand the serious nature of the warning alert systems and the need to get this right 100% of the time.

On Sunday, HI-EMA spokesman Richard Rapoza told the Chicago Tribune that the situation was particularly bad as there wasn’t a system in place to correct the initial error. The agency had standing permission through the Federal Emergency Management Agency (FEMA) to use civil warning systems to send out the missile alert, but not to send out a subsequent false alarm alert, he said.

That’s where that 38-minute lag came in, Rapoza said:

We had to double back and work with FEMA [to create the false alarm alert], and that’s what took time.

In the past there was no cancellation button. There was no false alarm button at all.

That part of the problem has already been fixed, Rapoza said:

Now there is a command to issue a message immediately that goes over on the same system saying ‘It’s a false alarm. Please disregard.’ as soon as the mistake is identified.

…Which leaves the “how do we keep these types of mistakes from happening in the first place” piece of the puzzle still to go. HI-EMA has said it’s suspended all internal drills until an investigation is completed.

It has also introduced a requirement that two people activate and verify both tests and actual missile launch notifications.

The employee who made the mistake has been temporarily reassigned, but he won’t be fired, Rapoza said. Really, anybody could have made the same mistake, and that’s a problem with the procedures in place, not with the human who did what humans do: make mistakes.

Rapoza is right, of course, if a little late to the party. It isn’t news that poor design is a security and safety issue, and the basic elements of good graphical user interface design have been understood for decades.

As interface design guru Don Norman wrote:

Bad design and procedures lead to breakdowns where, eventually the last link is a person who gets blamed, and punished.

… Does human error cause accidents? Yes, but we need to know what caused the error: in the majority of instances human error is the result of inappropriate design of equipment or procedures.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cYqLA48O6dw/

Man charged over fatal “Call of Duty” SWATting

Tyler Barriss, the 25-year-old Los Angeles man who was arrested last month for his involvement in a SWATting incident, has now been charged.

He was charged with involuntary manslaughter for placing a SWATting call that resulted in the fatal police shooting of 28-year-old Andrew Finch in Wichita, Kansas on 28 December.

SWATting, which takes its name from elite law enforcement units called SWAT (Special Weapons and Tactics) teams, is the practice of making a false report to emergency services about shootings, bomb threats, hostage-taking, or other alleged violent crime in the hope that law enforcement will respond to a targeted address with deadly force.

In a police briefing the day after the fatal shooting, Wichita Deputy Police Chief Troy Livingston said that the Wichita SWATting had been a “nightmare” for everyone involved: police, the community and Finch’s family.

After his arrest, Barriss didn’t admit to placing the call that led to Finch’s death. He did, however, express remorse in an interview from Sedgwick County jail that he gave to a local TV station.

From the recording:

As far as serving any amount of time. I’ll just take responsibility and serve whatever time, or whatever it is that they throw at me… I’m willing to do it. That’s just how I feel about it.

Barriss said that whatever punishment results from his role in the death of Andrew Finch, it doesn’t matter: it won’t change what happened.

Whether you hang me from a tree, or you give me 5, 10, 15 years… I don’t think it will ever justify what happened.

In the emergency call recording, a man said he’d shot his father in the head. The caller also said he was holding his mother and a sibling at gunpoint in a closet. He said he’d poured gasoline all over the house and that he was thinking of lighting the house on fire.

Police surrounded Finch’s Wichita home, prepared to deal with a hostage situation. When Finch answered the door, he followed police instructions to put up his hands and move slowly. But at some point, authorities said, Finch appeared to be moving his hand toward his waistband as if he was going to pull out a gun.

A single shot killed Finch. He was dead by the time he reached the hospital. Police said the innocent man was unarmed.

Barriss allegedly made the threatening call after a Call of Duty game in which two teammates were disputing a $1.50 wager. Apparently, one had accidentally “killed” a teammate in the first-person shooter.

One of the players sent a nearby address, not his own, to a known swatter, who was reportedly responsible for evacuations over a bomb hoax call at the Call of Duty World League Dallas Open last month.

After his arrest, Barriss said he felt “a little” remorse:

Of course, you know, I feel a little of remorse for what happened. I never intended for anyone to get shot and killed. I don’t think during any attempted swatting anyone’s intentions are for someone to get shot and killed. I guess they’re just going for that shock factor whatever it is, for whatever reason someone’s attempting swat, or whatever you want to call it.

As for why he would do such a thing in the first place, he wasn’t insightful:

There is no inspiration. I don’t get bored and just sit around and decide I’m going to make a SWAT call.

Barriss said that he often gets paid to make the fake emergency calls, but he wouldn’t say whether he was paid to make the SWATting call in Wichita that led to Finch’s death.

A Twitter account called @SWAuTistic took credit for the SWATting but then turned around and denied responsibility. On the Thursday night following the shooting, SWAuTistic tweeted this:

I DIDNT GET ANYONE KILLED BECAUSE I DIDNT DISCHARGE A WEAPON AND BEING A SWAT MEMBER ISNT MY PROFESSION.

The Twitter account was suspended soon after.

According to security reporter Brian Krebs, SWAuTistic claimed credit for placing dozens of these calls, calling in bogus hostage situations and bomb threats at roughly 100 schools and at least 10 residences. He also claimed responsibility for bomb threats against a high school in Florida and the bomb threat that interrupted the FCC net neutrality vote in November.

Krebs’ report is well worth the read. One pearl of information: it appears that Kansas investigators were led to Barriss, the man who’s allegedly behind the @SWAuTistic account, by Eric “Cosmo the God” Taylor.

Remember him? He pleaded guilty to being part of the group that SWATted Krebs in 2013.

From Krebs:

Taylor is now trying to turn his life around, and is in the process of starting his own cybersecurity consultancy. In a posting on Twitter at 6:21 p.m. ET Dec. 29, Taylor personally offered a reward of $7,777 in Bitcoin for information about the real-life identity of SWAuTistic.

In short order, several people who claimed to have known SWAuTistic responded by coming forward publicly and privately with Barriss’s name and approximate location, sharing copies of private messages and even selfies that were allegedly shared with them at one point by Barriss.

If Barriss does indeed turn out to be SWAuTistic, expect more arrests as investigators work to identify others involved in this tragedy.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wUmojTSH9-E/