Hawaii missile alert triggered by one wrong click


What amounts to a bad graphical user interface (GUI) – one that makes it too easy to click the “send the state’s population an emergency alert” option when you mean to click “test the emergency alert that sends people running for their lives” – terrified the population of Hawaii on Saturday morning.

The mistakenly sent emergency alert about an incoming ballistic missile was the first, adrenaline-gushing glitch. The second was that nobody at the state’s Emergency Management Agency (HI-EMA) corrected the error for a full 38 minutes.

According to the Washington Post, this tweet, from Rep. Tulsi Gabbard (D-Hawaii), was the first indication many received about the alarm being a glitch. She sent it out within about 15 minutes of the false alarm.

During the 38-minute delay between the emergency alert system sending the alarm and its subsequent alert that the alarm had been false, the emergency message showed on phones and TVs and played on radio stations across the state.

As CNN reported, people sought shelter by crawling under tables in cafes, were ushered into military hangars, and huddled around TVs to watch the news for the latest developments. Some put their kids into the bathtub, others sought shelter in tunnels, while some tried to get to the airport to clear out before the heavens rained down ruin.

Apologies for the false alarm have come from HI-EMA and from Hawaii Gov. David Ige, who explained that the mistake was made “during a standard procedure at the changeover of a shift [when] an employee pushed the wrong button.”

The state has released a timeline (PDF) of the incident.

It shows that officials knew within 3 minutes of the alert going out that there had been no missile launch. They didn’t post notifications about the error until 8:20 a.m., when they published alert cancellations on their Facebook and Twitter accounts. It wasn’t until 8:45 a.m. that the emergency alert system issued the “false alarm” notification.

In the aftermath, Federal Communications Commission (FCC) boss Ajit Pai initiated an investigation, saying that the false alarm was “absolutely unacceptable”. Pai blamed Hawaii government officials, saying that they didn’t have “reasonable safeguards or process controls” that could have stopped the alert’s transmission.

HI-EMA says it has indeed started a review of cancellation procedures to “inform the public immediately if a cancellation is warranted.” Otherwise, both the agency and Pai warned, it risks a reputation as the EMA that cried wolf. From HI-EMA:

We understand that false alarms such as this can erode public confidence in our emergency notification systems. We understand the serious nature of the warning alert systems and the need to get this right 100% of the time.

On Sunday, HI-EMA spokesman Richard Rapoza told the Chicago Tribune that the situation was particularly bad as there wasn’t a system in place to correct the initial error. The agency had standing permission through the Federal Emergency Management Agency (FEMA) to use civil warning systems to send out the missile alert, but not to send out a subsequent false alarm alert, he said.

That’s where that 38-minute lag came in, Rapoza said:

We had to double back and work with FEMA [to create the false alarm alert], and that’s what took time.

In the past there was no cancellation button. There was no false alarm button at all.

That part of the problem has already been fixed, Rapoza said:

Now there is a command to issue a message immediately that goes over on the same system saying ‘It’s a false alarm. Please disregard.’ as soon as the mistake is identified.

…Which leaves the “how do we keep these types of mistakes from happening in the first place” piece of the puzzle still to go. HI-EMA has said it’s suspended all internal drills until an investigation is completed.

Also, it’s initiated a requirement that two people are needed to activate and to verify tests and actual missile launch notifications.
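A two-person control of this sort is straightforward to sketch in code. The sketch below is purely illustrative – the actual alerting software is not public, and these class and operator names are invented – but it shows the two safeguards HI-EMA describes: the live/test distinction is explicit rather than buried in a look-alike menu option, and a live alert refuses to fire until a second, different operator signs off.

```python
from dataclasses import dataclass, field


@dataclass
class AlertRequest:
    message: str
    live: bool                 # live vs. test is explicit, not a look-alike menu entry
    requested_by: str
    confirmations: set = field(default_factory=set)

    def confirm(self, operator: str) -> None:
        # The second pair of eyes must belong to a different operator.
        if operator == self.requested_by:
            raise PermissionError("confirmer must differ from requester")
        self.confirmations.add(operator)

    def send(self) -> str:
        # A live alert cannot fire until a distinct second operator confirms it.
        if self.live and not self.confirmations:
            raise RuntimeError("live alert requires two-person confirmation")
        return ("LIVE: " if self.live else "TEST: ") + self.message
```

With this structure, the Hawaii scenario – one operator clicking the wrong option during a shift change – fails safely: `send()` raises instead of alerting the state, until a colleague explicitly confirms.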

The employee who made the mistake has been temporarily reassigned, but he won’t be fired, Rapoza said. Really, anybody could have made the same mistake, and that’s a problem with the procedures in place, not with the human who did what humans do: make mistakes.

Rapoza is right, of course, if a little late to the party. It isn’t news that poor design is a security and safety issue, and the basic elements of good graphical user interface design have been understood for decades.

As interface design guru Don Norman wrote:

Bad design and procedures lead to breakdowns where, eventually the last link is a person who gets blamed, and punished.

… Does human error cause accidents? Yes, but we need to know what caused the error: in the majority of instances human error is the result of inappropriate design of equipment or procedures.

Article source:

Man charged over fatal “Call of Duty” SWATting


Tyler Barriss, the 25-year-old Los Angeles man who was arrested last month for his involvement in a SWATting incident, has now been charged.

He was charged with involuntary manslaughter for placing the SWATting call that resulted in the fatal police shooting of 28-year-old Andrew Finch in Wichita, Kansas on 28 December.

SWATting, which takes its name from elite law enforcement units called SWAT (Special Weapons and Tactics) teams, is the practice of making a false report to emergency services about shootings, bomb threats, hostage taking, or other alleged violent crime in the hopes that law enforcement will respond to a targeted address with deadly force.

In a police briefing the day after the fatal shooting, Wichita Deputy Police Chief Troy Livingston said that the Wichita SWATting has been a “nightmare” for everyone involved: police, the community and Finch’s family.

After his arrest, Barriss didn’t admit to placing the call that led to Finch’s death. He did, however, express remorse in an interview from Sedgwick County jail that he gave to a local TV station.

From the recording:

As far as serving any amount of time. I’ll just take responsibility and serve whatever time, or whatever it is that they throw at me… I’m willing to do it. That’s just how I feel about it.

Barriss said that whatever punishment results from his role in the death of Andrew Finch, it doesn’t matter: it won’t change what happened.

Whether you hang me from a tree, or you give me 5, 10, 15 years… I don’t think it will ever justify what happened.

In the emergency call recording, a man said he’d shot his father in the head. The caller also said he was holding his mother and a sibling at gunpoint in a closet. He said he’d poured gasoline all over the house and that he was thinking of lighting the house on fire.

Police surrounded Finch’s Wichita home, prepared to deal with a hostage situation. When Finch answered the door, he followed police instructions to put up his hands and move slowly. But at some point, authorities said, Finch appeared to be moving his hand toward his waistband as if he was going to pull out a gun.

A single shot killed Finch. He was dead by the time he reached the hospital. Police said the innocent man was unarmed.

Barriss allegedly made the threatening call after a Call of Duty game in which two teammates were disputing a $1.50 wager. Apparently, one had accidentally “killed” a teammate in the first-person shooter.

One of the players sent incorrect details of a nearby address to a known swatter, who was reportedly responsible for evacuations over a bomb hoax call at the Call of Duty World League Dallas Open last month.

After his arrest, Barriss said he felt “a little” remorse.

Of course, you know, I feel a little of remorse for what happened. I never intended for anyone to get shot and killed. I don’t think during any attempted swatting anyone’s intentions are for someone to get shot and killed. I guess they’re just going for that shock factor whatever it is, for whatever reason someone’s attempting swat, or whatever you want to call it.

As for why he would do such a thing in the first place, he wasn’t insightful:

There is no inspiration. I don’t get bored and just sit around and decide I’m going to make a SWAT call.

Barriss said that he often gets paid to make the fake emergency calls, but he wouldn’t say whether he was paid to make the SWATting call in Wichita that led to Finch’s death.

A Twitter account called @SWAuTistic took credit for the SWATting but then turned around and denied responsibility. SWAuTistic tweeted about it on the Thursday night following the shooting; the account was suspended soon after.

According to security reporter Brian Krebs, SWAuTistic claimed credit for placing dozens of these calls, calling in bogus hostage situations and bomb threats at roughly 100 schools and at least 10 residences. He also claimed responsibility for bomb threats against a high school in Florida and the bomb threat that interrupted the FCC net neutrality vote in November.

Krebs’ report is well worth the read. One pearl of information: it appears that Kansas investigators were led to Barriss, the man who’s allegedly behind the @SWAuTistic account, by Eric “Cosmo the God” Taylor.

Remember him? He pleaded guilty to being part of the group that SWATted Krebs in 2013.

From Krebs:

Taylor is now trying to turn his life around, and is in the process of starting his own cybersecurity consultancy. In a posting on Twitter at 6:21 p.m. ET Dec. 29, Taylor personally offered a reward of $7,777 in Bitcoin for information about the real-life identity of SWAuTistic.

In short order, several people who claimed to have known SWAuTistic responded by coming forward publicly and privately with Barriss’s name and approximate location, sharing copies of private messages and even selfies that were allegedly shared with them at one point by Barriss.

If Barriss does indeed turn out to be SWAuTistic, anticipate more arrests as investigators work to find others involved in this tragedy.


FBI expert calls Apple ‘jerks’ as encryption tension simmers


Apple has been called many things in its time but never, as far as anyone can remember, “jerks” by an FBI employee speaking at a public conference.

The man who made these remarks – senior FBI forensic expert Stephen R. Flatley – reportedly followed this up by describing the company as “pretty good at evil genius stuff.”

We don’t have the full context of these remarks – was Flatley perhaps being humorous? – but the seriousness of the conflict that prompted the barbs is not in doubt.

It began on the day in September 2014 when Apple launched iOS 8, after which the company said it could no longer access data on an encrypted iOS device – even if asked to by a government agency handing it a warrant.

The technical backdoor that had always been there as a last resort for investigators was sealed. As the company explained at the time:

Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.

As far as the FBI was concerned, shutting out investigators was an obstructive decision by Apple, while from Apple’s point of view, it had no choice. It was following the logic of encryption, which is that a security design in which a backdoor exists will end up being equivalent to no security at all.

Flatley also complained that Apple keeps ratcheting up iOS security, recently raising the passcode hashing iteration count from 10,000 to 10 million. This meant:

Password attempts speed went from 45 passwords a second to one every 18 seconds. […] At what point is it just trying to one up things and at what point is it to thwart law enforcement?
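The arithmetic behind that quote checks out: the cost of each guess grows linearly with the key-derivation iteration count, so multiplying iterations by 1,000 multiplies brute-force time by roughly 1,000 (45 guesses a second at 10,000 iterations implies about one guess every 22 seconds at 10 million, close to the 18 seconds Flatley cites). A quick sketch with Python’s standard-library PBKDF2 – with illustrative iteration counts, and not a model of Apple’s actual key-derivation scheme – shows the effect:

```python
import hashlib
import os
import time


def seconds_per_guess(iterations: int, trials: int = 3) -> float:
    """Average wall-clock time to test one passcode guess at a given
    PBKDF2 iteration count."""
    salt = os.urandom(16)
    start = time.perf_counter()
    for _ in range(trials):
        hashlib.pbkdf2_hmac("sha256", b"0000", salt, iterations)
    return (time.perf_counter() - start) / trials


# Cost per guess scales linearly with the iteration count, so raising
# iterations 100x slows a brute-force attack by roughly 100x.
fast = seconds_per_guess(1_000)
slow = seconds_per_guess(100_000)
```

This is exactly the trade-off a defender wants: a legitimate user pays the per-guess cost once per unlock, while an attacker pays it for every candidate passcode.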

Not coincidentally, Flatley’s boss and FBI director Christopher Wray used the same event last week to argue that encryption backdoors would not compromise wider security, a viewpoint that many in the security industry have vigorously disagreed with for years.

According to Wray, encryption prevented the FBI from accessing 7,775 mobile devices in 2017, though he did not say how many of those were Apple’s.

It’s the type of statistic that will probably be bandied around more often. Clearly, the FBI thinks – probably correctly – that it plays well with public opinion to keep repeating the argument that unbreakable encryption stymies serious crime investigations.

But it could also be that the simmering conflict between the world’s largest technology company and America’s biggest law enforcement bureau over encryption is becoming increasingly redundant.

Encryption is spreading and improving, regardless of what Apple does. Apple knows this, just as all technology firms do, and wants to be on the right side of technological history should interest in privacy spread, as some believe it will.

Internet users might then face the dilemma of life in two competing universes – one in which they are watched by governments and hemmed in by controls and the other in which private companies offer them a patchwork of partial freedoms out of economic self-interest.


UK’s Just Eat faces probe after woman tweets chat-up texts from ‘delivery guy’


A customer of takeaway delivery firm Just Eat has alleged a driver from an eatery used her phone number to ask her for a date.

Michelle Midwinter claimed that, after using Just Eat to order a takeaway, she had received an uninvited WhatsApp message from someone she didn’t know.

According to screenshots shared on Twitter, the person first said he was “a fan” and then identified himself as the driver who had just delivered her meal.

He went on to ask if she enjoyed her meal, and then followed up minutes later with a message saying: “If you have a [boyfriend] tell me, I don’t want to make any problems”.

About 20 minutes later, she alleged, the driver upped the creepy levels by reportedly saying: “Good night [baby] see you next time when I get your meal.”

The use of Midwinter’s phone number for anything other than an update on the whereabouts of her food could be a breach of privacy laws – and the Information Commissioner’s Office said it would be investigating.

“If a customer’s phone number is used for reasons for which it was not originally taken, it could be a breach of the Data Protection Act,” a spokesperson said.

“Organisations have a legal duty to make sure personal data is only used for the purposes for which it was obtained. We are aware of reports of an incident involving Just Eat and will be looking into it.”

Although the driver is not an employee of Just Eat – he would have been hired by the restaurant, as the firm offers customers a single site to place orders – the biz still has a responsibility to protect its customers’ data.

The restaurant involved has not been named, but Just Eat said in a statement that it was “deeply concerned” about the incident and would investigate.

The firm said that it “takes the safeguarding of customer data extremely seriously” and that information is shared with restaurants “solely for the purpose of facilitating delivery”.

The driver, it added, “has acted in a way that does not represent Just Eat and our core values”.

However, when Midwinter initially complained to Just Eat she reported getting a very different response, which she said was “extremely disappointing”.

Screenshots shared of the Live Chat with a customer advisor named Trixie show that she was told the “best thing to do is give the restaurant feedback by leaving a review on Just Eat”.

The advisor then added: “We know this won’t fix a bad meal but it will hopefully improve things in the future,” and offered a £5 voucher for the “inconvenience”.

When Midwinter pointed out that having her delivery driver use her phone number to make unsolicited contact wasn’t simply an “inconvenience”, the advisor – apparently still failing to get the real issue – upped this to £10.

Just Eat’s statement today said it was “appalled” by the approach taken initially.

“This lacked empathy and does not reflect our policies or the way Just Eat would expect something like this to be dealt with,” the spokesperson said.

“We are looking at our procedures to understand why incorrect and inappropriate information was given out to the customer on this occasion. We have highlighted this with our Customer Care Senior Management team, who will review the incident, and ensure appropriate action is taken to ensure this doesn’t happen again.”

Since Midwinter tweeted about the incident, she has reported being contacted by a number of other women who have had similar or worse incidents.

She said: “This is no longer about my personal experience, this is about every single female who has been victimised in this way by someone from a company we put our trust in.” ®



Android snoopware Skygofree can pilfer WhatsApp messages


Mobile malware strain Skygofree may be the most advanced Android-infecting nasty ever seen, antivirus-flinger Kaspersky Lab has warned.

Active since 2014, Skygofree, named after one of the domains used in the campaign, is spread through web pages mimicking leading mobile network operators and geared towards cyber-surveillance.

Skygofree includes a number of advanced features not seen in the wild before, including:

  • Location-based sound recording through the microphone of an infected device – recording starts when the device enters a specified location
  • Abuse of Accessibility Services to steal WhatsApp messages
  • Ability to connect an infected device to Wi-Fi networks controlled by the attackers

All the victims of the ongoing campaign detected so far have been located in Italy, leading Kaspersky to theorise that the developers are themselves Italian.

Kaspersky’s researchers reckon the group may have filled the vacuum created by the demise of HackingTeam, following a 2015 breach in which the source code of commercial law enforcement surveillance/spyware tools that the firm developed was leaked, among other embarrassing secrets such as corporate emails.

Skygofree mobile malware evolution [source: Kaspersky Lab]

Skygofree is a strain of multi-stage spyware that gives attackers full remote control of an infected device. It has undergone continuous development since the first version was created at the end of 2014, Kaspersky Lab said.

“The implant carries multiple exploits for root access and is also capable of taking pictures and videos, seizing call records, SMS, geolocation, calendar events and business-related information stored in the device’s memory,” the firm added.

The malware is even programmed to add itself to the list of “protected apps” so that it is not switched off automatically when the screen is off, circumventing a battery-saving feature that might otherwise limit its effectiveness.

The attackers also appear to have an interest in Windows users. Researchers found a number of recently developed modules targeting Microsoft’s OS.

“High-end mobile malware is very difficult to identify and block and the developers behind Skygofree have clearly used this to their advantage: creating and evolving an implant that can spy extensively on targets without arousing suspicion,” said Alexey Firsh, Malware Analyst, Targeted Attacks Research, Kaspersky Lab.

“Given the artefacts we discovered in the malware code and our analysis of the infrastructure, we have a high level of confidence that the developer behind the Skygofree implants is an Italian IT company that offers surveillance solutions, rather like HackingTeam.”

More information, including a list of Skygofree’s commands, indicators of compromise, domain addresses and the device models targeted by the implant’s exploit modules, can be found in Kaspersky Lab’s blog post.


Kaspersky Lab moved to clarify that Skygofree has no connection to Sky, Sky Go or any other subsidiary of Sky, and does not affect the Sky Go service or app.



Top 3 Pitfalls of Securing the Decentralized Enterprise



Doubling down on outdated security practices while the number of users leveraging your enterprise network grows is a race to the bottom for businesses moving to distributed workflows.

The modern enterprise doesn’t live within four walls. It’s distributed, with companies leveraging digital communications to connect their brightest minds, and give teams the flexibility they need to successfully execute their most pressing tasks. But for all the benefits that decentralization promises, it also begins to blur the network perimeter, which forces security teams to think more critically and creatively about their defenses. When networks become distributed, there are numerous pitfalls that await them.

Pitfall 1: Devices and Users
The proliferation of mobile devices has put fully functional computers in the palms and pockets of virtually every modern worker. Whether part of a bring-your-own-device initiative or issued directly by the company, these devices are essential work tools that employees use to access business-critical data, even when they aren’t plugged in at corporate headquarters.

The downside is that employees now connect to information systems and enterprise data from outside the safety of the corporate network, so it’s critical to keep tabs on where that traffic originates and whether the device or user has permission to access enterprise data. Administrators need to keep directories current to dictate permissions and proxy settings, while also doing all they can to monitor for traffic origins that could indicate illegitimate or malicious activity. By maintaining an up-to-date registry of users, their devices and the permissions associated with each individual’s rank and role, teams can more easily spot anomalous traffic patterns that indicate data theft.
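The registry idea can be illustrated with a toy lookup keyed by (user, device) pairs. The names and data below are invented for the example, and a real deployment would sit on a directory service such as Active Directory or LDAP rather than an in-memory table:

```python
# Hypothetical registry: (user, device) -> resources that pair may access.
REGISTRY = {
    ("alice", "laptop-042"): {"crm", "wiki"},
    ("bob", "phone-117"): {"wiki"},
}


def audit(user: str, device: str, resource: str) -> str:
    """Gate access on the registry and flag traffic from unregistered origins."""
    grants = REGISTRY.get((user, device))
    if grants is None:
        return "flag: unregistered origin"   # candidate for anomaly review
    return "allow" if resource in grants else "deny"
```

The useful property is the three-way outcome: a registered pair is allowed or denied by its grants, while a request from any origin not in the registry is flagged for review rather than silently served or silently dropped.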

Pitfall 2: More devices breed more applications – and threats
Part and parcel of the proliferation of mobile devices in the workplace is a boom in new applications and software – both for business and for pleasure – that employees are hungry to download. The problem here is twofold: for starters, non-essential applications can be a drain on bandwidth, so administrators need the ability to prioritize network capacity toward business-critical activity to avoid latency.

Further to that, downloading any content onto the network from an outside source – whether a smartphone game or a Word document – can open the floodgates to potential threats hiding in plain sight. Trojans – malware hidden within seemingly innocuous file types – can be unleashed on a corporate network via a personal email attachment, initiating a wealth of attacks – from DDoS to command-and-control callbacks – aimed at stealing data and disrupting network performance.

Pitfall 3: Bulky defenses only complicate security
Even security teams that are already meeting these challenges may not be taking the easiest or most effective route to securing decentralized networks. For instance, many teams will layer on security solutions by purchasing additional on-premises security appliances as bandwidth needs grow. While this approach will provide the additional security capacity needed to protect traffic, each piece of hardware will require dedicated security management, and put extra demands on IT to create costly and complicated backhaul networks.  

A better solution is for organizations to simplify control and network pathways, giving the business as much visibility as possible into the activity taking place on its network. Rather than installing hardware cumulatively – adding consoles and vantage points for teams to monitor – organizations should strive to have all network activity presented in a single pane of glass.

The decentralized organization isn’t a passing fad, but as costs pile up, a business that doesn’t evolve its security strategy to enable it might be. Doubling down on outdated security practices while the number of users leveraging enterprise networks grows is an easy race to the bottom for organizations moving to distributed workflows.

Paul Martini is the CEO, co-founder and chief architect of iboss, where he pioneered the award-winning iboss Distributed Gateway Platform, a web gateway as a service. Paul has been recognized for his leadership and innovation, receiving the Ernst & Young Entrepreneur of The …


Mental Models & Security: Thinking Like a Hacker

These seven approaches can change the way you tackle problems.

In the world of information security, people are often told to “think like a hacker.” The problem is, if you think of a hacker within a very narrow definition (e.g., someone who only breaks Web applications), it leads to a counterproductive way of thinking and conducting business.

A little knowledge is a dangerous thing, not least because isolated facts don’t stand on their own very well. As legendary investor Charlie Munger once said:

Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form.

You’ve got to have models in your head. And you’ve got to array your experience both vicarious and direct on this latticework of models. …

[You’ve] got to have multiple models because if you just have one or two that you’re using, the nature of human psychology is such that you’ll torture reality so that it fits your models, or at least you’ll think it does. …

This is worth bearing in mind for security pros.

When we look at the thought process of a (competent) security professional, it encompasses many mental models. These don’t relate exclusively to hacking or wider technology, but instead cover principles that have broader applications.

Let’s look at some general mental models and their security applications.

1. Inversion
Difficult problems are best solved when they are worked backward. Researchers are great at inverting systems and technologies to illustrate what the system architect should have avoided. In other words, it’s not enough to think about all the things that can be done to secure a system; you should think about all the things that would leave a system insecure.

From a defensive point of view, it means not just thinking about how to achieve success, but also how failure would be managed.

2. Confirmation Bias
What people wish, they also believe. We see confirmation bias deeply rooted in applications, systems, and even entire businesses. It’s why two auditors can assess the same system and arrive at vastly different conclusions regarding its adequacy.

Confirmation bias is extremely dangerous from a defender’s perspective, and it clouds judgment. This is something hackers take advantage of all the time. People often fall for phishing emails because they believe they are too clever to fall for one. Reality sets in after it’s too late.

3. Circle of Competence
Most people have a thing that they’re really good at. But if you test them in something outside of this area, you may find that they’re not well-rounded. Worse, they may even be ignorant of their own ignorance.

When we examine security as a discipline, we realize it’s not a monolithic thing. It consists of countless areas of competence. A social engineer, for example, has a specific skill set that differs from a researcher with expertise in remotely gaining access to SCADA systems.

The number of tools in a tool belt isn’t important. What’s far more important is knowing the boundaries of one’s circle of competence.

Managers building security teams should evaluate the individuals in the team and build the department’s circle of competence. This can also help identify gaps that must be filled.

4. Occam’s Razor
Occam’s razor can be summarized like this: “Among competing hypotheses, the one with the fewest assumptions should be selected.”

It’s a principle of simplicity that’s relevant to security on many levels. Often hackers will use simple, tried-and-tested methods to compromise a company’s systems: the infected USB drive in the parking lot or the perfectly crafted spearphishing email that purports to be from the finance department.

While there are also complex and advanced attack avenues, these are not likely to be used against most companies. By using Occam’s razor, attackers can often compromise targets faster and cheaper. The same principles can and should be applied when securing organizations.

5. Second-Order Thinking
Second-order thinking means to consider that effects have effects. This forces you to think long-term when considering what action to take. The question to ask is, “If I do X, what will happen after that?”

It’s easy in the security world to give first-order advice. For example, keeping up to date with security patches is good advice. But without second-order thinking, this can lead to poor decisions with unforeseen consequences. It’s vital that security professionals consider all implications before executing. For example, “What impact will there be on downstream systems if we upgrade the OS on machine X?”

6. Thought Experiments
A technique popularized by Albert Einstein, the thought experiment is a way to logically carry out a test in one’s own head that would be difficult or impossible to perform in real life. In security, this is usually used during “tabletop” exercises or when risk modeling. It can be extremely effective when used in conjunction with other mental models.

The purpose isn’t necessarily to reach a definitive conclusion but to encourage challenging thoughts and to push people outside of their comfort zones.

7. Probabilistic Thinking (Bayesian Updating)
The world is dominated by probabilistic outcomes, as distinguished from deterministic ones. Although we cannot predict the future with great certainty, we often subconsciously make decisions based on probabilities. For example, when crossing the road, we believe there’s a low risk of being hit by a car. The risk exists, but if you’ve looked for traffic, you are confident that you can cross.

The Bayesian method says that one should consider all prior relevant probabilities and then incrementally update them as newer information arrives. This method is especially productive given the fundamentally nondeterministic world we experience: we must use both prior odds and new information to arrive at our best decisions.
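That update rule is just Bayes’ theorem applied repeatedly, with each posterior becoming the next prior. A minimal sketch, using invented alert-triage numbers purely for illustration:

```python
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) from a prior P(H) and the likelihood of the
    evidence under H and under not-H (Bayes' theorem)."""
    joint = p_e_given_h * prior
    return joint / (joint + p_e_given_not_h * (1.0 - prior))


# Invented numbers: 1% of alerts are real intrusions; a real intrusion
# fires this IDS signature 90% of the time, benign traffic 5% of the time.
p = update(prior=0.01, p_e_given_h=0.90, p_e_given_not_h=0.05)  # ≈ 0.15
# A second, independent piece of evidence: the old posterior is the new prior.
p = update(prior=p, p_e_given_h=0.80, p_e_given_not_h=0.10)
```

The punchline for defenders: even a signature that catches 90% of intrusions and fires on only 5% of benign traffic leaves an alert that is still far more likely benign than malicious when true intrusions are rare – which is why single indicators should update confidence, not settle it.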

While there may not be a simple answer to what it means to “think like a hacker,” the use of mental models to build frameworks of thought can help avoid the pitfalls associated with approaching every problem from the same angle.

I’ve listed seven mental models here, some of which you may already be familiar with and others you could try. Please share any of your favorite security and hacker mental models and problem-solving techniques in the comments.


Javvad Malik is a London-based IT Security professional. Better known as an active blogger, event speaker and industry commentator who is possibly best known as one of the industry’s most prolific video bloggers with his signature fresh and light-hearted perspective on …


Four Malicious Google Chrome Extensions Affect 500K Users

ICEBRG Security Research team’s finding highlights an often-overlooked threat.

The ICEBRG Security Research team discovered four malicious Google Chrome extensions during a routine investigation of anomalous traffic. More than 500,000 users, including workstations in major businesses around the world, have been affected.

The team was analyzing an unusual spike in outbound traffic from a workstation at a European VPS provider. Upon further investigation of the traffic, researchers found four malicious extensions available in Google’s Chrome Web Store: Change HTTP Request Header, Nyoogle – Custom Logo for Google, Lite Bookmarks, and Stickies – Chrome’s Post-it Notes.

This finding highlights the threat of browser extensions, which are available in most major web browsers and are an oft-overlooked attack vector. Threat actors know employees usually trust, and have control over, downloading these extensions. Using this knowledge, attackers can execute code via seemingly legitimate applications to gain a foothold in organizations.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.


Doh!!! The 10 Most Overlooked Security Tasks

Here’s a list of gotchas that often slip past overburdened security pros.

Image Source: Shutterstock via VGStockstudio


Security pros are under siege. In just the last few weeks, researchers disclosed major vulnerabilities in basic hardware chips, dubbed Meltdown and Spectre. Hacking from nation-states continues unabated, prompting fears that it will undermine our ability to hold safe elections later this year. And even the basics can go wrong, as was shown last week when the power went out at the 2018 Consumer Electronics Show in Las Vegas.

There’s big money on the line. Ponemon estimated the average cost of a breach in 2017 at $3.62 million, but the cost to a company can be much more than financial. Damage to the brand and public perception is often hard to judge.

And then there are security holes you may not have thought about – or that seem so obvious you considered them handled years ago – like making sure the company has a backup generator on hand for its data center.

In interviews with three security experts, we developed a list of 10 gotchas that may not lock the organization down for good, but will go a long way to making sure you can sleep at night. They range from being ever more vigilant about phishing emails and DNS calls to taking more care about deleting accounts when an employee leaves. The latter can be a real headache because merely deleting a user from Active Directory doesn’t cut it any more.

To develop the list we spoke to John Pescatore, director of emerging security trends at the SANS Institute; Christos Dimitriadis, head of security for Greece’s INTRALOT Group; and Stephen Cobb, senior security researcher at ESET. 


Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.


Most Common Exploits of 2017 in Microsoft Office, Windows

The most common exploit affects Microsoft Office and has been used by attackers in North Korea, China, and Iran.

The most popular exploits in 2017 targeted Microsoft Office and Windows, report researchers at AlienVault, who say the most common flaws remain exploited for long periods of time.

Each year, the company records anonymized security events from customers and from other vendors’ threat reports recorded via its Open Threat Exchange (OTX) platform. It combines findings from the two datasets into a single picture of the year’s threat landscape.

There is a significant difference between the most common exploits cited in vendor reports on OTX and those seen by AlienVault’s customers. The dataset of 80 vendor reports indicates four of the top 10 exploits from 2017 target Microsoft Windows and three affect Office. The list also includes one vulnerability each for Adobe Flash, Microsoft .NET, and Android/Linux.

The top-ranked exploit, CVE-2017-0199, is an Office exploit that has been used by targeted attackers in North Korea, China, and Iran, as well as by criminal groups deploying Dridex. CVE-2012-0158, the third most-referenced vulnerability, affects Microsoft Windows.

AlienVault threat engineer Chris Doman reports Microsoft has “exceptionally mature” processes to prevent exploits. However, because its software is so widely used, exploits that slip through the cracks are used heavily once they are discovered.

In contrast with the vendors’ threat reports, the AlienVault customer dataset is very large and contains billions of security events. Many of the most common exploits reported are fairly old and affect Windows 2000, Miniupnp, SNMP, OpenSSL Poodle, and PHP. There is one Microsoft Office vulnerability (CVE-2011-1277) and an Apache Struts vulnerability on the list.

Doman notes the data is biased toward “noisy” network-based exploit attempts from worms and exploit scanners, which is why the company is still recording attempts against vulnerabilities from 2001 and 2002. AlienVault advises consulting the vendor-report dataset when planning defense tactics.

Other key findings include the discovery that the most effective exploits are quickly adopted by both criminal and nation-state groups, and that NjRat variants were the malware most commonly found persisting on networks. Geographically, researchers noticed an increase in attackers located in Russia and North Korea, and a “significant drop” in activity from threat actors based in China.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.
