STE WILLIAMS

Attackers Use Scripting Flaw in Internet Explorer, Forcing Microsoft Patch

Microsoft issues an emergency update to its IE browser after researchers notified the company that a scripting engine flaw is being used to compromise systems.

Microsoft issued an emergency patch Wednesday after security researchers warned that attackers were using a vulnerability in Internet Explorer’s scripting engine to compromise computers.

The vulnerability, designated CVE-2018-8653, affects Internet Explorer’s scripting engine when it executes Visual Basic scripts (VBScript) or JScript, Microsoft’s implementation of JavaScript. Attackers who convince a user to view a specially constructed Web page, HTML document, PDF file, or Office document can execute malicious programs due to a memory-corruption issue.

“Any application that supports embedding Internet Explorer or its scripting engine component may be used as an attack vector for this vulnerability,” the CERT Coordination Center at Carnegie Mellon University warned in an advisory, adding that “this vulnerability was detected in exploits in the wild.”

The out-of-cycle patch marks the third time this year that Microsoft has issued a fix for a serious flaw outside its second-Tuesday-of-the-month schedule, not counting patches for problematic updates. In January, the company patched its code to support firmware updates needed to secure computers against the Meltdown and Spectre processor flaws (CVE-2017-5753 and others). In March, Microsoft patched a kernel vulnerability (CVE-2018-1038) introduced during the earlier January update. In May, just a few days before its scheduled patch day, Microsoft issued an update for a vulnerability (CVE-2018-8115) in Windows containers.

In 2003, Microsoft moved from issuing patches as needed to a monthly patch cycle. While the move seemed, in part, a way to reduce the number of articles covering the company’s frequent software flaws, Microsoft executives argued that deploying patches on a weekly basis caused too much chaos for systems administrators. The company started its monthly patch cycle on October 14, 2003.

This month, Microsoft released its scheduled patches on Dec. 11.

In its guidance for the security issue, the company explained that its latest update modifies how Internet Explorer and the scripting engine handle objects in memory. Without the patch, attackers have been able to infect systems using specially crafted scripts.

“In a web-based attack scenario, an attacker could host a specially crafted website that is designed to exploit the vulnerability through Internet Explorer and then convince a user to view the website, for example, by sending an email,” the company stated in its advisory.

Microsoft did not provide any explanation beyond a brief statement from a spokesperson: “We addressed CVE-2018-8653 and customers who have Windows Update enabled and have applied the latest security updates, are protected automatically,” Jeff Jones, senior director at Microsoft, said.

The scripting engine used by Internet Explorer and Windows has frequently been a source of vulnerabilities and a vector for attacks and exploits. In May, Microsoft warned of a remote code execution vulnerability in the VBScript engine (CVE-2018-8174), which could be attacked through Internet Explorer or using an ActiveX control in Microsoft Office.

Attackers have often used Office documents with malicious VBScripts to infect systems and get past malware scanners.

The update also continues Microsoft’s habit of unwelcome Christmas surprises.

December has often been a month for critical patches in Microsoft products. Last year, for example, Microsoft issued a critical patch for a remote-code execution vulnerability in its Malware Protection Engine, which scans files for potentially malicious code.

Microsoft credited Clement Lecigne of Google’s Threat Analysis Group with notifying them of the vulnerability’s exploitation.

 


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/threat-intelligence/attackers-use-scripting-flaw-in-internet-explorer-forcing-microsoft-patch/d/d-id/1333533?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security 101: How Businesses and Schools Bridge the Talent Gap

Security experts share the skills companies are looking for, the skills students are learning, and how to best find talent you need.

Cybersecurity is a fast-moving field and education has a hard time keeping up. Traditional colleges and universities are often behind the curve when it comes to cybersecurity, so how are future security engineers and CISOs learning the ropes? How will companies find them? And, when they do, how can they determine who truly has the skills they’re looking for?

The demand for security talent only continues to rise. In its 2018 Cybersecurity Workforce Study, (ISC)² found the global shortage of security experts has hit 2.93 million. More than 63% of respondents report a lack of security staff; 60% say it puts them at moderate to extreme risk.

Security teams are poised to grow. In Dark Reading’s survey, “Surviving the IT Security Skills Shortage,” only 45% of the 400 IT and cybersecurity professionals polled said they have most of the people they need. Most (82%) planned to keep staffing the same or grow their teams.

Hiring talent takes time. A workforce study by ISACA’s Cybersecurity Nexus found more than 25% of organizations take at least six months to fill priority security positions, and more than 40% received fewer than five applications for security roles. Further, 33% of organizations say it’s tougher to get management approval for new security staff compared with two years ago.

When they do get approval, security leaders learn talent is incredibly hard to find. Nearly 40% of Dark Reading’s respondents say there are plenty of less experienced/trained people available but the most-skilled positions are hard to fill. Thirty-five percent say there is a shortage of IT security professionals at almost every level.

The key to solving the security skills gap lies in education: training people with the right skills and giving them the experience they need to help businesses solve their problems. But what are students learning, and what should they be learning? What skills do businesses really want?

Security Syllabus: How Students Learn

Cloud security is a hot topic in education these days, says Tony Cole, CTO at Attivo Networks. (ISC)², Cybrary, and many other education platforms want students to better understand the world’s mass migration to cloud computing and the security implications it will bring.

Incident response is another common topic in security education, as is penetration testing. An area Cole says he expected to grow more is cloud analytics, which isn’t the topic of many courses. As companies look at their cloud security controls, processes, and policies, they’ll need more people with those skills. “That’s a huge component of moving to the cloud,” Cole explains.

Like IT, programming, and other areas of tech, security is a skill best learned in practice. Nearly half of respondents in (ISC)²’s study say relevant security work experience is the most important qualification for employment, followed by knowledge of advanced security concepts (47%).

Security architecture is another important area, Cole says, and more university programs are beginning to offer it. The problem is students have little to no operational experience. “There’s going to be a significant shortage for a while until we incorporate recent grads into organizations and provide operational experience for them.” One tactic could be offering internship experiences to undergraduates so they enter the workforce with real-world skills.

Cole points to a need for cybersecurity education in junior colleges and vocational programs. “We need to start at a lower level if we’re going to get people interested in this,” he adds.

When it comes to building their security skillsets, many students take courses at universities or colleges; some rely on conferences or online classes. Others learn skills via bug hunting. Businesses are now also getting into the trend of offering education to their employees.

“Most organizations you see today, and most I’ve been at, are trying to cut costs by going to online curricula,” says Cole. “It’s on demand, [employees] can pull it out any time.”

Some institutions aim to offer real-life experience through competition. New York University’s Tandon School of Engineering, for example, annually hosts a student-run cybersecurity competition dubbed CSAW. This year’s event, the competition’s 15th, saw 3,500 teams from more than 100 countries complete challenges designed by New York City’s top ethical hackers.

“You cannot really teach about security by lecturing in a classroom,” says Nasir Memon, professor in the department of computer science and engineering at NYU Tandon. “You have to understand how attackers work.” High school and college students can test their hacking and defensive skills, compete against red teams or blue teams in an embedded security challenge, or show off their knowledge of security policy, applied research, and forensic analysis.

“It’s a nice way to attract students to this discipline,” Memon says. “Fifteen years back, security was not in people’s minds.” Students who compete often go on to pursue cybersecurity careers; those who don’t often have a strong security foundation in software engineering or other roles.

Staffing Shortage: What Businesses Need

“There’s a pretty good overlap,” says Cole of the skillsets students are learning and those businesses want. Still, many may not have a clear idea. About one-third of (ISC)²’s respondents say organizations’ lack of knowledge around security skills is a challenge to career progression.

When asked about the skills most critical to their organization’s security posture, 58% said security awareness; the same percentage said risk assessment, analysis, and management. More than half (53%) said security administration, followed by network monitoring (52%), intrusion detection (51%), cloud computing security (51%), and security engineering (51%).

However, Cole points out, a challenge for businesses is that soft skills are often not covered in security training – and they are becoming increasingly necessary as security teams are more often required to communicate with the CEO, board members, and technical teams. He suggests soft skills be built into security courses rather than offered as standalone training.

Dark Reading’s survey found technical professionals who have “people skills” and are good communicators are rare; 52% of respondents say they are hardest to find. “People with experience in environments/industries similar to ours” is equally difficult, they report. Experience with latest technologies (41%), required credentials (32%), and offensive research/pentesting skills (18%) rounded out the list of hard-to-find security skillsets.

Verifying Skillsets

Skills listed on a resume mean little if candidates can’t prove them. Methods for verifying security skills vary from business to business, says Cole.

Some test them online: candidates are directed to a portal where they complete skills challenges. If they pass, they move on to an in-person interview. Sometimes people are hired directly from these types of challenges without a face-to-face interaction, he explains.

“I think you’re going to see more people build skills portals where they get tested before they come in the door,” he adds, a tactic that could test for soft skills and raise red flags, if needed.

Still, some companies take the traditional route, bringing in candidates for interviews after they meet at a networking event or receive a resume via email. The applicant will meet with people in the organization and complete a skills assessment after their visit.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/careers-and-people/security-101-how-businesses-and-schools-bridge-the-talent-gap/d/d-id/1333540?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

3 Reasons to Train Security Pros to Code

UnitedHealth Group chief security strategist explains the benefits the organization reaped when it made basic coding training a requirement for security staff.

As security leaders strategize for a strong 2019, one of the initiatives that crops up on a lot of CISO to-do lists is getting cybersecurity better embedded into DevOps teams. When done right, DevSecOps can help security teams reduce risk faster while at the same time rejecting the role of “the office of ‘no.'”

Many times when security people talk about appsec, they lament the fact that developers don’t know enough about security. Part of the DevSecOps ethos is getting out of that finger-pointing, tribal mentality and instead coming together as a cohesive team no matter the role — whether developer, operations staff, QA, or security. 

“Empathy is a two-way road,” says Aaron Rinehart, chief enterprise security architect for UnitedHealth Group. “Meaning, I always hear security people talking about how development people need to learn security. Well, you know what? How about we teach software engineering to security?” 

This thought was the seed of a program that Rinehart ushered forward at UnitedHealth Group almost two years ago, requiring all security people to take some basic programming classes.

“We’re not trying to create coders,” he explains. “What we’re trying to do is sort of build common understanding, common problem, common empathy.” 

The program is low-budget, built on free online courses. For those who had never written code in their lives, Rinehart’s team pointed them to a beginner-level course to get an understanding of syntax and the like. From there, everyone was pointed to AutomateTheBoringStuff.com for classes on Python.

According to him, the program has reaped some major benefits. Here are the three biggest. 

1. Changing Mindsets 

“I never thought this one little thing would inspire so much change,” Rinehart says. “I would say in the end we didn’t get a whole bunch of new coders, but what we did was we changed people’s thinking.” 

So, even something as simple as developing patch management policies shifted as the people setting these policies recognized that patching complicated internally developed systems isn’t the simple button-press process that updating a Microsoft application is. It takes development work, testing, and so on. 

“So that team went forth and wrote their own software security policy that was really in tune with the developers,” he says.

 

2. Fostering Skills to Build Their Own Automation

While not everyone turned into coders, the courses did inspire at least some teams to code their own security tooling for different kinds of security automation. 

“Now you have people that can go from idea to product on their own, right? You have this new sort of innovation engine; it’s now possible,” Rinehart says. “Now not every person is going to do that, right? But you’ve created the ability.” 

For example, in one case an experienced network security pro took what they learned and built out a firewall automation API for the firm. 

“We already had standards and rule sets, and being a CCIE the person had depth and context of what to write for this API,” he says. “It was amazing.”
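The article doesn’t describe how that firewall automation API actually works, but an API built on pre-agreed standards and rule sets typically validates each proposed rule before touching the firewall. The sketch below is hypothetical: the approved port list and banned source ranges are illustrative stand-ins, not UnitedHealth Group’s actual standards.

```python
from ipaddress import ip_network

# Hypothetical standards; a real rule set would come from the
# organization's documented firewall policy.
APPROVED_SERVICE_PORTS = {443, 8443}
FORBIDDEN_SOURCES = [ip_network("0.0.0.0/0")]  # no any-source rules

def validate_rule(source_cidr: str, dest_port: int) -> list:
    """Return policy violations for a proposed firewall rule.
    An empty list means the rule may be submitted to the firewall."""
    violations = []
    if ip_network(source_cidr) in FORBIDDEN_SOURCES:
        violations.append("source must not be the any-address range")
    if dest_port not in APPROVED_SERVICE_PORTS:
        violations.append(f"port {dest_port} is not an approved service port")
    return violations

print(validate_rule("10.20.0.0/16", 443))  # a compliant rule
print(validate_rule("0.0.0.0/0", 23))      # violates both checks
```

Gating every proposed rule through a check like this is what lets a team go “from idea to product” without bypassing the standards the firewall team already agreed on.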

In another case, incident response team members started building out their own apps.

“They were one of the first teams who started taking on development practices, and they reached this mandate where they were not going to use anything that a developer at UnitedHealth Group could not use,” he says. “Because sometimes security people think they’re above the law, but they’re not.”

3. Moving to Security as Code

In addition to helping security people build their own automation, the coding skills also allowed the team to start building out security policy as code. 

“So there are products now where we are taking our policies and standards and we’re actually creating code that reflects those requirements,” Rinehart says, explaining this is much preferable to having people try to translate these technical or industry regulatory requirements into written documents that developers then have to read, parse and then figure out how to code accordingly. “We’re contributing tangible pieces back into the product now, versus trying to explain policies that even we don’t really understand to somebody who has to do something with it.” 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/application-security/3-reasons-to-train-security-pros-to-code/d/d-id/1333541?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Drones shut down major international airport

About 11,000 passengers are crammed into Gatwick Airport, their flights grounded since last night as a drone operator repeatedly flew two unmanned aerial vehicles (UAVs) close to the runway.

Flights can’t take off or land until it’s safe to do so, and that can’t happen until police find the operator.

Gatwick, a major international airport, is the UK’s second busiest.

The BBC reports that 110,000 passengers on 760 flights were due to arrive or depart today, with 2.9 million passengers due to pass through over the Christmas/New Year stretch.

Good luck with that. Travelers have been stuck on planes for hours, sleeping stretched out in seats or anywhere they could find as they waited for the all-clear, but every time airport authorities thought it might be safe, the drone buzzing would start again.

Gatwick, scrambling to provide all the food and water needed by the hordes of stranded people, has brought on extra staff to help out. Some people who were heading for sunny, warm holidays told a BBC Live reporter that they’ve been “left out in the cold”:

Megan Rayner: We are now in a hotel in Heathrow after being sat outside in the cold waiting for a coach transfer at 13:30 in the freezing cold with no coats (we are going on holiday to the Maldives) with my family – 16 of us in total including two babies.

Police are searching for whoever’s operating the holiday-ruining drones. They don’t think it’s terror-related; rather, they’re considering it a “deliberate act” of disruption by somebody using “industrial specification” drones.

Transport Secretary Chris Grayling described the Gatwick situation as a “very serious ongoing incident in which substantial drones have been used to bring about the temporary closure of a major international airport”. He called for the stiffest possible punishment to be doled out to whoever’s responsible:

The people who were involved should face the maximum possible custodial sentence for the damage they have done.

UK Prime Minister, Theresa May, has said that the perpetrators will be caught and will, in fact, face a prison sentence.

People are baffled, asking 1) why a few drones are such a big deal, and 2) why police can’t just shoot them down.

The answers: drones getting anywhere near aircraft are a big deal because they can be sucked into engines and bring the aircraft down. They’re similar to birds: flocks of birds have been blamed for bringing down scores of small planes and for causing at least two major US disasters.

In the UK, a helicopter crash outside Leicester City’s stadium left five people dead. The cause of that deadly crash hasn’t yet been determined, but aviation experts have suggested that the helicopter’s loss of power to the tail rotor could have been caused by a large bird or a large drone.

As far as shooting them down goes, police can’t, because of 1) the danger of stray bullets harming people, and 2) the danger of somebody getting hit by a disabled drone crashing to the ground.

Airline sources told the BBC that the disruption could last several days and that as of Thursday, flights had been cancelled until at least 19:00 GMT. Airlines are saying that the disruption will stretch into Friday.

The BBC is providing live updates here.

On Thursday morning, officials from the Department for Transport, Home Office, the police and the Civil Contingencies Secretariat were among a cross-government contingent involved in a meeting about the crisis held at the Prime Minister’s office.

Meanwhile, the Army is reportedly deploying “specialist equipment” to handle the drones.

What that is, we can’t say, though we’ve seen all manner of solutions suggested through the years, be it sound (technically, resonant frequencies used in acoustic weapons), birds of prey, nets shot out of bigger drones, or jamming a drone’s radio to force it to auto-land.

Above all else, whatever technique is used has to avoid having the drone turn into a juggernaut as it loses control, and possibly plummet toward people on the ground.

Our thoughts are with those stranded at Gatwick; we hope you make it to your destinations safely and that holiday joy will eventually be yours.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3kMYOg0m0g8/

France next up behind Britain, Netherlands to pummel Uber with €400k fine over 2016 breach

Uber has been slapped with a €400,000 fine by the French data protection agency for the hack that exposed the data of 57 million users.

The hack happened in 2016, but the firm hushed it up for more than 12 months, even paying the hackers $100,000 to keep quiet about the incident.

The attackers stole login credentials for Uber’s AWS S3 data stores from the firm’s GitHub code repo and made off with customers’ and drivers’ email addresses, names, cities, and phone numbers.

Of the 57 million people affected, some 1.4 million were in France, with most of these (1.2 million) being customers.

The French data watchdog CNIL said that the attack wouldn’t have succeeded if the firm had put “basic security measures” in place.

Uber has said that multi-factor authentication isn’t mandatory on GitHub, but in a statement on the decision, the CNIL said the firm should have used strong authentication measures for access.

Moreover, Uber shouldn’t have stored login IDs unencrypted on GitHub, the agency said, and it should have set up a system based on IP addresses to protect access to the S3 bucket.

“When employees are made to connect remotely to the servers used by a company, securing this connection is a basic precaution to preserve the confidentiality of data processed,” the (translated) decision said.

“This security can, for example, be based at least on setting up an IP address filtering measure so that only requests originating from identified IP addresses can be executed, which makes it possible to avoid any illegal connection, by securing data exchange and authenticating users.”

Even if establishing such a system involved a long development process, the CNIL said, this was a necessary effort that should have been planned from the outset, given the very large number of people whose data was kept on the servers.
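The IP-based filtering the CNIL describes maps directly onto an S3 bucket policy that denies any request originating outside approved ranges. The sketch below builds such a policy as plain JSON using AWS’s documented `NotIpAddress`/`aws:SourceIp` condition; the bucket name and CIDR are hypothetical.

```python
import json

def build_ip_restricted_policy(bucket: str, allowed_cidrs: list) -> dict:
    """Deny all S3 actions on the bucket unless the request comes
    from one of the allowed IP ranges."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyRequestsFromOutsideApprovedRanges",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"NotIpAddress": {"aws:SourceIp": allowed_cidrs}},
        }],
    }

# Hypothetical bucket name and corporate VPN range, for illustration.
policy = build_ip_restricted_policy("example-rider-data", ["203.0.113.0/24"])
print(json.dumps(policy, indent=2))
```

A policy of this shape would have blocked requests made with stolen credentials from an attacker’s own network, which is the point of the CNIL’s criticism.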

The CNIL concluded that Uber was “negligent in failing to implement some basic security measures” and that this “widespread lack of caution” was evident in the success of the hackers.

As such, it handed the French arm of the ride-hailing service a fine of €400,000.

This follows a £385,000 penalty from the UK watchdog and a €600,000 fine from the Dutch authority. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/20/uber_france_400k_euro_fine_for_2016_hack/

Automating a DevOps-Friendly Security Policy

There can be a clash of missions between security and IT Ops teams, but automation can help.

Much has been written about the need for a balance between DevOps and security architecture. While DevOps is all about getting whizbang apps to market as efficiently as possible, the security team is often portrayed as the party pooper — folks who tell you why you can’t do something instead of how you can do it.

There’s a similar clash of missions between the security team and the IT operations team. Security can be disruptive to IT Ops, with requests that can seem arbitrary and needlessly time consuming. IT Ops generally has the goal of simply keeping things running — even if this opens opportunities for bad actors.

Security has a strong case for wanting what it wants and for not wanting what it doesn’t want. Security vulnerabilities are rising at an alarming clip — data theft, leakage of intellectual property, corporate sabotage, denial-of-service attacks, and more. A company’s profits, reputation, brand, and even viability are at stake. But the relentless march of commerce takes no prisoners. Any company that’s hamstrung from acting with speed and agility is pretty much already dead on its feet. So, we find companies willing to accept risk that those in charge of security see as unacceptable, and implementing solutions that may be insecure.

It’s a tension that raises its head frequently for us at HyTrust. Our customers routinely face this tension when they try to implement our offerings. The security team wants to do X, but the IT Ops team does not want to do X. In one recent case, a client’s security team wanted to implement our solution, but the IT Ops team put up roadblocks every step of the way, with a list of reasons why our offering wasn’t a good idea. In this case, those on the Ops team feared the rollout would expose process flows they hadn’t been following — weaknesses that made the client vulnerable.

An Unsafe House
Security architecture work is like building a safe, compliant house. The problem is that the builders made it so hard to lock the windows, turn on the security system, and operate the surveillance cameras that the inhabitants leave these tasks undone. DevOps promises to make life in the house a comfortable and efficient experience, but without a security policy that is friendly to DevOps, this promise is never fulfilled.

In a recent “Ask Me Anything” session on Reddit, Mike Foley described this problem. Mike is a senior technical marketing architect at VMware whose main goal is to help IT admins build secure platforms that stand up to scrutiny from security teams with the least impact to IT Ops. Asked what he thought was “the coolest new security toy” in the recently released vSphere 6.7, Mike said virtual Trusted Platform Module (vTPM). But it’s the rationale he gave that’s germane to this post:

“I like the balance of security and IT operations impact. Let’s face it, if I’m an IT guy and security comes to me and says I have to do a bunch of things that I know are only going to ‘make work’ for me, then it’s no longer a technical discussion. It’s political. And everyone loses. If, as an IT guy or gal, I can meet the security requirements of security with the least impact to my day-to-day operations, then I’ll be much more open to enabling those features.”

Mike used VM encryption to illustrate what he means, then added: “It’s the same with virtual TPM. I’m not having to completely re-jig my environment to support this need.”

Automating Doing the Right Thing
No one wants to “completely re-jig” their IT environment without good cause. One reason those in charge of security don’t always make a compelling case is that many security and compliance monitoring tools haven’t kept pace with the times. They certainly weren’t built to test code at the breakneck speed DevOps requires.

The good news is that security can take a page from the DevOps playbook — in particular regarding automation. DevOps encourages automation, and as I argued in a previous post, I favor an approach to security that automates doing the right thing — that automates ethics for cybersecurity. That’s because most security breaches and system failures result from human mistakes. Automating the right thing takes a load off security architects. It spares them from having to configure security consoles manually. It reduces the chances of human error.

What is the right thing? When it comes to risk, most companies already have clear governance of what they consider it to be. From this, they derive their security policies. A security policy is whatever you decide is the correct behavior, based on experience. The next step is to bake these policies into your processes — which puts us squarely into DevSecOps territory.
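As a concrete, hypothetical illustration of baking policy into process: a written requirement such as “all data stores must have encryption at rest enabled” can become a check that runs automatically in a pipeline instead of a document someone has to interpret. The inventory format here is an invented stand-in for real infrastructure definitions.

```python
def check_encryption_policy(resources: list) -> list:
    """Return the names of data stores that violate the written policy
    'all data stores must have encryption at rest enabled'."""
    return [
        r["name"]
        for r in resources
        if r["type"] == "datastore" and not r.get("encrypted", False)
    ]

# A toy inventory; in practice this would be parsed from real
# infrastructure-as-code files or a CMDB export.
inventory = [
    {"name": "claims-db", "type": "datastore", "encrypted": True},
    {"name": "scratch-bucket", "type": "datastore"},
    {"name": "web-frontend", "type": "service"},
]
print(check_encryption_policy(inventory))  # ['scratch-bucket']
```

Because the check is code, it runs on every change at DevOps speed, with no manual console configuration and no room for a reviewer to misread the policy document.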

This is a hot topic. In a 2017 survey, Gartner found that the issue of how to securely integrate security into DevOps — delivering DevSecOps — was one of the fastest-growing areas of interest of clients in the preceding 12 months. A key finding was that information security must adapt its security testing tools and processes to developers, not the other way around. As more companies embrace DevOps principles to help developers and operations teams work together to improve software development and maintenance, they’re increasingly automating security in this way. In fact, the Gartner report predicts that DevSecOps will be embedded into 80% of rapid development teams by 2021.

Automating doing the right thing is key to a DevOps-friendly security policy — a policy that’s also friendly to IT Ops. Going back to the “Ask Me Anything” session on Reddit, one poster joked that Mike’s focus on minimizing impact to IT Ops was “the lazy man’s approach.” Mike’s terrific comeback: “One man’s lazy is another man’s ‘I have only so much time in the day.’ :)”


John De Santis has operated at the bleeding edge of innovation and business transformation for over 30 years — with international and US-based experience at venture-backed technology start-ups as well as large global public companies. Today, he leads HyTrust, whose …

Article source: https://www.darkreading.com/application-security/automating-a-devops-friendly-security-policy/a/d-id/1333500?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

2018 In the Rearview Mirror

Among this year’s biggest news stories: epic hardware vulnerabilities, a more lethal form of DDoS attack, Olympic ‘false flags,’ hijacked home routers, fileless malware – and a new world’s record for data breaches.

It was a year that shook IT security experts and users out of their post-holiday cheer as soon as they got back to their desks after the new year began, with the disclosure of a new and widespread class of hardware attack that affected most computers worldwide. 

In addition, the long tail of the now-infamous Spectre and Meltdown vulns continued to haunt the security industry all year, with more findings exposing security flaws in hardware and related side-channel attack scenarios. Mass updates to operating systems, browsers, and firmware ensued – often with performance trade-offs. 

A researcher at Black Hat USA this summer also added a new spin to hardware hacking when he demonstrated how he cracked CPU security controls to gain kernel-level control, aka “God mode.” 

What else? Deceptive cyberattacks became a new M.O. for nation-states this year: Russia’s GRU military hacking team posed as North Korean hackers in a widespread targeted attack against the Winter Olympics in South Korea. They employed destructive malware to knock out the games’ IT systems, Wi-Fi, monitors, and ticketing website. 

Meanwhile, Russia was up to its old tricks with another novel and destructive campaign: Some 500,000 home and small-office routers and network-attached storage (NAS) devices worldwide were discovered infected as part of a massive botnet. The so-called VPNFilter attack infrastructure included stealthy, modular components that infect, spy, steal, and self-destruct. The initial target appeared to be Ukraine, where the majority of infected Internet of Things (IoT) devices were found, but the losing battle of getting consumers to update or patch their home and IoT devices was a chilling wake-up call.

2018 also featured a new, more damaging form of distributed denial-of-service (DDoS) attack that abuses unprotected Memcached servers as traffic amplifiers, as well as the new reality of attackers “living off the land” with so-called fileless malware attacks, using legitimate tools such as PowerShell to do their hacking. These malware-free attacks increased by 94% in the first half of the year, and they show no signs of slowing down.

And those are just some of the biggest news stories of 2018. For a closer look at yet another year to remember, check out Dark Reading’s new report, “The Year in Security: 2018,” here.

 

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/threat-intelligence/2018-in-the-rearview-mirror/a/d-id/1333532?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hackers Bypass Gmail, Yahoo 2FA at Scale

A new Amnesty International report explains how cyberattackers are phishing second-factor authentication codes sent via SMS.

Amnesty International this week released a report detailing how hackers can automatically bypass multifactor authentication (MFA) when the second factor is a text message, and they’re using this tactic to break into Gmail and Yahoo accounts at scale.

MFA is generally recommended; however, its security varies depending on the chosen factor. Consumers prefer second-factor codes sent via text messages because they’re easy to access. Unfortunately for some, cybercriminals like them for the same reason.

Amnesty discovered several credential phishing campaigns, likely run by the same attacker, targeting hundreds of individuals across the Middle East and North Africa. One campaign went after Tutanota and ProtonMail accounts; another hit hundreds of Google and Yahoo users. The latter was a targeted phishing campaign designed to steal text-based second-factor codes.

Throughout 2017 and 2018, human rights defenders (HRDs) and journalists from the Middle East and North Africa shared suspicious emails with Amnesty, which reports most of this campaign’s targets seem to come from the United Arab Emirates, Yemen, Egypt, and Palestine.

Most targets initially receive a fake security alert warning them of potential account compromise and instructing them to change their password. It’s a simple scheme but effective with HRDs, who have to be on constant high alert for physical and digital security.

From there, targets are sent to a convincing but fake Google or Yahoo site to enter their credentials; then they are redirected to a page where they learn they’ve been sent a two-step verification code. Entering the code presents them with a password reset form. Most people wouldn’t question a password change prompt from Google as it seems legitimate.

Attackers automate the full process: getting victims to log into their email accounts, obtaining the two-factor code, and prompting them to change their passwords.
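The relay works because a one-time code — whether delivered by SMS or generated by an authenticator app — is just a short-lived number with no binding to the site where it’s typed, so a phishing page can simply forward it to the real login form before it expires. As an illustration (ours, not code from the Amnesty report), here is a minimal standard-library implementation of RFC 6238 TOTP, the algorithm behind most authenticator apps; any party holding the shared secret computes the same digits, and nothing in the code ties it to a particular website:

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6, t=None):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Time-step counter: how many `interval`-second windows have elapsed
    counter = int((time.time() if t is None else t) // interval)
    # HOTP (RFC 4226) over the counter, big-endian 64-bit, HMAC-SHA1
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s -> "94287082"
```

Physical security keys (FIDO U2F/WebAuthn) resist this kind of relay because the browser signs a challenge that includes the site’s origin, so a credential phished on a look-alike domain is useless on the real one.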

It’s worth noting that text-based authentication still offers real protection for most people, because attacks like these require singling out a specific target. For corporate leaders and others holding sensitive data, however, it’s worth exploring stronger forms of MFA, such as physical security keys.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/hackers-bypass-gmail-yahoo-2fa-at-scale/d/d-id/1333534?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Indicts 2 APT 10 Members for Years-Long Hacking Campaign

In an indictment unsealed this morning, the US ties China’s state security agency to a widespread campaign of personal and corporate information theft.

Two members of China’s APT 10 hacking group have been indicted by the US Department of Justice on charges unsealed this morning. Zhu Hua (aka Afwar, CVNX, Alayos, and Godkiller) and Zhang Shilong (aka Baobeilong, Zhang Jianguo, and Atreexp) were charged with conspiracy to commit computer intrusions, conspiracy to commit wire fraud, and aggravated identity theft.

The pair “acted in association with the Chinese Ministry of State Security’s Tianjin State Security Bureau,” said the DOJ in a statement. During a campaign lasting at least six years, the two targeted managed service providers and individual companies, with victims including at least 45 companies in a dozen US states as well as a number of government agencies.

“It is galling that American companies and government agencies spent years of research and countless dollars to develop their intellectual property, while the defendants simply stole it and got it for free. As a nation, we cannot, and will not, allow such brazen thievery to go unchecked,” said US Attorney Geoffrey Berman during the press conference announcing the indictments. 

“The indictment alleges that the defendants were part of a group that hacked computers in at least a dozen countries and gave China’s intelligence service access to sensitive business information,” said Deputy Attorney General Rod Rosenstein, speaking at the same news conference. “This is outright cheating and theft, and it gives China an unfair advantage at the expense of law-abiding businesses and countries that follow the international rules in return for the privilege of participating in the global economic system.”

In addition to the theft of commercial intellectual property, the indictment alleges that the two “compromised more than 40 computers in order to steal sensitive data belonging to the Navy, including the names, Social Security numbers, dates of birth, salary information, personal phone numbers, and email addresses of more than 100,000 Navy personnel.”

In a statement provided to Dark Reading, CrowdStrike co-founder and CTO Dmitri Alperovitch said, “It is unprecedented and encouraging to see the US government, joined by so many international allies, taking a decisive stance against Chinese state-sponsored economic espionage. Today’s announcement of indictments against Ministry of State Security (MSS), whom we deem now to be the most active Chinese cyber threat actor, is another step in a campaign that has been waged to indicate to China that its blatant theft of IP is unacceptable and will not be tolerated.”

Read more details here and here.

Article source: https://www.darkreading.com/network-and-perimeter-security/us-indicts-2-apt-10-members-for-years-long-hacking-campaign/d/d-id/1333535?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Phone repair shop employees accused of stealing nude photos

Ever broken your phone screen? Had your computer fritz? Ever taken a device to a repair shop? Ever been asked for your password when you hand it over? Ever wonder whether the shop workers lift the lid to rifle through your little treasure chest of personal data?

Anybody should think about that last one, but it goes double for women or girls, as recent news makes clear.

Terrence M. Roy, 47, of Seekonk, in the US state of Rhode Island, is now facing two counts of accessing a computer for fraudulent purposes and one conspiracy count. He’s one of six defendants who were/are employed by Flint Audio Video, in Middletown, RI.

An RI State Police investigation has found that 13 women between the ages of 22 and 47 never gave anyone from Flint permission to go through their “media files, make copies and later disseminate them.” Nonetheless, the women allege that store employees stole and shared their nude images and videos.

The statute of limitations means that only five of the alleged victims are now associated with the case. Police believe that the alleged thefts have been going on for seven years – since 2011.

As reported by the Providence Journal on Monday, Roy said in an interview with state police that he had surreptitiously taken photos of customers inside the store and sent them to former employee George Quintal, 34, who’s also facing five counts of accessing a computer for fraudulent purposes and five conspiracy counts.

Roy told police that he sent the photos to Quintal, along with the customers’ computer and password information. Roy reportedly told police that he did so to enable Quintal to access the computers and scrounge around for intimate images…

…But only if the target was female and good-looking. From a narrative of the interview seen by the Providence Journal:

Roy stated that he was never given permission to search any customer’s electronic devices and there was no reason for him to do this. When asked if he would ask for a customer’s computer/cellphone password for cases where just a broken screen needed to be fixed, Roy stated they typically wouldn’t ask unless the customer was pretty.

The other four defendants:

  • Co-owner Daniel A. Anton, 35, of Jamestown, RI, faces two counts of accessing a computer for fraudulent purposes and one conspiracy count.
  • Co-owner Gary W. Gagne, 59, of Jamestown, RI, faces two counts of accessing a computer for fraudulent purposes and one conspiracy count.
  • Former employee Adam M. Jilling, 36, of West Warwick, RI, faces one count each of accessing a computer for fraudulent purposes and conspiracy.
  • Former employee Geoffrey P. Preuit, 43, of Warwick, RI, faces one count each of accessing a computer for fraudulent purposes and conspiracy.

Quintal was the first one to be arrested, in mid-June, and was subsequently fired.

When detectives searched Quintal’s home, they say they found a thumb drive with more than 2,000 nude photos and videos of suspected Flint customers. Police allege that when customers dropped off phones, computers and other devices for repair, Quintal searched them without the customers’ knowledge or consent and took photos of any provocative media he found. He would then share the content with some of his coworkers.

On 1 July 2015, Quintal allegedly sent a text to Preuit in which he said that transferring the photos directly to a thumb drive made for higher-quality images.

Court documents show that Quintal regularly texted the men with photos of nude and partially nude women. Preuit told state police that this was what the all-male culture at Flint led to:

You know it’s just, I understand what [Quintal] was doing and I guess I was, I mean, I know I was obviously involved. You know it just started out as I worked for a company of all men, you know and it was just funny banter, back and forth that snowballed into this.

Well, not all male, not all the time, at any rate. Two Flint employees ratted out the alleged thieves, one of whom was a woman.

Identified in court documents as Jane Doe, she told state police about the photo- and video-sharing circle in late May. She showed them four email conversations between Quintal and the stores’ owners, Anton and Gagne.

Police said that one of the alleged victims told them that she and her mother have known Gagne for several years. When her iPhone started to malfunction, Gagne allegedly told her to bring the phone to Flint, so she did. Police said that Quintal sent two photos of her to Anton and Gagne via email.

Local station WPRI’s Eyewitness News shared this transcript of an exchange between Quintal and Preuit:

Quintal: Did the girl that sounded hot bring her computer last night?

Preuit: No

Quintal: I’m depressed

Preuit: Sorry

Quintal: Pages! Of nudes

Preuit: I’m sure it will resurface.

The six defendants are due to appear in court on 10 January. Quintal’s attorney told WPRI Eyewitness News that his client plans to plead not guilty.

Investigators are asking Flint Audio Video customers to contact Detective Adam Houston at (401) 921-8152 if they suspect that their media might have been taken.

How to protect your data and your gadgets

This is only the latest entry in the hall of terror that lists “terrible things that can be done to your device at the hands of so-called repair shops.” We’ve seen Best Buy Geek Squads misdiagnose and overcharge for something as simple as an IDE cable unplugged from a hard drive, and then there are those Apple store Geniuses who’ve used hard drives as skateboards, destroyed customers’ data when they thought those customers were being rude, or poured whiskey into Macs.

They can get at our money by overcharging us or doing unnecessary repairs, and they can get at our data… unless we take some precautions, that is. Here are a few that can help:

  • Log out of all your online accounts and delete your browsing history.
  • Do a complete and encrypted backup of the device, and then erase it. That’s especially important if the device contains significant contact or password info or any sensitive files, such as intimate photos. Once the gadget’s back from the shop, you can then perform a restore.
  • Data can be spread all over the hard drive, so deleting it can be tough. A privacy cleaner extension can help with this.
  • Thoroughly check out anyone you’re planning to trust with your repairs. Are they licensed/bonded? Are they listed in the phone book? Is the engineer or tech certified for the work? Have many complaints been lodged against them?
  • Unless you’ve already worked with the repair provider and trust their reputation, always get a second and, preferably, a third bid for comparison.
  • Beware of sending items, like cell phones and iPods, away for repairs unless it’s to the manufacturer or the retailer you bought them from. Online and classified ads offering cheap fixes could be a front for a repair scam. You may never see the item again.
  • Beware of being bamboozled by jargon. Repair scammers and even legit engineers and geeks may use terms you don’t understand, either innocently or to try to convince you they know what they’re doing. If you don’t understand, ask.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/b2Q9jaiShfM/