STE WILLIAMS

Windows 10 updates under fire from unhappy security admins

Windows 10 is finally within spitting distance of being the most popular version of Microsoft’s OS, and yet at this moment of apparent triumph, some security professionals are not satisfied.

The evidence emerges in a survey of admins by the patchmanagement.org listserv, which uncovered a rich seam of unhappiness at the state of recent Windows updates, especially for Windows 10.

In her open letter to Microsoft, patchmanagement.org moderator and Microsoft Most Valuable Professional (MVP) Susan Bradley doesn’t sugar-coat it:

The quality of updates released in the month of July, in particular, has placed customers in a quandary: install updates and face issues with applications, or don’t install updates and leave machines subject to attack.

Bradley points to glitches in July’s updates that caused products to fail, particularly in the aftermath of the Security and Quality Rollup updates for .NET Framework. As she notes:

In the month of July 2018 alone there are 47 knowledge base bulletins with known issues.

Forty-seven bulletins with issues sounds like a lot. Asked to rate how satisfied they were with the quality of Windows 10 updates, 64% of patchmanagement.org users said they were either ‘not satisfied’ or ‘very much not satisfied’.

The feature updates that have become a defining part of the Windows 10 strategy come in for particular flak, both for their questionable overall business benefit and their unhelpful regularity.

In Bradley’s view, the fault lies with the Windows 10 Insider Program, the channel through which developers and enthusiasts test new versions to spot problems before software is let loose on everyone else.

This compared badly with the Security Update Validation Program used to test older versions of Windows from 2005 onwards, she said.

Adding to the woe, communication was poor around the patches required to mitigate the effects of January’s Meltdown and Spectre CPU vulnerabilities.

This was an informal survey from a possibly self-selecting group of respondents, so let’s proceed with that caveat in mind. Assuming the survey is an accurate reflection of the attitude of at least some security professionals – what, if anything, might be going wrong?

One possibility is that three years after launch, Microsoft is starting to struggle with Windows 10’s more complex patching, updating and testing schedule.

Clearly, the days where Microsoft could just post updates and a grateful user base would download them are over.

Or perhaps it’s more frightening than that and it’s not that Microsoft isn’t doing a good job but that nobody could – updating an operating system smoothly across hundreds of millions of computers has become too complex. You will never satisfy everyone and the people who are dissatisfied are likely to seek out others of their kind.

In the nick of time, Microsoft is reportedly looking to launch a Windows desktop-as-a-service called Microsoft Managed Desktop (MMD), under which the company will manage the whole Windows installation, including updating, for a fee.

It’s possible that this might one day be offered to consumers which would mean that Windows will have come full circle.

In the old days, users installed Windows on their computers from diskettes. As the years passed, Microsoft started helping them out with security and feature updates across the internet, which now include major feature upgrades too. Spot the pattern? The logical end is Microsoft does it all and Windows becomes the service that Microsoft perhaps secretly wants it to be anyway.

If this happens we will have reached the moment when everyone accepts that full-service operating systems such as Windows have become too tricky for ordinary mortals to look after.

Some might raise their glass to salute the irony of this – for Windows at least, the computer will have stopped being truly personal.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YyX1NNmLf_8/

Facebook cracks open its bottle of Fizz – a carbonated TLS 1.3 lib

Looking for a TLS 1.3 library? Facebook has you covered. On Monday, the ads and data peddler plans to release Fizz, a TLS 1.3 library written in C++14, as an open source project.

TLS 1.3 is the latest and greatest version of the Transport Layer Security protocol, the successor to Secure Sockets Layer or SSL, which encrypts network communication between clients and servers. Finalized as a specification in March, it features stronger security and more efficient networking than previous iterations.

The protocol is still working its way into the wild. Eric Rescorla, a Mozilla Fellow and editor of the TLS and HTTPS specifications, said in an email to The Register that a lot of work has already been done to make TLS 1.3 as easy to deploy as possible.

“It’s a drop-in replacement for TLS 1.2, uses the same keys and certificates, and clients and servers can automatically negotiate TLS 1.3 when they both support it,” he said. “There’s pretty good library support already, and Chrome and Firefox both have TLS 1.3 on by default.”
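That drop-in negotiation can be seen in any modern OpenSSL-backed stack, which offers TLS 1.3 alongside 1.2 and picks the highest version both sides support. A minimal sketch using Python’s standard `ssl` module (assuming Python 3.7+ built against OpenSSL 1.1.1 or later):

```python
import ssl

# create_default_context() negotiates the highest protocol version both
# peers support; TLS 1.3 is used automatically when available.
ctx = ssl.create_default_context()
print("TLS 1.3 compiled in:", ssl.HAS_TLSv1_3)

# A client or server can also insist on 1.3 and refuse anything older:
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```

Existing keys and certificates work unchanged, which is what makes the upgrade a drop-in replacement in practice.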


That said, the rollout has had some rough spots.

“Earlier draft versions did have some deployment challenges: a lot of middleboxes turned out to be broken in a way that caused failures with TLS 1.3,” he said. “We made some modifications to the protocol in response and aren’t seeing significant problems with the new version.”

Facebook, in a draft blog post provided to The Register, describes Fizz as “performant” and points to several features, including stronger security, that may make it more appealing to developers than alternatives such as Google’s BoringSSL or OpenSSL.

The company claims more than 50 per cent of its internet traffic is now secured by TLS 1.3, with Fizz handling millions of handshakes a second.

“Fizz has reduced not only the latency but also the CPU utilization of services that perform trillions of requests a day,” said Facebook software engineers Kyle Nekritz, Subodh Iyengar, and Alex Guzman.

According to the trio, Facebook’s load balancer synthetic benchmarks exhibit roughly 10 per cent better throughput with Fizz than with the company’s previous stack.

Chunky

Fizz, we’re told, handles memory in a more efficient manner than other TLS libraries, which require contiguous memory space. Because apps tend to store data in discontiguous chunks, copying back and forth involves some extra latency as the data gets split and reassembled.

Fizz, by contrast, supports vectored I/O, also known as scatter/gather I/O, which lets it send and receive chunked data using fewer memory allocations and copy operations. These zero-copy write operations help make it “performant.”
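The mechanism can be illustrated with POSIX vectored writes, which Python exposes as `os.writev` (a sketch of the general technique, not Fizz’s actual code):

```python
import os

# Scatter/gather in miniature: writev() pushes several discontiguous
# buffers through one system call, with no copy into a single
# contiguous staging buffer first.
r, w = os.pipe()
chunks = [b"record header | ", b"fragment 1 | ", b"fragment 2"]
written = os.writev(w, chunks)   # one syscall, many buffers
os.close(w)

data = os.read(r, 4096)          # the peer sees one contiguous stream
os.close(r)
assert data == b"".join(chunks)
```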

So too does the code’s native support for asynchronous server operations and for handing off TLS key signing to a separate service via API, which can help keep keys secure.

The library also includes support for sending data as soon as the TCP connection is established. Early data transmission reduces request latency, though it raises the risk of a replay attack. Facebook defends against this by limiting early sends to whitelisted data and deploying a cache with its load balancers to detect replay attempts.
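A toy version of that replay defence might look like the following (illustrative only; the cache Facebook describes is a distributed service co-located with its load balancers):

```python
# Remember identifiers of early-data ("0-RTT") requests already seen and
# reject duplicates, so a captured request cannot be resent verbatim.
class ReplayCache:
    def __init__(self):
        self._seen = set()

    def accept(self, token: bytes) -> bool:
        """Return True the first time a token is seen, False on a replay."""
        if token in self._seen:
            return False
        self._seen.add(token)
        return True

cache = ReplayCache()
assert cache.accept(b"early-data-id-1")        # first use: allowed
assert not cache.accept(b"early-data-id-1")    # replay: rejected
```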

There’s some extra security baked in too. Fizz has been designed in a way that its state is defined explicitly in a single location, as a way to avoid attacks that attempt to change the code’s state, such as the CCS Injection vulnerability (CVE-2014-0224) identified in OpenSSL. And it implements an abstraction layer to avoid incorrect state transitions.

“If a state handler uses an incorrect state transition that is not defined in the explicit state machine, the code will fail to compile,” explain Nekritz, Iyengar and Guzman. “This helps us catch bugs during compile time rather than at runtime, thereby preventing mistakes.”
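Fizz gets that guarantee from C++’s type system; a runtime analogue of the same explicit-transition-table idea, with hypothetical state names, can be sketched in Python:

```python
# Every legal (state, event) pair is enumerated in one place; anything
# not listed is rejected, rather than silently mutating state.
TRANSITIONS = {
    ("ExpectingClientHello", "ClientHello"): "ExpectingFinished",
    ("ExpectingFinished", "Finished"): "Established",
}

def step(state: str, event: str) -> str:
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        raise ValueError(f"illegal transition: {state} on {event}")
    return nxt

assert step("ExpectingClientHello", "ClientHello") == "ExpectingFinished"
```

The difference is that Fizz rejects the bad transition at compile time, whereas this sketch only catches it when the code runs.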

TLS 1.3 hasn’t been officially published, though RFC 8446, which is how the spec will be known once it becomes an official internet standard, is expected to be published soon. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/06/facebook_tls_1_3_fizz/

Chip flinger TSMC warns ‘WannaCry’ outbreak will sting firm for $250m

Chipmaker TSMC has warned that a previously disclosed virus infection of its Taiwanese plant may cost it up to $250m.

The malware struck on Friday and affected a number of computer systems and fab tools.

“The degree of infection varied by fab,” the firm said in an update on Sunday. “TSMC contained the problem and found a solution. As of 14:00 Taiwan time, about 80 per cent of the company’s impacted tools have been recovered, and the company expects full recovery on August 6.”

Although unnamed in its statement, TSMC execs reportedly blamed a variant of WannaCry for the infection during the course of follow-up conference calls.

TSMC warned that the incident is likely to “cause shipment delays and additional costs”.

“We estimate the impact to third quarter revenue to be about 3 per cent, and impact to gross margin to be about one percentage point,” it said. “The company is confident shipments delayed in third quarter will be recovered in the fourth quarter 2018, and maintains its forecast of high single-digit revenue growth for 2018.”

The chipmaker had previously forecast revenues of $8.45bn to $8.55bn in its September quarter. A 3 per cent loss would shave this by up to $250m, though actual losses may come in lower, and execs have already revised down revenue losses to no more than 2 per cent, Bloomberg reported.
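The arithmetic behind those figures is easy to check, assuming the high end of the prior guidance and a straight 3 per cent hit:

```python
# $8.55bn guidance ceiling x 3 per cent: roughly a quarter of a billion dollars.
revenue_high_usd = 8.55e9
impact_m = 0.03 * revenue_high_usd / 1e6   # impact in millions of dollars
print(f"~${impact_m:.0f}m")
```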

TSMC added that it was working with its customers to develop revised shipment schedules. TSMC – which supplies components to Apple, AMD, Nvidia, Qualcomm, Broadcom and others – said malware spread across its systems after an infected sub-component of an unspecified tool was connected to its network.

Marcus Hutchins famously halted the spread of WannaCry across NHS networks and elsewhere after registering a domain that turned out to act as a kill switch, preventing further spread of the malware in cases where infected hosts could reach the domain. Even so, the malware is still capable of causing a problem in closed systems such as factories, UK infosec guru Kevin Beaumont told El Reg.

“Factory networks sometimes don’t have internet access so can’t reach a kill switch,” Beaumont said. “WannaCry is still one of the biggest infections seen in AV detections.”

This sort of thing is not unprecedented. Last March, around eight months after the original May 2017 malware outbreak, WannaCry crash landed on the factory systems of US aerospace giant Boeing.

TSMC pointed to a silver lining in its malware-occulted short-term outlook – the breach could have been a lot worse. “Data integrity and confidential information was not compromised,” it said. “TSMC has taken actions to close this security gap and further strengthen security measures.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/06/tsmc_malware/

Mastering MITRE’s ATT&CK Matrix

This breakdown of Mitre’s model for cyberattacks and defense can help organizations understand the stages of attack events and, ultimately, build better security.

When talk turns to rigorous analysis of attacks and threats, the conversation often includes the Mitre ATT&CK model. Originally developed to support Mitre’s cyberdefense work, ATT&CK is both an enormous knowledge base of cyberattack technology and tactics and a model for understanding how those elements are used together to penetrate a target’s defenses.

ATT&CK, which stands for Adversarial Tactics, Techniques, and Common Knowledge, continues to evolve as more data is added to the knowledge base and model. The model is presented as a matrix, with the stage of event across one axis and the mechanism for that stage across the other. By following the matrix, red team members can design an integrated campaign to probe any aspect of an organization’s defense, and blue team members can analyze malicious behavior and technology to understand where it fits within an overall attack campaign.

Mitre has defined five matrices under the ATT&CK model. The enterprise matrix, which this article will explore, includes techniques that span a variety of platforms. Four specific platforms — Windows, Mac, Linux, and mobile — each have their own matrix.

The four individual platform matrices echo the horizontal axis of the enterprise matrix with the exception of the mobile matrix, which divides the first stage, initial access, into six separate possibilities because of the nature of mobile devices and networks.

While the Mitre ATT&CK matrix is useful and frequently referenced in the security industry, it is not the only attack model in use. Many organizations, for example, reference the Lockheed Martin “Cyber Kill Chain” in their security planning. Dark Reading will take a closer look at the Cyber Kill Chain in the future.

Do you or your organization use the ATT&CK matrix or model in your security work? If so, tell us how you use it and whether you find it effective. If there’s another model that you find even more useful, we’d like to hear about it, too. Let us know in the comments, below.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/mastering-mitres-attandck-matrix/d/d-id/1332460?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Spot the Bot: Researchers Open-Source Tools to Hunt Twitter Bots

Their goal? To create a means of differentiating legitimate from automated accounts and detail the process so other researchers can replicate it.

What makes Twitter bots tick? Two researchers from Duo Security wanted to find out, so they designed bot-chasing tools and techniques to separate automated accounts from real ones.

Automated Twitter profiles have made headlines for spreading malware and influencing online opinion. Earlier research has dug into the process of creating Twitter datasets and finding potential bots, but none has discussed how researchers can find automated accounts on their own.

Duo’s Olabode Anise, data scientist, and Jordan Wright, principal R&D engineer, began their project to learn how they could pinpoint characteristics of Twitter bots, regardless of whether the bots were harmful. Hackers of all intentions can build bots and use them on Twitter.

The goal was to create a means of differentiating legitimate from automated accounts and detail the process so other researchers can replicate it. They’ll present their tactics and findings this week at Black Hat in a session entitled “Don’t @ Me: Hunting Twitter Bots at Scale.”

Anise and Wright began by compiling and analyzing 88 million Twitter accounts and their usernames, tweet count, followers/following counts, avatar, and description, all of which would serve as a massive dataset in which they could hunt for bots. The data dates from May to July 2018 and was pulled via the Twitter API used to access public data, Wright explains.

“We wanted to make sure we were playing by the rules,” Wright notes, since doing otherwise would compromise other researchers’ ability to build on their work using the same method. “We’re not trying to go around the API and go around limits and tools in place to get more data.”

Once they obtained a dataset, the researchers created a “classifier,” which detected bots in their massive pool of information by hunting for traits specific to bot accounts. But first they had to determine the details and behaviors that set bots apart.

What Makes Bots Bots?
Indeed, one of the researchers’ goals was to learn the key traits of bot accounts, how they are controlled, and how they connect. “The thing about bot accounts is they can come up with identifying characteristics,” Anise explains. Traits may change depending on the operator.

Bot accounts are hyperactive: Their likes and retweets are constant throughout the day and into the night. They reply to tweets quickly, Wright says. If a tweet has more than 30 replies within a few seconds, they can deduce bot activity is to blame. An account’s number of followers and following can also indicate bot activity depending on when the account was created. If a profile is fairly new and has tens of thousands of followers, it’s another suspicious sign.

In their research, Anise and Wright came up with 20 of these defining traits, which also included the number of unique accounts being retweeted, number of tweets with the same content per day, number of daily tweets relative to account age, percentage of retweets with URLs, ratio of tweets with photos vs. text only, number of hashtags per tweet, and distance between geolocated tweets.
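Traits like these lend themselves to simple threshold scoring. A toy heuristic in that spirit (the thresholds here are invented for illustration and are not Duo’s actual classifier values):

```python
# Score an account dict against a few of the traits described: constant
# hyperactivity, near-instant replies, and many followers on a young account.
def bot_score(account: dict) -> int:
    score = 0
    if account.get("tweets_per_day", 0) > 100:
        score += 1
    if account.get("reply_latency_s", 999.0) < 5:
        score += 1
    if account.get("followers", 0) > 10_000 and account.get("age_days", 9999) < 30:
        score += 1
    return score

suspect = {"tweets_per_day": 400, "reply_latency_s": 2, "followers": 50_000, "age_days": 7}
assert bot_score(suspect) == 3      # trips all three heuristics
assert bot_score({}) == 0           # an empty profile trips none
```

Duo’s real classifier combines twenty such features rather than three, but the shape of the approach is the same.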

Hunting Bots on the Web
The researchers’ classifier tool dug through the data and leveraged these filters to detect automated accounts. Once they found initial sets of bots, they took further steps to determine whether the bots were isolated or part of a larger botnet controlled by a single operator.

“We could still use very straightforward characteristics to accurately find new bots,” Wright says. “Bots at a larger scale, in general, are using many of the same techniques they have in the past few years.” Some bots evolve more quickly than others depending on the operator’s goals.

Their tool may have been accurate for this dataset, but Anise says many bot accounts are subtly disguised. Oftentimes accounts appeared to be normal but displayed botlike attributes.

In May, for example, the pair found a cryptocurrency botnet made up of automated accounts, which spoofed legitimate Twitter accounts to spread a giveaway scam. Spoofed accounts had randomly generated usernames and copied legitimate users’ photos. They spread spam by replying to real tweets posted by real users, inviting them to join a cryptocurrency giveaway.

The botnet, like many of its kind, used several methods to evade detection. Oftentimes, malicious bots spoof celebrities and high-profile accounts as well as cryptocurrency accounts, edit profile photos to avoid image detection, and use screen names that are typos of real ones. This one went on to impersonate Elon Musk and news organizations such as CNN and Wired.

Joining the Bot Hunters
Anise and Wright are open-sourcing the tools and techniques they used to conduct their research in an effort to help other researchers build on their work and create new methodologies to identify malicious Twitter bots.

“It’s a really complex problem,” Anise adds. They want to map out their strategy and show how other people can use their work to continue mapping bots and botnet structures.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/spot-the-bot-researchers-open-source-tools-to-hunt-twitter-bots/d/d-id/1332489?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

IT Managers: Are You Keeping Up with Social-Engineering Attacks?

Increasingly sophisticated threats require a mix of people, processes, and technology safeguards.

Social-engineering attacks are no longer the amateurish efforts of yesterday.

Sure, your company may still get obvious phishing emails with blurry logos and rampant misspellings, or the blatantly fake “help desk” calls from unknown phone numbers, but more sophisticated attacks are becoming the norm.

Using both high-tech tools and low-tech strategies, today’s social-engineering attacks are more convincing, more targeted, and more effective than before. They’re also highly prevalent. Almost seven in 10 companies say they’ve experienced phishing and social engineering.

For this reason, it’s important to understand the changing nature of these threats and what you can do to help minimize them.

Know the Threat
Today’s phishing emails often look like exact replicas of communications coming from the companies they’re imitating. They can even contain personal details of targeted victims, making them even more convincing.

In one incident, bad actors defrauded a U.S. company of nearly $100 million by using an email address that resembled one of the company’s vendors. And in the most recent presidential election, hackers used a phishing email that appeared to come from Google to access and release a top campaign manager’s emails.

Bad actors can get sensitive data in many other ways. In one case, they manipulated call-center workers to get a customer’s banking password.

Another way is to target data that’s visually displayed on a laptop or mobile-device screen. For example, a bad actor could pose as a trusted vendor in an office or a business associate in a foreign country and subtly capture data with a smartphone or hidden recording device.

A Three-Tiered Defense
Given the prevalence and advanced nature of social-engineering threats, your privacy and security measures should cascade across three key areas: people, processes, and technology.

Some measures to consider using in each area include:

1. People: Provide ongoing training to educate workers about social-engineering threats, and procedures for preventing or responding to them. Employees who regularly handle sensitive information are more likely to be targeted — including HR, sales, and accounting workers. They should be your company’s most knowledgeable workers about threats and procedures — and should be fully engaged to help identify threats.

For example, encourage workers to use the “Report email” or “Report as phishing” icons that can be enabled in Microsoft Outlook. These icons give workers an easy way to report suspicious messages so IT can take steps to mitigate their impact. IT managers can also monitor use of the icons to statistically track worker awareness and engagement.

If your company has separate IT and security teams, make sure there is a clear understanding about who is responsible for managing social-engineering threats. Any misunderstanding between these parties can lead to security gaps and a lack of accountability if an attack occurs.

2. Processes: Policies that encourage workers to not click on suspicious links or provide information to outside organizations go without saying. But make sure you also have procedures for workers to give you details about attempted attacks. This can help you investigate suspicious emails, URLs, and phone numbers, and better understand your vulnerabilities.

As you review and refine your policies, always aim for simplicity. Overly complex security protocols can be too much for workers to remember and can fail.

3. Technologies: Security-perimeter controls like antivirus protection and intrusion-detection/intrusion-prevention systems remain vital. Also, use security intelligence tools to understand your security ecosystem and the potential risks you face. And encrypt data to make it unreadable, even if it’s stolen. 

All laptop and mobile-device screens should be fitted with privacy filters. The filters black out the angled views of screens to help office workers and business travelers safeguard data from onlookers or even cameras.

Keep Evolving
A strong defense against social-engineering threats requires more than training and educating workers. You and your IT team must be vigilant about emerging threats so that as they evolve, your security and privacy measures evolve with them.


Dr. Larry Ponemon is the chairman and founder of Ponemon Institute, a research think tank dedicated to advancing privacy and data protection practices, and a privacy consultant for 3M. Dr. Ponemon is considered a pioneer in privacy auditing and the Responsible Information … View Full Bio

Article source: https://www.darkreading.com/endpoint/it-managers-are-you-keeping-up-with-social-engineering-attacks/a/d-id/1332423?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Dark Reading News Desk Live at Black Hat USA 2018

Watch here Wednesday and Thursday, 2 p.m. – 6 p.m. Eastern Time to see over 40 live video interviews straight from the Black Hat USA conference in Las Vegas.

BLACK HAT USA – Whether you are hitting the Mandalay Bay for the Black Hat USA 2018 conference this week or peeking at the news feeds from afar, keep your browser open here from 2 pm to 6 pm Eastern (11 – 3 Pacific) on Wednesday, Aug. 8 and Thursday Aug. 9. The Dark Reading News Desk will once again be streaming live right here at https://darkreading.com/drnewsdesk, bringing you over 40 video interviews with speakers and exhibitors at the show.

Keep tabs on the action on Twitter by following @DarkReading and #DRNewsDesk. Here’s our all-star lineup, in order of appearance (subject to change): 

Wednesday, August 8

  • Allison Bender, Counsel, ZwillGen PLLC and Amit Elazari, Doctoral Law Candidate, Berkeley Law, CLTC, MICS Program
  • Hugh Njemanze, CEO, Anomali
  • Cathal Smyth, Machine Learning Researcher, Royal Bank of Canada and Clare Gollnick, CTO and Chief Scientist, Terbium Labs
  • Haiyan Song, SVP and GM of Security Markets, Splunk and Oliver Friedrichs, VP of Security Orchestration and Automation, Splunk
  • Dr. Paul Vixie, CEO Co-Founder, Farsight Security
  • Carsten Schuermann, Associate Professor, University of Copenhagen
  • Rohyt Belani, CEO, Cofense Inc.
  • Marc Ph. Stoecklin, Principal Research Scientist and Manager of Cognitive Cybersecurity Intelligence, IBM Research
  • Balint Seeber, Director of Vulnerability Research, Bastille
  • Scott Schneider, Chief Revenue Officer, CyberGRX
  • Justin Engler, Technical Director, NCC Group and Tyler Lukasiewicz, Security Consultant, NCC Group
  • Andrew Blaich, Head of Device Intelligence at Lookout and Michael Flossman, Head of Threat Intelligence at Lookout
  • Ben Gras, Security Researcher, PhD Student, VU University
  • Chester Wisniewski, Principal Research Scientist, Sophos
  • Mark Dufresne, VP of Threat Research and Adversary Prevention, Endgame
  • Jonathan Butts, CEO, QED and Billy Rios, Founder, Whitescope
  • Mike Hamilton, Chief Executive Officer, Ziften
  • IJay Palansky, Partner, Armstrong Teasdale LLP
  • Russ Spitler, SVP Product, AlienVault

Thursday, August 9:

  • Daniel Crowley, Research Baron, IBM X-Force Red and Jennifer Savage, Security Researcher, Threatcare
  • Mark Orlando, Chief Technology Officer, Cyber Protection Solutions, Raytheon
  • Brittany Postnikoff, Researcher, University of Waterloo and Sara-Jayne Terp, Data Scientist, AppNexus
  • Chris Eng, Vice President of Security Research, CA Veracode
  • Ruben Santamarta, Principal Security Consultant, IOActive
  • Justin Shattuck, Principal Threat Researcher, F5 Networks
  • Ofer Maor, Director of Solutions Management, Synopsys Software Integrity Group
  • Andrea Carcano, PhD, Co-founder and CPO, Nozomi Networks and Younes Dragoni, Security Researcher, Nozomi Networks
  • Kingkane Malmquist, Information Security Analyst, Mayo Clinic
  • Jordan Wright, Principal RD Engineer, Duo Security and Olabode Anise, Data Scientist, Duo Security
  • Celeste Lyn Paul, Senior Researcher, NSA and Josiah Dykstra, Deputy Technical Director of Cybersecurity Operations, NSA
  • Juan Pablo Perez-Etchegoyen, CTO, Onapsis and Michael Marriott, Research Analyst, Digital Shadows
  • Ofer Israeli, CEO, Illusive Networks
  • Andrei Costin, Assistant Professor at University of Jyvaskyla, JYU.FI and Security Researcher at Firmware.RE
  • Ashley Holtz, Intelligence, NBCUniversal
  • Michelle Johnson Cobb, Chief Marketing Officer, Skybox Security
  • Kenneth Geers, Chief Research Scientist, Comodo

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/dark-reading-news-desk-live-at-black-hat-usa-2018-/d/d-id/1332471?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

TSMC chip fab tools hit by virus, payment biz BGP hijacked, CCleaner gets weird – and more

Roundup This week we took a close look at Google security keys, bid adieu to Facebook’s head security honcho, and had a few email credentials overshared by Atlassian.

Here’s everything else that happened in infosec land this week beyond what we’ve already reported.

TSMC chip assembly line computers infected

Chipmaker TSMC – which supplies components for Apple, AMD, Nvidia, Qualcomm, Broadcom, and others – said its semiconductor fab tools were downed by a virus.

The malware hit the Taiwanese manufacturing giant’s systems on Friday night, and some plants remain infected at time of writing while others have been restored to operation. It is not believed to be the result of an intrusion by one or more hackers – it sounds as though a staffer accidentally ran some kind of software nasty, which then pwned computers on the network.

“Certain factories returned to normal in a short period of time, and we expect the others will return to normal in one day,” the biz told the media on Saturday.

Hack, hack, hack, hack, hackin’ car high school

Long known as America’s hub for autos, Michigan is once again looking to get to the forefront of the industry, this time through security.

Governor Rick Snyder has set forth plans for a new set of high school curricula aimed at teaching students skills they can use to design car security systems of the future.

Dubbed “Masters of Mobility: Cyber Security on the Road,” the new education push will aim to train teachers who will in turn lead classes on the basics of cybersecurity and software development for automobiles. The aim is to help the state regain its clout in the industry by offering a bumper crop of security research talent that specializes in the automotive field.

The program will begin as a pilot with two Michigan schools in the Fall and, if successful, will roll out to nine more schools next year.

Linux’s leaky timer bug: Countdown to patching

A researcher has detailed a bug in the Linux kernel that can be exploited to leak sensitive data – such as cryptographic keys and passwords – from protected kernel memory, much in the same way as the Spectre and Meltdown processor design vulnerabilities. Interestingly, it took months for the fix to wind its way into Linux distributions, if at all.

Andrey Konovalov spelled out the situation to the Full Disclosure list this week: the programming blunder (CVE-2017-18344) was introduced way back in kernel version 3.10, and is due to a buggy show_timer() function. This code can be potentially abused by a malicious application to read memory it should not be able to snoop on.

“This allows to access kernel memory and leak keys, credentials or other sensitive information that is stored there (so the bug has a similar impact to Meltdown),” Konovalov explained.

The flaw is present in kernel versions prior to 4.14.8, and was fixed in version 4.15-rc4 in late December. The patch has since been backported to the version 4.4 stable branch. Make sure you’re not running a vulnerable build, as Konovalov said he will be releasing a proof-of-concept exploit some time next week.
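One quick way to compare a running kernel against the versions mentioned, sketched below. Note that version strings alone are not a reliable vulnerability test, since distributions backport fixes into older-numbered kernels; consult your distro’s advisories.

```python
import platform
import re

# Parse "4.14.7-generic"-style release strings into comparable tuples.
def parse_kernel(release: str) -> tuple:
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    return tuple(int(x or 0) for x in m.groups())

assert parse_kernel("4.14.7-generic") < (4, 14, 8)      # before the upstream fix
assert parse_kernel("4.15.0-29-generic") >= (4, 14, 8)  # at or after it
print(platform.release(), "->", parse_kernel(platform.release()))
```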

Essentially, although the vulnerability has been known about for eight months, a CVE was only assigned late last month, and some Linux distributions are still shipping vulnerable kernels. Canonical, at least, pulled in the patch in Ubuntu 16.04.

“In this particular case of a somewhat ‘scary’ bug there was a window of 3.5 months between the bug being reported and the fixing commit reaching the Ubuntu Xenial 4.4 kernel branch,” Konovalov noted.

“This gives some insight into how much time it usually takes for a fix to travel from upstream through stable into a distro kernel when there’s no CVE. Compared to the 14 days that distros are usually given to fix a security bug reported through linux-distros@, that seems rather long.”
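For a first-pass triage, the version ranges above can be encoded in a short script. Note that, as the Ubuntu example shows, distro kernels carry backported fixes, so a mainline version comparison is only a rough signal and should be confirmed against your distribution's changelog. This is an illustrative sketch, not an official checker:

```python
# Rough triage of a `uname -r` release string against the upstream fix
# points mentioned above: flaw introduced in 3.10, fixed before 4.14.8
# on the stable branches (and in 4.15-rc4 mainline). Backported patches
# mean a version check alone cannot prove a distro kernel is vulnerable.
import re

def parse_release(release: str) -> tuple:
    """Extract the numeric (major, minor, patch) triple from a release string."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        raise ValueError(f"unrecognised kernel release: {release!r}")
    major, minor, patch = m.groups()
    return (int(major), int(minor), int(patch or 0))

def possibly_vulnerable(release: str) -> bool:
    """True if the mainline version range suggests CVE-2017-18344 exposure."""
    v = parse_release(release)
    return (3, 10, 0) <= v < (4, 14, 8)
```

A result of `True` for something like `4.4.0-116-generic` means "check your distro's changelog for the backport", not "definitely exploitable".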

Homeland Security gets into Risk Management

The US Department of Homeland Security is stepping up its efforts to better manage the IT security projects being undertaken by its various agencies. Earlier this week, the DHS announced the creation of something called the National Risk Management Center. The office will apparently be tasked with overseeing cyber security projects and coordinating risk management studies.

The cybersecurity nerve center will also set the priorities for securing critical government functions and help sync up joint efforts between agencies and the private business sector.

“The National Risk Management Center advances the ongoing work of DHS and government and private sector partners to move collaborative efforts beyond information sharing and develop a common understanding of risk and joint action plans to ensure our nation’s most critical services and functions continue uninterrupted in a constantly evolving threat environment,” the DHS said of the new program.

“The Center will work closely with the National Cybersecurity and Communications Integration Center (NCCIC), which will remain DHS’s central hub for cyber operations focused on threat indicator sharing, technical analysis and assessment services, and incident response.”

CCleaner tries to explain shady behavior

Piriform, the developer of bloatware-cleaning tool CCleaner, is on the defensive after users spotted version 5.45 of the app monitoring their activity without consent. Netizens noted that even when they turned off Active Monitoring in the app, CCleaner continued to collect some data about what they were doing, and phoned it home to the developer’s servers. This led some to question whether turning off Active Monitoring really did anything.

As it turns out, Active Monitoring itself is switched off when you disable it, but that doesn’t stop other telemetry from being beamed back to base.

Piriform said that even when its Active Monitoring tool is turned off, it will still collect, for its own internal analytics, some anonymized information, such as the installed version, which features have been used, and details useful for hunting bugs. The developer, though vague on exactly what is slurped, assured users that the snooping was nothing to be afraid of.

“The information which is collected through these new features is aggregated, anonymous data and only allows us to spot trends,” Piriform explained. “This is very helpful to us for the purposes of improving our software and our customers’ experience. No personally identifiable information is collected.”

Piriform said it will be updating the tool soon to highlight exactly what is gobbled up when Active Monitoring is switched on and off – and has pulled version 5.45 for now.

Oracle warns of new attacks on payment systems

As if retail giants’ IT departments didn’t have enough security issues to worry about, now there is the threat of BGP and DNS hijackings.

Researchers at Oracle found a company handling card payment processing was the target of an attack that redirected DNS traffic to malicious servers. The technique, Oracle says, was nearly identical to what was used earlier this year to redirect traffic from Amazon’s DNS service.

This resulted in an extended outage of the payment systems on July 10 as the traffic was instead run through attacker-controlled networks in Ukraine. According to Oracle researcher Doug Madory, this is probably the sort of thing we will all have to get used to.

“If previous hijacks were shots across the bow, these incidents show the Internet infrastructure is now taking direct hits,” Madory wrote. “Unfortunately, there is no reason not to expect to see more of these types of attacks against the internet.”

And finally

Free-HTTPS-cert-issuing org Let’s Encrypt suffered a brief DNS outage at the start of the week that rendered some of its systems temporarily inaccessible after one of its upstream providers misconfigured its domain settings.

“The upstream provider beyond Namecheap accidentally set the status of our domain to clientHold,” the project’s Josh Aas told us. “Seems like it was some sort of administrative error. We’re looking into steps we can take to reduce the likelihood of something like this happening again.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/04/security_roundup/

Security world to hit Las Vegas for a week of hacking, cracking, fun

About a quarter of a century ago, a handful of hackers decided to have a party in a cheap hotel, and had a whale of a time.

Fast forward to 2018, and that get-together has grown into events that will see an estimated 30,000 people converge on Las Vegas for the biggest security shindig in the world – the combination of Black Hat USA, DEF CON and BSidesLV.

While that first gathering morphed into the DEF CON hacking conference, the biggest event is Black Hat USA, which begins on Saturday, and runs through until Thursday, August 9. This is the flashy corporate brother of DEF CON, and features four days of security training, a one-day invite-only CISO summit day (from which press are strictly barred) and two days of briefings featuring everything from government agents to hardcore hackers talking about the tricks of the trade.

Although they have a shared origin – DEF CON founder Jeff Moss also set up Black Hat USA – these days, DEF CON and Black Hat USA are run and operated separately. We’ve previously described the behind-the-scenes and arduous task of setting up and maintaining computer networks for attendees of hacker conventions.

As a ten-year veteran of Black Hat, your humble vulture can tell you there’s a lot to be learned from the event. There are always too many talks to get to and a host of ancillary events. However, the quality of talks is very good. This isn’t RSA, which auctions off some of its keynotes to the highest bidder.

Instead, talks are mostly chosen on merit and originality, and there have been some barnstormers in the past. It was at Black Hat ten years ago that Dan Kaminsky detailed how he discovered a gaping security hole in the globe’s Domain Name System (DNS). I asked him at the time why he didn’t just run riot, exploit the flaw, and buy his own desert island from the proceeds. He responded he didn’t want his mother to have to visit him in prison.

And, of course, the sadly departed Barnaby Jack went on stage at Black Hat in 2010 to make a cash machine spew dollar bills, a technique known as jackpotting.


But just as important are the networking opportunities. The bars and cafes of the Mandalay Bay conference center will be jam-packed with security folks making deals, swapping tips and meeting up with their contemporaries.

Ditch the suits

As Black Hat ends on August 9, DEF CON begins and runs until Sunday, although it’s worth staying on an extra couple of days if you have the stamina as there are numerous unofficial events – notably some explosives and fireworks hacking that goes on in the Nevada desert.

If Black Hat teaches you the secrets and tricks you’ll need to stay a step ahead of miscreants, DEF CON is for those who prefer to dive straight into the source code, disassembled binaries, and torn-apart hardware. It’s the event that hardcore hackers go to, and increasingly bring their kids to so that the next generation of techies can get their skills in order.

Black Hat also has a lot of suits, whereas DEF CON is more a cargo pants and Mohawk kind of affair. Things are relaxed, fun, occasionally drunken and the parties are legendary. Sadly the show has got a little too big for its own good and there’s always too much to see and do.

The late arrival is BSides Las Vegas, which runs on the Tuesday and Wednesday. This is a smallish event, with only a couple of thousand attendees, but that gives it the feel that DEF CON used to have before it got so large.

Last year’s event saw a host of interesting talks that were either too controversial or too involved for Black Hat. Again, it’s a very informal affair and its pool parties are fast gaining a reputation for being enormous drunken fun.

Hacking the hackers

All three conferences share a common concern for some – the possibility of getting hacked. All attendees are specifically warned to be careful as there are, well, you know, hackers around.

This risk is somewhat overstated. It has been years since anyone was seriously pwned at Black Hat, and the last people caught doing it were summarily ejected from the venue. BSides, too, takes a dim view of such proceedings.


But at DEF CON, it’s positively encouraged – indeed there’s a constantly updated Wall of Sheep displaying the names of those whose systems or connections have fallen prey to cunning crackers. Anything sent via plaintext – passwords, email addresses, etc – over a public network will be snooped on and writ large.

Only use the DEF CON conference Wi-Fi if you want your system to be comprehensively penetration tested for free. Of course, you’re likely to be hit with exploits and techniques for known vulnerabilities; someone is unlikely to burn a zero-day exploit against you in public when such code can earn big paydays from private buyers. Splashing an exploit around for people to capture, keep, and distribute themselves somewhat lowers its market value.

So, just take sensible precautions. Bring kit you don’t mind wiping at the end of the trip, don’t keep or log into anything personal or sensitive on it, and make sure it’s fully patched and locked down. Don’t join public or sketchy Wi-Fi and cellular networks. Consider leaving your phone switched off. Don’t plug free stuff into your USB ports, and so on.

There are also wider security considerations. You’d be a fool to use the ATMs in and around the conference venues, given how many card skimmers may be operating. Back when Black Hat was still at Caesar’s Palace, an attendee noticed a dodgy-looking cash machine in the hotel. They went to the organizers to let them know, and the conference was abuzz with speculation over who was responsible. As it turned out, no one was – the ATM had been installed weeks previously by unknown criminals.

So if you are going, be safe, be nice, and be secure. And if you see a Reg hack hammering a keyboard in a corner, feel free to say hello. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/04/black_hat_def_con_bsides_2018/

Web doc iCliniq plugs leaky S3 bucket stuffed full of medical records

Exclusive Online medical consultation service iCliniq left thousands of medical documents in a publicly accessible Amazon Web Services S3 bucket.

iCliniq locked down the online silo earlier this week only after the slip-up was brought to its attention by German security researcher Matthias Gliwka. He approached El Reg after failing to get any response to notification emails he sent to the firm.

The global health startup, which is based in India, allows users to privately ask medical questions, to which they can attach their medical records, and have the queries answered by doctors. However, iCliniq stored these private medical documents in a misconfigured wide-open AWS S3 bucket that could have been potentially pored over by anyone.

This cloud storage box, according to Gliwka, contained about 20,000 medical documents, such as information on blood screens and HIV tests.


Gliwka was able to establish a link between the icliniq.com website and the S3 bucket. Test files he uploaded through the website appeared in the same cloud-based system.

The German researcher also found a second problem. He said iCliniq had failed to check permissions in its web app, so every user was able to see every question asked by other members – simply by guessing the ID number of the question. Technically, this is known as an IDOR (Insecure Direct Object Reference) vulnerability.
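The IDOR pattern described above boils down to fetching a record by its ID without checking who is asking. A minimal sketch of the flaw and its fix – all names here are illustrative, not iCliniq's actual code:

```python
# IDOR in miniature: the vulnerable handler returns a question purely by
# its numeric ID, so any logged-in user can walk the ID space. The fixed
# handler also verifies ownership before returning the record.

QUESTIONS = {  # toy data store: question_id -> record
    101: {"owner": "alice", "text": "private medical question"},
    102: {"owner": "bob", "text": "another private question"},
}

def get_question_vulnerable(question_id: int, current_user: str):
    # IDOR: no check that current_user owns the record
    return QUESTIONS.get(question_id)

def get_question_fixed(question_id: int, current_user: str):
    record = QUESTIONS.get(question_id)
    if record is None or record["owner"] != current_user:
        return None  # deny: not found, or not this user's record
    return record
```

The fix is a server-side authorization check on every object lookup; making IDs hard to guess is no substitute.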

El Reg ran Gliwka’s findings past UK security researcher Scott Helme, who quickly confirmed iCliniq had a serious cockup to resolve. “They need to get this locked down ASAP,” Helme told us. “The bucket should be easier to fix than the IDOR… but both need work.”

Armed with this confirmation, El Reg joined Gliwka in chasing up iCliniq. This wasn’t easy, but once we escalated the issue to iCliniq’s chief exec Dhruv Suyamprakasam, both problems were promptly resolved.

Siddharth Parthiban, iCliniq’s data protection officer, apologised to Gliwka for the organisation’s failure to respond to a vulnerability notification. An internal investigation revealed that medical files of patients of two regions of India, the states of Tamil Nadu and Punjab, that were meant to be open only to lab-testing partners were actually publicly accessible.

Online healthcare

iCliniq bills itself as an online medical consultation platform where netizens across the globe can solicit medical advice from doctors and therapists. Folks can either post a health query or book a slot for face-to-face consultation over HD video or a phone call, among other services.

It fields queries on everything from back pain, hypertension and pregnancy to sexual health and STDs.

iCliniq serves patients worldwide; its panel of experts “consists of medical practitioners, physicians and therapists from US, UK, UAE, India, Singapore, Germany” and more.

“The S3 folder taken for these regions in India must have been moved [from] private,” Parthiban explained in an email. Challenged on this point, the data protection officer reiterated that only Indian data was exposed to the public. “I confirm that ONLY files of the two states in India (Tamil Nadu and Punjab) were public. Files of other regions/countries/continents were/are NOT public,” Parthiban told El Reg.

Once it had confirmed the issue, iCliniq treated the problem as a critical priority, and promptly restricted access to confidential medical data. iCliniq promised it would contact the particular patient whose data Gliwka cited as an example. It didn’t offer any commitment to other people whose data was kept in the same previously insecure S3 bucket.

Gliwka confirmed that when he tried to access the confidential repository on Wednesday, access was denied.


“The Amazon S3 bucket no longer publicly lists its contents and the direct links to documents I have the link to are no longer accessible,” Gliwka told El Reg. “The IDOR vulnerability, which allowed to see the private questions of other users, is also fixed.”

Gliwka remains dissatisfied with iCliniq’s response. He’s not convinced that the issue was geographically contained to India, and challenged iCliniq on this point.

The Register notes that test documents uploaded by both researchers – Gliwka in Germany, and Scott Helme in the UK – ended up in the same publicly accessible AWS S3 bucket before the firm made the fix. “Your file is definitely accessible by you alone,” iCliniq told Gliwka when he raised this point.

Breach alert

The startup should notify everyone whose details were potentially exposed by the security blunder – not just the handful of files Gliwka and Helme accessed in verifying the problem, and not solely the patient whose file was emailed around by way of example. Indeed, even the names of files stored in the repository exposed sensitive information.

“While I believe that you’ve tried to protect those files by setting appropriate ACLs [Access Control Lists], I still had access to other files, even some files regarding data subjects outside of India,” Gliwka told iCliniq in an email shared with The Register. “The file listing did indeed contain sensitive information. Some file names contain the name of a patient combined with the name of a medical test/diagnosis/procedure, i.e. john-doe-hiv-test.pdf, john-doe-cancer.pdf… just with a real name.”

The startup said the files were pseudonymous, and did not constitute personally identifiable information.

Gliwka told us: “The system uses the filename provided during the upload and saves it verbatim after prefixing the file id, user id, question id and a random looking value.”

Leaky buckets

Instances of sensitive data being publicly viewable in Amazon-hosted cloud storage are far from rare. For instance, thousands of files containing the personal information of US citizens with classified security clearances were exposed last year.

There has since been a steady stream of such cockups, which shows little sign of letting up. That’s bad enough, but at the same time it is getting easier for interested parties to locate unsecured S3 buckets thanks to automated scripts, as previously reported.
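Those automated scripts typically do nothing more exotic than fire an unauthenticated request at a candidate bucket's HTTP endpoint and classify the response. A hedged sketch of that logic – the bucket name in the usage example is made up:

```python
# Classify an anonymous request to an S3 bucket's listing endpoint by
# status code: 200 means the bucket publicly lists its contents, 403
# means it exists but denies anonymous access, 404 means no such bucket.
import urllib.request
import urllib.error

def classify_status(status: int) -> str:
    """Map an anonymous-request status code to a bucket state."""
    states = {200: "public-listing", 403: "private", 404: "nonexistent"}
    return states.get(status, "unknown")

def probe_bucket(name: str) -> str:
    """Anonymously probe a bucket's virtual-hosted-style endpoint."""
    url = f"https://{name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)
```

Something like `probe_bucket("example-bucket")` is all it takes, which is why a misconfigured ACL tends to be found quickly – and not always by the good guys. Only probe buckets you own or have permission to test.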

Gliwka came across iCliniq’s bucket while developing a tool to discover leaks of sensitive data, something he described as a side project. “During the research on how to approach this problem I came across a multitude of buckets with sensitive information,” he said. “Most companies took them down rather quick[ly].”

The UK’s Information Commissioner’s Office has been informed of the medical data fumble. ®

Have you spotted a leaky S3 bucket? Let us know, and we’ll even help plug the holes.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/03/icliniq_cloud_breach/