Apple offers another Meltdown fix for Mac users…

For Apple users worried about the Spectre and Meltdown CPU security vulnerabilities – what we’ve been collectively referring to as F**CKWIT – it’s been a busy and slightly confusing few weeks.

First, on January 8, macOS High Sierra 10.13.2 users were offered a supplemental update (including for Safari and WebKit) meant to mitigate Spectre (CVE-2017-5753 and CVE-2017-5715).

Two weeks on and we have the 2018-001 update, a scheduled collection of security fixes including one that addresses Meltdown (CVE-2017-5754) for users running the older macOS Sierra 10.12.6 or OS X El Capitan 10.11.6.
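(If you want to check which macOS version a Mac is running and whether one of these updates is already waiting for it, the stock sw_vers and softwareupdate command-line tools will tell you. Below is a minimal sketch that wraps them in Python – assuming Python 3.7 or later is on the machine – though running the two commands directly in Terminal works just as well.)

    import subprocess

    def macos_version() -> str:
        # sw_vers ships with macOS and reports the product version, e.g. "10.13.2"
        result = subprocess.run(["sw_vers", "-productVersion"],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    def pending_updates() -> str:
        # softwareupdate -l lists anything waiting in Software Update,
        # including security updates such as 2018-001
        result = subprocess.run(["softwareupdate", "-l"],
                                capture_output=True, text=True)
        return result.stdout + result.stderr

    if __name__ == "__main__":
        print("macOS version:", macos_version())
        print(pending_updates())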

Separately, iOS got the same treatment: first in December (for Meltdown), before anyone knew about the flaws, and again in mid-January (for Spectre), once everyone did.

(If you need a reminder of why Meltdown and Spectre are a big deal, read Naked Security’s explainer.)

The latest updates coincide with Intel issuing a rather confusing advisory, warning system makers to stop shipping a version of its Meltdown and Spectre patches after reports that they caused some systems to reboot unexpectedly.

Apparently, Apple’s updates don’t include the Intel code that might be causing this, because the warning was aimed at high-end systems used mainly by cloud service providers.

Into the kernel

Elsewhere in 2018-001 – implemented on macOS High Sierra as 10.13.3 – there is a sprinkling of kernel-level security fixes worthy of attention.

These include a memory validation flaw (CVE-2018-4093) and a memory initialization issue (CVE-2018-4090) in High Sierra, both reported by Google Project Zero researcher Jann Horn, who helped uncover Spectre and Meltdown.

Next come two flaws that might allow malware to access restricted memory (CVE-2018-4092, affecting all desktop versions, and CVE-2018-4097, affecting macOS High Sierra 10.13.2 and macOS Sierra 10.12.6).

Closing out the kernel-level theme is a memory corruption vulnerability that could allow a malicious program to run code with kernel privileges (CVE-2018-4082).

And beyond

There are several more intriguing flaws, including a remote code execution (RCE) bug affecting Sierra and High Sierra (CVE-2018-4094), discovered by a team from South Korea’s Yonsei University, who found that “a maliciously crafted audio file may lead to arbitrary code execution.”

Plus three WebKit flaws in High Sierra (CVE-2018-4088, CVE-2018-4096, and CVE-2018-4089, the latter one of two in this month’s list discovered by Google’s OSS-Fuzz system), and one in Wi-Fi (CVE-2018-4084, affecting all desktop versions).

Mobile users, meanwhile, get iOS 11.2.5, which fixes 13 CVEs, while for the Safari browser, which reaches 11.0.3, it’s three.

A notable iOS fix is for the recently reported “ChaiOS” LinkPresentation flaw (CVE-2018-4100), which could allow a malicious text message to crash the device; it is also patched on the desktop versions.

There’s even something for Windows users in the shape of security fixes for iTunes (Windows 7 onwards) and iCloud for Windows 7.3.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MjcUiOBkQ64/

Non-‘fiscally neutral’ defence review is go, minister tells MPs

A long-rumoured review of British defence spending will not be “fiscally neutral”, Secretary of State for Defence Gavin Williamson told Parliament this morning as he announced that it is going ahead.

The review has been split out of an ongoing national security review headed up by the Prime Minister’s National Security Adviser, Sir Mark Sedwill. Unlike Williamson, career civil servant Sedwill is not an elected MP and does not answer to Parliament.

“We’re driving this review, this programme of modernisation, everyone including the Prime Minister thinks it’s right to do this,” Williamson told MPs in the House of Commons, adding that the review “isn’t aiming to be fiscally neutral – that’s why we brought it out of the national security capability review.”

Williamson did not elaborate on whether the review not being “fiscally neutral” means an increase in the defence budget, as hoped for by many MPs, or further cuts, as has been the trend of British defence spending since the end of the Cold War. Though the government insists that it is spending 2 per cent of GDP on defence, it does not break this figure down. Rumours persist that pension liabilities and other such figures are rolled into the public “defence spending” figure to bulk it out.

“Every government that makes statements about spending more inevitably ends up spending less,” said prominent Conservative MP Iain Duncan Smith. “Can we please not repeat the nonsense of saying ‘when we modernise’ you actually mean ‘cut’?”

“I want [the Ministry of Defence] to drive efficiencies so that money can be put into the front line for our armed forces,” Williamson replied.

Former defence secretary Sir Michael Fallon, who resigned from the post last year after allegedly propositioning fellow Tory MP Andrea Leadsom, stood up to tell Williamson that he would have “the support of the whole House [of Commons] if he manages to secure additional funding”, exhorting the defence secretary to “put the defence budget on a more sustainable footing to allow our armed forces to tackle these challenges… what really matters in the end is money. More money.”

Labour’s shadow defence secretary, Nia Griffith, asked: “What is the timetable for this review and when will it be published? It is vital our personnel are not kept in the dark,” to which Williamson replied that he aimed to “publish it in the summer, before the House rises for the summer recess.”

One of the more penetrating questions was posed by Conservative MP Andrew Murrison, who asked what the difference is between “cyber, intel, asymmetric warfare and drones” in defence terms and security terms. “How is he going to delineate Sir Mark Sedwill’s review from the one that he leads?”

Williamson said the MoD would be working with the Cabinet Office, adding: “In terms of cyber attack, this is something that the MoD leads on. All the work on those realms is done in complete conjunction with all the parts of our national security infrastructure whether that’s GCHQ, MI5, MI6, and that’s something that’s completely essential.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/25/defence_spending_review_green_light/

Intel alerted computer makers to chip flaws on Nov 29 – new claim

Intel quietly warned computer manufacturers at the end of November that its chips were insecure due to design flaws, according to an internal Chipzilla document.

French tech publication LeMagIT reported this week it had obtained a top-secret Intel memo sent to OEM customers on November 29 under a confidentiality and non-disclosure agreement, meaning the hardware makers were banned from discussing the file’s contents.

That date is about six months after the chip maker was warned in June 2017 about the blunders in its blueprints by researchers at Google and university academics.

On Wednesday this week, LeMagIT’s Christophe Bardy revealed the first page of that 11-page document, titled “Technical Advisory”, from the Intel Product Security Incident Response Team. It describes the security vulnerabilities we now know as Meltdown and Spectre, and sets out when Intel planned to go public.

It stresses that the issue should remain absolutely confidential. Recipients were told to “encrypt any sensitive details using our PGP key” if they had “any questions, requests for technical details or proposed coordination with other parties”, the note added.

The flaws would be publicly disclosed in an Intel security advisory on January 9, the memo said (failing to predict El Reg’s scoop on January 2).

The date of the disclosure to OEMs is likely to raise eyebrows as it happened on the same day Intel chief exec Brian Krzanich sold shares in his company worth $25m before tax.

Intel has denied any impropriety, saying Krzanich’s decision to sell was part of a standard stock sale plan that had been organized in October.

At the end of November – when the general public was none the wiser – the stock dump was seen as notable because Krzanich sold about half his Chipzilla shares, keeping the minimum of 250,000 required under his employment contract.

After The Register revealed the processor design flaws, Intel’s stock price dropped at least eight per cent – enough to trigger lawsuits from investors seeking to recoup their losses.

The company’s quarterly results are due out later today – and execs will no doubt be preparing for a grilling from analysts on the earnings call. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/25/intel_spectre_disclosed_flaws_november/

Avoiding the Epidemic of Hospital Hacks

Lessons learned about cyber hygiene from inside one of America’s highest ranked medical institutions.

On January 9, 2005, the Donttrip malware infection hit Northwest Hospital, a large medical facility in Seattle serving thousands of people. The malware clogged the hospital’s network with surges of exploit-scanning traffic. Medical operations ground to a halt as laboratory diagnostic systems couldn’t transfer data. Intensive Care Unit terminals went offline, internal pagers were silenced, and even the automatic operating room doors were locked down.

The IT team bailed valiantly against the sinking ship, but as soon as they cleaned a machine and put it back on the network, it got re-infected. Staff moved to implement disaster recovery processes, using manual or backup systems to bypass the affected systems. The quality of patient care was diminished, but no lives were seriously endangered, and the hospital weathered the storm. The FBI swooped in, eventually arresting the guilty parties, and the word went out: hospitals can be affected by cyber incidents.

That was over 12 years ago. Have things improved?

In April 2017, a 1,000-bed hospital in Buffalo, New York was virtually shut down by a ransomware attack. Then on May 12, the WannaCry ransomware pounded through Europe like a tsunami, leading to substantial IT outages. WannaCry was particularly devastating to the UK’s National Health Service, disrupting more than a third of NHS trusts along with 603 primary care and other facilities. With the proliferation of complex IT in healthcare, running on highly critical but widely open networks, it is no surprise that hospital systems are at such high risk. Ransomware has proven itself an existential threat to medical service delivery in modern hospitals.

I recently had a chance to observe these procedures from inside one of the highest-ranked hospitals in America. As I was under their care, I couldn’t help but watch carefully and ask questions about their IT security procedures. I followed this up with some research into their security policies and incident history, and I found their cyber hygiene was of an equally high standard. There are some good IT security lessons worth sharing.

I’ve written before about the insecurity of the Internet of Things, and during my hospital stay I had concerns about all the medical gear like scanners and monitoring devices hospital staff wheeled around and used on patients. It turned out that much of that equipment was air-gapped away from the main network. Most of this equipment was manually controlled, requiring a live operator to enter commands. Old fashioned, yes, but no malware was going to infect an IV feed and overdose a patient. To retrieve data, nurses used a very regimented manual process to capture and record readings from these systems. The amount of data captured was also very small, usually a short series of numbers, which reduced mistakes in the handoff. Over the course of weeks, I observed dozens of different medical professionals executing the same manual process in the same exact way.

In order to enter this data, the nursing staff needed access to wired data terminals. All the corridors and hospital rooms had a computer for data entry and retrieval. All these terminals required a login, and I didn’t see one left unlocked when unattended. Again, the staff was obviously well trained in locking their systems when stepping away. When not logged in, each terminal displayed a screen saver showing security awareness and anti-phishing warnings.

It’s no surprise that medical personnel could be well trained and disciplined in following detailed instructions involving complex systems and tools. It was refreshing to see that this hospital put as much emphasis into their secure IT procedures as they did into their patient care processes. My review of their training materials also showed me that they made direct connections between security and patient care quality—a great way to link security processes to the organizational goals.

Nevertheless, being a security professional, I am always thinking about what can go wrong. So I asked about the plan if the IT systems went down. A nurse told me that the hospital overstaffs with redundant personnel on every shift so they can backfill during emergencies. I could see the value in this: it is insurance not only against IT disasters but against any kind of medical emergency in which additional personnel may be needed. It may cost extra in the daily operational budget, but it’s worth it to fulfill the mission of the organization (that is, saving lives).

In addition to the processes I was directly exposed to, I was given an opportunity to review a lot of the hospital’s security policies and training materials. I was pleased with what I found. There was a method for securely leveraging BYOD smartphones and tablets, as well as two-factor remote access. Both disk and network encryption were deployed throughout the infrastructure. And there was an appropriate emphasis on asset management and vulnerability testing. They were doing everything they were supposed to be doing.

The most significant thing that struck me was the emphasis on disciplined process that covered the entire staff. I’ve always felt that strong process around key controls was a game changer in maintaining a secure organization. As they say: it’s always the simple things, but the simple things are hard.

Raymond Pompon is a Principal Threat Researcher Evangelist with F5 Labs. With over 20 years of experience in Internet security, he has worked closely with Federal law enforcement in cyber-crime investigations. He has recently written IT Security Risk Control Management: An … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/f5/avoiding-the-epidemic-of-hospital-hacks/a/d-id/1330877?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Meltdown & Spectre: Computing’s ‘Unsafe at Any Speed’ Problem

Ralph Nader’s book shook up the automotive world over 50 years ago. It’s time to take a similar look at computer security.

Back in 1965, a young Ralph Nader wrote an evisceration of the US auto industry. This book, Unsafe at Any Speed, attacked the industry for lagging behind best practices with respect to safety — essentially, carmakers were putting the public at risk by their reluctance to invest in safety features. It’s hard to believe that over 50 years have passed since then, but at the opening of 2018, and as we deal with serious security and safety issues in the computer world, I’ve been reflecting on the situation in which we found ourselves half a century ago.

What’s triggered this reflection is the rotten start of this year, with the revelation of Spectre and Meltdown, two serious vulnerabilities that between them impact most modern computers. The newspapers and Web have been full of descriptions, and yes, these bugs are as unpleasant as they sound. Unlike most of the things we read about, these represent problems in the actual hardware, so there’s no simple software patch that makes everything better. These are not problems that involve a programmer forgetting to check the size of an array; these are problems in the very “brain” of the computer, the CPU.

As chief scientist for a large security company, I’m pretty immune to hype and spin: I deal in realities. As such, I recently gave a company-wide tutorial on these two vulnerabilities (and, really, they involve a class of vulnerabilities rather than discrete things). There’s nothing like having to teach how something works to test that you really understand it. In the case of these bugs, I understand them all too well: these are nasty little side-channel attacks that allow the slow leak of data to an attacker.

Let me be technical for a moment. These problems exist and are exploitable because of a few features of the chip: the translation lookaside buffer and memory caching in general (used to make memory access much quicker), speculative and out-of-order execution (used to make the CPU execute a set of instructions more quickly), and, in the case of one version of Spectre, JIT, or just-in-time compilation (used to make interpreted code run more quickly). When I put it like this, do you see a pattern? I do. These are all related to steps we’ve taken to speed up computing. I get it — people buy CPUs because this year’s model is a shade faster than the one they have. Speed good. Lag bad. Features, especially speed, sell.
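Spectre and Meltdown are cache-timing side channels, which are hard to demonstrate from a high-level language, so treat the following purely as an analogy rather than anyone’s actual exploit code. The toy Python below shows a much simpler timing side channel: a string comparison that bails out at the first wrong character, so the time taken to reject a guess leaks how much of the guess was correct. The principle is the same one these CPU bugs exploit: how long an operation takes can reveal data that is never returned directly.

    import time

    SECRET = "hunter2"  # stands in for data the attacker should never see

    def insecure_equals(a: str, b: str) -> bool:
        # Compares character by character and stops at the first mismatch,
        # so rejection time grows with the length of the correct prefix.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
            time.sleep(0.001)  # exaggerate the per-character cost so the leak is easy to see
        return True

    def time_guess(guess: str, trials: int = 5) -> float:
        start = time.perf_counter()
        for _ in range(trials):
            insecure_equals(SECRET, guess)
        return time.perf_counter() - start

    # Guesses with a longer correct prefix take measurably longer to be rejected.
    for guess in ["zzzzzzz", "huzzzzz", "huntzzz", "hunterz"]:
        print(guess, round(time_guess(guess), 4), "seconds")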

Computers have moved from occupying an adjacent spot in our lives (I remember my first computer, on which I mostly played Elite, a space trading game) to being machines that are literally responsible for helping to keep us alive. My cellphone is with me at all times, a computer applies the brakes in my car, my thermostat happily interacts with servers on the Internet to let me know what the weather outside is, and the lights literally stay on because of modern computation. And it’s not just me — the entire modern world is based upon secure, safe, reliable computing. There is not one aspect of our lives, from birth to death, that doesn’t rely on the magic of computation.

These new vulnerabilities should remind us that the foundation that technologically enables our society is cracked. We have focused on performance, on glitz… more pixels, a couple more gigahertz, animated emojis. The list is endless. But what we haven’t done, outside of a woefully small group of people who make security their life’s work, is put the safety of that complex, beautiful system ahead of its glitter. I’m picking my words with care — security sounds abstract and cold, but we all “get” what it means to make something safe and what the consequence of something being unsafe can be.

I am in awe of the advances we have made in computation. During my career, I’ve gone from hand-coding a machine that ran at 3.25MHz and had a whopping 1KB (!) of memory to, 30-something years later, a home laptop that works away almost 1,000 times faster per core (and it has several of them) with seven orders of magnitude more memory available. I can deploy cloud services with a wave of my hand, commanding more computation than I ever dreamed of. What we have done is amazing. We should look at those accomplishments with pride. We should also look at the lack of attention we have put into security, at the design stage, with dread: without this infrastructure being secure, it means nothing.

My wish, though made with little hope, is that Spectre and Meltdown will be a wake-up call for all of us. For too long, security has been placed second (or worse) to features and performance. This must change if we are to truly realize the benefits that computation can bring to mankind. I don’t blame the vendors here, but the entire ecosystem. Security and safety typically haven’t been drivers for purchases in IT, and companies can’t be blamed for making products that sell. Somehow, this must change.

Ralph Nader’s Unsafe at Any Speed shook up the automotive world over 50 years ago. Perhaps it’s time to apply those same concepts to computation. I don’t want to be unsafe at any speed. No matter how fast my computer, if I can’t trust it, it’s less than worthless — it’s downright dangerous.

Dr. Richard Ford is the chief scientist for Forcepoint, overseeing technical direction and innovation throughout the business. He brings over 25 years’ experience in computer security, with knowledge in both offensive and defensive technology solutions. During his career, … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/meltdown-and-spectre-computings-unsafe-at-any-speed-problem/a/d-id/1330884?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook Buys Identity Verification Firm

Facebook has purchased startup Confirm, which uses pattern analysis to confirm identities.

Confirm, a Boston-based identity verification startup, has agreed to be acquired by Facebook. The two companies have not disclosed the terms of their deal.

For three years Confirm has specialized in building technology that confirms user identities using photos of their driver’s licenses or other forms of ID. The company has developed APIs and SDKs that can be integrated into applications for verification purposes.

Following the acquisition, Confirm has updated its website to state that its current digital identity authentication software offerings will be wound down. The team will join Facebook’s office in Cambridge, Mass., though the company has not confirmed how many employees will make the move.

Article source: https://www.darkreading.com/application-security/facebook-buys-identity-verification-firm/d/d-id/1330904?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Voice MFA Tool Uses Machine Learning

Pindrop claims its new multi-factor authentication solution that uses the “Deep Voice” engine could save call centers up to $1 per call.

Voice security company Pindrop today announced Deep Voice, a new voice biometrics engine that uses deep neural network machine learning technology, and Pindrop Passport, a multi-factor authentication product powered by the Deep Voice engine.

According to Pindrop, the Deep Voice engine can recognize a repeat caller regardless of background noise, short utterances, or whether the caller is speaking to an interactive voice response system or to a human call center agent. The engine uses whitelisting, blacklisting, and anomaly detection, and the individual’s voice is analyzed on every call, strengthening the credential each time.
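Pindrop hasn’t published the internals of the Deep Voice engine, but the general idea behind this kind of voice biometrics is to turn each call into a numeric embedding and compare it with a profile built up over previous calls. The sketch below illustrates only that generic idea (the SpeakerProfile class, the 256-dimension embedding, and the 0.8 threshold are invented for the example; producing reliable embeddings from noisy audio is the hard part a commercial engine actually does):

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    class SpeakerProfile:
        """Toy voiceprint: a running average of the embeddings from past calls."""

        def __init__(self, embedding_dim: int = 256):
            self.embedding = np.zeros(embedding_dim)
            self.calls = 0

        def matches(self, call_embedding: np.ndarray, threshold: float = 0.8) -> bool:
            if self.calls == 0:
                return False  # nothing enrolled yet
            return cosine_similarity(self.embedding, call_embedding) >= threshold

        def enroll(self, call_embedding: np.ndarray) -> None:
            # Fold each new call into the stored voiceprint, so the
            # "credential" gets a little stronger with every conversation.
            self.embedding = (self.embedding * self.calls + call_embedding) / (self.calls + 1)
            self.calls += 1

In those terms, a whitelist is just a set of enrolled profiles a new call is expected to match, and a blacklist a set of known-fraudster profiles it must not.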

The Pindrop Passport product combines Deep Voice, device authentication, and behavior analytics to create a passive multi-factor authentication solution that call centers can use to confirm callers’ identity – instead of requesting personal information that might have been exposed in a data breach.  

Pindrop claims that Passport could reduce call handle times by up to 55 seconds and call center operating costs by up to $1 per call, as well as reduce fraud.

Article source: https://www.darkreading.com/endpoint/new-voice-mfa-tool-uses-machine-learning-/d/d-id/1330905?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Containers & Serverless Computing Transform Attacker Methodologies

The pace of hacker innovation never slows. Now security technologies and methods must adapt with equal urgency.

In technology, as in life, the only constant is change. As systems undergo innovation, so do the ways people attack them, adapting their methodologies in tandem with their motives to stay ahead of the curve and maximize returns.

When money was to be made by compromising individual databases through the corporate data center, attackers learned to bypass firewalls and network intrusion prevention systems. As the network perimeter eroded and data moved into software-as-a-service offerings, smart attackers shifted to endpoint compromise and ransomware. With the rise of cloud-based systems, attackers now seek to exploit the massive quantities of data available via Web applications, microservices, and APIs.

Renewable Infrastructure Changes the Security Game
The old-school application, simple and static, is quickly becoming a relic of the past. Once upon a time, the entire technology stack for a typical app was contained within the data center. Now, it’s more likely to incorporate a mix of cloud-based infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) elements assembled checkbox by checkbox. Instead of being updated once or twice each year, application code is now pushed to production upward of 10 to 20 times each day by DevOps teams using Agile methodologies. And while the long shelf life of traditional applications once left system-level attack vectors exposed for long stretches of time, serverless architectures and containers have now decreased both the system footprint and the attack surface.

The increasing adoption of this modern infrastructure has important implications for security. While many traditional Web-style attacks can still effectively target poorly written code, the shift in how applications are built, deployed, and developed has opened many new opportunities for attackers to compromise sensitive and valuable data. In fact, IaaS misconfigurations have figured in more than one high-profile breach in the last year, and enterprises using modern deployment models must now protect their configuration as if it were the infrastructure itself. This includes configuration management, constant assessment for configuration errors, and appropriate access control. They must also monitor the provider and configuration in real time and make sure that logging provides adequate data to detect attacks.
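As one concrete, hedged example of “constant assessment for configuration errors”: several of the high-profile IaaS incidents of the past year involved cloud storage buckets left readable by the world. A short scheduled script using the AWS boto3 SDK can flag that particular class of mistake. The sketch below checks only S3 bucket ACLs, so treat it as a starting point rather than a complete configuration audit:

    import boto3

    # Canned AWS group URIs that mean "everyone" or "any AWS account holder"
    PUBLIC_GRANTEES = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }

    def find_public_buckets():
        """Return (bucket, permission) pairs for S3 buckets with world-readable ACLs."""
        s3 = boto3.client("s3")
        flagged = []
        for bucket in s3.list_buckets()["Buckets"]:
            name = bucket["Name"]
            acl = s3.get_bucket_acl(Bucket=name)
            for grant in acl["Grants"]:
                grantee = grant.get("Grantee", {})
                if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
                    flagged.append((name, grant["Permission"]))
        return flagged

    if __name__ == "__main__":
        for name, permission in find_public_buckets():
            print("PUBLIC:", name, "grants", permission, "to everyone")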

However, new development and deployment models leveraging renewable systems (or temporal systems) also afford security teams new protection methods, including a security model that Justin Smith of Pivotal calls the three Rs. “Its idea is quite simple,” he writes. “Rotate data center credentials every few minutes or hours. Repave every server and application in the data center every few hours from a known good state. Repair vulnerable operating systems and application stacks consistently within hours of patch availability.”

The rotate, repave, and repair model gives application security teams a road map into limiting the exposure window for attack, making it much more difficult to target a system built and deployed into a modern stack. It’s a great way to stay ahead of attackers — but it’s not bulletproof.

A Shift to Attacker Persistence and Automation
Traditional persistent infrastructure allows attackers to take a methodical approach, first penetrating the environment, then moving laterally to seek high-value targets. With the shift to containers and serverless computing, the infrastructure can be entirely refreshed rapidly, as often as every hour or even every few minutes. If the box you’re attacking is about to disappear, it’s much more difficult to persist on the host, so you’ll shift your attack to the app instead. This makes strong application security a requirement in the modern era.

As the concept of attack persistence diminishes, hackers are turning to automation so they can restart their attacks from scratch in a matter of seconds each time a system is reset. When long-term persistence is unavailable, automating the attack sequence becomes key, making it possible to return to the furthest point of penetration within seconds, every time the infrastructure is refreshed.

This shift gives security teams a new key indicator in the form of real-time attack telemetry. If you’re seeing the same system, infrastructure, or application requests or changes being made over and over again, there’s a good chance you’re under attack. To detect this type of automation, application security experts have to focus on threshold-based detection of actions over time. They can do this by creating scripts or systems in their current Web protection technology, or they can look at log entries or use a security information and event management system, such as Splunk. It might not always be an exploit that’s detected; it could be as simple as a multistep application manipulation being executed from the same user account or source IP address every time a refresh is triggered, or N times in X minutes.
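A minimal sketch of that kind of threshold-based check might look like the following (the five-minute window and the threshold of 20 repeats are arbitrary placeholders, and in practice the events would be fed in from your Web logs or SIEM rather than by hand):

    from collections import defaultdict, deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)   # the "X minutes" in "N times in X minutes"
    THRESHOLD = 20                  # the "N"

    # (source, action) -> timestamps of recent occurrences inside the window
    recent = defaultdict(deque)

    def record_event(source: str, action: str, when: datetime) -> bool:
        """Return True when the same source repeats the same action N times in X minutes."""
        window = recent[(source, action)]
        window.append(when)
        while window and when - window[0] > WINDOW:
            window.popleft()
        return len(window) >= THRESHOLD

    # Example: the same multistep manipulation hammered from one IP after every refresh
    if record_event("203.0.113.7", "POST /api/transfer step 3", datetime.utcnow()):
        print("possible automated attack in progress")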

For modern attackers, the game is no longer about achieving system persistence but, rather, simply achieving the goal. Instead of advanced persistent threats and long-term compromise, the shift to cloud- and service-based infrastructures favors a hit-and-run attack model that can be executed within a single refresh period, or automated to live and execute across multiple refreshes.

It’s impossible to overstate the importance of these shifts — in both application technology and attack methodology — for security teams. Hackers thrive by staying on the leading edge of innovation, and the targets that are slowest to adapt are the easiest to compromise. By adapting your security model to match the emerging threat landscape, you can ensure that your next-generation application environment is every bit as secure as it was in the traditional data center and perimeter days, if not more so.

Tyler Shields is Vice President of Marketing, Strategy, and Partnerships at Signal Sciences. Prior to joining Signal Sciences, Shields covered all things application, mobile, and IoT security as a distinguished analyst at Forrester Research. Before Forrester, he managed mobile … View Full Bio

Article source: https://www.darkreading.com/cloud/how-containers-and-serverless-computing-transform-attacker-methodologies/a/d-id/1330896?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook to give you more control over your data

In preparation for a tough new data law coming to the European Union in May – the General Data Protection Regulation (GDPR), considered by many to be the biggest overhaul of personal data privacy rules since the internet was born – Facebook plans to make it easier for users to manage their own data, Chief Operating Officer Sheryl Sandberg said on Tuesday.

Reuters quoted the COO speaking at a Facebook event in Brussels.

We’re rolling out a new privacy center globally that will put the core privacy settings for Facebook in one place and make it much easier for people to manage their data.

Sandberg said that the creation of this “privacy center” was prompted by the requirements of the GDPR: a regulation that requires any company that does business in the EU to take specific steps to more securely collect, store and use personal information. The aim of the GDPR is to give Europeans more control over their information and how companies use it.

Facebook has been trying to do this for a while, Sandberg said. The Guardian quoted her:

Our apps have long been focused on giving people transparency and control and this gives us a very good foundation to meet all the requirements of the GDPR and to spur us on to continue investing in products and in educational tools to protect privacy.

That may well be true, but there’s nothing like the pain of brutal fines to kick up the work a notch. As the Guardian reports…

… companies found to be in breach of GDPR face a maximum penalty of 4% of global annual turnover or €20m (£17.77m), whichever is greater. In Facebook’s case, based on a total revenue of $27.6bn in 2016, the maximum possible fine would be $1.1bn.

Ouch. Those kinds of numbers have got to put some sweat on the brows of Facebookers. No wonder they want users to handle their own privacy. “Giving people transparency and control” is all well and good, of course. We applaud giving users more insight into, and control over, what’s done with their data. If it took a $1.1bn cattleprod to get Facebook there, well, so be it.
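For the record, the arithmetic behind that $1.1bn figure is easy to check: the GDPR cap is the greater of €20m or 4% of global annual turnover, and 4% of $27.6bn is roughly $1.1bn (glossing over currency conversion for the sake of a ballpark):

    def max_gdpr_fine(global_annual_turnover: float) -> float:
        """GDPR ceiling: the greater of 20 million or 4% of global annual turnover."""
        return max(20_000_000.0, 0.04 * global_annual_turnover)

    # Facebook's 2016 revenue, roughly $27.6bn
    print(max_gdpr_fine(27_600_000_000))  # 1104000000.0, i.e. about $1.1bn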

Sandberg also reiterated what Facebook product manager Samidh Chakrabarti said in a post on Monday: that the company plans to double the number of people working on safety and security to 20,000 by the end of the year.

Chakrabarti said that in 2016, Facebook was “far too slow to recognize how bad actors were abusing our platform.” They’re trying hard to catch up and neutralize the risks now, he said.

Sandberg echoed Chakrabarti, saying that Facebook will work hard to end abuse of the platform by those seeking to spread fake news and tinker with the democratic process around the world:

If we can prevent people from being part of our ad networks, prevent people from advertising and take away the financial incentive, that is one of the strongest things we can do against false news, and we are very focused on this.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wqJ1BVFikIA/

Babies’ data being sold to tax fraudsters on the dark web

Christina Warren was about 12 years old when she started receiving credit card bills. After that, she started getting collection notices. It was at that point that her parents had to convince the creditors that their daughter’s identity had been stolen.

As CNN tells it, credit reporting agencies told Warren that they fixed the issue, but she later had trouble getting her first credit card.

You can see the appeal of targeting children: credit-wise, they present a clean slate. Given that they won’t be working or filing tax returns for years, identity theft on children can go on, undetected, for over a decade. In fact, a 2011 report from Carnegie Mellon University’s CyLab found that 10.2% of the 40,000 children whose identities were scanned in the report had someone else using their Social Security number – that’s 51 times higher than the 0.2% rate for adults in the same population.

Now, it seems that identity thieves are after the details of not just children, but babies too. Security researchers have found Social Security numbers, mothers’ maiden names and dates of birth for babies, selling for hundreds of dollars on the dark web.

Terbium Labs, which focuses on dark web data intelligence, said last week that in addition to child data it’s seen listed for sale, it’s also now seeing information “specifically advertised as infants’ data.”

The researchers came across one vendor on the dark web marketplace Dream Market who had listed “Infant fullz get em befor tax seson [sic].” “Fullz” refers to full identity packs that contain first and last names, Social Security numbers, dates of birth, and other personal information. To commit tax fraud – and we are in tax fraud season right now, in the months leading up to the standard US filing deadline in April – a fraudster needs a fullz, plus a W-2 form, employee identity numbers or paystubs.

How much will all that set an identity thief back? Terbium Labs says that for the “relatively high price” of $312, a buyer can purchase an infant’s name, Social Security number, date of birth, and mother’s maiden name. All a thief has to do is claim a child dependent that they don’t actually have, and presto: that $312 investment turns into the maximum child tax credit of $1,000 per child.

As for committing tax fraud against an adult, the thief would also need the W-2. That, too, is available on Dream Market, for $52. Again, that’s peanuts compared with the money a fraudulent tax return can make. There are even lower prices out there for W-2 forms: the researchers have spotted them priced at $45 each, or discounted to $35 each for orders of 10 or more.

The crooks can also get enterprising and turn an infant into a tax-paying, tax return-filing wage slave through the magic of scouring the internet to find parents’ full names – social media accounts come in handy for that.

Dark web vendors are happy to oblige when fledgling fraudsters need help getting up to speed on tax fraud. Terbium Labs found tax fraud guides selling for anywhere between $2 and $175 – or, in some cases, given away free on carding forums. Terbium says they’re often posted as “security exercises for discussion.”

The quality varies, Terbium says. Most of the time, the thieves keep their tips and tricks close to the vest:

While many guides for sale on the dark web are essentially useless either because they are out of date or because they do not contain any topical information at all, a high quality guide walks buyers through the process of committing fraud. The most useful guides may not be publicly advertised at all; rather than selling to just any buyer, experienced fraudsters tend to keep the most valuable tips and tricks to themselves, or circulate it among a small, trusted group.

As tax season approaches, tax fraud spikes. This year has brought a massive tax code overhaul to the US, while law enforcement has cracked down on all types of illegal activity on the dark web, but none of that will dissuade the money machine, Terbium predicts:

As always, fraud finds a way.

What to do?

The Federal Trade Commission (FTC) provides tips and advice on how to avoid having a thief misuse your child’s information to commit fraud, and how to detect it if they have. One warning sign, for example, is a notice from the IRS saying the child didn’t pay income taxes, or that the child’s Social Security number was used on another tax return.

Pity the IRS its Cassandra fate: it’s apparently cursed to speak true prophecies that no one believes. The IRS saw a huge spike in phishing and malware attacks during the 2016 tax season, which came on top of a 400% increase in phishing and malware in 2015. And in early 2017, the US tax agency sent out an urgent warning about a new type of tax fraud taco: CEO spearphishing fraud stuffed with W-2 tax form scamming and a dollop of wire fraud on top.

But according to the second annual Tax Season Risk Report from ID theft protection firm CyberScout, a survey from last year showed that the public’s not using the security practices we need to protect ourselves from identity theft.

As in, 58% of people in the US simply don’t worry about tax fraud. They should! In November 2016, the IRS said that it had stopped 787,000 confirmed ID theft returns that year, totaling more than $4 billion in potential fraud.

So that’s one simple thing we can do to prevent tax fraud: listen to the IRS!

Unstick your kid from the web of tax fraud

Getting a child tax-fraud victim out of the mess is similar to how it’s done for people of any age. Here are the FTC’s tips and resources:

  • Contact each of the three nationwide credit reporting companies.
    Send a letter asking the companies to remove all accounts, inquiries and collection notices associated with the child’s name or personal information.
    Explain that the child is a minor and include a copy of the Uniform Minor’s Status Declaration.
  • Place a fraud alert.
  • Learn about your rights.
    The credit reporting company will explain that you can get a free credit report, and other rights you have.
  • Consider requesting a credit freeze.
    The credit reporting companies may ask for proof of the child’s and parent’s identity.
  • Order the child’s credit reports.
    Review the credit reports.
  • Contact businesses where the child’s information was misused.
  • Create an Identity Theft Report.
  • Learn more about repairing identity theft.
  • Update your files.
    Record the dates you made calls or sent letters.
    Keep copies of letters in your files.

How to keep your kid out of that web in the first place

It’s a lot of time and work to fix identity theft. Better still to protect children from identity misuse in the first place. More tips from the FTC:

  • Stash all paper and electronic records that show your child’s personal information in a safe place.
  • Don’t be too obliging when it comes to handing over your child’s Social Security number or other taxpayer ID. Rather, ask questions, as in: Who’s asking for my kid’s identity information? Why do they need the data? Do you trust them? How do they plan to protect the data? Can they use a different identifier, such as the last four digits of a Social Security number?
  • Shred documents that show your child’s information before throwing them away.
  • Be aware of events that can trigger identity theft: loss of a wallet or purse with a child’s information inside, a home break-in, or a notice from a school, doctor’s or dentist’s office saying that they’ve had a security breach, for example.

The FTC also has a range of ways to limit your child’s risk of identity theft – check them out here.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BXCjcqshHrU/