
Guess who’s back, back again? China’s back, hacking your friends: Beijing targets American biz amid tech tariff tiff

Three years after the governments of America and China agreed not to hack corporations in each other’s countries, experts say Beijing is now back to its old ways.

And if that’s the case, we can well imagine Uncle Sam having a pop back.

Speaking at the Aspen Cyber Summit in San Francisco on Thursday, a panel including ex-NSA super-haxx0r Rob Joyce and Symantec CEO Greg Clark said the 2015 truce the Obama administration struck with Beijing has been all but wiped out over the past year. The Middle Kingdom has now returned to spying on American businesses, stealing secrets and gathering intelligence with a new zeal.

“It had a marked impact on the way that the Chinese were behaving,” former White House security czar Rob Joyce said of the Obama-Xi agreement. “We have certainly seen the behavior erode in the last year, and we are very concerned with those troubling trends.”

The rise in activity comes amid the escalating trade war between the US and China, with each side raising tariffs on imports.

Post hoc ergo propter hoc

However, correlation does not necessarily mean causation. Dmitri Alperovitch, cofounder and CTO of CrowdStrike, noted that events within China, most notably a series of government reforms and anti-corruption campaigns, played a significant part in limiting activity against the US in past years as Beijing-backed hackers focused their activities within the country, spying on undesirables, rather than against foreign targets.

“We were tracking those Chinese threat actors,” Alperovitch said. “They didn’t cease to exist, but they were just hacking Chinese companies.”

Regardless of the cause, the panelists agreed that China is now back with a vengeance.



Clark said that things have got so bad that one of his clients has given up entirely on its latest generation of products, believing the blueprints to have been completely and comprehensively snatched by Chinese hackers.

“I had a very large corporate manufacturer come into our office and say ‘we have stopped trying to protect it, it has all been stolen, we want to innovate faster’,” the Symantec boss said.

Where the panelists differ is on what to do about the renewed hacking efforts from the world’s most populous nation. For Joyce, the US government and the information security community at large can help by making life more difficult for President Xi’s hackers themselves.

“Overall, it is about making it harder for them to succeed, and some of that will be taking away the infrastructure they’re using, some of that will be exposing their tools,” Joyce said. “There are a number of strategies you can come up with that make it less effective, less efficient, and less likely to be successful.”

The solution: more war

Alperovitch argued that further economic sanctions will go a long way toward sending a message to Beijing.

“It is not a cyber problem, it is an economic warfare problem,” he said. “Responding with economic actions of our own and putting pressure on China is the right strategy.”

There is also hope that China’s own internal development will alleviate the problem. Panelist Elsa Kania, adjunct fellow at the Center for a New American Security, believes that as Chinese companies further move into the international market, coming up with their own innovative ideas and concepts will take precedence over stealing intellectual property from rivals.

“There is clearly an ambition to graduate beyond IP theft and become more of a leader,” Kania said. “There may be a broader transformation going forward.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/09/china_hacking_usa/

Symantec Uncovers North Korean Group’s ATM Attack Malware

Lazarus Group has been using FastCash Trojan on obsolete AIX servers to empty tens of millions of dollars from ATMs.

Researchers from Symantec have uncovered the malware tool North Korea’s infamous Lazarus Group has been using since 2016 to empty millions of dollars in cash from ATMs belonging to mostly small and midsize banks in Asia and Africa.

In a report this week, the security vendor described the malware as designed to intercept and approve fraudulent ATM cash withdrawal requests before they reach a bank’s underlying switch application server that processes them.

The malware is an executable file that can be injected into a running and legitimate process on application servers running IBM’s AIX operating system. All of the switch application servers that the Lazarus Group has managed to compromise with the malware so far were running unsupported versions of AIX, Symantec said.

“The takeaway is not only one for banks but any organization that runs a production environment with legacy, outdated, or unsupported equipment and software,” says Jon DiMaggio, senior threat intelligence analyst at Symantec.

The financial loss and public embarrassment accompanying such attacks far outweigh the cost of bringing obsolete infrastructure up to speed. “At a minimum, financial institutions should use current and supported systems and software in order to minimize the risk of exposure of both monetary losses as well as sensitive customer data, such as PII,” DiMaggio says.

The US government has dubbed the Lazarus Group’s ATM attacks as the FastCash campaign. In an Oct. 2 technical advisory, the FBI, Department of Homeland Security, and US Treasury Department described the attacks as costing banks tens of millions of dollars. The advisory noted two incidents, one in 2017 and another in 2018, where Lazarus Group actors enabled simultaneous cash withdrawals from ATMs spread across two dozen countries.

In each of Lazarus Group’s multiple attacks, the threat actor configured and deployed legitimate scripts on the application servers to intercept and reply to fraudulent ATM withdrawal requests, the advisory said.

But Symantec’s investigation has shown that the executable enabling the fraudulent activity is, in fact, malware, the security vendor said in its report this week. Symantec has named the malware Trojan.Fastcash and described it as having two functions.

One of them is to monitor for and read the Primary Account Number (PAN) in all incoming traffic from ATMs. The malware is designed to block all traffic containing PANs previously identified as belonging to the attackers. It then generates a fake response approving the fraudulent request, ensuring all attempts to withdraw money are successful. The US government’s technical alert had previously noted that most of the accounts against which the fraudulent transactions were initiated had minimal or zero balances.

“The malware responds with formatted messages as documented in ISO 8583,” DiMaggio says. ISO 8583 is a messaging standard used by banks for financial transactions. “This is how the attacker could get around the messaging system and essentially trick the ATM into believing it was receiving a response from the bank’s legitimate internal systems.”

The responses the malware is programmed to generate include an “Invalid PIN” message and one for insufficient funds, DiMaggio notes.
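For a concrete sense of the pattern Symantec describes, here is a minimal, hypothetical detection sketch in Python: it flags ISO 8583-style responses that approve withdrawals for card numbers on a watchlist. Field numbering follows ISO 8583 convention (field 2 is the PAN, field 39 the response code, where “00” conventionally means approved), but the dictionary message layout, the example PAN, and the function name are illustrative assumptions rather than anything taken from the malware or Symantec’s report.

# Hypothetical monitoring sketch: flag ISO 8583-style responses that approve
# withdrawals for PANs on a watchlist. A real switch monitor would parse the
# bitmap-encoded wire format rather than a Python dict.
WATCHLISTED_PANS = {"4111111111111111"}   # example attacker-controlled card number
APPROVED = "00"                           # conventional ISO 8583 code for "approved"

def flag_suspicious_response(message: dict) -> bool:
    """Return True if a response approves a withdrawal for a watchlisted PAN."""
    pan = message.get(2, "")          # field 2: Primary Account Number
    response_code = message.get(39)   # field 39: response code
    return response_code == APPROVED and pan in WATCHLISTED_PANS

# Example: a forged approval of the kind Trojan.Fastcash is said to generate
forged = {0: "0210", 2: "4111111111111111", 39: "00"}
print(flag_suspicious_response(forged))   # True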

Symantec said it has discovered multiple versions of the FastCash Trojan so far, each equipped with different response logic. The vendor says it has been unable to determine why the attackers programmed the different responses to withdrawal requests into the malware.

In all instances where the Lazarus Group successfully deployed the malware, the application servers were running versions of AIX well past their support dates.

The attacker targeted smaller banks with fewer resources in places like Asia and Africa because they likely were aware that larger, better-funded organizations would have better security, DiMaggio said. “The vulnerable version of AIX was simply what was in the environment the attacker targeted. It was not the driving piece of the attack as much as a characteristic of the specific environment the attacker had access to,” he notes.

For the moment, there is little indication as to how exactly Lazarus Group actors might have gained access to the switch application servers in the first place. But it is quite likely that they employed spear-phishing emails to illicitly obtain credentials belonging to bank employees, which they then used to access the network.

Once they had gained an initial foothold, the attackers would have enumerated the network for high-value systems and gained access to them. “By taking the time to learn the environment and use legitimate credentials, the attacker was able to execute this attack from the inside out, meaning the bank’s firewalls would not play a factor in this attack,” DiMaggio says.


Article source: https://www.darkreading.com/attacks-breaches/symantec-uncovers-north-korean-groups-atm-attack-malware-/d/d-id/1333233?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple 0, José 3 – Man versus Megacorp! [PODCAST]

This week: hyperthreading considered harmful, how to avoid lock screen hacks, and what happens when cryptocurrency exchanges implode.

With Anna Brading, Paul Ducklin, Mark Stockley and Matthew Boddy.

If you enjoy the podcast, please share it with other people interested in cybersecurity, and give us a vote on iTunes and other podcasting directories.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Thanks to Purple Planet Music for the opening and closing music.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fwB3y29Gpjw/

Oops: Cisco accidentally leaked in-house Dirty COW exploit code with biz conf call software

Cisco this week patched critical vulnerabilities in its switches, Stealthwatch, and Unity voice messaging system.

Oh, and ‘fessed up that it accidentally shipped software that included in-house-developed exploit code for attacking Linux systems via the Dirty COW flaw.

The networking giant also announced it has begun combing its products to identify any that might inherit the Apache Struts vulnerability patched this week. So far, that search hasn’t turned up any vulnerable products.

QA having a COW

If you’re in the mood for schadenfreude, this notice doesn’t get a CVE number, but reveals Cisco left code to exploit Linux’s Dirty COW vulnerability in test scripts it shipped with its TelePresence Video Communication Server software.


Cisco blamed the blunder on internal quality control: the code exists to make sure software is patched against known exploits, and someone neglected to remove it before shipping.

The bundled exploit doesn’t open up TelePresence to attack, and new software images without the attack code are available.

Cheeky root account

Thor Simon, of Two Sigma Investments, probably needed a stiff drink when he realised his Cisco Small Business Switch had an undocumented admin account. He reported what effectively was a backdoor in the firmware to Cisco, which labelled it CVE-2018-15439. It affects the Small Business 200 Series, 250 Series, 300 Series, 350 Series, 350X Series, 500 Series and 500X Series switches.

Unless the admin creates a user account with top-level privileges (Privilege 15 in Cisco-speak), the undocumented root account will persist; and if someone deletes all users with Privilege 15, the switch will recreate the account. There’s no patch in the works, but the workaround is simple: create a Privilege 15 user.

Threat detected in threat detection kit

Stealthwatch is Cisco’s enterprise threat detection and forensics software, and it had an insecure system configuration that let a remote attacker bypass the management console authentication with “crafted HTTP packets”.

Designated CVE-2018-15394, the bug affected Stealthwatch Enterprise versions 6.10.2 and prior.

Are you Java a laugh?

If you drew “Java deserialisation bug” in the sweepstake, your number came up in Cisco Unity Express.

Cisco explained the impact of the insecure deserialisation this way: “An attacker could exploit this vulnerability by sending a malicious serialised Java object to the listening Java Remote Method Invocation (RMI) service. A successful exploit could allow the attacker to execute arbitrary commands on the device with root privileges.”

Unity Express versions prior to 9.0.6 were affected. If you can’t patch, Cisco’s post provided access control list rules to block malicious traffic to TCP port 1099. Cisco said the bug was found by pen-tester Joshua Graham.

And the rest

If you own a Cisco Meraki MR, MS, MX, Z1, or Z3 device, patch it against CVE-2018-0284, a bug in the local status page that gave an authenticated, remote attacker access to device configuration.

Cisco announced a further 11 bugs rated Medium and listed them here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/08/cisco_dirty_cow_exploit_blunder/

Banking Malware Takes Aim at Brazilians

Two malware distribution campaigns are sending banking Trojans to customers of financial institutions in Brazil.

Two ongoing malware distribution campaigns are sending banking Trojans to customers of Brazilian financial institutions, report Cisco Talos researchers, who also identified a spam botnet delivering malicious emails as part of the infection process.

Two separate infection processes were used in these campaigns between late October and early November, they say. The campaigns use different file types for the download and infection processes, but both target Brazilian firms. Researchers believe the attacker is from South America, where it would be easiest to use victims’ credentials to carry out fraud.

Both campaigns eventually deliver banking Trojans. Researchers also found additional tools and malware – a remote administration tool with the ability to create emails – hosted in an Amazon S3 bucket. The emails, which are created on the BOL Online email platform, suggest the attacker’s primary goal is to create a botnet of systems specifically for email creation.

More than 700 compromised systems that are members of the botnet were identified on the servers, the researchers report. Overall, the botnet created more than 4,000 unique emails using the BOL Online service; some of them were used to launch the spam campaigns they analyzed.

Talos identified two payloads deployed during the campaigns. The first, also detected by FireEye, collects information on the target machine and exfiltrates it to a C2 server. It also includes a keylogger. The second has the same features but is implemented differently; it primarily targets two-factor authentication by showing users fake pop-ups.

Read more details here.


Article source: https://www.darkreading.com/attacks-breaches/banking-malware-takes-aim-at-brazilians/d/d-id/1333229?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

User Behavior Analytics Could Find a Home in the OT World of the IIoT

The technology never really took off in IT, but it could be very helpful in the industrial world.

Second of a two-part series. 

In my last piece for Dark Reading, I explored the security uncertainty created by the convergence of information technology (IT) and operational technology (OT) in organizations undergoing Industrial Internet of Things (IIoT) digitalization. Among the manifestations of this uncertainty — and occasional friction between internal IT and OT teams — is a lack of clarity regarding ownership of IIoT security solutions.

As someone who has worked in OT and IT, I suggested that industrial companies adopting IIoT use the hard-won lessons of IT to leapfrog to an advanced state of IIoT security, and proposed separation of endpoint networks and microsegmentation as pure IT approaches that could be ported as-is and work well in the OT world.

There is also a fertile middle ground. User behavior analytics (UBA) focuses on user behavior to detect anomalies that indicate potential threats. It arose first in IT but failed to catch fire primarily because of IT’s complexity. I think it could be profitably employed in OT. 

UBA has been around in data-centric IT for at least four years, but it has never become industry-standard primarily because in the real world, user behavior in IT is so varied and complex that UBA often creates more false alarms than useful ones. In IT, UBA has often failed to find the dangerous needle in the immense haystack of user behavior. But user behavior in process-centric OT is much simpler: OT systems run the plant, and scripted user activity is nowhere near as varied as in IT, with its multiple endpoints and inputs, email browsing, multipart software stacks, etc.

UBA can be applied more precisely in OT than in IT thanks to OT’s relative simplicity. A potential attacker can stump UBA in IT because of IT’s complexity, rendering UBA less than optimal. But it is extremely difficult to fool UBA in OT because of OT’s well-defined process orientation. OT’s nature would allow security teams to apply UBA more successfully at specific points. One would be the “border crossing” between IT and OT. Any user or machine entering the OT network from IT — a necessary function in IIoT — would be strictly vetted at the border crossing: Where are they going? What have they been doing?

Another potential point for effective UBA application would be the human-machine interface (HMI). Many OT systems are accessed within factories by people sitting down at these HMIs. The moment they start doing anything, UBA begins creating a profile of their actions for future use.

It would not be difficult to build profiles of machines, systems, and their users to determine what is normal and what is abnormal, whether users/operators enter the OT network from IT or by entering the OT environment via HMIs. Once we’ve defined normal, anything abnormal could be identified as a potential anomaly and investigated as a vulnerability for attack.
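To make the idea concrete, here is a minimal sketch, in Python, of the kind of role-based profiling described above, assuming hypothetical command names and a simple frequency threshold; a production UBA system would model far more context (time of day, setpoint ranges, session origin), so treat this as an illustration of the principle rather than a blueprint.

from collections import Counter

def build_profile(baseline_events):
    """Count how often each (role, command) pair appears during normal operation."""
    return Counter((role, command) for role, command in baseline_events)

def is_anomalous(profile, role, command, min_seen=3):
    """Flag actions rarely or never seen for this role in the baseline."""
    return profile[(role, command)] < min_seen

# Hypothetical baseline captured at the HMI or at the IT/OT border crossing
baseline = [("operator", "read_setpoint")] * 50 + [("operator", "ack_alarm")] * 20
profile = build_profile(baseline)

print(is_anomalous(profile, "operator", "read_setpoint"))    # False: routine behavior
print(is_anomalous(profile, "operator", "upload_firmware"))  # True: never seen, worth investigating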

Easier Definitions
The beauty of UBA in OT is that “normal” and “abnormal” are relatively easier to define. In IT, users with ostensibly the same roles do a variety of different things — all of them “normal.” Thus, true normal is harder to discern. By contrast, in OT users operate by strictly defined processes — and each user with the same role should be doing the same thing — so “normal” exists within narrow boundaries. Even when new users in particular roles begin to use the system, their functions would be the same as the old operators in the same role. Therefore, there is no need to begin creating new profiles — making the UBA function easier — and “abnormal” would be much easier to detect and investigate.

A real-life example of UBA’s potential value is the August 2017 Triton/Trisis malware intrusion in an oil and gas plant believed to be in Saudi Arabia, which caused the plant to shut down. Malware shutting down an OT system is the second-worst thing that can happen in OT. The worst is for the malware to target the industrial control system and send the plant spinning wildly out of control, costing not only money but lives. Many experts believe Triton/Trisis is meant to do just that.

Interestingly, it has been reported that the Triton/Trisis intrusion began when two malware files were copied onto an OT system in the Saudi plant and later executed to launch the attack. Because OT user behavior is so heavily scripted, the haystack is much smaller and the “needle” of two files being copied abnormally onto the system is theoretically easier to find.

It’s certainly worth exploring whether in similar situations an OT UBA system could detect the anomaly and trigger a warning when a threat arises. 


Article source: https://www.darkreading.com/attacks-breaches/user-behavior-analytics-could-find-a-home-in-the-ot-world-of-the-iiot/a/d-id/1333212?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft President: Governments Must Cooperate on Cybersecurity

Microsoft’s Brad Smith calls on nations and businesses to work toward “digital peace” and acknowledge the effects of cybercrime.

It’s an exciting time to be in technology, according to Microsoft president Brad Smith. It’s also a dangerous time.

Smith took the stage at this year’s Web Summit, a tech conference held in Lisbon, Portugal, to emphasize the need for global cooperation on cybersecurity as technology continues to evolve. The benefits that technology has created are as dangerous as they are awe-inspiring, he said.

“It’s an exciting time to be at a place like this,” he said. “But that’s not the only thing that’s happening. We also live in a time when new threats are emerging … new threats that involve technology itself” and culminate in attacks on electrical grids and elections alike.

Addressing an audience of tech professionals, Smith explained: “The tools that we’ve created — the tools, oftentimes, that you’ve created — have been turned by others into weapons.” It’s something Microsoft sees in 6.5 trillion signals and data points it receives daily, he added.

Smith said that when he speaks to people in government about these attacks, they sometimes say “we don’t really need to worry” because cyberattacks involve machines targeting machines, not machines targeting people. He disagrees.

“That is a problem. Because people are being victimized by these attacks,” he explained. He called 2017 “a wake-up call” in terms of the way people in nation-states and governments are using technological tools as weapons. WannaCry and NotPetya were the prime examples.

We can’t expect people to recognize the problems of cybercrime if we don’t recognize how people are suffering. Hospitals were paralyzed when WannaCry hit the UK. At England’s National Health Service, 19,000 appointments were canceled. Surgeries didn’t happen. Shortly after WannaCry hit 300,000 machines in 150 countries, he added, NotPetya struck.

“What NotPetya represents is not just the evolution of the attack in terms of methodologies involved, but also the evolution of intent,” said Smith. Last year, almost 1 billion people were victims of a cyberattack. “These issues and these threats are going to continue to grow … because everything is connected,” he warned. It’s time to have a conversation around security.

“In a world where everything is connected, everything can be disrupted,” he continued.

Governments around the world must play a role in protecting civilians and civilian infrastructure, he said, and protect people while they’re using devices on which their lives exist. However, governments can’t do this alone, and so he also called on businesses to step up.

“Businesses need to do better as well, and there is no part of the business community, across Europe or in the US or around the world, that has a higher responsibility than one part of the business community — and that is the tech sector,” Smith noted. IT has the greatest responsibility to be “first responders” in keeping people safe when there are cyberattacks.

The same week he gave this talk at Web Summit, Smith explained in an interview with CNBC how Microsoft wants to connect with Congress and work together to create cybersecurity guidelines for civilians. Key issues range from threats on democracy to artificial intelligence in the workplace.

We have reached a point at which people are enthusiastic about the evolution of technology; however, their eagerness is matched with growing worry about what this technology can do.

“The big shift has been [that] the era where everyone was just excited about technology has become an era where people are excited and concerned at the same time — and that’s not unreasonable,” he explained in a conversation with CNBC.

Smith says Microsoft wants to work with President Trump, as it worked with President Obama, to address the risk of technology. The concern isn’t only for America, but for all countries.


Article source: https://www.darkreading.com/vulnerabilities---threats/microsoft-president-governments-must-cooperate-on-cybersecurity/d/d-id/1333231?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Phone companies slammed for lousy robocall efforts

Federal Communications Commission (FCC) Chairman Ajit Pai wrote to telephone service providers on Monday, slamming them for their lousy efforts on blocking robocalls and saying that a year from now, he expects that we can all get back to actually answering our phones without finding we’ve been tricked by illegally spoofed caller IDs.

Here’s Pai, quoted in an FCC release:

Combatting illegal robocalls is our top consumer priority at the FCC. That’s why we need call authentication to become a reality – it’s the best way to ensure that consumers can answer their phones with confidence. By this time next year, I expect that consumers will begin to see this on their phones.

What the FCC wants to see is a robust call authentication system to combat illegal caller ID spoofing. Some phone service providers are “well on their way” to implementing such a system, Pai said, thanking AT&T, Verizon, T-Mobile, Comcast, Bandwidth.com, Cox, and Google for their efforts.

But there are laggards, and that includes seven big names. On the list of Pai scoldees are phone providers that apparently don’t yet have “concrete plans to implement a robust call authentication framework,” Pai said. His letters asked those carriers – CenturyLink, Charter, Frontier, Sprint, TDS Telecom, US Cellular, and Vonage – to answer a series of questions by 19 November.

Those companies are dragging their feet when it comes to implementing the new STIR (Secure Telephone Identity Revisited) and SHAKEN (Secure Handling of Asserted information using toKENs) protocols, Pai said. Those are frameworks that service providers can use to authenticate legitimate calls and identify illegally spoofed calls.

There has, actually, been progress on this front.

In September, the Alliance for Telecommunications Industry Solutions (ATIS) announced the launch of the Secure Telephone Identity Governance Authority (STI-GA), designed to ensure the integrity of the STIR/SHAKEN protocols. That move paved the way for the remaining protocols to be established, and it looks like STIR/SHAKEN is going to be up and running with some carriers next year.

Last month, 35 state attorneys general told the FCC to please, by all means, pull the plug on robocalls. The AGs said that the situation is beyond what law enforcement can handle on its own. The states’ respective consumer protection offices are receiving and responding to tens of thousands of consumer complaints every year from people getting plagued by robocalls.

Reuters reports that robocall blocking service YouMail estimated there were 5.1 billion unwanted calls last month, up from 3.4 billion in April.

SHAKEN/STIR isn’t expected to be a cure-all, but it could be a big help. From Pai’s press release:

Under the SHAKEN/STIR framework, calls traveling through interconnected phone networks would be ‘signed’ as legitimate by originating carriers and validated by other carriers before reaching consumers. The framework digitally validates the handoff of phone calls passing through the complex web of networks, allowing the phone company of the consumer receiving the call to verify that a call is from the person supposedly making it.
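To illustrate that sign-at-origin, verify-at-termination idea, here is a toy Python sketch. Real SHAKEN/STIR deployments attest calls with PASSporT tokens (ES256-signed JWTs per RFC 8225) and carrier certificates; the shared HMAC key, claim names, and phone numbers below are stand-in assumptions used only to show the flow.

import hmac, hashlib, json, time

ORIGINATING_CARRIER_KEY = b"demo-shared-secret"   # stand-in for a carrier credential

def sign_call(calling_number, called_number):
    # Originating carrier attests to the caller ID it has verified
    claims = {"orig": calling_number, "dest": called_number, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ORIGINATING_CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}

def verify_call(signed):
    # Terminating carrier checks the attestation before completing the call
    payload = json.dumps(signed["claims"], sort_keys=True).encode()
    expected = hmac.new(ORIGINATING_CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

call = sign_call("+15551234567", "+15559876543")
print(verify_call(call))                   # True: caller ID attested by the originator
call["claims"]["orig"] = "+15550000000"    # caller ID spoofed somewhere en route
print(verify_call(call))                   # False: attestation no longer checks out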

The questions that Pai put to the carriers that don’t yet have a concrete STIR/SHAKEN plan:

  • What is preventing or inhibiting you from signing calls today?
  • What is your timeframe for signing (i.e., authenticating) calls originating on your network?
  • What tests have you run on deployment, and what are the results? Please be specific.
  • What steps have you taken to work with vendors to deploy a robust call authentication framework?
  • How often is Charter an intermediate provider, and do you intend to transmit signed calls from other providers?
  • How do you intend to combat and stop originating and terminating illegally spoofed calls on your network?
  • The Commission has already authorized voice providers to block certain illegally spoofed calls. If the Commission were to move forward with authorizing voice providers to block all unsigned calls or improperly signed calls, how would you ensure the legitimate calls of your customers are completed properly?

Ars Technica’s Jon Brodkin notes that some of these carriers have registered reservations about SHAKEN/STIR.

Sprint, for one, told the FCC in October that the protocols will be helpful in fighting illegal robocalls, but it’s not a “complete solution.” Nor is it cheap. From its letter to the FCC:

Sprint is also concerned about the costs of implementing the certificate management requirements of SHAKEN and encourages the Commission and industry to explore more cost-effective alternatives to the central repository process originally contemplated in the development of SHAKEN.

Carriers have also complained that SHAKEN doesn’t tell them anything about the content of a call or whether it’s legal. From Sprint’s letter:

It just authenticates origination of the call path and the Caller ID information of individual calls.

Nor will it be useful without universal adoption, Sprint wrote:

Without universal adoption of SHAKEN from originating carrier to completing carrier, call authentication will not be passed to the terminating carrier.

T-Mobile, among other carriers, concurred. From its filing to the FCC:

First, SHAKEN/STIR can only provide a positive affirmation of the source of a given call. It cannot provide confirmation of the opposite – that is, that a call is definitively ‘bad’ or fraudulent. This is particularly true where calls are carried by international providers that do not participate in SHAKEN/STIR and send calls to the United States through wholesale partners.

T-Mobile also touched on an issue raised by the 35 state AGs, who noted that it’s tough to prosecute calls that travel through a maze of smaller providers: If the caller can be found at all, they’re usually located overseas, making enforcement difficult. On the part of the carriers, T-Mobile said, protocol adoption has to happen outside the US to include international carriers in order to have a real effect on the “onslaught of fraudulent calls.”

In spite of these points, Pai is threatening action if SHAKEN/STIR isn’t implemented within a year:

I am calling on those falling behind to catch up… If it does not appear that this system is on track to get up and running next year, then we will take action to make sure that it does.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eyKYEfVsiNw/

Google warning: Fix your dodgy ads within 30 days or get banned

Having taken what it thought was a decisive swipe at the problem of “abusive” advertising a year ago, Google now says next month’s Chrome 71 will unleash an even tougher crackdown.

From now on, where Chrome detects poor ad behaviour, site owners will have 30 days to do something about it or face having their ads blocked altogether.

Having analysed the blocking that started with Chrome 67 earlier this year, Google admits that more than half of the shifty ads pushed to Chrome users were not blocked, and almost all of those were harmful.

Bad behaviour manifests in a variety of forms most of us will have experienced at some point, including misleading redirections and pop-ups, close buttons that don’t close, and generally strange goings-on when users stray into the wrong neighbourhood.

As Google’s blog announcement states:

Stronger protections ensure users can interact with their intended content on the web, without abusive experiences getting in the way.

However, as ever, it can be revealing to look below the surface of Google’s long campaign to banish bad people from our eyeballs.

Two things jump out, starting with the simple observation that despite numerous Chrome tweaks and initiatives, the mighty Google is still struggling to stop ad abuse once and for all.

In recent times, users have been given new settings to mute autoplaying videos, what Google claimed was ad blocking, and new controls for Chrome extensions, all this on top of years of Safe Browsing upgrades.

All good stuff no doubt but is it working? If it is, it looks like a long war fought from muddy trenches rather than through a single decisive knockout.

The second is that a lot of what Google talks about as abusive advertising is a lot more serious than that euphemistic description implies.

By most fair definitions, a lot of it is simple fraud, including phishing attacks, malware distribution, and tech support scamming – in other words, malvertising.

Stopping this is not easy because in some cases it sneaks itself into advertising on legitimate sites, served through third-party ad networks.

Cracking down on this is a complex undertaking, not least because Google depends on legitimate advertising from the same sites.

That’s why Google’s Chrome ad control initiatives always feel as if they’re designed to impress two audiences – the end users of course, but also the advertisers themselves.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZRrilfYfuAM/

Closed doors are no match for a Wi‑Fi peeping tom and a smartphone

Wherever people are these days, there are wireless transmissions, be it GPS, AM/FM, or Wi-Fi.

If you’re in a home, at the office, or walking down the street, you’re being bathed in signals, ranging in RF frequency from a few kilohertz to terahertz. Many of the invisible transmissions pass through us, while others bounce off.

It’s the signals that bounce off us that interest researchers, who’ve identified a way to use smartphones to see through walls, analyze reflected, ambient transmissions, and spy on people’s presence and movements in their own homes or offices.

This might sound familiar: MIT researchers also used wireless transmissions three years ago to do the same thing. They created a device that can discern where you are and who you are, detecting gestures and body movements as subtle as the rise and fall of a person’s chest, from the other side of a house, through a wall, even though subjects were invisible to the naked eye.

Earlier systems had drawbacks, however. The MIT system, as well as earlier systems, required knowing the exact position of Wi-Fi transmitters and had to be logged in to the network so they could send known signals back and forth, according to MIT Technology Review.

For example, a system created by University of Utah researchers in 2009 involved a 34-node wireless network. You couldn’t exactly put MIT’s 2015 RF-Capture system into your pocket, either. Other drawbacks: the MIT system’s sensor was fussy. It required a person to be walking directly at it to function and had a tougher time picking up on somebody walking at an angle.

The latest in peeping tom technology is far different: it only requires a smartphone and some clever computation. A team of researchers headed up by Yanzi Zhu at the University of California Santa Barbara has demonstrated using a smartphone to successfully track people in 11 real-world locations, with “high accuracy.”

As the researchers describe in their recently published paper, titled Adversarial WiFi Sensing, their technique enables unprecedented invasion of privacy:

We believe that, by leveraging statistical data mining techniques, even a weak adversary armed with only passive off-the-shelf Wi-Fi receivers can perform invasive localization attacks against unsuspecting targets.

They suggest one attack scenario: thieves looking to break in to an office building. Specialized Wi-Fi hardware – directional antennas, antenna arrays, and Universal Software Radio Peripheral (USRP) devices – is not only expensive; it’s also bulky and conspicuous.

But commodity Wi-Fi receivers could be used to identify the location of employees or security personnel, enabling the thieves to avoid detection. They could take advantage of near-ubiquitous Wi-Fi transmissions – such as digital assistants or Wi-Fi access points – to passively locate and track moving users.

Unlike earlier systems, the researchers’ smartphone location attacks are entirely passive, relying on Wi-Fi sniffing that doesn’t actively transmit any RF signals.

MIT Technology Review describes the noisy, smeared world of RF signals that Zhu and his team were up against, which forced them to come up with a computational scheme for picking out humans and their movements:

If humans were able to see the world as Wi-Fi does, it would seem a bizarre landscape. Doors and walls would be almost transparent, and almost every house and office would be illuminated from within by a bright light bulb – a Wi-Fi transmitter.

But despite the widespread transparency, this world would be hard to make sense of. That’s because walls, doors, furniture, and so on all reflect and bend this light as well as transmitting it. So any image would be impossibly smeared with confusing reflections.

But this needn’t be an issue if all you are interested in is the movement of people. Humans also reflect and distort this Wi-Fi light. The distortion, and the way it moves, would be clearly visible through Wi-Fi eyes, even though the other details would be smeared. This crazy Wi-Fi vision would clearly reveal whether anybody was behind a wall and, if so, whether the person was moving.

Out of this Wi-Fi haze, Zhu and his team had to detect changes in ordinary Wi-Fi signals that would point to the presence of human bodies.

The problem is that Wi-Fi sniffers don’t render images. Zhu and his team instead relied on measuring signal strength as they walked around a building. After all, you can’t figure out where signal-distorting humans are without knowing where the signals are coming from. On their walk, they took brief spatial measurements of the received signal strength (RSS), noting where it strengthened and faded out, relying on an app they had built that used the smartphone’s built-in accelerometers to record their movement and then analyze the change in signal strength as they moved.

Walking back and forth helped them to pretty reliably nail the location of a transmitter, they said:

We found that consistency check across 4 rounds of measurements is sufficient to achieve room level localization of 92.6% accuracy on average.
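As a rough illustration of that consistency check, consider the following Python sketch, which picks the room where sniffed signal strength peaked on each walk past and only accepts a localization when most rounds agree; the room names and RSS readings are invented examples, not data from the paper.

from collections import Counter

def peak_room(round_readings):
    """Room with the strongest RSS reading (least negative dBm) in one round."""
    return max(round_readings, key=round_readings.get)

def localize(rounds, min_agreement=3):
    """Return the room most rounds agree on, or None if the readings are inconsistent."""
    votes = Counter(peak_room(r) for r in rounds)
    room, count = votes.most_common(1)[0]
    return room if count >= min_agreement else None

rounds = [
    {"office_a": -48, "office_b": -63, "hallway": -70},
    {"office_a": -51, "office_b": -60, "hallway": -72},
    {"office_a": -47, "office_b": -65, "hallway": -69},
    {"office_a": -55, "office_b": -58, "hallway": -71},
]
print(localize(rounds))   # "office_a": the peak is consistent across all four rounds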

The researchers tested their technique using Nexus 5 and Nexus 6 Android smartphones to peep into 11 offices and apartments whose owners had agreed to participate in the project. Many of those locations had Wi-Fi devices, and they found that the more there were, the easier it made their job:

We see that with more than 2 Wi-Fi devices in a regular room, our attack can detect more than 99% of the user presence and movement in each room we have tested.

How to draw the Wi-Fi blinds?

The researchers propose three possible defenses: geo-fencing Wi-Fi signals, rate limiting Wi-Fi signals, and signal obfuscation.

Geo-fencing works pretty well to fend off attackers who might go after us with cellphones and algorithms in this manner: it more than doubled localization errors, dropping room-level accuracy from 92.6% to 41.15%. In practice, though, it’s extremely tough to deploy and configure. Rate limiting messes up devices’ operability, particularly Internet of Things (IoT) devices.

That leaves signal obfuscation: adding noise so devices can’t be located accurately. The downsides include that attackers can just use an extra sniffer to suss out the noise and subtract it from the signal traces. Another major drawback is extra consumption of Wi-Fi bandwidth and energy at the access point. Still, it looks to be the best potential defense so far: the researchers hope to refine obfuscation defense in the future to protect against these attacks.

For now, people should be advised that Wi-Fi everywhere might be convenient, but it also threatens our privacy, they said:

While greatly improving our everyday life, [wireless transmissions] also unknowingly reveal information about ourselves and our actions.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/r9RHQEyUPKQ/