STE WILLIAMS

$500m cryptocoin heist could be biggest cybertheft ever

You’ve probably heard of Bitcoin – the cryptocurrency world’s first and best-known child, with a value that soared during 2017 from about $100 to almost $20,000.

Even though Bitcoin has plummeted in the past month, BTC 1 will still cost you $11,000.

You may know about Monero, or XMR for short, a cryptocoin that has been in the headlines for all the wrong reasons recently – crooks have taken to mining it via JavaScript in your browser, so that visiting a booby-trapped website will essentially “borrow” your CPU (and electricity) to make money for them.

And if you did the Naked Security #sophospuzzle crossword at New Year, you’ll have heard of Ethereum, a combination of cryptographic blockchain, distributed computational environment and cryptocurrency.

We won’t be surprised if you’re not familiar with NEM, a public blockchain (a form of distributed database) and cryptocurrency that is probably best known in Japan.

NEM promotes itself through a product called Mijin, which is effectively a way of using NEM’s technology to run a private blockchain of your own, for example for processing financial transactions, keeping track of stock movements, and more.

We hadn’t even heard of NEM until this morning…

…when a Japanese cryptocurrency exchange called Coincheck admitted that it had, well, “lost” NEM 523,000,000.

(We’ll call them NEM coins from now on, although they aren’t coins in the traditional sense, and NEM doesn’t have the word “coin” in its name.)

Unlike Bitcoin, the number of which will gradually increase until there are a maximum of 21,000,000 in circulation, NEM started out in 2015 with 9 billion “preminted” coins (actually 8,999,999,999) for its ecosystem.

Two years ago, NEMs were worth just four hundredths of a US cent each – although with 9 billion NEMs in the world that nevertheless gave the NEM currency an astonishing overall valuation of $3,600,000.

Today, they are, even more astonishingly, worth about a dollar each, for what the cryptocoin industry buoyantly refers to as a market capitalisation of about $9 billion – that’s a hard-to-get-your-head-around valuation that makes NEM alone worth as much as 1% of Apple.

With that sort of value, you’d imagine that anyone who had been entrusted with a large stash of NEM coins would take care not to lose them – especially if those NEMs belonged to other people and were being held for the purposes of trading.

Indeed, you’d think that any major-league cryptocoin exchange would be extra careful, given the history of high-value security implosions in the cryptocurrency scene.

Biggest cryptoheist ever?

Well, all cryptocurrency carelessness records may just have been broken by Japanese exchange Coincheck.

According to the company’s own blog, it recently lost NEM 523,000,000 belonging to approximately 260,000 different users.

The company has said it will offer reparations to affected users, paying them at about JPY 88.549 ($0.81) per NEM coin they had held.

That means the company has to come up with more than $400 million in cash.
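As a quick sanity check on that figure, using the article’s own numbers:

```python
# Back-of-the-envelope check: 523,000,000 lost NEM coins, repaid at
# JPY 88.549 each (roughly $0.81 at the exchange rate implied above).
coins_lost = 523_000_000
jpy_per_coin = 88.549
usd_per_coin = 0.81

total_jpy = coins_lost * jpy_per_coin  # about 46.3 billion yen
total_usd = coins_lost * usd_per_coin  # about $424 million

print(f"Reparations bill: JPY {total_jpy:,.0f} (~${total_usd / 1e6:,.0f}m)")
```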

As you can imagine, that might take a while:

We are currently deciding on the best method for applying for reparations and the period in which they will be made. The principal used for reparations will be derived from company funds.

We realize that this illicit transfer of funds from our platform and the resulting suspension in services has caused immense distress to our customers, other exchanges, and people throughout the cryptocurrency industry, and we would like to offer our deepest and humblest apologies to all of those involved. In moving towards reopening our services, we are putting all of our efforts towards discovering the cause of the illicit transfer and overhauling and strengthening our security measures while simultaneously continuing in our efforts to register with the Financial Services Agency as a Virtual Currency Exchange Service Provider.

Thank you for your attention and your support.

What to do?

Cryptocoins can be stored in what’s called a hot wallet, where the cryptographic secrets needed to spend them are entrusted to an exchange like Coincheck, making those coins easier to trade – or in a cold wallet, where you keep those secrets offline yourself.

That’s a bit like the difference between having shares lodged with an online broker, where you can trade them directly, or having shares issued as share certificates that you can keep under the mattress at home.

Good advice is to keep the bulk of your cryptocurrency stash offline, keeping only a modest amount online for your immediate needs.

In this case, the average NEM coinholder had apparently entrusted about NEM 2000 to the Coincheck exchange – not a lot in hard currency terms until fairly recently, when the value of a NEM coin reached about one dollar.

Even at $2000, many users may have considered their online balances reasonable, perhaps assuming that an attack draining hundreds of thousands of accounts at the same time was unlikely.

If you do have cryptocurrency holdings, now is a good time to reconsider how much you keep hot (online), and how much you keep in cold wallets (for example backed up on various encrypted USB devices in multiple locations).
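One way to put the “multiple locations” idea into practice is secret splitting: break the wallet seed into shares so that no single backup location holds anything usable on its own. The sketch below uses a simple two-share XOR split purely for illustration – real wallet software relies on standards such as BIP-39 seed phrases and more flexible sharing schemes:

```python
import secrets

def split_secret(seed: bytes) -> tuple[bytes, bytes]:
    """Split a seed into two shares; each share alone is indistinguishable
    from random noise, and both are needed to recover the seed."""
    share1 = secrets.token_bytes(len(seed))
    share2 = bytes(a ^ b for a, b in zip(seed, share1))
    return share1, share2

def recover_secret(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to reconstruct the original seed."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Store each share on a different encrypted USB device in a different place: a thief who steals one share learns nothing. The flip side is that losing either share destroys the seed, which is why real schemes usually prefer k-of-n splitting over this all-or-nothing version.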

Large fluctuations in cryptocoin values mean that a hot wallet deposit that seemed almost trivially modest a few months ago might by now be worth more than your car…


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/khFiCk7sf2s/

UK infrastructure firms to face £17m fine if their cybersecurity sucks

Infrastructure firms could face fines of up to £17m if they do not have adequate cybersecurity measures in place, the UK government has announced today.

The plans follow proposals earlier this year from the Department for Digital, Culture, Media and Sport intended to comply with the EU Network and Information Systems (NIS) Directive, which comes into effect next May.

The government intends to use the fining powers on grounds of national security; a potential threat to public safety; or the possibility of significant adverse social or economic impact resulting from a disruptive incident.

The powers will also cover other threats affecting IT such as power outages, hardware failures and environmental hazards. Critical infrastructure firms will also be required to show they have a strategy to cover such incidents.

The maximum penalty will be applied if firms are deemed to have not cooperated with the competent authority, failed to report an incident, not complied with the regulator’s instruction, or failed to implement appropriate and proportionate security measures.

Under the measures, recent cyber attacks such as WannaCry would have been covered by the NIS Directive.

Threats against Blighty’s national infrastructure appear to be increasing. In November, Ciaran Martin, chief exec of the National Cyber Security Centre (NCSC), revealed that hackers acting on behalf of Russia had targeted the UK’s telecommunications, media and energy sectors.

Margot James, Minister for Digital and the Creative Industries, said: “Today we are setting out new and robust cybersecurity measures to help ensure the UK is the safest place in the world to live and be online.

“We want our essential services and infrastructure to be primed and ready to tackle cyber attacks and be resilient against major disruption to services.”

Incidents will have to be reported to the regulator, which will assess whether appropriate security measures were in place. The regulator will have the power to issue legally binding instructions to improve security, and – if appropriate – impose financial penalties.

The measures will dramatically increase the limit regulators can impose on companies.

In October 2016, TalkTalk was hit with a record £400,000 fine by the Information Commissioner’s Office for not taking adequate steps to prevent the personal data of 156,959 customers – including names, addresses, dates of birth, phone numbers and email addresses – from being accessed by hackers.

The NCSC has today published guidance on the security measures to help organisations comply. Martin said: “Our new guidance will give clear advice on what organisations need to do to implement essential cybersecurity measures.

“Network and information systems give critical support to everyday activities, so it is absolutely vital that they are as secure as possible.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/29/infrastructure_firms_to_be_slapped_with_17m_fine_for_poor_cyber_security/

What do you press when flaws in Bluetooth panic buttons are exposed?

Security researchers have uncovered flaws in Bluetooth-based panic buttons that, in a worst-case scenario, make the affected kit “effectively useless”.

Duo Labs put a range of Bluetooth-based personal protection devices (panic buttons) from ROAR, Wearsafe and Revolar through their paces. Researcher Mark Loveless found vulnerabilities in two of the popular devices which, if exploited, can open their users to tracking or worse.

Wearsafe’s personal protection device was vulnerable to denial-of-service attacks: by flooding it with connection requests, a hacker could lock the user out of the device until the battery was removed and reinserted. The device also broadcasts continually over Bluetooth, creating a tracking risk.

Revolar’s device was also found to be vulnerable to Bluetooth tracking.

“While it wasn’t nearly as easy to remotely track a Revolar owner, it is still possible to track the owner of either the Revolar or Wearsafe device from a distance via Bluetooth with inexpensive antennas that extend the scanning range,” said Loveless.

“Both devices allow for Bluetooth scanning to identify the device as a personal protection device. Both devices allow for somewhat insecure Bluetooth pairing.”

IoT panic button security report card [Source: Duo Labs]

El Reg asked both Wearsafe and Revolar to comment. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/29/bluetooth_panic_buttons_hackable/

Intel warned Chinese partners before US govt about CPU bugs – report

Intel warned Chinese firms about its infamous Meltdown and Spectre processor vulnerabilities before informing the US government, it has emerged.

Select big customers – including Lenovo and Alibaba – learned of the design blunders some time before Uncle Sam and smaller cloud computing suppliers, The Wall Street Journal reports, citing unnamed people familiar with the matter and some of the companies involved.

The disclosure timeline raises the possibility that elements of the Chinese government may have known about the vulnerabilities before US tech giant Intel disclosed them to the American government and the public.

The Meltdown and Spectre chip flaws were first identified by a member of Google’s Project Zero security team shortly before they were independently uncovered and reported by other teams of security researchers. “Intel had planned to make the discovery public on Jan. 9… but sped up its timetable when the news became widely known on Jan. 3, a day after U.K. website The Register wrote about the flaws,” the WSJ reports.

Intel worked on addressing the vulnerabilities with security researchers at Google and other teams that uncovered the processor vulnerabilities as well as PC makers – specifically, the larger OEMs – and cloud-computing firms. Those informed included Lenovo, Microsoft, Amazon and Arm.

The WSJ omits any mention of when notification was made to Lenovo et al, but a leaked memo from Intel to computer makers suggests that notification of the problem for at least one group of as-yet unnamed OEMs took place on November 29 via a non-disclosure agreement, as previously reported.

Lenovo was quick out the gate on January 3 with a statement advising customers about the vulnerabilities because of work it had done “ahead of that date with industry processor and operating system partners.”

Speculative

Alibaba Group, China’s top provider of cloud services, was also notified ahead of time, according to a “person familiar with the company.” An Alibaba spokesperson told the WSJ that the notion the company may have shared threat intelligence with the Chinese government was “speculative and baseless”. Lenovo said Intel’s information was protected by a non-disclosure agreement.

It is a “near certainty” that Beijing was aware of information exchanged between Intel and its Chinese tech partners because local authorities routinely monitor all such communications, said Jake Williams, president of security firm Rendition Infosec and a former National Security Agency staffer.

An official at the US Department of Homeland Security, which runs US CERT, said it only learned of the processor vulnerabilities from early news reports. “We certainly would have liked to have been notified of this,” they added.

Rob Joyce, the White House’s top cybersecurity official, publicly claimed the NSA was similarly unaware of what became known as the Meltdown and Spectre flaws.

Because they had early warning, Microsoft, Google and Amazon were able to roll out protections for their cloud-computing customers before details of Meltdown and Spectre became public. This was important because Meltdown – which allows malware to extract passwords and other secrets from an Intel-powered computer’s memory – is pretty easy to exploit, and cloud-computing environments were particularly exposed as they allow customers to share servers. Someone renting a virtual machine on a cloud box could snoop on another person using the same host server, via the Meltdown design gaffe.
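If you run Linux, kernels patched after the disclosure report their mitigation status through sysfs; a minimal sketch for checking your own machine (the directory simply won’t exist on unpatched kernels or on other operating systems, in which case this returns an empty dict):

```python
from pathlib import Path

def cpu_vuln_status(base: str = "/sys/devices/system/cpu/vulnerabilities") -> dict:
    """Map each vulnerability name (e.g. 'meltdown', 'spectre_v2') to the
    kernel's one-line status string, such as 'Mitigation: PTI'."""
    path = Path(base)
    if not path.is_dir():
        return {}  # unpatched kernel, or not Linux
    return {f.name: f.read_text().strip()
            for f in sorted(path.iterdir()) if f.is_file()}

for name, status in cpu_vuln_status().items():
    print(f"{name:24s} {status}")
```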

Smaller cloud service providers were left playing “catch up.” Joyent, a US cloud-services provider owned by Samsung Electronics, was among those that may have benefited from a warning but wasn’t included in the select group informed ahead of the public reveal.

“Other folks had a six-month head start,” Bryan Cantrill, the company’s chief technology officer, told the WSJ. “We’re scrambling.”

“I don’t understand why CERT would not be your first stop,” Cantrill added.

El Reg asked Intel to comment on its disclosure policy. In a statement, Chipzilla told us it wasn’t able to inform all those it had planned to pre-brief – including the US government – because news of the flaws broke before a scheduled 9 January announcement:

The Google Project Zero team and impacted vendors, including Intel, followed best practices of responsible and coordinated disclosure. Standard and well-established practice on initial disclosure is to work with industry participants to develop solutions and deploy fixes ahead of publication. In this case, news of the exploit was reported ahead of the industry coalition’s intended public disclosure date at which point Intel immediately engaged the US government and others.

US CERT acts as a security clearing house. The agency initially advised that the Spectre flaw could only be addressed by swapping out for an unaffected processor before revising its position to advise that applying vendor-supplied patches offered sufficient mitigation.

El Reg asked US CERT for its take on how the disclosure process went down in the case of the Meltdown and Spectre vulnerabilities but we’re yet to hear back. We’ll update this story as and when more information comes to light. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/29/intel_disclosure_controversy/

An Action Plan to Fill the Information Security Workforce Gap

Nothing says #whorunstheworld like an all-female blue team taking down a male-dominated red team in a battle to protect sensitive customer data, and other ideas to entice women into a cyber career.

It has become a familiar, fear-invoking industry statistic: the 2017 Global Information Security Workforce Study from Frost & Sullivan estimates that a jaw-dropping 1.8 million positions will need to be filled globally by 2022. The same report revealed that a mere 11% of information security professionals globally are women. This number has remained unchanged for the last few years, even though the number of women in postsecondary education is steadily increasing. Women today feel empowered to attain degrees and establish meaningful careers, but they are consistently not choosing careers in cybersecurity.

If we are going to make a true impact on the cybersecurity workforce shortage, we will need to tap into the largely untapped resource of educated women. But where do we start? We must enable women to envision themselves in a cyber career. Young girls practice for careers in fashion by dressing Barbie dolls and sketching clothing designs. They dress up as doctors, nurses, and teachers and practice these roles with siblings and friends. We must reach women — both young and seasoned — by making cybersecurity a more tangible, appealing career opportunity.

You’ve heard it before. In order to cultivate interest in cybersecurity as a career field, we must introduce it early to girls and young women. The United States is surprisingly delayed in its introductions to the technical skills that make up cybersecurity, such as computer science. In other countries, including Singapore, Hong Kong, and Israel, elementary schools cover these topics as early as kindergarten.

New Initiatives Launching in 2018
There are groups launching initiatives to fill this gap. During 2018, the Girl Scouts of America, in partnership with Palo Alto Networks, will roll out a program awarding a series of 18 cybersecurity badges to scouts grades K-12. Other groups, such as Girls Who Code, support clubs and summer enrichment programs for female students. At the college level, one- and two-day hackathons are becoming popular as well. This early and continued exposure is an amazing first step, but we cannot stop here.

As technology advances, we must leverage it to engage youth — especially young women — and make cybersecurity a tangible career path. Programs designed for youth, including SoCal Cyber Cup and CyberPatriot, utilize virtual environments, artificial intelligence, and machine learning to put students into real-world situations and actively defend like true practitioners. These programs transform the abstract into actual practice, allowing students to envision a career in cybersecurity and better train as future cyber warriors.

In addition, these programs pair student teams with coaches and mentors, which gives young girls a chance to interact with cyber professionals. Having real-world role models to guide them drives excitement and encourages commitment to the field. These programs often need more practitioners to volunteer their time, so in 2018, consider taking on a role as a mentor to do your part in encouraging young women to join the world of cyber professionals. Coach an all-female cyber team yourself. Nothing says #whorunstheworld like an all-female blue team taking down a male-dominated red team in a battle to protect sensitive customer data.

Proactive Hiring, Cross Training, and Wage Inequities
Exposing girls to a career in cybersecurity is crucial, but it will take time for those efforts to become fruitful. How an organization recruits, compensates, and trains staff can and will directly affect the number of women who join the field in the coming years.

When recruiting women to join their cybersecurity teams, hiring managers must carefully craft job descriptions to ensure they are inclusive of both genders. Although women are entering the cyber profession with higher education levels — 51% have a master’s degree or higher, as compared to 45% of men — studies have shown that most women will discourage themselves from applying for technical positions for which they do not meet every qualification listed, whereas men will still put their name in the hat when meeting only a portion of the requirements. Women also are less likely to apply when job descriptions use common male-associated language, such as analytical, assertive, or tactical, even if they fully possess the right characteristics. We must be sure that our hiring practices are not discouraging women.

There are other changes to make beyond recruiting. The largely untapped market of educated women applies to more than just college graduates. Cross-training women from other fields with transferable skill sets is the fastest way to address the skills gap. To do this, we must make the field more appealing by addressing equal compensation and offering continuous professional development opportunities and mentorship. According to another Frost & Sullivan report, females in nonmanagerial roles earn 6% less than their male counterparts. This wage gap must be addressed if organizations expect to attract and retain skilled women. The same report revealed that women who are encouraged by mentors and have opportunities to hone and build their skills are more satisfied and successful. Women are looking for this level of training and engagement to grow, and these efforts build a more skilled workforce in general.

The cybersecurity workforce shortage will soon reach a critical level, and it is imperative we tap into the market of educated women if we expect to have an impact. By working together — as professionals and organizations — we can ensure more individuals enter the field to fill those million-plus job openings and that the pool of talented, highly skilled cyber professionals continues to grow.

Laura Lee is executive vice president of cyber training and assessments at Circadence. She leads development around the company’s AI-powered, multi-player cyber training platform, Project Ares. Lee brings an exceptional record of leadership in the field of cyber exercises and … View Full Bio

Article source: https://www.darkreading.com/operations/an-action-plan-to-fill-the-information-security-workforce-gap/a/d-id/1330911?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Strava Fitness App Shares Secret Army Base Locations

The exercise tracker published a data visualization map containing exercise routes shared by soldiers on active duty.

In November 2017, the Strava fitness tracking app published a visualization map to show where users exercise across the world. However, that map also revealed location information about military bases and spy posts around the world, military analysts report.

The company lets users record running, walking, or biking activity on their smartphones or wearables, and upload it to the Internet. Military analysts noticed the map – which was constructed using more than three trillion individual GPS data points – has enough detail to give away potentially sensitive data on where soldiers on active duty are located. Users in locations like Afghanistan and Syria seem to exclusively be military personnel, they say.

“If soldiers use the app like normal people do, by turning it on and tracking when they go to do exercise, it could be especially dangerous,” says Nathan Ruser, analyst with the Institute for United Conflict Analysts. On Strava’s map, the Helmand province of Afghanistan shows the layout of operating bases via exercise routes. One such base is absent from satellite views on both Google Maps and Apple Maps.
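The underlying aggregation problem is easy to sketch: bin enough repeated GPS fixes into a coarse grid and a regularly-run route lights up, even though no individual point is sensitive. (The coordinates and grid size below are invented for illustration.)

```python
from collections import Counter

def heatmap(points, cells_per_degree=1000):
    """Bin (lat, lon) fixes into grid cells of ~0.001 degree (roughly
    100m); the hottest cells trace out habitual routes."""
    grid = Counter()
    for lat, lon in points:
        grid[(int(lat * cells_per_degree), int(lon * cells_per_degree))] += 1
    return grid

# 50 laps of the same short jogging loop, plus one stray fix elsewhere:
laps = [(34.5000 + i * 0.0001, 65.8000) for i in range(10)] * 50
stray = [(34.7000, 65.9000)]
hot_cell, hits = heatmap(laps + stray).most_common(1)[0]
```

The jogging loop dominates the grid with 500 hits in one cell, which is exactly how base perimeters showed up on Strava’s published map.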

These findings arrive the day after Data Privacy Day, which was created to encourage both individuals and businesses to respect user privacy and protect data. Strava’s decision to publish sensitive location data is part of a growing discussion around how companies should handle the massive amount of information they collect on users.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/application-security/strava-fitness-app-shares-secret-army-base-locations/d/d-id/1330926?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

RELX Group Agrees to Buy ThreatMetrix for £580M Cash

Authentication firm ThreatMetrix will become part of Risk & Business Analytics under the LexisNexis Risk Solutions brand.

RELX Group will acquire ThreatMetrix in a cash transaction of £580 million, or about $817 million. The digital identity firm will become part of RELX’s Risk & Business Analytics under the LexisNexis Risk Solutions division, officials report.

San Jose-based ThreatMetrix was founded in 2005. Its technology analyzes connections among devices, locations, anonymous identity information, and threat intelligence, and applies behavioral analytics to detect risky transactions. It processes more than 100 million transactions each day, encompassing 1.4 billion unique identities from 4.5 billion devices.

The company has already been working with LexisNexis Risk Solutions, which works in fraud and authentication by applying analytics to identity credentials, addresses, and asset ownership. LexisNexis Risk Solutions uses ThreatMetrix’s device intelligence tools in its Risk Defense Platform. It appears the two companies are joining forces to expand RELX’s portfolio.

“Further integration of ThreatMetrix’s capabilities in device, email and social intelligence will build a more complete picture of risk in today’s global, mobile digital economy, providing both physical and digital identity solutions,” says ThreatMetrix. The deal is expected to close in the first half of 2018.

Read more details here.


Article source: https://www.darkreading.com/endpoint/relx-group-agrees-to-buy-threatmetrix-for--gb-pound-580m-cash/d/d-id/1330931?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Lyft investigates allegations of employees snooping on riders

The US ride-hailing company Lyft said on Thursday that it’s investigating allegations that its employees have snooped on riders, including looking up the trip data of their exes and of famous users such as actresses, porn stars and Facebook founder Mark Zuckerberg.

The story was first reported by the technology news site The Information.

The publication said that somebody claiming to be a current or former Lyft employee made the allegations on an anonymous app called Blind, where people can gossip about their employers.

Lyft emailed a statement to Reuters in which it said that if the allegations are true, they’re offenses worth getting fired for:

The specific allegations in this post would be a violation of Lyft’s policies and a cause for termination.

CNN Tech reports that hours after the report, Lyft cofounders Logan Green and John Zimmer emailed employees. The subject line was “Upholding trust.” CNN Tech obtained the email, which said:

Our company’s values are based on creating a healthy environment of trust and accountability. If we find a violation, we will take appropriate action.

The news immediately drew comparisons to Uber’s “God view” mode, which tracks riders and displays their information in an aerial view.

Uber has a history of bristling at criticism of its security practices, and its history is littered with violating journalists’ privacy: one executive suggested spending $1 million to mine personal data for dirt to discredit a journalist who criticized the company, for example.

In another incident, Uber found itself having to investigate yet another exec for poking at yet another journalist’s personal data (twice) and tracking her movements without her permission.

Uber’s tracking of that reporter, BuzzFeed’s Johana Bhuiyan, is what triggered a data privacy investigation by New York Attorney General Eric Schneiderman into Uber’s use of the God View tool.

In November 2014, Uber responded by re-stating its privacy policy, including that it had deployed an automated tool to monitor employee access to God View as a way of deterring abuse.

The US FTC later discovered that tool was in use for less than a year, abandoned for reasons that weren’t clear. Separately, around the same time, the New York Times also discovered that Uber started using a tool called Greyball to track officials investigating the company’s operations in a number of cities.

Part of the 2016 settlement over Schneiderman’s investigation was a requirement that Uber encrypt rider geolocation information and that it adopt multifactor authentication that would be required before any employee could access especially sensitive rider personal information.

Uber said it introduced a “strict policy prohibiting” employees from accessing sensitive information, but the FTC in 2017 issued a complaint calling the company’s enforcement of the policy into question. Uber settled the complaint with the FTC, agreeing to privacy audits every two years until 2037 as part of the agreement.

CNN Tech said that in its statement about the allegations regarding its own employees’ God View-ish tendencies, Lyft said that some employees, such as engineers, have access to customer data. A source familiar with Lyft’s policies told the publication that that data includes details such as pickup and drop-off locations.

Lyft said its employees are required to undertake training and sign a confidentiality and responsible use agreement upon joining the company.

From the statement:

[Lyft’s policies] categorically prohibit accessing and using customer data for reasons other than those required by their specific role at the company. [They also] bar them from accessing, using, or disclosing customer data outside the confines of their job responsibilities.

The company didn’t provide a timeline for its investigation, but it did note that queries into Lyft’s rider data lookup system are logged. Thus, if the allegations are true, it shouldn’t be hard to find out who’s been God-Viewing Mark Zuckerberg or their exes.
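Since the lookups are logged, a first-pass audit is conceptually simple. The sketch below is entirely hypothetical – the function, log format and thresholds are invented for illustration and say nothing about Lyft’s actual systems:

```python
from collections import defaultdict

def flag_snoopers(query_log, watchlist, max_distinct_riders=25):
    """query_log: iterable of (employee, rider) pairs from the lookup
    audit trail.  Flag anyone who touches a watch-listed account
    (celebrities, executives) or who looks up an implausible number
    of distinct riders."""
    riders_seen = defaultdict(set)
    flagged = set()
    for employee, rider in query_log:
        riders_seen[employee].add(rider)
        if rider in watchlist or len(riders_seen[employee]) > max_distinct_riders:
            flagged.add(employee)
    return flagged
```

A real deployment would key on roles and job tickets rather than raw counts, but even this crude pass would surface an engineer repeatedly querying one celebrity’s trips.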


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5IuTKIptqDA/

Researchers warn of invisible attacks on electrical sensors

Are the humble analogue transducers embedded in vast numbers of sensors the next low-level technology in need of a security rethink?

A new research note discussing what are termed “transduction attacks” argues that they are being taken for granted but shouldn’t be.

To simplify, transducers are electronic components that turn analogue signals such as radio, sound or light waves, or the physical movement of something like a gyroscope, into an electrical signal that can be digitised by a computer.

Under our noses, these are becoming ubiquitous, with more appearing every day in voice-activated devices, drones, motor cars, and other IoT systems.

According to the authors:

A transduction attack exploits a vulnerability in the physics of a sensor to manipulate its output or induce intentional errors.

Something targeting a sensor is, then, conducting a sort of spoofing attack to make the sensor respond to a rogue input.

For example, the recent DolphinAttack proof-of-concept demo used inaudible ultrasonic commands to show how voice-activated systems used by cars, smartphones and devices such as Amazon’s Alexa, Apple’s Siri, and Google Now, could be made to dial phone numbers or visit websites.

Researchers have even demonstrated how something as simple as the sound from a YouTube video could be used to control the behaviour of a smartphone’s MEMS accelerometer.

In theory, the same basic principle might be used to disrupt all manner of devices: from interfering with heart pacemakers to making self-driving cars blind to obstacles.

It needs pointing out that these vulnerabilities aren’t caused by a design problem in software – they exploit the basic physics of the transducer itself.

How did it come to this?

Most likely, the sensors were designed before the community understood the security risks.

One challenge is that while the principles of this kind of attack are now in the public domain, detecting real-world examples is likely to be very difficult.

The messy solution is twofold: build software integrity checking into devices that use these components, and manufacture the components so they respond to a narrower range of inputs (e.g. stop the transducers used by voice-activated devices from being able to “hear” ultrasonic sound).
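The band-limiting idea is simple to demonstrate: even a crude single-pole low-pass filter passes a 1kHz “voice” tone almost untouched while crushing a 30kHz ultrasonic one. A minimal sketch, with illustrative cutoff and sample-rate figures:

```python
import math

def lowpass(samples, cutoff_hz, rate_hz):
    """Single-pole IIR low-pass filter: attenuates content above cutoff_hz,
    a crude stand-in for band-limiting a microphone front end so that
    ultrasonic commands never reach the speech recogniser."""
    dt = 1.0 / rate_hz
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)   # y[n] = y[n-1] + a*(x[n] - y[n-1])
        out.append(prev)
    return out

rate = 96_000
audible = [math.sin(2 * math.pi * 1_000 * n / rate) for n in range(4096)]
ultrasonic = [math.sin(2 * math.pi * 30_000 * n / rate) for n in range(4096)]
# After filtering at 8 kHz, the audible tone survives; the 30 kHz tone is heavily attenuated.
filt_aud = lowpass(audible, 8_000, rate)
filt_ult = lowpass(ultrasonic, 8_000, rate)
```

A production design would use a proper analogue filter ahead of the ADC rather than software after it – by the time the samples exist, a nonlinear microphone may already have demodulated the attack into the audible band – but the principle is the same.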

Given the continued failure of large parts of the IoT to embrace even the basics of software security, this does not bode well.

For those who are prepared to address the problem, this research implies the need for a new generation of transducers, which in turn will need the old-fashioned skills of electrical engineers.

Intriguingly, the authors predict a role for engineers who can approach this problem in an inter-disciplinary way, the lack of which is arguably how the problem developed in the first place.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iis1kwPvHho/

Cop buys mobile spyware, says he can’t remember why

FlexiSpy: it’s a nasty little piece of work. The company makes stalkerware, once marketed to jealous people who covertly installed it on their partners’ phones.

“Once marketed to,” because that sort of covert spying on a partner is actually illegal. Nowadays, smart spyware makers tend to stick to marketing that mentions the legally permissible surveillance targets of children and employees, though plenty of jealous people still use the tools for illegal spying on partners.

Former marketing from its site:

Many spouses cheat. They all use cell phones. Their cell phone will tell you what they won’t.

At any rate, regardless of how the marketing has been smoothed over, the fact remains that covert surveillance tools such as FlexiSpy log keystrokes and tap into mics, calls, stored photos, text messages, email and even WhatsApp messages. Survivors of domestic violence overwhelmingly report that their exes trained such tools on their whereabouts, communications and activities: a 2014 NPR investigation found that 75% of the 70 domestic violence shelters it surveyed in the US had encountered victims whose abusers had used eavesdropping apps. (Another 85% had helped victims whose abusers used GPS to track them.)

So what’s a tool like FlexiSpy doing in the hands of a Florida state law enforcement officer, who apparently purchased it without the knowledge of his own agency, according to data obtained by Motherboard?

As Motherboard reports, this is the first known case of a US regional agency purchasing what the publication called “malware.”

Jim Born, an ex-DEA cop and retired Florida Department of Law Enforcement agent (now a crime novelist), can’t quite recall why he bought FlexiSpy. He says he thinks he “used [it] on a case or tried it to understand how it worked.” Motherboard quotes Born:

Nothing nefarious. Need a court order to use on someone without consent.

The state has no record of Born being granted approval to buy the spyware or to use it in an investigation. None of the people Born busted were told that evidence against them had been gathered with FlexiSpy, either.

So how did Motherboard learn of the purchase, if the state itself has no record of it?

The discovery traces back to April 2017, when two hackers stole the details of 130,000 accounts from Retina-X and FlexiSpy, both of which market covert surveillance tools. The hackers then handed the stolen information to Motherboard, in part via the leaking platform SecureDrop.

The data set held details of the predictable type of customers: jealous people spying on partners. But it also held evidence of FlexiSpy’s government and law enforcement customers. Motherboard said it wasn’t clear whether the spyware was purchased for official or personal use in those cases.

Born’s purchase, done outside normal procurement processes, raises questions. Motherboard spoke with Riana Pfefferkorn, the cryptography fellow at the Stanford Center for Internet and Society, who said that none of this should be going on under the table:

Officers should not be buying malware on their own dime for use at work – and using their official email address in the process. Purchases of forensics software (already common in US police departments) should go through normal procurement processes, should have documentation (subject to public records laws), and should be subject to oversight.

If the malware was ‘used on a case,’ how exactly did he use it, and why did he apparently not document that? Did he get the appropriate court order? Given the functionality of FlexiSpy, it would seem to require a wiretap order, not just a search and seizure warrant.

Needing physical access to a phone in order to install FlexiSpy is no great hurdle, she said, given that police confiscate devices:

The police may have many mobile devices in custody, taken from crime scenes, suspects, victims, etc. Or an officer may take a device away only temporarily before returning it to the owner. There are ample opportunities for physical access to install this malware.

Motherboard itself, which has tested consumer spyware, has found that installing it could likely be done “in less than a minute.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tTsLwnzERVE/