STE WILLIAMS

Apple scrambles to fix FaceTime eavesdropping bug

Apple is scrambling to fix an embarrassingly dangerous “snooping” bug in its popular FaceTime app.

In the meantime, Apple has apparently disabled the Group FaceTime feature entirely, preferring to inflict a service outage rather than leave the exploitable privacy hole gaping open.

The bug was reported on well-known Mac news site 9to5Mac, and how to abuse it is widely known.

In the simplest terms, the bug goes like this:

  • Call someone from your contacts using FaceTime.
  • Their phone will ring.
  • Use the “Add Person” option to include a new participant in the chat, namely yourself.

That might sound pointless, considering that you are, rather obviously, already part of the call.

In fact, it seems that this sequence of events is so pointless that no one ever tested it, because what happens is that both you and the person who hasn’t answered the call yet get added into the conversation…

…and you can immediately hear the audio feed from the person who hasn’t answered the call yet.

Sure, you can’t use this to eavesdrop entirely secretly, given that the other person’s phone will ring (or perhaps vibrate) when you call it.

But if they don’t notice the phone ringing, or can’t reach it and decide simply to ignore the call, they certainly don’t expect their device to be listening in and transmitting right away!

In fact, it’s even worse than that – 9to5Mac reports that if the person you’ve called is at the lock screen and hits the Power button when receiving one of these booby-trapped “group calls”, you get to see their video feed as well as hear what they’re saying – or what other people in the room are saying.

In other words, if the person you’ve called picks up their phone, hits the Power button, sees it’s you, grimaces, announces to the room, “Oh, heck, it’s Captain Annoying calling – I’m not ready to tell him the deal is off just yet,” and hits the [Decline] option…

…you’ve just found out more than you probably ever would or could have discovered if they’d actually answered the phone immediately and told you they couldn’t talk right now.

What to do?

As far as we can see, this privacy breach happens because of a bug in the FaceTime app that causes it to “answer” a call before you’re ready.

In other words, this is a bug that you can’t control from your end, because it’s triggered by the activation of a feature in the app by the person who initiated the call.

In theory, Apple’s block of Group FaceTime in the FaceTime infrastructure itself ought to prevent the bug being exploited.

But in practice, at least until Apple updates the app and you’ve downloaded the patch, the only way to be sure this bug can’t be triggered is to disable the app yourself.

Go to iOS Settings > FaceTime and flip the slider to off.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/27RHG1HM0N0/

I helped catch Silk Road boss Ross Ulbricht: Undercover agent tells all

Long read “How do you eat an elephant? Nibble at it, nibble at it, a lot of little bites.” That was how Special Agent Jared Der-Yeghiayan infiltrated notorious dark web market the Silk Road and helped unmask site operator Dread Pirate Roberts, aka Ross Ulbricht.

Der-Yeghiayan told an enthralled audience at France’s FIC2019 infosec shindig last week how, as a US Department of Homeland Security Investigations agent, he took over the online chat and forum accounts of key players in the Silk Road’s infrastructure – and headed off plans by hot-headed US law enforcement to blast the back wall off Ulbricht’s San Francisco home and fast-rope from helicopters into his top-floor flat.

The Silk Road was a Tor marketplace, rather like eBay, where anonymous sellers traded drugs, firearms, illegal pornography and more with anyone who cared to pay – in tricky-to-trace cryptocurrency Bitcoin, naturally. It was shut down in 2013 after Ulbricht, who styled himself as Dread Pirate Roberts after the identity-switching “villain” in the 1987 movie The Princess Bride, was arrested in a San Francisco library. He was later convicted and sentenced to life in prison without parole. As well as being accused of ordering six murders-for-hire through the Silk Road (these specific charges were later dismissed), Ulbricht was also linked to six drug overdose deaths where the narcotics had been ordered from his website.

The Silk Road case was one of the highest profile clashes between what criminally minded libertarians saw as the internet’s untouchability by real-world regulators and the determination of police forces to extend their writ, once and for all, into cyberspace. Though the events he described took place less than a decade ago, Der-Yeghiayan’s account shed light on just how much social engineering was necessary to crack the illegal goods empire that Ulbricht had built.

It all began with a single pill of E

“I was an inspector at Chicago Airport,” said Der-Yeghiayan, describing how the Homeland Security Investigations (HSI) case against Silk Road began, “and another inspector found some illegal drugs. He said ‘I’ve found some ecstasy!’ I said, ‘How many thousands of pills do you have?’ He said, ‘I’ve got one.’ One! Why would I be interested in one pill? He said ‘It looks more commercialised, a website or something behind this’.”

Further trawling through seized packages revealed amounts of amphetamines, powdered MDMA and LSD. Law enforcement went to the buyers’ homes and, in then-trainee Der-Yeghiayan’s words, introduced themselves by saying “we just want to talk” and “discuss” where they got the drugs from. This strategy paid off when one customer’s flatmate, irked at investigators turning up in the perp’s absence, merrily told the novice agent that his pal was ordering “weed, ecstasy, LSD, maybe some heroin” from “a website called Silk Road”.

That’s silkroad.com right?

“I said, ‘yeah, we know that’,” recalled Der-Yeghiayan. “We didn’t know that! I played it cool and said ‘that’s silkroad.com right?’ The guy said ‘nah, dot-onion, Tor.’ I said ‘Yeah, I was just testing you!’ My training officer later said ‘good interview’.”

After some Google searches and investigation into the Bitcoin transactions to and from the Silk Road (“there was nothing we could subpoena or get a search warrant on”), HSI went back to basics and started analysing seized packages from their Chicago Airport office, trying to find identifying clues from the senders of the drugs. Seizures went from “10 a week” to more than 200 as investigators broadened their net.

Getting inside the Dread Pirate Roberts’ head

While agents did the old-fashioned work of retrieving fingerprints from the reverse of address stickers and asked law enforcement bodies in the senders’ countries to run them against local population databases, Der-Yeghiayan started thinking about alternative routes to crack the Silk Road. While reading the site’s forums, he noticed that site admin Dread Pirate Roberts (DPR) had started a book club thread.

“He focused on libertarian beliefs that the free market enterprises, the Austrian school of economics, the principles of no government control over everything; that’s what the Silk Road was meant to represent,” said Der-Yeghiayan. “One of the things we focused on, though, was his signature [block]. We would see he would also put different comments there, things to read. The reading lists he had up there were websites on the regular internet.”

Sure enough, on checking the economic philosophy sites that DPR kept referencing over and again, Der-Yeghiayan found people with “the same writing style, same thoughts, same type of discussion”. The agent’s thinking was simple: “If we knew what inspired [DPR] we could talk to him, even engage him in an undercover capacity. Maybe he might spell something incorrectly, say something more about himself.”

Agents knew that the Silk Road had secret inner forums that only privileged, trusted vendors had access to. If they could infiltrate those and convincingly interact with the people using them, they stood a better chance of identifying and arresting the site’s operators.

‘If you haven’t done a search warrant with the Dutch, I recommend you do’

A breakthrough came when Der-Yeghiayan’s colleagues made a test purchase from the Silk Road for small plastic baggies, with the initial intention of tracing the Bitcoins from the purchase to see where they ultimately ended up. Their vendor, unusually conscientiously, posted the baggies in a package that “actually had a tracking number on it. We didn’t pay for tracking; we didn’t want tracking. The baggies cost 30 cents. This person paid $4.60 for tracking!” marvelled Der-Yeghiayan.

But it was a way in. Tracing back the tracking number to the credit card and terminal used to pay for the tracking label, HSI investigators found a CCTV camera overlooking the terminal. “They mailed 30 other packages the same day,” said Der-Yeghiayan, “that’s probably our vendor. We did some investigation, we saw him dropping packages, a drug dog alerted on it… and he turned over his account.”

While the account was a dead end as far as the forum was concerned, another tracking number from the vendor revealed a buyer in the Netherlands. HSI asked if they could execute a search warrant there and the Dutch police agreed. HSI went along – but things didn’t quite go to plan.

“We’re looking at the house,” said Der-Yeghiayan, describing the scene. “As we’re sitting there looking at it the police walked up and they knocked on the door and said, ‘open up’. A guy sticks his head out the second floor window and he said something in Dutch. My translator said it meant, ‘What do you want?’ The police said ‘Open the door’. Guy says ‘No’. So the police then come back and say, ‘Open the door now’. Guy says ‘No’. This goes back and forth for a good five minutes… As they’re going back and forth, the Dutch [police] say: ‘This is your last chance, open the door now’ and the guy says ‘Just give me one reason why’ and they say ‘Because we’re the police!’. He goes, ‘Why didn’t you say so, I’m blind!’ True story.”

While the blind man wasn’t their target, the vendor, who lived in the same house and ran his drug-dealing operation with his girlfriend, turned over his Silk Road vendor account to police – and it was another dead end.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/29/how_i_caught_silk_road_mastermind/

Mozilla security policy cracks down on creepy web trackers, holds supercookies over fire

The Mozilla Foundation has announced its intent to reduce the ability of websites and other online services to track users of its Firefox browser around the internet.

At this stage, Moz’s actions are baby steps. In support of its decision in late 2018 to reduce the amount of tracking it permits, the organisation has now published a tracking policy to tell people what it will block.

Moz said the focus of the policy is to bring the curtain down on tracking techniques that “cannot be meaningfully understood or controlled by users”.

Notoriously intrusive tracking techniques allow users to be followed and profiled around the web. Facebook planting trackers wherever a site has a “Like” button is a good example. A user without a Facebook account can still be tracked as a unique individual as they visit different news sites.

Mozilla’s policy said these “stateful identifiers are often used by third parties to associate browsing across multiple websites with the same user and to build profiles of those users, in violation of the user’s expectation”. So, out they go.

Of course, that’s not the only technique used for cross-site tracking. As detailed in Mozilla’s policy, some sites “decorate” URLs with user identifiers to make the user identity available to other websites.

Firefox isn’t yet ready to block that kind of behaviour, but Mozilla said: “We may apply additional restrictions to the third parties engaged in this type of tracking in future.”

Sites will be able to use URL parameters for activities such as advertisement conversion tracking, the policy said, so long as that isn’t abused to identify individuals.
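Link “decoration” of this sort is just an identifier riding along in the query string. As a rough illustration of what stripping it involves, here is a minimal Python sketch; the parameter names are illustrative examples, not Mozilla’s actual blocklist, and a browser would apply such rules far more carefully:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative identifier-style query parameters; real blocklists are
# much larger and maintained per-tracker.
TRACKING_PARAMS = {"fbclid", "gclid", "mc_eid"}

def strip_link_decoration(url: str) -> str:
    """Return the URL with known tracking parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_link_decoration("https://example.com/article?id=7&fbclid=AbC123"))
# → https://example.com/article?id=7
```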

Mozilla has also flagged browser fingerprinting (tagging an individual by the fonts they have installed is the most familiar example) and supercookies for future removal.
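The reason fingerprinting works is that many individually weak signals – installed fonts, screen size, time zone and so on – combine into one identifier that stays stable as a user moves between sites. A toy Python sketch of the idea (the attribute names are invented for illustration; real fingerprinting runs in page JavaScript, and this is not Firefox’s detection logic):

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash a set of browser attributes into a short, stable identifier."""
    # Canonicalise so the same signals always yield the same hash.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {"fonts": "Arial,Helvetica,Wingdings", "screen": "1920x1080", "tz": "UTC-8"}
print(fingerprint(visitor))  # same signals on any site -> same identifier
```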

The “may block in the future” nature of the policy seems depressingly conditional, but independent cybersecurity researcher Dr Lukasz Olejnik told The Register the effort is at least an indication that Mozilla is taking user privacy seriously.

“After months of studies and preparation, Mozilla decided to take a hard stance on certain kinds of tracking measures,” he said. “Firefox will begin the blocking of scripts behaving in an unacceptable manner, such as tracking or unconventional methods of identification via fingerprinting. It is… sending a strong message that the misuse of certain web browser features is no longer welcome.

“Certain script activity will keep working if user action indicates a clear intention, such as clicking on a link.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/29/mozilla_anti_tracking_policy/

Singapore fingers deported fraudster for leak of list of thousands of HIV+ people

The government of Singapore is once again apologising for a serious breach of citizens’ privacy: this time, the personal details of 14,200 individuals who tested positive for HIV, and 2,400 of their contacts, have been published online.

The country’s Ministry of Health (MoH) said it had been aware since 2016 that one Mikhy Farrera Brochez could be in possession of the information, but it had not announced this because Brochez had not published anything. It had, however, notified the individuals affected.

The announcement comes just seven months after the personal details of 1.5 million patients held by the SingHealth medical giant were blabbed after a cyberattack.

The country’s health ministry said Brochez departed Singapore last year, and while it didn’t say where he is now, it is seeking assistance from foreign partners to bring him back to face charges.

The MoH said it first notified police that Brochez might have accessed health records in May 2016.

Brochez, who has been in a relationship with National Public Health Unit head Ler Teck Siang since 2008, was in prison from the time he was remanded in June 2016 (and later convicted in 2017), until his release and deportation in May 2018. He was convicted of fraud, drug offences and lying to the Ministry of Manpower about his HIV status so he could work. Singapore has banned foreigners with HIV from working in its territory – although it is possible for outsiders to obtain a short-term visa.

Channel News Asia reported that Ler helped Brochez by supplying his own blood for government tests. The ministry confirmed he’d received a 24-month sentence for this, among other offences, and that he has appealed the conviction. His appeal will be heard in March this year.

The fraud offences Brochez did jail time for also related to falsifying his educational credentials, the department claimed.

While it made no public announcement at the time, the MoH said it notified the individuals involved.

“This incident is believed to have arisen from the mishandling of information by Ler, who is suspected of not having complied with the policies and guidelines on the handling of confidential information,” the ministry alleged.

Health minister Gan Kim Yong has apologised on behalf of the government for the breach, and promised support for those affected. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/29/singapore_discloses_another_data_breach/

Hey boffin, take a walk on the wild side: Stuffy academics need to let out their inner black hat

Academics and grey-hat bug-hunters are a lot more alike than they care to admit.

This is according to Anita Nikolich, a computer science fellow with the Illinois Institute of Technology and former cybersecurity head at the National Science Foundation.

The problem, she said, is a gulf between the academic world of university researchers pushing papers and the “underground” world of hobbyist hackers, bug hunters and grey-hat researchers who actually hammer on products.

Traditionally, academic research was seen as more theoretical and abstract, while underground hacking dealt with breaking actual products. More recently, however, the two areas have increasingly overlapped, and many would be hard pressed to tell the agenda of DefCon from that of an academic security conference.

“It struck me as ironic that over the past 10 years it is getting harder to tell the difference,” Nikolich mused. “Academic and non-academic research have become indistinguishable from one another.”

This has led to some missed opportunities in recent years. For example, last year’s work at the DefCon voting village touched on problems academic security researchers have known about for years without the public noticing, while in 2011 researcher Jay Radcliffe had his groundbreaking research on hacking insulin pumps held back by academic journals that refused to take a paper from someone who didn’t have a PhD.

In both cases, a bit of flexibility and understanding could have benefited everyone.

With their areas overlapping, Nikolich said she sees a need for academics, hobbyists and professional hackers to find common ground and share their ideas and findings with one another.

This approach has in the past yielded success. Nikolich pointed to Darpa’s highly successful “cyber fast track” programme and the explosion of bug bounty and “I am the cavalry” programmes in shedding light on potential risks.

For the gap to be bridged, however, both sides will need to become a bit more flexible in dealing with the other.

For academics, that means inviting people from non-academic backgrounds to participate in conferences and, more importantly, getting them into the running for grant programmes that would let them pursue and share their findings in academia.

“Sponsor non-academics,” Nikolich advised, “there are a lot of very smart people, get them to participate on grants.”

Non-academics, meanwhile, would be wise to practice a bit of “matchmaking”. Teaming academics up with non-academics, particularly in conference settings, could help uncomfortable uni types open up and get both sides bouncing ideas off one another. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/29/stuffy_academics_need_to_let_out_their_inner_black_hat/

Creating a Security Culture & Solving the Human Problem

People are the biggest source of security breaches; they can also be your organization’s biggest defense.

Through nearly a dozen years of experience at the FBI and now at Qualtrics, I’ve seen that many of the most successful hackers no longer first look for software vulnerabilities. They’re coming after your people. The reason is simple: It’s cheaper, it’s easier, and it works.

Massive telecom data breach? Unprotected vendor server. Prominent media company? Stolen credentials. Website with compromising emails? Former contractor. All of these major breaches resulted from mistakes of individuals. The threat vector is you.

Despite years of education, millions of pages of policy, and pervasive annual mandatory trainings, 60% of security professionals rank employee carelessness or negligence as a top threat, up from 44% in 2015, according to the EY Global Information Security Survey. Fully 66% of all cyber insurance claims stemmed from employee negligence or malfeasance, according to a 2017 report from Willis Towers Watson.

But although we keep having human breaches, we haven’t changed the behaviors that lead to them. On average, 4% of targets in a phishing campaign will click, according to Verizon’s 2018 Data Breach Investigations Report. Furthermore, people who have clicked once are more likely to click again.

Why? Because most modern workers think they know how to avoid security threats. We no longer have an awareness problem: Workers have heard the basics about phishing. We have a false confidence problem. Knowing about security threats is only half the battle. Employees also have to know what actions to take.

Awareness vs. Response
Qualtrics conducted a study of roughly 1,000 US adults to test two related, but significantly different points: awareness of phishing threats and appropriate responses to phishing threats. The gap was striking.

Awareness
We found that more than 70% of US adults knew what phishing was, and more than half said they knew how to avoid becoming a victim.

Appropriate Response
But when we asked harder questions from the same sample, we saw far less confidence. Only 10% of respondents knew the right way to determine if a link is legitimate. Equally concerning, one in three US adults incorrectly said that only clicking on links from people they know would protect them from falling victim to a phishing attack.
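The article doesn’t spell out the “right way” the survey tested, but one common check is whether the link’s actual host matches the domain you expect, rather than whatever the display text claims. A deliberately naive Python sketch of that check (real verification must also handle punycode look-alikes, multi-part public suffixes such as .co.uk, and redirects):

```python
from urllib.parse import urlsplit

def link_points_to(url: str, expected_domain: str) -> bool:
    """True if the URL's host is expected_domain or a subdomain of it."""
    host = (urlsplit(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

# The display text says one thing; the href is what matters.
print(link_points_to("https://www.paypal.com/signin", "paypal.com"))          # → True
print(link_points_to("https://paypal.com.evil.example/login", "paypal.com"))  # → False
```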

You are still the target, and the problem is getting worse because of the human gap. People develop false confidence when they’re aware of a problem but don’t know how to properly address it. Because security experts are still learning how to address human security vulnerabilities, even the best can substitute mere awareness for preparation.

Filling the Confidence Gaps with Elbow Grease
A lot of people purchase online training videos and throw them at the problem, or check the box for cybersecurity training by having their IT personnel provide basic reminders in training once a year. This kind of attitude can be even more dangerous than letting cybersecurity slip from top-of-mind. When companies focus on merely checking that box, they can lull themselves into a false sense of security, thinking their annual lecture or testing has prepared employees for future attacks.

If companies put as much thought, planning, and execution into helping their employees avoid cyber threats as they put into creating firewalls and preventing software breaches, they would increase the security of their organization. This could mean increasing training or implementing other processes for sharing information, but it seems like a lot of hard work for already overburdened security professionals.

I have investigated dozens of cases where victims didn’t click a link or download any file, yet they still were tricked by a phishing email and lost millions. Awareness training and tests are an essential part of securing an organization. However, the end goal should be to create a security culture, not to just make people more knowledgeable. Culture implies intrinsically motivated action, which is what companies need to protect themselves.

Start from the Top
The most effective training program in the world will have a hard time gaining traction among employees if they don’t see those precautions and practices being demonstrated by leadership. Without an example from the top, the environment for a security-minded culture to develop won’t exist.

This culture is crucial for the same reason public health officials stress the necessity of herd immunity via vaccinations: If the bulk of a population is protected against a threat, that population has a much lower risk of being damaged by that threat. Exemplifying secure practices can help executives protect their workforce against breaches.

Leading the charge doesn’t have to take a lot of time or effort. It could be as simple as executives always wearing the security badges they expect employees to carry, or encouraging employee discussion during cybersecurity training.

Follow Up
Training or a phishing test is a great start, but what happens after that? Without following up on training, employees can forget crucial security measures, and the subject can drift into perceived irrelevance until the next year’s exercise.

Keep the message current by reiterating it throughout the year. Maybe that means instead of having one big training per year, you break it down into smaller quarterly training sessions. Maybe it’s having regular testing or routinely having conversations about cybersecurity. A combination of initiatives — an occasional newsletter with tips, regular training, etc. — can help foster a secure culture by imparting the severity of the problem and the necessity of every employee’s efforts to solve it.

Hardening devices and patching software are only part of the battle to secure your enterprise. Today, you must test and train employees and help them stay accountable for security practices. Each individual is a major threat vector to your organization, so you must create a culture of security and frequently reiterate the message. A security mindset in every employee is the only thing that will close the human security gap and the only way to truly protect your company.

Adam Marrè, CISSP, GCIA, GCIH, is a Qualtrics information security operations leader and former FBI cyber special agent. Adam has more than 12 years experience leading large-scale computer intrusion investigations and consulting as a cybercrime … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/creating-a-security-culture-and-solving-the-human-problem/a/d-id/1333722?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Q. What do you call an IT admin for 20-plus young children? A. A teacher

Usenix Enigma Protecting students’ privacy – from securing their personal information to safeguarding their schoolwork – is a challenge for schools and software developers, apparently.

Alex Smolen, engineering manager for infrastructure and security at school management software maker Clever, told the 2019 Usenix Enigma conference in San Francisco on Monday that his company has unearthed a host of problems facing students, their parents, and their teachers, when it comes to handling and protecting pupil information.

Students and schools are, after all, a lucrative goldmine for data harvesters and brokers. Smolen noted that even with data protection laws in place specifically for children, everyone from university admissions departments to recruiters and marketing companies is eager to get stats on things such as student ethnicity and family income data for school districts.

This all adds up to a minefield for schools and educational software developers who try to protect student data and children’s privacy with online services.

Part of the problem, said Smolen, is down to the differences between children and adults. Children are still developing parts of the brain that handle risk and consequences. This makes things like warning dialogues and opt in or opt out choices far less effective, even when the dialogues are aimed at parents.

“We give them our adult defenses,” Smolen said. “Even software designed for children inherits the security and privacy problems of the one-size-fits-all internet.”


Even something as simple as passwords can be a problem. Young students, for example, cannot be expected to remember and enter a password. This has forced Smolen to look at other methods. For example, Clever uses QR codes that kids can carry on student badges, and then scan to log in to a machine.
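Clever’s actual badge scheme isn’t described in the talk, but the general idea behind QR-code logins can be sketched simply: each badge carries a random, unguessable token rather than the student’s name or password, and the server maps scanned tokens back to accounts. A hypothetical Python sketch (the function names and token length are assumptions for illustration; QR encoding itself is omitted):

```python
import secrets

# token -> student account id; a real service would persist and expire these.
badge_tokens: dict[str, str] = {}

def issue_badge(student_id: str) -> str:
    """Create an unguessable token to encode in the student's badge QR code."""
    token = secrets.token_urlsafe(16)
    badge_tokens[token] = student_id
    return token

def login_with_badge(token: str):
    """Resolve a scanned QR token to a student account, or None if invalid."""
    return badge_tokens.get(token)

t = issue_badge("student-42")
print(login_with_badge(t))        # → student-42
print(login_with_badge("bogus"))  # → None
```

The security trade-off is that the badge becomes the credential, so lost badges need to be revocable, which the token-to-account mapping makes straightforward.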

That doesn’t solve every issue, however. Smolen said that overworked teachers and underfunded schools can also pose a problem. A teacher, for example, will essentially have to serve as an administrator for 20 or more students in a class, collecting tablets, making sure students are logged off, and so on. Simply forgetting to log out of a tablet could lead to students’ classwork or data being shared with others.

Parental consent can also be tricky, as schools are hardly equipped to create the proper mechanisms to get consent.

“Making sure that works seamlessly can be tricky,” Smolen said, “because schools are often troubled with basic IT tasks, let alone complicated portals for parents to opt in or opt out.”

Ultimately, Smolen said, developers and schools will need to take a different approach to handling student data. Ideally, going forward, developers will make security and privacy systems that are both easy for kids to understand and simple for teachers and parents to set up and manage.

In the meantime, however, trying to protect student data is going to be a tall order. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/29/school_it_security/

Did you know? Monday was Data Privacy Day. Now it’s Tuesday. Back to business as usual!

Monday, January 28, was Data Privacy Day and you won’t get another for a year.

If you missed it, you didn’t miss much. It was a day like any other day, without meaningful privacy except for those offline and unobserved by the global surveillance panopticon.

The non-profit National Cybersecurity Alliance marked the occasion, observed since 2007, with a gathering of corporate privacy policy wonks at LinkedIn’s San Francisco, California, headquarters. Kelvin Coleman, executive director of the NCA, presided over the gathering, which he characterized as an exploration of the opportunities and challenges for the privacy road ahead.

In the opening panel, Eva Velasquez, president and CEO of the Identity Theft Resource Center, framed the discussion with a reference to her organization’s 2018 data exposure report. The report found a 23 per cent decline in the number of data loss incidents and a 126 per cent increase in the amount of personally identifiable information exposed, amounting to almost 198 million records.

Data spills of this sort would not happen if there were actual privacy, if the organizations involved lacked any records to lose.


But the panelists weren’t there to ponder the absence of information in some dream world where netizens have complete control over the data that describes them. They were not looking to explore privacy as defined in a dictionary: “the quality or state of being apart from company or observation.” Nor were they looking at tools like messaging client Signal that can actually provide some measure of privacy by denying data to those who’d gather it.

Rather, they chose to focus on the parameters of lawful data usage, on the ways companies can handle data with informed notice and consent from customers.

In other words, the focus was compliance rather than abstinence.

“Companies increasingly look at privacy and security as a cost of doing business, and those on the leading edge think of it as an opportunity,” said Kimberly Nevala, strategic advisor at software biz SAS.

The word “opportunity” here means using data to compete more effectively, to generate more revenue for one’s business. Data has been likened to oil as a commodity that fuels business growth. Hence for-profit companies willing to forgo data collection are few and far between.

Behavior

Data from global IT biz Unisys suggests some forbearance might be wise. In a survey of more than 1,000 US adults described in the 2018 Unisys Security Index, the firm found:

  • 42 per cent of respondents don’t want their health insurance provider using fitness data from wearable devices to influence premiums or incentivize behavior;
  • 38 per cent don’t want police to determine their locations from their wearable fitness devices (good luck with that); and
  • 27 per cent dislike the idea of baggage sensors interacting with airport baggage management systems to track bags and send text updates.

The results show folks are fearful of these technologies because they feel ill-equipped to prevent potential online abuse, said Tom Patterson, chief trust officer of Unisys, in a statement.

Legal types like to talk about how companies should obtain informed consent to collect data. But Nevala called the notion into question with her observation that technologies like artificial intelligence complicate matters by obscuring how information gets used.

“As a consumer, you can’t give informed consent because you don’t know how data will be used or combined,” she said, in reference to the largely inscrutable decisions of machine learning algorithms.

She suggested some data uses ought to be viewed as toxic. “We don’t allow lead paint,” she said. “There should be some uses of information we just don’t abide.”

Velasquez added that consumers need to be motivated to become informed. She likened privacy to health, noting that it tends to be ignored until it causes pain. Your doctor can warn you to live a healthy lifestyle, but many people won’t pay attention until they experience chest pains, she said.

Patchwork protections

The discussion inevitably turned to privacy laws and the business community’s desire for a federal law in the US to override the emerging patchwork of privacy legislation at the state level.

Karen Zacharia, chief privacy officer at Verizon, declined to describe the features that should be present in federal privacy legislation but she said, “It’s important that we have a consistent regime that applies to all players in the ecosystem, enforced by the FTC.”

The Register asked Zacharia and Kalinda Raina, head of global privacy for LinkedIn, whether a federal privacy law should include a private right of action allowing individuals to sue companies for failing to live up to privacy promises – a provision included in the Illinois Biometric Information Privacy Act and opposed by many companies.

Both were non-committal. Raina suggested that GDPR-style fines – the result of legal action brought by government authorities rather than consumers – work better to encourage responsible data handling by companies. Zacharia said a personal right to sue might not necessarily be the best way to protect consumers. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/29/data_privacy_day_comes_and_goes/

PSA: Disable FaceTime. Miscreants can snoop on your iPhone, Mac mic before you pick up call

You might want to disable FaceTime on your iPhone, iPad, or Mac until Apple patches this bonkers bug.

Folks have confirmed it is possible to call someone via FaceTime, and secretly listen in on their iThing or Mac’s microphone before they accept or reject a call. It’s a handy, creepy way to find out what someone’s up to before they answer. We’re told iOS 12.1 and 12.2, and macOS Mojave are vulnerable at least.

There’s no indication, on screen or otherwise, that this eavesdropping is happening to your victim. It’s even possible to snoop on the video camera.

Here are the steps to reproduce the security blunder: on an iPhone, video call a contact using FaceTime on a vulnerable device, and while connecting, swipe up and add a person to the call. Then add your own number, and your group call will secretly pipe in the other person’s microphone audio, even if they haven’t responded yet.

Incredibly, if the callee hits the power button, the front-facing camera feed is also secretly shown to the caller, though the callee can now hear your audio. Here’s a video doing the rounds demonstrating the hack:

Apple reckons it’ll push out a software fix for this privacy gaffe later this week. Instructions on disabling FaceTime in the meantime are here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/29/facetime_bug/

Turn Off FaceTime in Apple iOS Now, Experts Warn

Newly found bug reportedly allows callers to spy on you — even if you don’t pick up.

Security experts are warning Apple iOS users to immediately disable FaceTime on their devices after word began to spread today about a newly discovered bug that allows anyone to call you via the app and access your audio and video even if you don’t answer the call.

Apple told BuzzFeed that the company was “aware of this issue and we have identified a fix that will be released in a software update later this week.”

A video of how to FaceTime someone and listen in or see them via their camera spread via social media today, and the blog 9to5Mac later posted the actual steps involved:

  • Start a FaceTime Video call with an iPhone contact.
  • Whilst the call is dialling, swipe up from the bottom of the screen and tap Add Person.
  • Add your own phone number in the Add Person screen.
  • You will then start a group FaceTime call including yourself and the audio of the person you originally called, even if they haven’t accepted the call yet.

The best protection for now is to disable or turn off FaceTime, experts say.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/turn-off-facetime-in-apple-ios-now-experts-warn-/d/d-id/1333748?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple