STE WILLIAMS

Nationwide facial recognition ID program underway in France

France is creating – and speeding up the rollout of – a nationwide program using facial recognition to create legal digital identities for its citizens.

The program is called Alicem – an acronym for “certified online authentication on mobile”. It was developed jointly by the Ministry of the Interior and the National Agency for Secure Documents (ANTS), which maintain that it will a) simplify access to online services, b) fight identity theft, c) keep the biometric data safe on the phone, deleting it once identity is validated, and d) keep that data out of the hands of third parties.

France had planned to launch the Android-only app by Christmas. But now, it’s greasing the wheels and plans to have it up and running in November 2019, Bloomberg reports.

Privacy watchdogs are not pleased

The country’s privacy regulator, CNIL, says the program breaches the EU’s rule of consent. Europe’s General Data Protection Regulation (GDPR) mandates free choice. Bloomberg spoke to Emilie Seruga-Cau, the head of law enforcement at CNIL, who said that the independent regulator has made its concerns “very clear.”

The publication, which was able to check out the app, reports that Alicem will be the only way for French citizens to create a legal digital ID, and facial recognition will be the only way to do it.

It will require that residents use an Android app to take one-time selfie videos that capture their expressions and movements at different angles, to compare with photos of themselves stored in their biometric passports.
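
Under the hood, systems like this typically reduce each face to a numeric embedding and declare a match when the vectors are close enough. Here's a toy sketch of that idea – the embeddings, threshold, and matching rule are purely illustrative, not Alicem's actual pipeline:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(selfie_embedding, passport_embedding, threshold=0.8):
    """Declare a match when the embeddings are close enough.

    The threshold is made up; real systems tune it to balance
    false accepts against false rejects.
    """
    return cosine_similarity(selfie_embedding, passport_embedding) >= threshold

# Toy vectors standing in for the output of a face-recognition model.
selfie = [0.12, 0.87, 0.45, 0.33]
passport = [0.10, 0.90, 0.40, 0.35]
print(same_person(selfie, passport))  # True for these near-identical vectors
```

The selfie *video* adds a liveness check on top of this – the expressions and movements are there to prove the camera is pointed at a live person, not a printed photo.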

Meanwhile, the French privacy rights group La Quadrature du Net (LQDN) has filed a lawsuit over the program in France’s highest administrative court.

LQDN lawyer Martin Drago told the Telegraph that the government is rushing people into using Alicem and facial recognition:

The government wants to funnel people to use Alicem and facial recognition. We’re heading into mass usage of facial recognition. [There’s] little interest in the importance of consent and choice.

Security claims questioned

As Bloomberg points out, we might have just cause to question the Interior Ministry’s assurances that it can be trusted to guard the biometric data it plans to collect.

On 17 April 2019, the French government launched a secure, encrypted messaging app called Tchap that was hailed as being more secure than Telegram or WhatsApp, to be used only by officials and politicians with email accounts on government domains.

Just two days later, French security researcher Robert Baptiste – better known by his Twitter username, Elliot Alderson – took 75 minutes to find a security loophole that allowed anyone to sign up for a Tchap account and access groups and channels without an official email address.
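
The flaw was in how signup email addresses were validated. As a hypothetical illustration of the bug class – strict parsing plus an exact domain comparison, rather than a loose match that crafted addresses can slip past – consider the following (the domains and function are made up, and the real Tchap bypass details differ):

```python
ALLOWED_DOMAINS = {"interieur.gouv.fr", "elysee.fr"}  # hypothetical allowlist

def can_register(email):
    """Server-side signup check restricted to government domains.

    Strict parsing: exactly one '@', non-empty parts, then an exact
    domain comparison. Loose parsing is how addresses like
    'attacker@evil.com@elysee.fr' slip through.
    """
    parts = email.split("@")
    if len(parts) != 2 or not parts[0] or not parts[1]:
        return False
    return parts[1].lower() in ALLOWED_DOMAINS

print(can_register("agent@interieur.gouv.fr"))      # True
print(can_register("attacker@evil.com"))            # False
print(can_register("attacker@evil.com@elysee.fr"))  # False
```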

The French government has promised that the security of Alicem is of the highest “state level” – a promise that doesn’t ease Baptiste’s concerns over its being rushed into use. Bloomberg quotes the researcher:

The government shouldn’t boast that its system is secure, but accept to be challenged. They could open a bug bounty before starting, because it would be serious if flaws were discovered after people start using it, or worse if the app gets hacked during enrollment, when the facial recognition data is collected.

France has promised not to use facial recognition to keep tabs on citizens, as is done in China and Singapore. The biometric data won’t be integrated into citizen identity databases as in those countries, the government says. Rather, the data will disappear after validation occurs. (Because the promise of ephemeral data has turned out so well for Snapchat and WhatsApp, et al., yes?)

But critics say that rushing to embrace the technology is a “major risk” at this point given that there are still questions about how it will eventually be used. Bloomberg quoted Didier Baichere, a governing-party lawmaker who sits on the Parliament’s “future technologies” commission and who authored a recent report on the subject.

He called it “ludicrous” to put facial recognition into mass use without first putting in place checks and balances.

France might be the first EU country to use a nationwide facial recognition ID app, but it’s far from alone in embracing the technology. According to a report from the Carnegie Endowment for International Peace, the use of AI surveillance technologies – including facial recognition – is spreading faster, to a wider range of countries, than experts have commonly understood.

The report found that at least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes, including smart city/safe city platforms, now in use by 56 countries; facial recognition systems, being used by 64 countries; and smart policing, now used by law enforcement in 52 countries.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dDDYjz1VwrY/

That was some of the best flying I’ve seen to date, right up to the part where you got hacked

US defence firm Raytheon is punting a security suite that apparently promises to harden military aircraft against “cyber anomalies”.

The company is reportedly developing “a new warning system that tells pilots when their planes are being hacked”.

“Basically, we’re trying to give the pilot the information about what’s happening internally on his aircraft in real time,” Amanda Buchanan, the project’s engineering lead, told American military news website Defense One.

The basic pitch is that most military aircraft electronics are relatively simple compared to modern ground-based systems. With even modern designs using serial data buses*, Raytheon reckons there’s a niche in the market for startling the hell out of pilots by giving them something else to worry about while flying over a warzone.

Defense One reported that during a sales demo, Raytheon engineers ran a simulation of a helicopter flight and injected “malicious code wirelessly from a tablet”, causing the simulated aircraft’s engines to shut down and crash, with the pilot at least getting to see a red caption titled “cyber anomaly” before his virtual demise. The attack vector was described as being one of the heli’s various wireless receivers.

A Raytheon marketing article notes that its CADS monitoring system can be retrofitted to monitor ARINC-429 buses, which are the civilian equivalent of MIL-STD-1553 and are used on airliners. The firm also says the system can be modded for automotive-grade CAN buses.
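
A bus monitor of this sort boils down to comparing observed traffic against a known baseline for the airframe. Here's a minimal sketch of the idea in Python – the message IDs and payloads are made up, and nothing here reflects Raytheon's actual CADS product:

```python
def monitor_bus(frames, known_ids):
    """Flag frames whose message ID was never seen in the baseline.

    `frames` are (message_id, payload) pairs; a real monitor would also
    check message timing and payload ranges, not just IDs.
    """
    return [(mid, payload) for mid, payload in frames if mid not in known_ids]

baseline_ids = {0x10, 0x22, 0x37}  # hypothetical IDs captured from normal flight
traffic = [(0x10, b"\x01\x00"), (0x22, b"\xff"), (0x99, b"\xde\xad")]

# The unknown 0x99 frame is the "cyber anomaly" the pilot would be warned about.
print(monitor_bus(traffic, baseline_ids))
```

The appeal of this approach on avionics buses is that legitimate traffic is far more predictable than on a general-purpose network, so a simple allowlist gets you surprisingly far.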

Another marketing feature mentions a highly specific use case: “Operational threats that can come either from an enemy or from a US soldier inadvertently causing a cyber intrusion to propagate by plugging his malware-infected cell phone into a USB port on a Stryker vehicle, for example.”

Frequent flyers

It has been the dream of certain hackers for years to compromise an in-flight airliner by using a laptop from the passenger seat. Infamously, back in 2015, Chris “Plane Hacker” Roberts claimed to have hacked an airliner by doing just that – though the rest of the world scoffed at his claims. A couple of years before that, some chancer claimed he had written an Android app that could completely compromise airliner flight control systems to the point of flying the aeroplane by tilting the hacker’s handset – all through the aviation equivalent of SMS messages.

It is notable that in the latter case, part of the proof-of-concept testing was carried out using the X-Plane flight simulator software. While X-Plane can be used as part of a professional-grade setup that can be certified for real-world pilot training – and that capability forms part of its vendors’ marketing spiel, quite rightly – if it isn’t installed on a certified system, it’s just a consumer-grade flight sim.

In addition, the danger of “proving” a hack against flight simulator software is that simulated systems do not always reflect real-world systems; the frontend might function identically to the user (make input, see same reaction as the real aeroplane) but the backend can be vastly different in how it achieves the same visual effects. Radio signals, for example, are simulated through defining origin points and ranges; they don’t degrade dirtily over distance as real-world signals do, nor can directional signals be bent or rebroadcast using real-world RF principles because the simulator engine simply doesn’t reproduce any of that.

In more recent years, aircraft security has become a bit more serious. The American Department of Homeland Security said in 2017 that it had successfully accessed some systems on a Boeing 757 as part of a “remote, non-cooperative penetration” testing exercise. Earlier this year an infosec pro poked around some general aviation-grade kit to see how vulnerable that was, but his efforts, while valuable, were a long way from an in-flight compromise. ®

Busnote

* This website lists aircraft using the US MIL-STD-1553 spec serial data bus. They range from the brand-new F-35 Lightning II supersonic stealth fighter jet to – entirely implausibly – the 1950s-vintage Hawker Hunter. The authors of that list were evidently a bit too keen.

Wikipedia, unfortunately, has a detailed breakdown of the standard itself.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/07/raytheon_aviation_security_bus_product/

Magecart Skimmers Spotted on 2M Websites

Researchers say supply chain attacks are responsible for the most significant spikes in Magecart detections.

Credit card-skimming threat Magecart has reportedly compromised more than 2 million victim websites and directly breached more than 18,000 hosts, RiskIQ researchers report.

Magecart is a rapidly growing cybercrime syndicate made up of several groups, all of which specialize in digital credit card theft via payment form skimmers. Attackers either gain direct access to websites or through other third-party services in supply chain attacks. Malicious JavaScript is implanted into the site’s code and lifts the data shoppers enter in payment forms.
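
One way defenders hunt for such implants is to audit checkout pages for script tags loading from unexpected origins. A minimal sketch of that audit – the page, domains, and allowlist below are all hypothetical:

```python
from html.parser import HTMLParser

class ScriptAuditor(HTMLParser):
    """Collect external <script src=...> origins not on a trusted allowlist."""

    def __init__(self, trusted_origins):
        super().__init__()
        self.trusted = trusted_origins
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src", "")
        if src and not any(src.startswith(o) for o in self.trusted):
            self.suspicious.append(src)

# Hypothetical checkout page with an injected skimmer script.
page = """
<html><body>
<script src="https://shop.example.com/js/cart.js"></script>
<script src="https://evil.example.net/mage.js"></script>
</body></html>
"""
auditor = ScriptAuditor(["https://shop.example.com/"])
auditor.feed(page)
print(auditor.suspicious)  # ['https://evil.example.net/mage.js']
```

This only catches skimmers loaded from attacker domains; code injected inline into a first-party file, as in some supply chain attacks, needs integrity checks on the files themselves.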

RiskIQ’s first observation of Magecart was back in 2010, but the threat has recently been ramping up as more attackers learn how to compromise websites. Supply chain attacks are responsible for the biggest spikes in Magecart detections, the largest of which took place on June 27, 2018, when Ticketmaster reported a breach conducted by Magecart. Third-party shopping platforms like Magento and OpenCart remain popular Magecart targets; RiskIQ has detected 9,688 vulnerable Magento hosts, researchers say in a new report on the threat.

Some groups are taking advantage of digital advertising to bring more traffic to infected checkout pages: Seventeen percent of all malvertisements detected by RiskIQ are controlled by Magecart.

The average length of a Magecart breach is 22 days; however, many last years and some are indefinite. Its infrastructure contains 573 known command-and-control domains and 9,189 hosts observed loading C2 domains. There are now dozens of known Magecart groups, and researchers have begun connecting some of these groups with known cybercrime groups.

Read more details of the report, “Magecart: The State of a Growing Threat,” here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Rethinking Cybersecurity Hiring: Dumping Resumes & Other ‘Garbage.’”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/magecart-skimmers-spotted-on-2m-websites/d/d-id/1336011?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Active Directory Security Tips for Your Poor, Neglected AD

The unappreciated core of your enterprise IT network needs your security team’s TLC. Here are a few ways to give Active Directory the security love it needs.

Active Directory (AD) is Microsoft’s directory server — the software that ties servers, workstations, network components, and users into a unified whole. Keeping it secure and well-maintained is key to stopping attackers’ lateral movement and credential theft…and yet, AD management and security are struggles for many organizations.

With AD potentially touching every segment of the network – users and hardware alike – what are the best practices for making sure it doesn’t become an attack surface on which threat actors can play?


While some have said that Active Directory is dead, it remains a powerful part of most enterprise networks even while the move to the cloud, services, and serverless applications has continued. Part of the Windows Server software suite, AD is nearly ubiquitous in on-premises network deployments and expanding into cloud deployments through integration with Microsoft Azure, Google Cloud, Amazon AWS, and other public cloud instances. Though AD’s use may well have peaked, it is far from an enterprise relic.

So what key practices will keep this not-relic from becoming a not-secure part of your network? While books could be (and have been) written about AD security, here are a half-dozen items multiple sources can agree on as key to a secure AD deployment.

None of these requires a specific piece of external hardware or software; many don’t require anything beyond AD itself. And each should be a reminder to check your AD deployment to make sure that you haven’t simply relied on default settings that might, or might not, be the best answer.

Keep Your Eye on Accounts

AD can support more than a million accounts, but most criminals truly care about a relative handful: the accounts with the most privileges. In general, these will be accounts with local admin or domain admin rights. And nobody should be using these accounts to log in on a regular basis.

Those who are administrators should have two accounts: One with admin privileges and one without, as proposed in this 2018 Active Directory Pro article.

All of the day-to-day work required of an IT staff member should be possible with a “regular” account. The admin-privilege account should be used only to do specific tasks requiring the higher privileges; after the tasks are complete, it should be back to the “mere mortals” account.

There’s one more account that bears mention: the Domain Administrator. This is the “god account” that is used only during initial deployment and then for disaster recovery. The best practice for this account is to give it a ridiculously long and complex password, secure the password in a safe, and then only use the account if things go disastrously off the rails.

Check Your Privilege

Within all accounts, even those of administrators, a “least privilege” model is the best practice, as Microsoft suggests. Every user account should come equipped with only those privileges required to do the job. No less, and no more.

As part of this, account privileges should be reviewed every time an individual changes position, or on an annual basis. “Privilege creep” is a real thing that is a real, huge, security vulnerability. Don’t let it happen in your AD deployment.

Run LAPS

One point of Active Directory attack is the administrator account used to manage individual workstations. In many organizations that provision workstations from a single “golden” image, the administrator password can be the same for every computer in the fleet. An attacker who gains the password for one computer can control every workstation in the organization. At Black Hat USA 2019, Nikhil Mittal, principal trainer at Pentester Academy, discussed just such an attack surface in an interview with Dark Reading.

Microsoft’s Local Administrator Password Solution (LAPS) is a tool, built on AD, that creates and manages the passwords for each workstation, storing them in Active Directory where an administrator can access them when required. LAPS makes sure that the password for each workstation is unique, limiting the damage that can result from compromising any single machine.
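
The core idea behind LAPS – a unique, randomly generated password per machine, held centrally – can be sketched in a few lines. This is an illustration of the concept only, not Microsoft's implementation (LAPS writes the password to a protected attribute on the machine's AD computer object; the hostnames and vault below are made up):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def rotate_local_admin_password(hostname, length=24):
    """Generate a fresh, per-machine local admin password.

    secrets.choice draws from a CSPRNG, which is what you want for
    credentials; random.choice is not suitable here.
    """
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return hostname, password

fleet = ["WKS-001", "WKS-002", "WKS-003"]
vault = dict(rotate_local_admin_password(h) for h in fleet)

# Every workstation ends up with its own password, so compromising
# one machine no longer hands an attacker the whole fleet.
print(sorted(vault))
```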

Guard Domain Controllers

Domain controllers are the servers where the actual databases for the directories are stored. A compromised domain controller (and there can be several levels of domain controllers in a large network) can open the entire network — and every workstation and server — to breach.

If domain controllers have been set up to allow access via external link — something that can be justified in order to make remote management and problem solving easier — then they are much easier for a criminal to attack. To prevent this, Microsoft suggests that domain controllers should be accessible only via single, local machines. It makes late-night or weekend problem solving harder, but the tradeoff for security is a best practice for AD.

Watch the Watchers — And Everyone Else

IT staff understand the network as it existed on the day it was deployed, but attackers win by understanding the network as it exists today. IT security staff can prevent this route of attack by constantly monitoring the network.

Never let someone outside the organization know the network better than you. In an interview at Black Hat 2017, Ty Miller, managing director, and Paul Kalinin, senior security consultant, from Threat Intelligence Pty. Ltd discussed the complicated nature of this monitoring, and some of the consequences of not staying on top of a network’s activities.

For example, Miller and Kalinin explained, this failure to monitor could make it possible to create an Active Directory botnet. Further, detecting such a botnet based on simple metrics like network performance would be almost impossible.

So what kinds of things should be monitored?

Systems should be constantly monitored for their state of patching and updating. User accounts should be monitored for excessive login attempts that could indicate an attempted hack. The network should be monitored for unusual traffic, or traffic destined for unusual recipients. Part of that, as Microsoft guidance explains further, is understanding what usual traffic looks like, especially from highly privileged users.
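
The "excessive login attempts" check is the easiest of these to automate. A minimal sketch – the account names, event shape, and threshold are invented for illustration:

```python
from collections import Counter

def flag_excessive_logins(events, threshold=5):
    """Flag accounts whose failed-login count meets a threshold.

    `events` is a list of (account, outcome) tuples, as might be pulled
    from Windows security event logs (event ID 4625 = failed logon).
    """
    failures = Counter(acct for acct, outcome in events if outcome == "failure")
    return sorted(acct for acct, n in failures.items() if n >= threshold)

events = (
    [("alice", "success")]
    + [("svc-backup", "failure")] * 7   # looks like password guessing
    + [("bob", "failure")] * 2          # ordinary typos
)
print(flag_excessive_logins(events))  # ['svc-backup']
```

A real deployment would window these counts over time and weight privileged accounts more heavily, but the shape of the check is the same.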

There’s a long list of “should be monitored for…” but it all boils down to not allowing anyone to have a more accurate picture than your staff of what’s happening on the network.

Upgrade from Passwords

This is a simple user training step that requires no additional hardware or software: Teach users to ditch passwords in favor of passphrases. An easy-to-remember phrase can make a user credential that is extremely difficult to crack via brute force or guessing.

So what’s the difference? A strong password might look like, “Xfj45!h4?HPS470*.” It’s strong, but impossible for most humans to remember. It begs for either a password manager or a sticky note stuck to the user’s monitor.

A passphrase, on the other hand, might be “ILovethePizzaatBobby’son34thSt.” That’s strong, long, and memorable. Moving to passphrases shouldn’t require much training or any additional hardware or software – it’s the kind of best practice that every organization should be able to get behind.
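
How do the two compare in raw entropy? For uniformly random choices, entropy is length × log₂(pool size). A quick back-of-the-envelope calculation – note that a 16-character random password actually carries *more* bits than a six-word Diceware-style passphrase; the passphrase's advantage is that its ~78 bits are memorable, so people actually use them:

```python
import math

def entropy_bits(pool_size, length):
    """Entropy of a uniformly random string: length * log2(pool size)."""
    return length * math.log2(pool_size)

# 16 random printable-ASCII characters (~95-symbol pool)...
password_bits = entropy_bits(95, 16)
# ...versus a passphrase of 6 words drawn from a 7,776-word Diceware list.
passphrase_bits = entropy_bits(7776, 6)

print(round(password_bits, 1))    # 105.1
print(round(passphrase_bits, 1))  # 77.5
```

Either figure is far beyond brute-force reach; a user-chosen phrase like the pizza example above has less entropy than a randomly drawn one, but still vastly more than the short passwords it replaces.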

Related Content:

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/6-active-directory-security-tips-for-your-poor-neglected-ad--/b/d-id/1336002?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Drupalgeddon2 Vulnerability Still Endangering CMSes

A new wave of attacks has been discovered on Drupal-based content management systems that weren’t patched for the older flaw.

A vulnerability that’s been patched is still a vulnerability if the patches haven’t been applied. And unpatched vulnerabilities are catnip to criminals. That’s the case with Drupalgeddon2 (CVE-2018-7600), a critical vulnerability in the Drupal CMS platform that was discovered and patched in 2018: new research shows it’s still being attacked and exploited today.

According to Larry Cashdollar, lead security researcher at Akamai, attackers are embedding obfuscated exploit code in .gif files. If executed, the code uses IRC (Internet Relay Chat) channels to contact a command and control server and then execute any of a variety of RAT, credential skimming, or DDoS payloads.
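
Code smuggled into an image file like this usually leaves telltale byte strings behind, which is one way scanners spot it. A minimal sketch of such a check – the markers and samples are illustrative, not Akamai's detection logic:

```python
SUSPICIOUS_MARKERS = [b"<?php", b"eval(", b"base64_decode"]

def scan_gif(data):
    """Flag a .gif that carries more than image data.

    A real GIF starts with a GIF87a/GIF89a header; code stitched in
    after the header (or in place of it) shows up as non-image bytes.
    """
    findings = []
    if data[:6] not in (b"GIF87a", b"GIF89a"):
        findings.append("missing GIF header")
    for marker in SUSPICIOUS_MARKERS:
        if marker in data:
            findings.append("contains %s" % marker.decode())
    return findings

# A benign header versus a hypothetical polyglot with PHP stitched in.
clean = b"GIF89a" + b"\x00" * 32
polyglot = b"GIF89a" + b'<?php eval(base64_decode("...")); ?>'
print(scan_gif(clean))     # []
print(scan_gif(polyglot))
```

Obfuscation can of course hide the obvious strings, which is why such signatures are one layer among many rather than a complete answer.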

The attacks, so far, do not seem to target any particular industry or market segment, instead probing a range of high-profile websites. “When the vulnerability’s exploitation is simple, which is the case with Drupalgeddon2, attackers will automate the process of scanning, exploitation, and infection when there are poorly maintained and forgotten systems,” Cashdollar wrote in a post about his findings. These systems are often forgotten or neglected but connected to critical systems that can then be attacked at the criminal’s leisure, he said.

For more, read here.



Article source: https://www.darkreading.com/attacks-breaches/drupalgeddon2-vulnerability-still-endangering-cmses/d/d-id/1336015?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Lack of Role Models, Burnout & Pay Disparity Hold Women Back

New ISACA data emphasizes a gap between men and women who share their opinions on underrepresentation of women and equal pay in the tech industry.

Female representation in technology is in a tough spot: More than half (56%) of women who participated in a new ISACA survey point to a lack of female role models as the primary reason for underrepresentation of women in tech jobs. At the same time, pay disparity, career growth, and other systemic issues keep women from staying in their jobs and moving up in the ranks.

The process of bringing more women into technology moves slowly, but it is happening. An (ISC)² study published earlier this year found women made up 24% of its cybersecurity respondents, up from 11% in 2017. Women proportionately fill more leadership roles and are higher-ranking: 11% of women report to the vice president of IT, compared with 6% of men. 

ISACA recently polled more than 3,500 IT governance, risk, assurance, and security professionals as part of its new report “Tech Workforce 2020: The Age and Gender Perception Gap.” Twenty-two percent of respondents are in a cybersecurity role. Most agree women are underrepresented in tech roles around the world; however, men and women differ on why.

“The women in security who were surveyed for this project said the top barrier to women entering the tech industry is that most information technology role models and leaders are male,” says Melody Balcet, director of The AES Corporation’s Global Cybersecurity Program and former president of the ISACA Greater Washington, DC, chapter.

There are several potential root causes for the imbalance of men and women in technology, says Balcet. Lack of female role models is a key issue that affects the current and future workforce — or so women say. Only 34% of men surveyed think lack of female role models is problematic. Nearly one-third of men claim women find employment in the tech field “less appealing than other sectors,” a statement the women “overwhelmingly” disagreed with. 

This isn’t the only area with a gender disparity: 65% of men say their employers have a program to promote hiring more women; only 51% of women say the same. More than 70% of men say their employers have a program to encourage promotion of women; 59% of women agree. Nearly half of women say their employers have no program to hire more women for tech roles.

These perceptions, combined with stress, pay disparity, and other factors, make it harder for women to build security and technology careers. Sixty-four percent of tech pros report burnout or stress in their roles due to heavy workloads (61%), long hours (50%), and lack of resources (48%). Women report this stress at a slightly higher rate of 67%, compared with men at 62%.

Who Gets the Promotion?
ISACA’s data also reflected disparities when considering salary negotiation and job promotions. Overall, men reported greater confidence in their understanding of how to advance their careers. Despite this, 74% of women claim to have been offered a salary increase or promotion in the past two years, compared with 64% of their male counterparts. ISACA points out that this could be attributed to organizations’ increased focus on addressing gender pay gaps.

(ISC)² found women still face an uphill battle when it comes to compensation. When asked about their salaries for the previous year, 17% of women reported earnings in the $50,000 to $99,999 range, a full 12 percentage points less than men (29%). The disparity is smaller with the next generation: Globally, younger women face a smaller pay discrepancy than older women.

Money isn’t the only factor in women’s career decisions. Balcet encourages businesses to create opportunities to maintain and advance skills and find ways to make projects meaningful. “Like any person, male or female, women need to see that there is a career path available to them,” she says. “If a company has one laid out, but there are other systemic issues that are preventing women from advancing — for example, a lack of inclusion — then women will leave.”

All this considered, women tend to stay in security roles longer than their male counterparts, Balcet says. In her experience, male colleagues have been more likely to reach out for exploratory conversations and “network with intent” to scope out potential job opportunities.

In terms of bringing more women into leadership roles, Balcet says more needs to be done to educate experts along the pipeline: Teachers, guidance counselors, career advisers, and human resources professionals can all play a role in bringing more women into technology roles. 

Job Hopping: A Next-Gen Problem
The reasons women in security are likely to change jobs are similar to the reasons of women in IT audit, risk, governance, and other roles, Balcet explains. Primary reasons include better career prospects, higher compensation, an upward career path not available at their current organization, more interesting work, and a better organizational culture.

Seventy percent of the technology workforce is considered “in-play,” meaning they “may” or “definitely” anticipate changing jobs within the next two years. This is especially an issue with younger employees, ISACA found. Nearly half of respondents under 30 have changed jobs within the last two years; almost 40% think they’ll change jobs or employers in the next two.

This is partly because younger workers are less likely to tolerate stress and burnout than their older counterparts. Data shows employees under 30 are more likely to leave a position if they find another job in a less stressful environment. ISACA advises learning why employees leave and why they stay, and offering opportunities for advancement and skill training to retain tech pros.

Related Content:


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/careers-and-people/lack-of-role-models-burnout-and-pay-disparity-hold-women-back-/d/d-id/1336016?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Wi-Fi signals let researchers ID people through walls from their gait

Yasamin Mostofi asks us to imagine this scenario: police have video footage of a robbery. They suspect that one of the robbers is hiding in a house nearby.

Can a pair of off-the-shelf Wi-Fi transceivers, located outside the house, look through the walls to see who’s inside?

That’s easy to answer, since we’ve seen it done before.

In 2015, MIT researchers created a device that uses the human body’s reflections of wireless transmissions to discern where you are and who you are – detecting gestures and body movements as subtle as the rise and fall of a person’s chest from the other side of a house, through a wall, even when subjects are invisible to the naked eye.

Then, 11 months ago, a team of researchers at University of California Santa Barbara demonstrated using a streamlined set of technologies – just a smartphone and some clever computation – how to see through walls and successfully track people in 11 real-world locations, with high accuracy.

But here’s a new question: Can Wi-Fi signals be used to identify the person in the house? Can off-the-shelf hardware determine if whoever’s in the house is one of the people in the video surveillance footage police are scrutinizing?

Yes. UC Santa Barbara researchers are back again to show that they’ve built on their previous work: It can be done by analyzing people’s walking gaits and comparing them to the gait of whoever’s in the CCTV footage.

Mostofi, a professor of electrical and computer engineering at UC Santa Barbara, is the research lead behind XModal-ID: a proposed approach to using Wi-Fi signals to identify people from their walking gait.

The methodology and experimental results from Mostofi’s team, which will be presented at the 25th International Conference on Mobile Computing and Networking (MobiCom) on 22 October, show that Wi-Fi signals can be used to detect the gait of people through walls and to then match it to previously captured video footage in order to identify individuals.

As she describes in a YouTube video, XModal-ID uses only the power of a pair of Wi-Fi transceivers located outside a building.

It needs neither prior Wi-Fi nor video training data of the person under surveillance, nor any knowledge of the area in which it’s going to be set up.

How it works

To identify individuals’ unique gaits, the researchers had to translate video into the wireless domain. To do so, they used what’s called human 3D mesh extraction. That’s an algorithm that extracts a 3D mesh that describes the outer surface of the human body as a function of time. Then, they added electromagnetic wave approximation to simulate the RF signal that would have been generated if the person was walking in a Wi-Fi area.

Next, they used a time-frequency processing pipeline to extract key gait features from both the “real” Wi-Fi signal – i.e., the one measured behind the wall – and the video-based one. They compared the two spectrograms, which carry the person’s gait information, extracting a set of 12 key features from each, and calculated the distance between the corresponding features to determine whether they matched.
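
The final matching step amounts to a nearest-neighbor comparison between feature vectors. A toy sketch of that step – the four-element vectors and values below are made up stand-ins for the paper's 12 spectrogram features, and the distance metric is illustrative:

```python
import math

def feature_distance(wifi_features, video_features):
    """Euclidean distance between two gait-feature vectors."""
    return math.sqrt(sum((w - v) ** 2 for w, v in zip(wifi_features, video_features)))

def best_match(wifi_features, candidates):
    """Pick the video candidate whose gait features are nearest."""
    return min(candidates, key=lambda name: feature_distance(wifi_features, candidates[name]))

# Features extracted from the through-wall Wi-Fi measurement...
measured = [0.8, 1.2, 0.5, 2.1]
# ...compared against features simulated from each person's CCTV footage.
footage = {
    "suspect_a": [0.9, 1.1, 0.6, 2.0],  # close to the measurement
    "suspect_b": [2.5, 0.2, 1.8, 0.4],
}
print(best_match(measured, footage))  # suspect_a
```

In practice a distance threshold is also needed, so the system can say "none of the above" rather than always naming the least-bad candidate.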

In experiments conducted on campus in three different areas, they hit accuracy rates of between 82% and 89%.

For more details, check out the team’s project summary page, or you can read their paper.

Applications: surveillance, security and smart homes

The team suggests that there are several potential applications for XModal-ID:

  • Security and Surveillance. The technology could be used by police searching for a suspect captured in video footage of a crime scene. They would position a pair of Wi-Fi transceivers outside a suspected hide-out and use XModal-ID to determine whether the person they’re observing matches the one in the crime video. They could also use the existing Wi-Fi infrastructure of public places to detect the presence of the suspect, the researchers said… all of which would raise questions about warrantless search, I imagine…?
  • Personalized Services. The researchers suggest that XModal-ID could also come in handy in a smart home, where the home Wi-Fi network could use XModal-ID and one-time video samples of each resident to identify who just walked into a room. Presto, switch on that person’s preferred music, adjust the temperature to suit, and set the lighting to his or her preferences.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uthcWVmlUvo/

Social media platforms can be forced to delete illegal content worldwide

Individual countries can order Facebook and similar content providers to take down posts, photos and videos worldwide, not just in their own countries, Europe’s top court said on Thursday.

Facebook can’t challenge this decision, which extends the EU’s internet-related laws beyond its own borders.

In Thursday’s decision, the EU Court of Justice said that platforms can be ordered to remove not just a copy of illegal content that somebody’s complained about. They can also be ordered to proactively seek out all identical copies of the content and scrub them too, rather than sitting back and waiting for every instance to be reported.

What it means: copies of defamatory or other illegal content that’s posted to secret places – private groups on Facebook, for example – can’t hide away from the scrub brush.

The ruling stemmed from a case filed in 2016. It involved a comment made on Facebook about an Austrian politician – Eva Glawischnig-Piesczek, former leader of the Austrian Green Party – that an Austrian court decreed was insulting and defamatory. As the New York Times reports, she sued the social network to expunge online comments that called her a “lousy traitor,” “corrupt oaf” and member of a “fascist party.”

Facebook initially refused to take down the post, so Glawischnig-Piesczek sued the company in the Austrian courts. After those courts concluded that the comments were defamatory and damaging to her reputation, she demanded that Facebook erase the original comments worldwide – not just within Austria – as well as posts with “equivalent” remarks.

The case made its way up to the top EU court, the European Court of Justice.

The Austrian supreme court asked the European Court of Justice to weigh in on the case – specifically, to interpret a directive on electronic commerce. Existing law held that a host provider such as Facebook isn’t liable for user-posted content that the platform doesn’t know is illegal, or if it acts expeditiously to remove or to disable access to the content as soon as it’s made aware of it.

Up until Thursday’s ruling, the directive hadn’t required host providers to actively seek out illegal content.

In Thursday’s decision, the court said that Facebook isn’t liable for the insulting comments posted about Glawischnig-Piesczek, but the platform still had an obligation to take them down after the Austrian court found them to be defamatory. Facebook, the court said, “did not act expeditiously to remove or to disable access to that information.”

It’s now up to the courts in EU nations to decide which cases merit forcing an internet company to take down content in foreign countries. The ruling also raises questions: for example, besides defamation laws, with what other laws can EU countries force Facebook and other internet companies to comply?

A challenge to freedom of expression?

Facebook said in a statement that the decision “undermines the longstanding principle that one country does not have the right to impose its laws on speech on another country.”

It also said that the ruling raised questions about freedom of expression and “the role that internet companies should play in monitoring, interpreting and removing speech that might be illegal in any particular country.”

The decision is just the latest example of the nuances that EU courts are working to clarify when it comes to regulating the internet and deciding how far their laws can reach – in-country only, or worldwide. At the end of September, Google won a landmark case in which the European Court of Justice ruled that EU citizens’ Right to be Forgotten (RTBF) is EU-only.

The court said that Google couldn’t be ordered to remove links to websites globally, except in certain circumstances when weighed against the rights to free expression and the public’s right to information.

Thursday’s decision that Facebook and similar online content hubs can be ordered to stamp out identical content copies globally likely won’t lead to a flood of orders telling Facebook to do so, according to David Erdos, deputy director of the Centre for Intellectual Property and Information Law at Cambridge University. He told the Times that the opinion had been narrowly crafted, and that EU countries’ courts should carefully weigh any bans against international laws:

Courts will be feeling their way for years to come.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oaBb-PIYY-o/

Facebook urged by governments to halt end-to-end encryption plans

Tensions between Facebook and three governments escalated last week after the US, the UK, and Australia officially urged Facebook to halt its plans for end-to-end encryption.

The row concerned Facebook CEO Mark Zuckerberg’s publication of a privacy manifesto in March this year, in which he promised to extend the company’s end-to-end encryption work and introduce the technology into its core Facebook Messenger product.

A thorn in their sides

An online messaging service can handle encryption keys in two ways. It can store the key on the provider’s own servers, which means law enforcement can subpoena the provider and unlock your messages. Alternatively, with end-to-end encryption, the key for a messaging session exists only on the participating devices, so the company has nothing to hand over to the authorities – even if law enforcement intercepts a person’s messages in transit, it can’t read the contents.
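The second model can be illustrated with a toy Diffie-Hellman key exchange, the kind of key agreement that underpins end-to-end encryption: the two endpoints derive the same secret key, while anyone relaying the traffic – including the service provider – sees only the public values. This is a deliberately tiny sketch with an insecure toy prime; real protocols such as Signal’s use elliptic-curve exchanges and a great deal more machinery besides.

```python
import secrets

# Toy Diffie-Hellman parameters. This prime is far too small to be
# secure -- real systems use 2048-bit groups or elliptic curves.
P = 0xFFFFFFFB  # largest prime below 2**32 (toy modulus)
G = 5           # generator (toy value)

def keypair():
    """Each endpoint generates a private exponent and a public value."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

def shared_secret(my_private, their_public):
    """Both ends compute the same secret from the other side's
    public value; the relaying server never learns it."""
    return pow(their_public, my_private, P)

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Only alice_pub and bob_pub cross the network (and the provider's servers).
k_alice = shared_secret(alice_priv, bob_pub)
k_bob = shared_secret(bob_priv, alice_pub)
assert k_alice == k_bob  # both endpoints hold the key; the provider does not
```

Because the private exponents never leave the endpoints, a subpoena served on the provider yields only public values, which is precisely what the open letter’s signatories object to.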

End-to-end encryption is a thorn in the side of governments who want to track criminals. On Friday, US Attorney General William Barr published an open letter to Zuckerberg, cosigned by UK Home Secretary Priti Patel, acting United States Secretary of Homeland Security Kevin McAleenan, and Australian Home Affairs Minister Peter Dutton. It laid out its demands clearly in the first paragraph:

We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens.

Lawful access

The letter called upon Facebook to embed the ability to see message content in the design of its systems, and to give law enforcement lawful access (meaning access to message content on production of a warrant). The company should consult with governments when taking these measures, and avoid going forward with its proposed changes until it can be sure that it is following these principles, the letter warned.

The signatories also warned that Facebook’s proposed encrypted messaging system would be especially vulnerable to abuse:

Risks to public safety from Facebook’s proposals are exacerbated in the context of a single platform that would combine inaccessible messaging services with open profiles, providing unique routes for prospective offenders to identify and groom our children.

The Department of Justice published the letter just as it signed a landmark agreement with the UK government under the US Clarifying Lawful Overseas Use of Data (CLOUD) Act. The legislation, enacted in March 2018, lets the US demand data from technology companies storing that data on foreign soil. It’s a response to the 2013 United States v. Microsoft case, in which Microsoft refused to give the Feds access to data stored on an Irish server.

The agreement signed between the US and the UK this week is the first under a provision in the CLOUD Act enabling other countries to demand US-based data from tech companies, leapfrogging time-consuming US legal processes. However, this agreement still doesn’t let countries get at end-to-end encrypted messages, hence the open letter.

Tradeoff between privacy and preventing harm

Tech companies such as Telegram and Facebook-owned WhatsApp support end-to-end encryption and include it as a feature in their products. In his March privacy post, Zuckerberg defended the technology, arguing that it protected dissidents in oppressive regimes. Instead of reading message content, he said Facebook looked for patterns of activity that might point to bad actors.

However, he admitted a tradeoff between privacy and preventing online harm, adding:

… we will never find all of the potential harm we do today when our security systems can see the messages themselves.

This tradeoff – and how much each side is willing to give – is the basis of a long-running crypto-war between governments and privacy advocates, dating back to the formation of the cypherpunk movement in the mid-eighties. On one side, many privacy advocates refuse to give up basic privacy rights. On the other, law enforcement officials point to terrorists and child abusers using encryption to “go dark” as evidence of the need for lawful access.

Barr has followed governments in the UK and Australia in backing technical measures to try and find a compromise. At the FBI’s International Conference on Cybersecurity in July, he said:

It is well past time for some in the tech community to abandon the indefensible posture that a technical solution is not worth exploring and instead turn their considerable talent and ingenuity to developing products that will reconcile good cybersecurity to the imperative of public safety and national security.

The standoff is likely to continue. Facebook reportedly responded to Barr’s open letter on Friday with its own statement:

We believe people have the right to have a private conversation online, wherever they are in the world. As the US and UK governments acknowledge, the CLOUD Act allows for companies to provide available information when they receive valid legal requests and does not require companies to build back doors.

We respect and support the role law enforcement has in keeping people safe. Ahead of our plans to bring more security and privacy to our messaging apps, we are consulting closely with child safety experts, governments and technology companies and devoting new teams and sophisticated technology so we can use all the information available to us to help keep people safe.

End-to-end encryption already protects the messages of over a billion people every day. It is increasingly used across the communications industry and in many other important sectors of the economy. We strongly oppose government attempts to build backdoors because they would undermine the privacy and security of people everywhere.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/K9Rl32jdNKE/

Android devices hit by zero-day exploit Google thought it had patched

Google has admitted that some Android smartphones have recently become vulnerable to a serious zero-day exploit that the company thought it had patched for good almost two years ago.

The issue came to light recently when Google’s Threat Analysis Group (TAG) got wind that an exploit for an unknown flaw, attributed to the Israeli NSO Group, was being used in real-world attacks.

Digging deeper into the exploit’s behaviour, Project Zero researcher Maddie Stone said she was able to connect it to a flaw in Android kernel versions 3.18, 4.4, 4.9, and 4.14 that was fixed in December 2017 without a CVE being assigned.

Somehow, that good work was undone in some later models – or never applied in the first place – leaving a list of vulnerable smartphones running Android 8.x, 9.x and the preview version of 10.

The flaw is now identified as CVE-2019-2215 and described as a:

Kernel privilege escalation using a use-after-free vulnerability, accessible from inside the Chrome sandbox.

The result? Full compromise of unpatched devices. The exploit requires a malicious app installed on the device; chained with one or more other exploits, it could probably also be served from a malicious website without the need for user interaction.

Affected models:

  • Google – Pixel 1, Pixel 1 XL, Pixel 2, Pixel 2 XL
  • Samsung – S7, S8, S9
  • Xiaomi – Redmi 5A, Redmi Note 5, A1
  • Huawei – P20
  • Oppo – A3
  • Motorola – Moto Z3
  • LG – phones running Android Oreo

This official list is probably not exhaustive, so just because your phone isn’t on the list doesn’t mean it isn’t vulnerable. However, Google has confirmed that the Pixel 3 and Pixel 3a are not affected. Google added:

We have evidence that this bug is being used in the wild. Therefore, this bug is subject to a 7 day disclosure deadline. After 7 days elapse or a patch has been made broadly available (whichever is earlier), the bug report will become visible to the public.

For most users, a fix will ship with the October Android security update next week after phone makers have checked it works on their devices.

The unusual element of this story is the alleged involvement of the NSO Group, a commercial organisation connected to an attack in May affecting Facebook’s WhatsApp.

Many of the attacks attributed to NSO involve campaigns against Non-Governmental Organisations (NGOs), using a spyware tool called Pegasus that is popular with nation-state intelligence services.

NSO has, of course, claimed that its tool is used legitimately, although how it can be certain the tool hasn’t fallen into the wrong hands has never been made clear.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9TX4gOYOCsQ/