Facebook and Twitter may be forced to identify bots

Twitter and Facebook are all too aware that they’ve been infiltrated by Russia-backed bots.

Twitter, for its part, has purged tens of thousands of accounts associated with Russia’s meddling in the 2016 US presidential election. The company also said it would email notifications to hundreds of thousands of US users who followed any of the accounts created by the Russian government-linked propaganda factory known as the Internet Research Agency (IRA), and has said that it’s trying to get better at detecting and blocking suspicious accounts. (As of January, it said it was detecting and blocking approximately 523,000 suspicious logins a day that appeared to be automatically generated.)

That’s not good enough, according to California lawmakers. They’ve introduced a bill that would give online platforms such as Facebook and Twitter three days to investigate whether a given account is a bot, to disclose that it’s a bot if it is in fact auto-generated, or to remove the bot outright.

The bill would make it illegal for anyone to use an automated account to mislead the citizens of California or to interact with them without disclosing that they’re dealing with a bot. Once somebody reports an illegally undisclosed bot, the clock would start ticking for the social media platform on which it’s found. The platforms would also be required to submit a bimonthly report to the state’s Attorney General detailing bot activity and what corrective actions were taken.

According to Bloomberg, the legislation is slated to run through a pair of California committees later this month.

Bloomberg quoted Shum Preston, the national director of advocacy and communications at Common Sense Media and a major supporter of the bill. Preston said that California’s on a bit of a guilt trip, given how the social media platforms that have been used as springboards to stir up political and social unrest are parked in its front yard:

California feels a bit guilty about how our hometown companies have had a negative impact on society as a whole. We are looking to regulate in the absence of the federal government. We don’t think anything is coming from Washington.

New York is also tired of waiting for the Feds to push social media companies into fixing the bot problem. Governor Andrew Cuomo is backing a bill that would require transparency on who pays for political ads on social media.

Proposed legislation at the Federal level includes the bipartisan-supported Honest Ads Act, a proposal to regulate online political ads the same way as television, radio and print, with disclaimers from sponsors.

California’s proposed bill reaches further back, to the processes that disseminate the content in the first place, but the online platforms say it can be tough to tell human accounts from bot accounts run by ever more sophisticated technologies.

But there are signs to look out for. Twitter has said it’s developed techniques for identifying malicious automation, such as near-instantaneous replies to tweets, non-random tweet timing, and coordinated engagement. It’s also improved the phone verification process and introduced new challenges, including reCAPTCHAs, to validate that a human is in control of an account.
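
As a rough illustration of how timing signals like those can be turned into a check, here’s a minimal Python sketch. It is not Twitter’s actual detection logic; the thresholds and the input data (a list of reply delays you would have collected yourself, for instance via the Twitter API) are assumptions for the sake of the example:

    from statistics import mean, pstdev

    def looks_automated(reply_delays_seconds, fast_threshold=2.0, regular_threshold=1.5):
        """Flag accounts whose replies are near-instant or suspiciously evenly spaced."""
        if len(reply_delays_seconds) < 5:
            return False                           # too little evidence either way
        avg = mean(reply_delays_seconds)           # near-instantaneous replies
        spread = pstdev(reply_delays_seconds)      # non-random (too regular) timing
        return avg < fast_threshold or spread < regular_threshold

    print(looks_automated([0.8, 1.1, 0.9, 1.0, 0.7]))   # True: replies within about a second
    print(looks_automated([40, 300, 12, 900, 75, 31]))  # False: human-looking variation

Real detection systems combine many weak signals like these, plus the phone and reCAPTCHA challenges mentioned above, rather than relying on any single cutoff.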

In January, Twitter said that its other plans for 2018 included:

  • Investing further in machine-learning capabilities that help detect and mitigate the effect on users of fake, coordinated, and automated account activity.
  • Limiting the ability of users to perform coordinated actions across multiple accounts in TweetDeck and via the Twitter API.
  • Continuing the expansion of its developer onboarding process to better manage the use cases for developers building on Twitter’s API. This, Twitter said, will help improve how it enforces policies on restricted uses of developer products, including rules on the appropriate use of bots and automation.

Researchers have also been working to come up with a set of tell-tale signs that indicate when non-humans are posting. A 2017 study estimated that as many as 15% of Twitter accounts are bots.

That paper, from researchers at Indiana University and the University of Southern California, also outlines a proposed framework to detect bot-like behavior with the help of machine learning. The data and metadata they took into consideration included social media users’ friends, tweet content and sentiment, and network patterns. One behavioral characteristic they noticed, for example, was that humans tend to interact more with human-like accounts than they do with bot-like ones, on average. Humans also tend to friend each other at a higher rate than bot accounts.
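
To give a flavour of that approach, here is a hedged sketch that trains an off-the-shelf classifier (scikit-learn’s random forest) on a handful of per-account features. The feature set and the tiny labelled sample are invented for illustration; the researchers’ framework draws on a far richer set of features covering friends, content, sentiment, and network patterns, but the training loop is conceptually the same:

    from sklearn.ensemble import RandomForestClassifier

    # Features per account: [followers-to-friends ratio, tweets per day,
    #                        share of interactions with bot-like accounts, account age in days]
    X_train = [
        [0.90,  12.0, 0.10, 2100],   # labelled human
        [1.20,   8.0, 0.05, 3300],   # labelled human
        [0.05, 180.0, 0.70,   40],   # labelled bot
        [0.02, 240.0, 0.85,   15],   # labelled bot
    ]
    y_train = [0, 0, 1, 1]           # 0 = human, 1 = bot

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    suspect = [[0.03, 200.0, 0.80, 25]]              # features of an unlabelled account
    print("estimated bot probability:", clf.predict_proba(suspect)[0][1])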

Mind you, not all bots are bad. Take Emoji Aquarium: it’s a bot that shows you a tiny aquarium “full of interesting fishies” every few hours.

Good bots are also useful: they help keep weather, sports, and other news updated in real-time, and they can help find the best price on a product or track down stolen content.

And then too, there’s Bot Hertzberg: the bot created by California Senator Bob Hertzberg to highlight the issue. Hertzberg introduced the pending California bot bill.

Here’s what human Senator Hertzberg, as quoted by Bloomberg, said about his bill:

We need to know if we are having debates with real people or if we’re being manipulated. Right now, we have no law, and it’s just the Wild West.

And here’s what his bot says in its bio:

I am a bot. Automated accounts like mine are made to misinform and exploit users. But unlike most bots, I’m transparent about being a bot! #SB1001 #BotHertzberg


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/f4OGvuQeQ6s/

US spanks EU businesses in race to detect p0wned servers

European organisations are taking longer to detect breaches than their counterparts in North America, according to a study by FireEye.

Organisations in EMEA are taking almost six months (175 days) to detect an intruder in their networks, which is rather more than the 102 days that the firm found when asking the same questions last year. In contrast, the median dwell time in the Americas has improved to 76 days in 2017, down from 99 in 2016. Globally it stands at 101 days.

The findings about European breach detection are a particular concern because of the looming GDPR deadline, which will introduce tougher breach disclosure rules for organisations that hold European citizens’ data. GDPR can also mean fines of up to €20 million, or four per cent of global annual turnover, whichever is higher.

FireEye’s report also records a growing trend of repeat attacks by hackers looking for a second bite of the cherry. A majority (56 per cent) of global organisations that received incident response support were targeted again by the same or a similarly motivated attack group, FireEye reports.

FireEye has historically blamed China for many of the breaches its incident response teams detected. But as the geo-political landscape has changed, Russia and North Korea are getting more and more “credit” for alleged cyber-nasties.

But a different country – Iran – features prominently in attacks tracked by FireEye last year. Throughout 2017, Iran grew more capable from an offensive perspective. FireEye said that it “observed a significant increase in the number of cyber-attacks originating from Iran-sponsored threat actors”.

FireEye’s latest annual M-Trends report (pdf) is based on information gathered during investigations conducted by its security analysts in 2017 and uncovers emerging trends and tactics that threat actors used to compromise organisations. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/05/fireeye_breach_report/

Brain monitor had remote code execution and DoS flaw

Cisco’s Talos security limb has warned that specialist medical hardware has remote code execution and denial of service bugs.

Talos researchers say Natus Xltek EEG medical products are susceptible to “A specially crafted network packet” that “can cause a stack buffer overflow resulting in code execution.”

Which is rather scary because the Xltek EEG range includes the Xltek EEG32U Electroencephalography (EEG) recorder and the Xltek Brain Monitor.

As Talos explains, the vulnerabilities create two risks. One is that bad code running on the devices could see someone mess with the data they produce, which is heavily sub-optimal as they’re designed as diagnostic tools. The other is that hacking a brain monitor or EEG device offers a route into other parts of a healthcare facility. Which is also bad because they’re chock full of confidential records.

The good news is that Talos diagnosed the problems and Natus inoculated its kit against the threats. So if Natus users have done their patching, this should be no more serious than a dose of man-flu.

As messes like the Equifax horror demonstrate, it’s best not to assume patches have been done properly. So if your next brain scan produces some bad vibrations, please tell your doctor it’s not a sign of a sick mind, you’re just worried about proper patching! ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/05/netus_eeg_vulnerabilities/

They forked this one up: Microsoft modifies open-source code, blows hole in Windows Defender

A remote-code execution vulnerability in Windows Defender – a flaw that can be exploited by malicious .rar files to run malware on PCs – has been traced back to an open-source archiving tool Microsoft adopted for its own use.

The bug, CVE-2018-0986, was patched on Tuesday in the latest version of the Microsoft Malware Protection Engine (1.1.14700.5) in Windows Defender, Security Essentials, Exchange Server, Forefront Endpoint Protection, and Intune Endpoint Protection. The update should be installed promptly if it hasn’t already been applied automatically to your device.

The vulnerability can be leveraged by an attacker to achieve remote code execution on a victim’s machine simply by getting the mark to download a specially crafted .rar file while the anti-malware engine’s scanning feature is on – in many cases this is set to happen automatically.

When the malware engine scans the malicious archive, it triggers a memory corruption bug that leads to the execution of evil code smuggled within the file with powerful LocalSystem rights, granting total control over the computer.

The screwup was discovered and reported to Microsoft by legendary security researcher Halvar Flake, now working for Google. Flake was able to trace the vulnerability back to an older version of unrar, an open-source archiving utility used to unpack .rar archives.

Apparently, Microsoft forked that version of unrar and incorporated the component into its operating system’s antivirus engine. That forked code was then modified so that all signed integer variables were converted to unsigned variables, causing knock-on problems with mathematical comparisons. This in turn left the software vulnerable to memory corruption errors, which can crash the antivirus package or allow malicious code to potentially execute.
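
The failure mode is easier to see with a toy example. The Python sketch below is my own illustration, not Microsoft’s or unrar’s code; it uses ctypes to mimic 32-bit C integer arithmetic and shows how a sanity check that relies on a value going negative becomes dead code once the variable is made unsigned:

    import ctypes

    def remaining_signed(total, consumed):
        # Original-style logic: the result is a signed 32-bit value, so if a
        # crafted archive makes 'consumed' exceed 'total', the result goes
        # negative and a later "if remaining < 0: reject" check catches it.
        return ctypes.c_int32(total - consumed).value

    def remaining_unsigned(total, consumed):
        # Forked-style logic: the same subtraction in an unsigned 32-bit
        # variable wraps around to a huge positive number, the "< 0" check can
        # never fire, and a later copy of 'remaining' bytes runs far past the
        # end of the destination buffer.
        return ctypes.c_uint32(total - consumed).value

    print(remaining_signed(100, 116))    # -16         -> rejected by the sanity check
    print(remaining_unsigned(100, 116))  # 4294967280  -> sails straight past it

In the real engine the equivalent mistake sits in native code, so the wrapped value feeds a copy operation rather than a print statement – hence the memory corruption.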

In other words, Redmond pulled a fork-and-bork.

Among those marveling at the bug was Flake’s fellow Google researcher Tavis Ormandy.

Needless to say, users and admins should be looking to update their copy of Windows Defender and the Malware Protection Engine as soon as possible. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/04/microsoft_windows_defender_rar_bug/

Microsoft Patches Critical Flaw in Malware Protection Engine

The emergency update addressed CVE-2018-0986, which would let an attacker execute malicious code on a Windows machine.

Microsoft has issued an emergency patch for CVE-2018-0986, a remote code execution vulnerability in the Microsoft Malware Protection Engine (MMPE). Security researcher Thomas Dullien, with Google’s Project Zero, is credited with finding the bug, Microsoft reports.

MMPE, or mpengine.dll, provides scanning, detection, and cleaning capabilities for Microsoft’s antivirus and antispyware software. Microsoft typically issues MMPE updates once a month, or as needed, to protect against new threats.

This critical vulnerability exists when MMPE doesn’t properly scan a specially crafted file, which leads to memory corruption. If successfully exploited, this could let an attacker execute malicious code on a target machine; take control of the system and install programs; view, change, or delete data; or create new accounts with full user rights.

An affected version of MMPE needs to scan a specially crafted file in order for the bug to be exploited. There are a few ways an attacker could make this happen, Microsoft explains in a security advisory, and he or she doesn’t need to be technically advanced to do it.

One way is to conceal the files on a website the victim visits. Another is to send the file via email or instant messenger. Alternatively, an attacker could abuse a website that hosts user-provided content by uploading the specially crafted file to a shared location.

MMPE will automatically scan files if the user’s anti-malware software has real-time protection enabled, so the vulnerability can be exploited without the user doing anything. If real-time scanning is not enabled, an attacker would have to wait for a scheduled scan in order to exploit.

“All systems running an affected version of antimalware software are primarily at risk,” Microsoft says. The fix, delivered in MMPE version 1.1.14700.5, adjusts how the engine scans specially crafted files. In addition to the changes for this particular flaw, the patch also includes defense-in-depth updates “to help improve security-related features,” Microsoft says.

Administrators and users don’t need to take action to install MMPE updates because they’re automatically applied within 48 hours of the patch’s release. The exact time of deployment will depend on your software, Internet connection, and infrastructure configuration.

Article source: https://www.darkreading.com/vulnerabilities---threats/microsoft-patches-critical-flaw-in-malware-protection-engine/d/d-id/1331453?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Report: White House Email Domains Poorly Protected from Fraud

Only one Executive Office of the President email domain has fully implemented DMARC, according to a new report.

If you want to stop email-based phishing, the Domain-based Message Authentication, Reporting and Conformance (DMARC) protocol is a recognized tool for the job. According to DMARC.org, it’s a tool being used by nearly 200,000 organizations to secure their email. But according to a report from the Global Cyber Alliance, it’s a tool that’s not being used very effectively by the White House.

The Alliance surveyed the domains under the control of the Executive Office of the President (EOP) and found that only one – Max.gov – has implemented the protocol at the highest level, which protects most completely against delivery of spoofed email. Seven other domains, including whitehouse.gov and eop.gov, have implemented the protocol at the lowest level, which provides monitoring only.

The other 18 domains under the office’s control have not implemented any level of DMARC at all. This could be important for those in government and the general public because these government domains are frequent choices for spoofed addresses in phishing campaigns.

Last year, the US Department of Homeland Security mandated that all federal agencies implement DMARC. The Global Cyber Alliance report indicates that not all agencies have embraced the mandate. The private sector has not fully embraced DMARC, either: A recent survey by Agari Data shows that only 8% of businesses have implemented the protocol.
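
Checking which level a domain has implemented is straightforward, because the policy is published as a DNS TXT record at _dmarc.<domain>: a p=none policy corresponds to monitoring only, while p=quarantine or p=reject actually affects delivery of spoofed mail. Here’s a minimal sketch using the third-party dnspython package; it’s an illustration of the lookup, not the Global Cyber Alliance’s survey tooling:

    import dns.resolver   # third-party 'dnspython' package

    def dmarc_policy(domain):
        """Return the published DMARC p= policy for a domain, or None if absent."""
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        for rdata in answers:
            record = b"".join(rdata.strings).decode()
            if record.lower().startswith("v=dmarc1"):
                for tag in record.split(";"):
                    tag = tag.strip().lower()
                    if tag.startswith("p="):
                        return tag[2:]    # 'none', 'quarantine' or 'reject'
        return None

    for d in ("whitehouse.gov", "max.gov", "eop.gov"):
        print(d, "->", dmarc_policy(d) or "no DMARC record published")

At the time of the report, only Max.gov would have come back with an enforcement-level policy.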

Article source: https://www.darkreading.com/endpoint/authentication/report-white-house-email-domains-poorly-protected-from-fraud/d/d-id/1331454?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Gamers Could Save the Cybersecurity Skills Gap

McAfee shares its firsthand experience on training in-house cybersecurity pros and publishes new data on how other organizations deal with filling security jobs.

Grant Bourzikas, McAfee’s chief information security officer (CISO), swears by gamification as one of the key ways to invest in and retain security talent. It’s a strategy his own company has adopted in building out its security operations center in the wake of its spin-off from Intel, and new data from a study by Vanson Bourne on behalf of McAfee found that nearly three-fourths of organizations believe hiring experienced video gamers is a solid option for filling cybersecurity skills and jobs in their organizations.

Since much of the challenge of staffing a stable and successful security operations center (SOC) is retaining talent, the happier and more skilled the staffers, the better they operate and the longer they stay, according to the study, which polled 950 cybersecurity managers and professionals in organizations with 500 or more employees in the US, UK, Germany, France, Singapore, Australia, and Japan.

Some 54% of security pros who say they are “extremely” satisfied in their jobs engage in capture-the-flag games one or more times a year; 14% of pros who are unhappy in their jobs participate in those exercises.

Bourzikas says McAfee hosts tabletop exercises for its staff every two weeks, as well as monthly red team exercises. “Gamification, I think, is about how I get people to think about the bigger picture” of their day-to-day security tasks, he says. “People that are new to cybersecurity want to focus on the shiny new threats and attacks and attack vectors. Most don’t like [just] doing the basic operations stuff.”

Gaming exercises help security pros improve and hone their skills, he says, and McAfee offers them to all levels of SOC staffers, for instance. “It gets them to think differently about the problem,” he says. “On the gamer side, they can learn from their mistakes, how to beat [their] opponent.”

As part of McAfee’s tabletop exercises, the participants learn to understand the type of a breach and what to do when it hits, for example. “It’s a way to think about present conditions and coming up with new ways” to add to the playbook, he says. “How do we understand and challenge the assumptions we have today?”

Some 52% of the organizations in the survey say they experience turnover of their full staff on a yearly basis. Nearly 85% find it difficult to get the talent they need, yet 31% say they don’t actively work to attract new blood.

“My view is that it’s more of a skills shortage than a people shortage,” Bourzikas says. “It’s critical to have a talent program for attracting, retaining, and developing” people, he says. “How do you give people who come in a career path where they feel rewarded and feel they are compensated and taken care of?”

In McAfee’s new study, close to 90% of security pros said they would consider leaving their jobs and going elsewhere with the right incentives, while 35% say they are “extremely satisfied” and staying put.

According to Dark Reading’s “Surviving the IT Security Skills Shortage” survey last year, more than half of organizations claim to have some highly skilled staffers but also have some who “need a lot more training.” Fewer than one in four say their teams are well trained and up to date on the latest technologies and threats, according to the report.

Automation

Automating mundane SOC and other security tasks is the Holy Grail, of course. More than 80% say automation would make security defenses work better. Bourzikas points to the promise of machine learning, neural networks, artificial intelligence, and human-machine teaming as the key to happier security pros and more-secure organizations. “If we can automate those mundane tasks we face, then we can focus on the rest of it,” he says.

Bill Woods, director of information security for McAfee’s converged physical and cybersecurity operations, says there’s still no such thing as a perfectly secure system.

“You have to accept the fact that you are never going to have impenetrable systems. It’s always going to be a game of chess. The opposer is always going to be making moves, some of which will hurt you,” he says. “It’s always going to be a battle. But that is what keeps the job interesting.”

Article source: https://www.darkreading.com/attacks-breaches/how-gamers-could-save-the-cybersecurity-skills-gap/d/d-id/1331455?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Misconfigured Clouds Compromise 424% More Records in 2017

Cybercriminals are increasingly aware of misconfigured systems and they’re taking advantage, report IBM X-Force researchers.

Insider mistakes like networked backup incidents and misconfigured cloud servers caused nearly 70% of all compromised records in 2017, according to new data from IBM X-Force. These types of incidents affected 424% more records last year than the year prior, they report.

It wasn’t all bad news from the IBM X-Force Threat Intelligence Index, which pulls insights from data gathered from millions of endpoints around the world. Researchers found 2.9 billion records were reported breached, nearly 25% less than the 4 billion reported in 2016. Frequently targeted industries saw a decline in attacks (18%) and security incidents (22%) since 2016, a drop that can be primarily attributed to a decline in Shellshock attacks throughout 2017.

Hackers aren’t slowing down but they are changing their strategies, researchers say, swapping data breaches for ransomware. Instead of compromising large amounts of data, they decided it was more lucrative to lock down data access and demand ransom in return.

“Attackers are pretty much following the money,” says Paul Griswold, director of strategy and product management at IBM X-Force. The shift to ransomware “wasn’t super surprising,” he says, since ransomware can be more profitable than stealing data. This idea extends to attacks like WannaCry and NotPetya, where the goal was seemingly destruction, not financial gain.

“Chances are, those guys were being paid by somebody,” says Griswold of these attacks. While they didn’t profit from the ransomware directly, he suspects the threat actors didn’t launch global ransomware campaigns “just for fun.” They still earned money for the attacks.

The most common class of attack vector from 2016 to 2017 was injection attacks, which accounted for 79% of malicious activity on enterprise networks – nearly double the share seen the year before. Researchers say injection attacks increased because both botnet-based command injection attacks and local file inclusion attacks made use of embedded coin-mining tools.

Still Foggy on Cloud Configuration

Businesses struggle to properly configure cloud servers, and cybercriminals know it. Inadvertent mistakes are costing companies big-time as attackers discover and target misconfigured cloud environments, IBM researchers report, and poorly configured systems were responsible for exposing more than 2 billion records that X-Force tracked in 2017.

Cloud misconfigurations fall into three categories: misconfigured cloud databases (566.4M breached records), publicly accessible cloud storage (345.8M), and improperly secured rsync backups or open Internet-connected network-attached storage devices (393.4M).

“I think this just goes to show the inexperience in doing that,” says Griswold of moving to the cloud. “Chances are with on-prem, people understand how the data is stored and how the server is configured because they’re the ones who did it … with cloud, it’s a little bit different.”

Several teams, DevOps and operations for example, put pressure on businesses to move to the cloud. “There’s a whole bunch of desire to move things up to the cloud, and that’s where things might be rushed,” he says. “It’s a learning curve, definitely.”

Companies can better secure their cloud environments by involving the security teams as they move workloads to the cloud; it can’t be limited to dev and IT. Because misconfigurations are often easy to detect, it helps to regularly conduct pentests and app code scans.
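
For instance, one common class of misconfiguration – publicly readable storage buckets – can be flagged with a few lines of code. The sketch below is illustrative rather than a complete audit: it uses boto3, assumes AWS credentials are already configured, and simply reports S3 buckets whose ACLs grant access to the AllUsers group:

    import boto3   # assumes AWS credentials are already configured

    ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

    def publicly_readable_buckets():
        """Return (bucket, permission) pairs whose ACLs grant access to everyone."""
        s3 = boto3.client("s3")
        flagged = []
        for bucket in s3.list_buckets()["Buckets"]:
            acl = s3.get_bucket_acl(Bucket=bucket["Name"])
            for grant in acl["Grants"]:
                if grant.get("Grantee", {}).get("URI") == ALL_USERS:
                    flagged.append((bucket["Name"], grant["Permission"]))
        return flagged

    for name, permission in publicly_readable_buckets():
        print(f"PUBLIC: {name} grants {permission} to AllUsers")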

Low Grades for Incident Response

“When organizations got breached, we found a lot of times the response plans just weren’t in place,” says Griswold, explaining how the rise in ransomware highlighted companies’ inability to cope with attacks.

An IBM Security study conducted last year found slow response times lead to more expensive attacks. Incidents that took longer than 30 days to contain cost $1M more than those contained in less than 30 days, an added incentive for businesses to shape their response strategies.

Many companies don’t have any sort of incident response plan at all, and many of those who do have outdated plans and/or don’t know how to execute on them. “Just because you have a plan in place doesn’t mean you’re going to know the ins and outs of it,” says Griswold.

Researchers anticipate that destructive ransomworms will continue to spread in 2018, along with widespread vulnerabilities and sophisticated exploits targeting the public and private sectors. As they build incident response plans, Griswold urges businesses to ensure both technical controls and PR processes are in place, and to have both PR and law firms on retainer.

“You need to think about those legal aspects,” he cautions.  

Article source: https://www.darkreading.com/cloud/misconfigured-clouds-compromise-424--more-records-in-2017/d/d-id/1331457?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Grindr was sharing HIV status of users, but now it’s not

In the last 24 hours, LGBTQ social networking app Grindr has found itself with the uncomfortable job of explaining why it has quietly been sharing the HIV status of its users with third parties.

According to research by Norwegian non-profit SINTEF, originally published in the Swedish media two weeks ago, Grindr sends analytics companies Apptimize and Localytics a swathe of user data.

This includes not only HIV status and the time since the last test, but GPS location data, phone ID and email address, more than enough to identify individual users.

Apptimize and Localytics are services used to monitor apps as they are being developed, to optimise how they work for users.

Although their use is not unusual in the industry, it raises the question of whether data transfers that happen during this process pose a privacy risk, especially when that data is as sensitive as someone’s HIV status.

The answer seems to be part technical, part operational and – to the growing unease of app developers everywhere – uncomfortably philosophical.

Interviewed in Buzzfeed, SINTEF researcher Antoine Pultier suggested this was more a case of app makers not thinking through what they were doing:

The HIV status is linked to all the other information. That’s the main issue. I think this is the incompetence of some developers that just send everything, including HIV status.

In response, Grindr has reportedly decided to stop working with both Apptimize and Localytics.

Sensitive data such as HIV status was always encrypted during transfer, and no personally identifiable data was shared with advertisers, the company announced.

But in other comments, its CEO Bryce Case struck a more defiant tone, saying what had happened was “unfair” to Grindr and that the company had been “singled out.”

In his mind there is a distinction to be drawn between Grindr’s data transfers and the sort of relationship that exists between, say, Facebook and Cambridge Analytica:

It’s conflating an issue and trying to put us in the same camp where we really don’t belong.

In a limited but important sense, Case has a point: users are not compelled to share their HIV status on their Grindr profiles, and when they do, that information is publicly visible to anyone viewing the profile.

As for advertisers, while they have access to some user data, this would not include HIV status.

However, it is not entirely true to say that Grindr is not like Facebook because in an important way it is.

Both are based on the idea that fuels much of the internet economy: users are invited to hand over commercially-valuable personal data without there being many rules governing how it might be processed, analysed or sold on.

Users are told they are in control, that they choose what gets shared and what doesn’t. But when things go wrong, it tends to be a whistleblower, an accident, or a research effort that pulls back the cover to reveal another unexpected grey area. This is why people were shocked by Cambridge Analytica.

Users have numerous choices about their data but little visibility or understanding of its value or the risk it poses. Until that changes, Grindr’s bad week is unlikely to be the last one we hear about.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zPtLkuHqHqM/

Hand over your social media history before you enter the US

Since 2016, the US government has been hitting up travelers for their social media details.

Now, it wants more. A lot more, as in five years of social media history.

The request is now an optional field concocted by the Department of Homeland Security’s (DHS) Customs and Border Protection (CBP) agency in spite of being scorned/loathed/ridiculed by those who’ve pointed out that…

  • “Nefarious” people don’t share their cunning plans for terrorist attacks on social media (with at least one notable exception).
  • Nothing would stop evil-doers from lying about their social media presence or providing fake account names or even framing others by providing their targets’ social media handles (besides the fact that lying to the federal government is illegal).
  • Rights groups and civil liberties organizations call it “highly invasive” and “ineffective”.
  • Agents don’t typically mention that filling out something on a form is “optional”.
  • Many travelers are intimidated enough to assume they’ll look suspicious if they don’t fill everything out, and/or come from countries where “optional” really means “mandatory”.
  • “Optional” is just a stepping stone to “mandatory.”

So yes, about that last item, the stepping stone. The Trump administration hasn’t gotten rid of the “optional” part, though current Trump chief of staff John F. Kelly told Congress last year that he wanted DHS (he was DHS secretary at the time) to demand social media logins and passwords from potential immigrants coming from seven Muslim-majority nations.

Kelly’s request hasn’t gone anywhere, but now the Trump administration has proposed to make that stepping stone a whole lot bigger.

On Friday, the State Department proposed expanding the existing request for social media information, which is currently required of those applying for an immigrant visa. You can read its proposal here on the Federal Register.

If, after at least one 60-day public comment period, the proposal does go into effect, an estimated 14 million non-immigrant visa applicants per year would be asked to list their social media “identifiers” from multiple popular social media platforms during the five years preceding the date they apply for a non-immigrant visa.

They’ll also be given the “option” of providing information from social media platforms that they’ve used in the past five years besides those on the State Department’s list. The department is also looking to collect telephone numbers, email addresses and international travel history for the previous five years, whether the applicant has been deported or removed from any country, and whether specified family members have been involved in terrorist activities.

The New York Times claims that the State Department’s list has 20 social platforms, including US-based Facebook, Flickr, Google+, Instagram, LinkedIn, Myspace, Pinterest, Reddit, Tumblr, Twitter, Vine and YouTube. It also lists platforms based overseas: the Chinese sites Douban, QQ, Sina Weibo, Tencent Weibo and Youku; the Russian social network VK; Twoo, which was created in Belgium; and Ask.fm, a question-and-answer platform based in Latvia.

Citizens from those countries to which the United States ordinarily grants visa-free travel, including Australia, Britain, Canada, France, Germany, Japan and South Korea, would be exempt from the new rules. In addition, visitors traveling on diplomatic and official visas will “mostly” be exempted, according to the NYT.

The plan was greeted, once again, with criticism.

Anil Kalhan, an associate professor of law at Drexel University who works on immigration and international human rights, called it “unnecessarily intrusive and beyond ridiculous” on Twitter.

The NYT quoted Hina Shamsi, director of the American Civil Liberties Union’s National Security Project:

This attempt to collect a massive amount of information on the social media activity of millions of visa applicants is yet another ineffective and deeply problematic Trump administration plan. It will infringe on the rights of immigrants and U.S. citizens by chilling freedom of speech and association, particularly because people will now have to wonder if what they say online will be misconstrued or misunderstood by a government official.

The State Department said in a statement that the proposal is one way to fight “emerging threats.”

Maintaining robust screening standards for visa applicants is a dynamic practice that must adapt to emerging threats. We already request limited contact information, travel history, family member information, and previous addresses from all visa applicants. Collecting this additional information from visa applicants will strengthen our process for vetting these applicants and confirming their identity.

The State Department is accepting comments up until 29 May. If you’d like to give the government your thoughts on the proposal, you can share them and your rationale at the regulations comment page.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/m-42R9mCfzc/