
The Mobile Threat: 4 out of 10 Businesses Report ‘Significant’ Risk

Organizations put efficiency and profit before security, leading to system downtime and data loss, according to inaugural research from Verizon.

When you prioritize speed and profit over mobile security, the business suffers — yet 32% of 600 surveyed professionals continue to make the sacrifice and compromise their information. Of these, 38% say their business is “at significant risk” from mobile threats, according to new research from Verizon.

As part of its inaugural Mobile Security Index 2018, Verizon’s Wireless Business Group conducted an independent study of people responsible for buying and managing mobile devices for their organizations. Participating businesses ranged from 250 to 10,000+ employees.

Generally, respondents are very aware that mobile is dangerous: 85% report their business faces at least a moderate risk from mobile security threats and 74% say the risks of mobile devices have increased over the past year. Only 1% said the mobile risk had gone down.

Overall, 27% of participants report that in the past year their company experienced a security incident resulting in data loss or system downtime where mobile devices played a key role. Eight percent say that if their company didn’t experience an incident like this, one of their suppliers had. Companies were more likely to suffer data loss or downtime if they had sacrificed security, respondents say.

“Most agree that there is a serious and growing threat,” says Justin Blair, executive director of Business Wireless Services at Verizon. “The key thing we’ve seen is companies don’t have best practices in place.”

The problem is, according to Blair, organizations aren’t taking even basic steps to protect themselves. Survey data indicates less than half (49%) of respondents say their company has a policy for workers’ public wifi use, and 47% encrypt sensitive data moving across open, public networks. Less than 40% change all their default passwords, and only 59% place limitations on which mobile apps their employees can download from the Internet.

Fear of Rogue Insiders is High

“Employees are the greatest risk,” Blair notes. Nearly 80% of respondents consider their own employees a significant threat. It’s about more than fear of lost devices; more than half (58%) worry employees will act maliciously for personal or financial gain.

Businesses are most worried about losing sensitive internal information, classified company information, financial data, or personally identifiable information. Respondents report that employees compromise these resources when companies fail to enforce basic practices: workers access work programs on insecure networks, download dangerous apps, or use weak passwords.

Overall, the majority of respondents say they lack full control over the devices their employees use. Twenty-eight percent say employee-owned laptops with wifi or mobile data are used in their business. Only 61% say they own all mobile phones used for work. Those with BYOD policies, which are still popular in the workplace, say employee-owned devices are their biggest concern.

Security training is common but not consistent. Most respondents (86%) train employees on security, but 59% do so only when someone joins the company or gets a new device. Of those most worried about employees, 35% give no training at all.

The Growth of Mobile and IoT Threats

Nearly 60% of respondents use IoT. Those who do are more likely to say downtime is a bigger threat than data loss. The majority (79%) say IoT is the greatest risk facing organizations. “For the most part, those IoT devices are machine-to-machine communication. Most of the time there’s no one involved in the operation of that data,” Blair points out, adding that on a smartphone, someone is more likely to recognize abnormal behavior, like if it slows down or shuts off. Because IoT devices communicate with each other, he says, it takes far longer to pick up on the signs of a potential cyberattack.

“People may not understand the magnitude of how powerful some of those devices are, but at the same time many of them go unmanned,” he continues. “In many cases it’s still new, we’re seeing IoT as a space that continues to grow.”

Blair says there is “a little bit of unknown” when it comes to mobile security threats and solutions. Businesses know the risk is there, he says, but aren’t entirely sure what to do about it. Many struggle with a lack of C-level support, a perceived low threat level, and a lack of skills, resources, and budget, but a lack of device user awareness ranked highest as a significant barrier.

Budget is less of a problem: 61% of respondents anticipate their mobile security budget will increase in the next 12 months, and fewer than 40% expect it to stay the same. As mobile devices become increasingly integral to people’s jobs, Blair anticipates the enterprise focus on mobile security will continue to grow.

“It’s always been my feeling that smartphones, tablets, and IoT devices are more and more becoming business-critical endpoints,” he says. “If every employee has a smartphone and not a laptop, the number of mobile devices will outweigh the number of non-mobile devices.”


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/the-mobile-threat-4-out-of-10-businesses-report-significant-risk/d/d-id/1331105?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Global Cybercrime Costs Top $600 Billion

More than 50% of attacks result in damages of over $500K, two reports show.

In cybersecurity, it can sometimes be hard to see the forest for the trees. Constant reports about new attacks, breaches, exploits, and threats make it hard for stakeholders to get a picture of the full impact of cybercrime.

Two reports this week are the latest to take a crack at it.

One of the reports is from McAfee in collaboration with the Center for Strategic and International Studies (CSIS). It shows that cybercrime currently costs the global economy a startling $600 billion annually, or 0.8% of the global GDP. The figure represents a 20% jump from the $500 billion that cybercrime cost in 2014.

The other report, from Cisco, is based on interviews with 3,600 CISOs and shows, among other things, that nearly half of all attacks now end up costing the victim at least $500,000. Eight percent of companies in the Cisco report said cyberattacks had cost them over $5 million; for 11%, the costs ranged between $2.5 million and $4.9 million. The figures include direct and indirect costs, such as those associated with lost revenue, lost customers, and lost opportunities.

Together, the two reports paint a picture of a landscape that is going from bad to worse in a hurry.

“Cybercrime impacts economic growth. This is not an IT issue but something much bigger,” says Raj Samani, chief scientist at McAfee. “Nearly every breach focuses on attribution or the technique, but we rarely ever discuss what the real impact is,” Samani says. The net result is that many organizations continue to view cybercrime as a somewhat abstract issue. “I am constantly told ‘this does not impact me,'” Samani says. “Yet cybercrime impacts every one of us.”

As with many other reports that have attempted to calculate total cybercrime costs, the $600 billion figure in the McAfee/CSIS report is based on estimates. It represents total estimated losses from theft of intellectual property and confidential business information, online fraud and financial crimes, theft of personally identifiable information, financial fraud using stolen sensitive business information, and other factors. Other estimates have put the number much higher; some put it far lower.

As the report makes clear, underreporting by victims and the overall paucity of real data surrounding cybercrime incidents worldwide have made it extremely hard to get a truly precise estimate of cybercrime costs. In many cases, organizations only report a fraction of their actual losses from cybercrime to avoid reputational damage and liability risks. So to calculate cybercrime costs, McAfee and CSIS borrowed modeling techniques that have been used previously to estimate costs associated with other criminal activities such as maritime piracy, drug trafficking, and transnational crime by organized groups.

The exercise showed that costs of cybercrime have increased significantly in recent years as the result of state-sponsored online bank heists, ransomware, cybercrime-as-a-service, and the growing use of anonymity-enabling technologies like Tor and Bitcoin, McAfee and CSIS said. Malicious activity on the Internet is at an all-time high, with some vendors reporting 80 billion malicious scans, 4,000 ransomware attacks, 300,000 new malware samples and 780,000 records lost to hacking on a daily basis, the report said.

The theft of intellectual property and business confidential information has been a huge reason for the higher cybercrime costs globally. According to McAfee and CSIS, intellectual property theft accounts for at least 25% of overall cybercrime costs. Such theft can include everything from patented formulas for paints to designs for rockets and other military technology. Over the years, the theft of IP has become a huge problem for many industries and has impacted the ability of companies to compete and to profit from their innovations. Yet, it remains one of the most underreported forms of cybercrime.

“[IP theft] is probably the most surreptitious form of data theft,” Samani says. For example, a ransomware infection is clearly obvious, and with other forms of data theft or breaches there is an obligation to report. “However IP theft and calculating the cost becomes invisible to the victim, particularly since proving that a competing product was derived from a historical breach is very difficult,” he says.

Europe appears to be the region most impacted by cybercrime, but that is likely also in part due to the maturity of the breach reporting habits of organizations there compared to other regions, Samani says.

Cisco’s report meanwhile showed that in addition to increasing financial costs, organizations are also becoming more vulnerable to attacks on their supply chain. Supply chain attacks, according to the company, have increased in complexity and frequency and have heightened the need for organizations to pay close attention to their hardware and software sources.

Enterprise security environments have become much more complex as well. Twenty-five percent of the security executives Cisco interviewed said their organizations used security products from between 11 and 20 vendors. Sixteen percent said their organizations were using between 21 and 50 products. The complexity has begun impacting enterprises’ ability to defend against threats, Cisco said.

Franc Artes, an architect in the security business group at Cisco, says the new report marks the first time the company asked respondents to indicate a range for their financial loss from a security incident. In last year’s report, one-third of those who suffered a breach reported a revenue loss of 20%, he says.

Cisco’s latest survey shows that attackers are evolving their techniques faster than defenders can keep up. Troublingly, as organizations continue to connect their operational technology (OT) infrastructure to other systems, recognition of OT as an attack vector has grown as well, Artes says.

“Nearly 70% of the respondents stated they see their OT infrastructure as an attack vector; 20% stated that while it wasn’t currently, they expected it would be in the next few years.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/global-cybercrime-costs-top-$600-billion-/d/d-id/1331106?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cyber Aware – are passwords past it? (Hint: no.) [VIDEO]

Cyber Aware (@cyberawaregov), a government initiative in the UK, is today promoting what it calls #OneReset – urging us all to make sure we have a decent email password, even if that’s all the cybersecurity we’re ready for right now.

The idea is that you have to start somewhere, and of all the online accounts you have, your email account is almost certainly the most far-reaching in your digital life – not least because anyone with access to your email can probably reset the passwords on many of your other accounts.

We agree, but we think you can do way better than just #OneReset, so we took to Facebook Live to encourage you to go for it!

(Can’t see the video directly above this line, or getting an error such as “no longer available”? Watch on Facebook instead.)

Note. With most browsers, you don’t need a Facebook account to watch the video, and if you do have an account you don’t need to be logged in. If you can’t hear the sound, try clicking on the speaker icon in the bottom right corner of the video player to unmute.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OVLhL-3Nzwg/

Intel hurls Spectre 2 microcode patch fix at world

For the second time of asking, Intel has issued microcode updates to OEMs that it hopes will mitigate the Spectre variant 2 design flaw impacting generations of CPUs spewed out over previous decades.

Yep, old Chipzilla has turned up at the scene of the metaphorical IT industry earthquake with a dustpan and brush*: the firmware updates are for the sixth generation (Skylake), the seventh generation (Kaby Lake) and the eighth generation (Coffee Lake), the X-series line, and the Xeon Scalable and Xeon D processors.

Since 2 January, when The Register exposed the existence of Meltdown and then Spectre, Intel has been working to mitigate the flaws.


The 12 January release of the firmware updates for Meltdown and Spectre made PCs and servers less stable, and so vendors including Lenovo, VMware and Red Hat delayed rolling out patches.

“We have now released production microcode updates to our OEM customers and partners,” said Navin Shenoy, veep and GM for mobile client platforms at Intel. “The microcode will be made available in most cases through OEM firmware updates”.

Intel said the firmware is in beta mode for Sandy Bridge, Ivy Bridge, Haswell and Broadwell. The microcode patch update schedules for the chips are here.

Shenoy said there are “multiple mitigation techniques available that may provide protection against these exploits”, including Google-developed binary modification technique Retpoline (white paper here).

According to Google: “Retpoline sequences are a software construct which allow indirect branches to be isolated from speculative execution.

“This may be applied to protect sensitive binaries (such as operating system or hypervisor implementations) from branch target injection attacks against their indirect branches”.

Retpoline is a portmanteau of return and trampoline: it is a trampoline construct built using return operations which “also figuratively ensures that any associated speculative execution will ‘bounce’ endlessly.”

Intel, which is facing 32 separate lawsuits in the US over Spectre and Meltdown – from both customers and investors – extended its “appreciation” to the rest of the industry for their “ongoing support”.

Some hard-pressed techies dealing with the fallout are not yet convinced by Intel’s latest microcode update, at least those who expressed doubts on Reddit.

“Don’t patch yet,” was the advice from one, “MS had to revert one of Intel’s fixes already. Best to wait until it’s verified not to cause issues with the OS.”

Another said, “I would imagine… at least hope, that the second time around they’d make sure they get it right. But probably still a good call.”

A third said he was “cautiously optimistic” as it will still be “up to the motherboard manufacturers to provide BIOS updates”.

And therein lies the problem: pessimists are rarely disappointed, but for optimists… it is the hope that gets them in the end. El Reg suspects Linux supremo Linus Torvalds, based on experience, fits into the former bracket where Intel is concerned. ®

* Sorry Sean Lock, couldn’t resist pinching your joke. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/21/intel_spectre_2_microcode_patch/

7 Cryptominers & Cryptomining Botnets You Can’t Ignore

Cryptominers have emerged as a major threat to organizations worldwide. Here are seven you cannot afford to ignore.


Cryptocurrency mining has emerged as the new big threat for organizations worldwide.

Many cybercriminals, looking to cash in on the crypto-craze, have begun hijacking computers and using their resources secretly to mine for cryptocurrencies.

One tactic has been to install miners for popular cryptocurrencies—especially Monero—on host systems and add them to massive cryptomining botnets. Another common tactic has been to embed mining tools in websites and secretly use the computing resources of visitors to these sites to mine for Monero and other digital currencies. Research released by Imperva Tuesday also reported that 88% of all remote code execution attacks in December 2017 drove targets to cryptomining malware download sites. 
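
To make the second tactic concrete, here is a minimal Python sketch of how a defender might spot a page that embeds an in-browser miner. It is an illustrative heuristic only, not a technique from any of the reports cited above; the script-name patterns and the example URL are assumptions for demonstration.

# Illustrative heuristic only: flag pages that reference well-known
# in-browser mining scripts (e.g. Coinhive's coinhive.min.js). Real
# detection is far more involved; patterns and URL are assumptions.
import re
import urllib.request

# Strings commonly associated with browser-based Monero miners.
MINER_PATTERNS = [
    r"coinhive(\.min)?\.js",
    r"CoinHive\.Anonymous",
    r"cryptonight",          # the proof-of-work algorithm Monero miners use
]

def looks_like_cryptojacking(url: str) -> bool:
    """Fetch a page and report whether it references known miner scripts."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return any(re.search(p, html, re.IGNORECASE) for p in MINER_PATTERNS)

if __name__ == "__main__":
    # Hypothetical target; substitute a page you are authorized to check.
    print(looks_like_cryptojacking("https://example.com/"))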

The trend has impacted individuals and businesses severely. Vendors have reported numerous businesses suffering major operational disruptions as a result of mining tools being installed on servers and other business systems. In a report this week, Check Point Software Technologies estimated that a staggering 23% of organizations worldwide appear to have been impacted by the Coinhive mining tool alone. The company’s list of top 10 malware threats for January 2018 includes three cryptomining tools.

Here, in no particular order, are seven of the most prolific cryptocurrency miners and botnets currently plaguing users globally.

 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/7-cryptominers-and-cryptomining-botnets-you-cant-ignore/d/d-id/1331099?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Takeaways from the Russia-Linked US Senate Phishing Attacks

The Zero Trust Security approach could empower organizations and protect their customers in ways that go far beyond typical security concerns.

On January 12, 2018, cybersecurity firm Trend Micro revealed that Russia-linked hackers tried to infiltrate the US Senate, leveraging phishing attacks to harvest access credentials. These tactics suggest that the hackers were laying the groundwork for a widespread compromise of Senate employees. And while these findings might further bolster the public view that the Kremlin is trying to influence our democracy, security professionals should not get distracted by the media frenzy that these revelations created and instead focus on the real lessons.

By creating fake websites that mimicked the login page for the Senate’s email server, the attackers followed a common blueprint to harvest access credentials that can later be used for lateral attacks and extraction of sensitive information. Taking advantage of Active Directory Federation Services (ADFS), hackers ultimately tried to gain access to systems and applications located across organizational boundaries. It’s well known, but often overlooked, that identity is the top attack vector for cybercriminals and state-sponsored attackers alike. According to Verizon’s 2017 Data Breach Investigations Report, 81% of hacker-initiated data breaches involved weak, default, or stolen passwords.

These statistics confirm that widely accepted security approaches based on bolstering a trusted network do not work. And they never will. The proof? According to analyst firm Gartner, organizations spent a combined $150 billion on cybersecurity in 2015 and 2016. Meanwhile, we’re experiencing a continuous increase in security incidents, which raises doubts about the effectiveness of these investments. During approximately the same period, 66% of organizations surveyed by analyst firm Forrester reported five or more data breaches. When conducting post-mortem analysis of the data breaches that occurred in 2017, it becomes apparent that many of these big breaches can be attributed to a longstanding failure to implement basic cybersecurity measures (e.g., multifactor authentication, or MFA), botched usage of existing security tools to streamline the mitigation of known vulnerabilities, and lack of security measures for protecting sensitive data.

Return to the Essentials of Cybersecurity
Instead of earmarking security investments for bolstering traditional perimeter defenses, which is a futile exercise, organizations must return to the essentials of cybersecurity. In doing so, they can improve their security posture and limit exposure to data breaches.

As the U.S. Senate phishing attack illustrates, weak or stolen user credentials remain the primary entry point for hackers. That is why access control is still the Achilles’ heel of many security programs: practitioners must balance data availability with measures that prevent unauthorized usage (e.g., theft, disclosure, modification, destruction). Meanwhile, hackers often target privileged users because their accounts provide a beachhead into the entire network. Therefore, strict enforcement of well-defined access control policies and continuous monitoring of access paths to ensure they are working as intended are essential to the success of data integrity initiatives.

In this context, MFA plays an essential role in minimizing the risk of falling victim to phishing attacks. With MFA in place, knowing someone’s username and password is no longer enough to assume the victim’s identity: the likelihood that an attacker can obtain something the victim knows, something they have, and something they are is very low.
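
For a concrete sense of what a common second factor involves, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the kind of code an authenticator app generates. It is not tied to any vendor discussed here, and the example secret is a made-up value.

# Minimal RFC 6238 TOTP sketch using only the Python standard library.
# The base32 secret below is an example value, not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Return the current time-based one-time password for a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1 per RFC 4226
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))   # pairs with most authenticator apps

Because the code changes every 30 seconds and is derived from a secret the phishing page never sees in reusable form, a harvested password alone is far less valuable to the attacker.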

Rethink Security: Never Trust, Always Verify
However, MFA is just a first step in securing your organization more effectively. Today’s dynamic threat landscape requires a broader shift in an organization’s security strategy. Instead of the old adage “trust but verify,” the new paradigm is “never trust, always verify.” This Zero Trust Security model is championed by many industry leaders including Google, Forrester, and Gartner.

The basic concept of Zero Trust is that users inside a network are no more trustworthy than users outside a network. Therefore, traditional perimeter-based security approaches are inadequate. Zero Trust Security assumes that everything — including users, endpoints, networks, and resources — is always untrusted and must be verified to decrease the chance of a major breach.

Zero Trust Security assumes that untrusted actors already exist both inside and outside the network. Trust must therefore be entirely removed from the equation. Zero Trust Security requires powerful identity services to secure every user’s access to apps and infrastructure. Once identity is authenticated and the integrity of the device is proven, authorization and access to resources is granted, but with just enough privilege to perform the task at hand.

Effective Zero Trust Security requires a unified identity platform consisting of four key elements within a single security model. Combined, these elements help to ensure secure access to resources while they significantly reduce the possibility of access by bad actors. The model includes:

  • Verifying the user
  • Validating their device
  • Limiting access and privilege
  • Learning and adapting

This approach must be implemented across the entire organization. Whether giving users access to apps or administrators access to servers, it all comes down to a person, an endpoint, and a protected resource. Users include not only employees but also contractors and business partners who have access to your systems.
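
As a rough illustration of how the four elements listed above can combine into a single access decision, here is a minimal Python sketch. The user names, resources, roles, and checks are hypothetical and are not drawn from any specific Zero Trust product.

# Toy sketch of a Zero Trust access decision combining the four elements
# above. All names, types, and checks are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool          # verify the user (e.g., password plus TOTP)
    device_compliant: bool    # validate their device (patched, managed, healthy)
    requested_role: str       # privilege being asked for
    resource: str

# Least-privilege policy: which roles each user may assume, per resource.
POLICY = {
    ("alice", "billing-db"): {"read-only"},
    ("bob", "build-server"): {"deploy", "read-only"},
}

def authorize(req: AccessRequest) -> bool:
    """Grant access only if identity, device, and privilege checks all pass."""
    if not req.mfa_passed:            # never trust a bare password
        return False
    if not req.device_compliant:      # an unhealthy endpoint stays untrusted
        return False
    allowed = POLICY.get((req.user, req.resource), set())
    return req.requested_role in allowed   # just enough privilege, nothing more

# "Learning and adapting" would feed access logs back into POLICY and the
# device checks over time; that feedback loop is out of scope for this sketch.

if __name__ == "__main__":
    print(authorize(AccessRequest("alice", True, True, "read-only", "billing-db")))  # True
    print(authorize(AccessRequest("alice", True, True, "admin", "billing-db")))      # False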

Ultimately, it takes Zero Trust Security to trust. But once security professionals embrace this new paradigm, it empowers their organizations and protects their customers in ways that go far beyond typical security concerns, such as enabling digital transformation, increasing data awareness and insights, and reducing the internal silos between security, IT, DevOps, and SecOps that often lead to the blame game.


Tom Kemp is co-founder and Chief Executive Officer of Centrify Corporation, a software and cloud security provider that delivers solutions that centrally control, secure, and audit access to on-premises and cloud-based systems, applications, and devices for both end and … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/takeaways-from-the-russia-linked-us-senate-phishing-attacks/a/d-id/1331082?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Trucking Industry Launches Info Sharing, Cybercrime Reporting Service

American Trucking Associations developed the new Fleet CyWatch threat reporting and information sharing service in conjunction with the FBI.

The American Trucking Associations (ATA) announced Wednesday the launch of Fleet CyWatch, a new service for members of the trucking industry to share threat information and report cybercrimes affecting fleet operations. The Fleet CyWatch program is open to motor carrier and council members of ATA, the US’s largest national trade association for the trucking industry. 

“As the industry responsible for delivering America’s food, fuel and other essentials, security is of paramount importance, particularly in an increasingly technologically connected world,” said Chris Spear, ATA president and CEO, in a statement. “Fleet CyWatch is the next logical step in our association’s and our industry’s commitment to working with law enforcement and national security agencies to keep our supply chain safe and secure.”

Fleet CyWatch was developed in conjunction with the US Federal Bureau of Investigation to improve cybercrime reporting and response and to strengthen motor carriers’ cybersecurity awareness and threat prevention.

See more here.  

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/trucking-industry-launches-info-sharing-cybercrime-reporting-service-/d/d-id/1331104?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Read the 200,000 Russian Troll tweets Twitter deleted

Twitter announced last month that it would email notifications to 677,775 users in the US: that’s how many people it says followed one of the accounts created by the Russian government-linked propaganda factory known as the Internet Research Agency (IRA).

Less than two weeks later, Twitter announced that the number had more than doubled.

The number included those of us who retweeted or liked a tweet from Russian accounts during the 2016 US presidential election. The accounts had already been suspended, Twitter said, meaning that the relevant content is no longer publicly available on the platform.

But it is, in fact, available somewhere: Last week, NBC News published 200,000 Russian troll tweets that Twitter had deleted.

NBC News says that the accounts worked in concert as part of large networks that posted hundreds of thousands of inflammatory tweets, “from fictitious tales of Democrats practicing witchcraft to hardline posts from users masquerading as Black Lives Matter activists.”

The US intelligence community has determined that the IRA is part of a Russian state-run effort to influence the 2016 election, and all signs are pointing to the organization gearing up to do the same to the November mid-term elections.

Director of National Intelligence Dan Coats told the Senate Intelligence Committee last Tuesday that the US is “under attack,” adding that Russia is attempting to “degrade our democratic values and weaken our alliances.”

Coats said that Russian President Vladimir Putin considers Russia’s interference in the 2016 presidential elections a success and that he’s targeting the midterms:

There should be no doubt that [Putin] views the past effort as successful and views the 2018 US midterm elections as a potential target for Russian influence operations.

Twitter trolls and their seeds of discord are great tools for the Russians, Coats said: they’re cheap, low-risk and effective:

The Russians utilize this tool because it’s relatively cheap, it’s low risk, it offers what they perceive as plausible deniability and it’s proven to be effective at sowing division. We expect Russia to continue using propaganda, social media, false flag personas, sympathetic spokesmen, and other means of influence to try to build on its wide range of operations and exacerbate social and political fissures in the United States.

Twitter handed over to Congress a list of 3,814 IRA-connected account names and, as is its practice, has since suspended those accounts. That means deletion of the accounts’ tweets from public view, both on Twitter and from third parties. Unfortunately, erasing the evidence of foreign election meddling isn’t helpful for the investigation into that meddling – an investigation that resulted in a federal indictment on Friday, accusing 13 Russians and three Russian companies of conducting a criminal and espionage conspiracy using social media to interfere in the election.

To retrieve the evidence that Twitter deleted, NBC News asked three sources familiar with Twitter’s data systems to cross-reference the partial list of names released by Congress to create a database of tweets that could be recovered. The sources requested anonymity to avoid politicization of their work and to stay out of trouble regarding possible violation of Twitter’s developer policy.

The news outlet said it’s already analyzed the data to expose “how Russian accounts impersonated everyday Americans and drew hundreds of millions of followers, exploiting terrorist attacks, the debates and other breaking news events.”

Our investigations revealed how the accounts pushed graphic, racist and conspiracy theory-filled disinformation, while flattering, arguing and cajoling more than 40 U.S. politicians, media figures and celebrities into interacting with and amplifying their propaganda.

Now, to fend off the continuing onslaught of Russian Twitter trolls and bots, and to “help shine a light on this persistent threat to democracy,” NBC News has open-sourced the 200,000 tweets.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ua5PsPqZi8w/

Artificial intelligence reads privacy policies so you don’t have to

We can think of privacy policies as fortresses made out of thick bricks of gobbledygook: impenetrable, sprawling documents that do little beyond legally protect companies.

Nobody reads them. Or, to be more precise, 98% of people don’t read them, according to one study in which 98% of volunteers signed away their firstborns and agreed to have all their personal data handed over to the National Security Agency (NSA), in exchange for signing up to a fictional new social networking site.

And here’s the thing: if you’re one of the ~everybody~ who doesn’t read privacy policies, don’t feel bad: it’s not your fault. Online privacy policies are so cumbersome that it would take the average person about 250 working hours – about 30 full working days – to actually read all the privacy policies of the websites they visit in a year, according to one analysis.

So how do we keep from signing away our unsuspecting tots? Machine learning to the rescue!

A new project launched earlier this month – an artificial intelligence (AI) tool called Polisis – suggests that visualizing the policies would make them easier to understand. The tool uses machine learning to analyze online privacy policies and then creates colorful flow charts that trace what types of information sites collect, what they intend to do with it, and whatever options users have about it.

Here’s what LinkedIn’s privacy policy looks like after Polisis sliced it up for the flowchart:

As you can see, you can point to one of the flowchart streams to drill down into details from the privacy policy:


I was particularly interested in seeing how the tool would present LinkedIn’s privacy policy, given the class action brought by people who were driven nuts by repeated emails that looked like they’d been sent by unwitting friends on LinkedIn but were actually sent by LinkedIn’s “we’re just going to keep nagging you about connecting” algorithms. What privacy policy allowed users’ contact lists to be used in this manner and for all that spam to crawl out of the petri dish?

That suit was settled in 2015. It would have been interesting to apply AI to the old privacy policy, but this is a sample of what you get out of the current LinkedIn privacy policy:

Polisis paints a pretty, easy-to-navigate chart of what parties receive the data a given site collects and what options users have about it. But the larger goal is to create an entirely new interface for privacy policies.

Polisis is just the first, generic framework, meant to provide automatic analysis of privacy policies that can scale, to save work for researchers, users and regulators. It isn’t meant to replace privacy policies. Rather, the tool is meant to make them less of a slog to get through.

One of the researchers from the Swiss university EPFL, Hamza Harkous, told Fast Co Design that Polisis is the result of an AI system he first created when making a chatbot, called Polibot, that could answer any questions you might have about a service’s privacy policy.

To train the bot, Harkous and his team captured all the policies from the Google Play Store – about 130,000 of them – and fed them into a machine learning algorithm that could learn to distinguish different parts of the policies.

A second dataset came from the Usable Privacy Policy Project – consisting of 115 policies annotated by law students – which was used to train more algorithms to distinguish more granular details, like financial data that the company uses and financial data the company shares with third parties.
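
To give a flavor of the kind of segment classification this training enables, here is a heavily simplified Python sketch using scikit-learn. The real system’s models and training corpora are far larger and more sophisticated; the tiny training set and labels below are invented for illustration and are not drawn from either dataset.

# Rough sketch of labeling privacy-policy segments by topic.
# Toy examples only; not the Google Play or annotated-policy corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

segments = [
    "We share your email address and browsing history with advertising partners.",
    "Payment card details are transmitted to our payment processor.",
    "You may opt out of marketing emails at any time in your account settings.",
    "We collect your location to show nearby results.",
]
labels = ["third-party-sharing", "financial-data", "user-choice", "data-collection"]

# TF-IDF features plus a linear classifier stand in for the richer models
# a production system would use.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(segments, labels)

print(clf.predict(["Your precise location may be disclosed to ad networks."]))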

Harkous soon realized that the chatbot interface was only useful for those with a specific question about a specific company. So he and his team set about creating Polisis, which uses the same underlying system but represents the data visually.

The project – which includes researchers from the US universities of Wisconsin and Michigan as well as EPFL – has also resulted in a chatbot that answers user questions about privacy policies in real-time. That tool is called PriBot.

Harkous said that he’s hoping for a future in which these types of interfaces can knock down the walls of privacy policy legalese. Not to say that Polisis is perfect, by any means, but Harkous has plans to improve it. As Fast Co Design notes, Polisis at this point doesn’t give any indications of what to pay attention to: all data sharing is treated equally, and there are no heads-up regarding shifty practices.

One example: the free email unsubscribe service Unroll.me reads all your emails and sells information it finds there to third parties. Its privacy policy visualization doesn’t necessarily make that plain to see. Harkous says he’s working on a tool that will flag abnormal or egregious policy details that could lead to cases like that.

He told Fast Co Design that out of more than 17,000 privacy policies the system has analyzed so far, the most interesting insights have come from those of Apple and Pokemon Go. Both companies suck up users’ location data, of course: not surprising, given that they both offer location-based services.

But the Polisis visualizations show just how many things the companies use that location data for. Think extremely granular advertising. From Fast Co Design:

You might not realize it, but when you catch a Pokemon in a certain area, the company is likely using your location to sell you things.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sKADFVhM5e0/

Flight simulator comes bundled with password stealing stowaway

How far should a software company be able to go to protect its products from piracy?

Not, one would assume, as far as deploying a Chrome password capture tool in its downloads. Yet this was the extraordinary accusation levelled at Flight Sim Labs (FSLabs) last weekend by a perplexed Reddit user.

The company makes flight simulation mods, one of which – an Airbus A320X add-on for Lockheed Martin’s pro-level Prepar3D – was setting off antivirus security software during installation.

As the user suspected – subsequently confirmed by pen-testing company Fidus Information Security –  the offending file, test.exe, was an executable for something called SecurityXploded. Explains Fidus:

The command line-based tool allows users to extract saved usernames and passwords from the Google Chrome browser and have them displayed in a readable format.

Under pressure, FSLabs quickly owned up to what it was doing and, moreover, why it was doing it.

According to founder and CEO, Lefteris Kalamaras, the tool captured passwords but not indiscriminately (FSLabs’ emphasis):

There are no tools used to reveal any sensitive information of any customer who has legitimately purchased our products.

The tool only activated if the user had installed the software using a pirated serial number believed to be circulating on the internet.

That program is only extracted temporarily and is never under any circumstances used in legitimate copies of the product.

It was so narrowly targeted, in fact, that the whole scheme was intended to gather evidence against a single individual believed to be circulating license keys for FSLabs’ software.

The company has since set out its side of events in more detail, which hasn’t stopped its behaviour going down badly.

With the Reddit negativity well into the red zone, the company then backtracked, uploading a new version of the installer with the problem test.exe removed.

Legitimate action to stop pirates ripping off software or digital rights management (DRM) overreach?

This is easily answered: installing a tool designed to capture user data without consent, however narrowly configured, is hard to justify, ethically or technically.

Even ignoring the hypothetical possibility of misuse of such a capability, Fidus discovered the routine was designed to send data across an unencrypted HTTP channel encoded in nothing more secure than Base64.
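
To see why that matters, here is a short Python demonstration that Base64 is a reversible encoding rather than encryption: anyone who can see the traffic can decode it instantly, with no key. The payload is a made-up example, not data captured from the installer.

# Base64 is an encoding, not encryption. The payload below is a
# made-up example, not anything observed from test.exe.
import base64

intercepted = base64.b64encode(b"user=alice&pass=hunter2")   # what might travel over plain HTTP
print(intercepted)                                           # b'dXNlcj1hbGljZSZwYXNzPWh1bnRlcjI='
recovered = base64.b64decode(intercepted)                    # no key or secret needed to reverse it
print(recovered)                                             # b'user=alice&pass=hunter2'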

The company appears to have been using the tool for months, having reportedly told users in a forum post to turn off their antivirus in case test.exe set it off. This was, and is, bad advice that unnecessarily exposes customers to risk.

Undocumented DRM has a history of leading to trouble when customers find out about it – just ask Sony BMG, hauled over the coals in 2005 for using CD protection that behaved like a rootkit.

As with Sony before it, FSLabs should have asked itself how it would look if its users ever found the tool, what legal and regulatory bodies might do if they found it, and what hay criminals might make with it if they found it.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LX0TjSW4m44/