
Yes, you can remotely hack factory, building site cranes. Wait, what?

Did you know that the manufacturing and construction industries use radio-frequency remote controllers to operate cranes, drilling rigs, and other heavy machinery? Doesn’t matter: they’re alarmingly vulnerable to being hacked, according to Trend Micro.

Available attack vectors for mischief-makers include command injection, malicious re-pairing, and even crafting custom havoc-wreaking commands for remotely controlled equipment.

“Our findings show that current industrial remote controllers are less secure than garage door openers,” said Trend Micro in its report – “A security analysis of radio remote controllers” – published today.

As a relatively obscure field, from the IT world’s point of view at any rate, remotely controlled industrial equipment appears to be surprisingly insecure by design, according to Trend: “One of the vendors that we contacted specifically mentioned multiple inquiries from its clients, which wanted to remove the need for physically pressing the buttons on the hand-held remote, replacing this with a computer, connected to the very same remote that will issue commands as part of a more complex automation process, with no humans in the loop.”

Even the pairing mechanisms between radio frequency (RF) controllers and their associated plant are only present “to prevent protocol-level interferences and allow multiple devices to operate simultaneously in a safe way,” Trend said.

Yes, by design some of these pieces of industrial gear allow one operator to issue simultaneous commands to multiple pieces of equipment.

In addition to basic replay attacks, where commands broadcast by a legitimate operator are recorded by an attacker and rebroadcast in order to take over a targeted plant, attack vectors also included command injection, “e-stop abuse” (where miscreants can induce a denial-of-service condition by continually broadcasting emergency stop commands) and even malicious reprogramming. During detailed testing of one controller/receiver pair, Trend Micro researchers found that forged e-stop commands drowned out legitimate operator commands to the target device.


One vendor’s equipment used identical checksum values in all of its RF packets, making it much easier for mischievous folk to sniff and reverse-engineer those particular protocols. Another target device did not even implement a rolling code mechanism, meaning the receiver did not authenticate received code in any way before executing it, much as a naughty child with an infrared signal recorder/transmitter could turn off the neighbour’s telly through the living room window.
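
To make the replay risk concrete, here is a minimal, hypothetical sketch (not any vendor’s actual protocol) contrasting a receiver that only checks a packet checksum with one that also enforces a rolling counter; without the counter, a recorded command can be rebroadcast indefinitely.

```python
# Illustrative sketch only: stand-in names, not a real RF protocol.
import hashlib

def checksum(payload: bytes) -> bytes:
    # Stand-in for whatever integrity check the RF protocol uses.
    return hashlib.sha256(payload).digest()[:2]

class NaiveReceiver:
    """Accepts any well-formed packet: replay-vulnerable."""
    def accept(self, payload: bytes, chk: bytes) -> bool:
        return chk == checksum(payload)

class RollingCodeReceiver:
    """Also requires a strictly increasing counter, so old packets are rejected."""
    def __init__(self):
        self.last_counter = -1

    def accept(self, counter: int, payload: bytes, chk: bytes) -> bool:
        if chk != checksum(counter.to_bytes(4, "big") + payload):
            return False
        if counter <= self.last_counter:   # replayed or stale packet
            return False
        self.last_counter = counter
        return True

# A captured "lift" command replays successfully against the naive receiver...
naive = NaiveReceiver()
captured = (b"LIFT", checksum(b"LIFT"))
assert naive.accept(*captured) and naive.accept(*captured)

# ...but only once against the rolling-code receiver.
rolling = RollingCodeReceiver()
pkt = (7, b"LIFT", checksum((7).to_bytes(4, "big") + b"LIFT"))
assert rolling.accept(*pkt) is True
assert rolling.accept(*pkt) is False   # replay rejected
```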

Trend Micro also found that of the user-reprogrammable devices it tested, “none of them had implemented any protection mechanism to prevent unattended reprogramming (e.g. operator authentication)”.

While the latter may sound scary, factories and construction sites do enjoy a measure of physical security; though this is (notoriously) far from perfect, it does at least dissuade a casual hacker from climbing up a crane on a site to pair his laptop or home-made controller with it, or from trying to reflash it with malicious firmware. Yet this is no substitute for proper device security.

Just to keep site managers’ blood pressure high, Trend Micro highlighted that not only could script kiddies carry out some of these attacks against industrial plant, but a remote attacker could also achieve persistent access by using a drone to drop a battery-powered cellular modem in a quiet corner of a site.

Trend Micro pointed out: “Generally, there is a friction in patching because of the high downtime costs and business continuity constraints. Also, there’s no such thing as ‘forensics’ in this field. Incidents are scrutinized in the ‘physical world’, and parts are just replaced to restore normal operations as quickly as possible. In other words, digital attacks are not considered a possibility in this field.”

The infosec firm advised system integrators to be on high alert for potential vulns in customer-specified kit. In the long term, it said, companies ought to abandon “proprietary RF protocols” in favour of open standards, highlighting Bluetooth Low Energy as having rather more baked-in security than the protocols it reverse-engineered, some of which it said had “none at all”.

Just three months ago, US-CERT advised some customers of Telecrane gear to patch their control systems – after the disclosure of a security bug that could allow a nearby attacker to wirelessly hijack equipment. The vuln in the Telecrane F25 series of controllers, if left unpatched, would have allowed miscreants to remotely operate cranes via radio signals.

Ken Tindell, CTO of Canis Automotive Labs, mused to El Reg: “It’s really a philosophical issue rather than a technical one. On one hand, you don’t want to load something down with security implementations when it’s a strictly private offline network. On the other, you don’t want to put such a lethal thing into the hands of customers that don’t appreciate the issues and will naturally do the equivalent of sticking a wet finger into a mains socket.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/15/even_cranes_are_hackable_trend_micro/

Former IBM Security Execs Launch Cloud Data Security Startup

Sonrai Security, the brainchild of two execs from IBM Security and Q1 Labs, debuts with $18.5 million in Series A funding.

Sonrai Security, a new startup focused on cloud data security, today launched with $18.5 million in Series A funding and rolled out a new service designed to enable data and identity control across cloud accounts and within data stores.

Sonrai co-founders Brendan Hannigan and Sandy Bird met at Q1 Labs, where Bird was co-founder and Hannigan was CEO. The duo helped create its security analytics platform QRadar, and both joined IBM when it acquired Q1 Labs in 2011. Hannigan helped establish IBM Security and served as a general manager for the division, while Bird became IBM Security’s CTO.

Hannigan left IBM toward the end of 2015, and Bird a bit later, but both continued to work on issues related to data security as the space evolved. “What’s happening with cloud, and what I’d call agile development, is [they’re] clearly completely changing how companies deploy and run software,” Hannigan says. “[It’s] now happening at a rate which is intensifying.”

He references Gartner data from July 2018, when researchers predicted that by 2025, 80% of enterprises will have shut down their traditional data centers, versus 10% today. The change demands that companies rethink their approach to security as data and infrastructure span cloud platforms.

“In new infrastructure, you cannot take old solutions and try to jam them into the new world,” Hannigan says. The “old world,” he says, was very focused on perimeter security. In this new cloud-focused world, many of the controls are built into the infrastructure cloud providers deliver.

As they move into this world, Hannigan adds, businesses don’t have one cloud – they have multiple providers. Most use Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP), but they can have hundreds of cloud accounts for any one provider and multiple data stores within those accounts. Organizations need to know whether they have confidential data in the cloud, where it’s located, and who has access to it. Cloud accounts are packed with data spread across machine stores and accessed by people, virtual machines, and containers.

People need to understand their data and its associated risks, he continues. That need prompted the idea for Sonrai, which Hannigan says “is about building a complete model of our identity relationships and access to that data across cloud accounts and cloud providers.” Traditional network-centric security controls don’t work in the cloud, its founders argue, so they wanted to prioritize cloud data control and build a risk mitigation model focused on data.

“Our focus across these clouds is the data – where is it, who has access, what type of stores are the data in,” he says. “There’s many different places people can store data in cloud accounts.”

How It Works
Cloud Data Control, Sonrai’s service, is a native cloud tool designed to track data and users across cloud services and third-party data sources. Hannigan and Bird used their experience in developing flexible platform solutions to create a service that lets users view identity and data relationships across cloud environments. A security pro trying to understand risk across AWS, Azure, and GCP would need only one query line to see, for example, whether encryption is automated for a data store housing confidential information, Bird explains.

The service is built on three values: continuous monitoring for unusual activity, audit and compliance, and driving efficiency for SecOps and DevOps teams, Hannigan says. Both groups need a broad view of cloud data and activity but have different use cases for that information. Sonrai had to bridge the gap between the data that security teams want and the agility that DevOps teams need.

“It’s a different view for both sides,” Bird says. The CISO and security team will be focused on whether a company’s hundreds of AWS accounts and Azure subscriptions have met policies and controls. DevOps want an API to call to check the state of compliance before production. Both teams can see dashboards, APIs, and a compliance framework, but with unique interfaces.

Sonrai’s Series A funding was led by TenEleven Ventures and Polaris Partners, where Hannigan was most recently an entrepreneur partner.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/former-ibm-security-execs-launch-cloud-data-security-startup/d/d-id/1333653?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why Cyberattacks Are the No. 1 Risk

The paradigm shift toward always-on IT requires business leaders to rethink their defense strategy.

With the world going digital, dependence on the availability of IT infrastructure keeps growing exponentially, and many people don’t comprehend the true scope of the implications. The recent cyberattack on the Los Angeles Times is a prominent example; it disrupted delivery of the Times and Tribune newspapers across the US. And in May 2018, a series of distributed denial-of-service (DDoS) attacks targeting the Netherlands temporarily shut down online banking at three of the country’s largest financial institutions.

Thanks to the emergence of the darknet, cybercrime has become widely accessible and procurable, blurring the lines between legitimate e-commerce and illicit trade. In the Netherlands, an 18-year-old man who had apparently hired a cybercriminal through one of the various darknet marketplaces was arrested in connection with the DDoS attacks; he “wanted to show that a teenager can simply crash all banks” with a few clicks. Unfortunately, he was right.

Society Is More Vulnerable to Cyberthreats
Indeed, society has become much more vulnerable to such attacks. The World Economic Forum (WEF) says business leaders in advanced economies see cyberattacks as their single biggest threat, even more so than terrorist attacks (No. 2), an asset bubble (No. 3), a new financial crisis (No. 4), or failure to adapt to climate change (No. 5).

This is no surprise because the business risks associated with cybercrime are growing along with companies’ ever-increasing dependence on technology. Moreover, the massive growth in the use of smart devices has opened up a universe of new ways for cybercriminals to launch attacks through large-scale botnets. By 2025, the number of smart devices in the world is projected to exceed 75 billion, outnumbering the global population by a factor of 10. Meanwhile, geopolitical rivalries are engendering larger and more sophisticated cyberattacks by smart, well-resourced IT teams with generous state backing. Large organizations in particular need to take into account a whole range of cyberthreats, including business interruption, theft and extortion, reputational damage, economic espionage, and the infiltration of critical infrastructure and services. The evolving threat landscape, combined with highly sophisticated adversaries, makes cyber-risk very challenging to manage.

An Under-Resourced Risk
Awareness of this risk is growing, and more organizations are directing efforts toward cyber-risk management. However, as the WEF highlights, cybersecurity is still under-resourced when measured against the sheer scale of the threat.

Cybercriminals are now estimated to pocket $1.5 trillion annually, a staggering amount equal to Russia’s gross domestic product and five times the roughly $300 billion in losses caused by natural disasters in 2017. Some studies predict that the takedown of a single cloud provider could result in $50 billion to $120 billion in economic damage, similar to the financial carnage stemming from Hurricane Sandy or Hurricane Katrina.

Cyber Issues Reduce Value
Cyberattacks can wreak havoc on a company, and severe financial and legal blowback are only the start. Equifax’s stock dropped more than 31% after the firm revealed that it had been the victim of a breach. The disclosure erased $5 billion in market value, as reported by MarketWatch. After Yahoo disclosed two large-scale breaches, Verizon cut its buy offer by $350 million, or about 7% of the original price. The breach almost scuttled the deal. Yahoo had to pay a $35 million penalty to settle securities fraud charges levied by the US Securities and Exchange Commission (SEC), and another $80 million to settle lawsuits launched by irate shareholders.

When the Marriott breach hit the news, Sen. Charles E. Schumer (D-NY) called on the hotel chain to foot the bill and replace the passports for as many as 327 million people whose passport numbers might have been exposed in the attack. Marriott pledged to cover the cost, but at $110 per passport — the standard fee — it would have had to fork out an incredible $36 billion, an amount equivalent to the firm’s entire market capitalization.
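
As a quick sanity check, the arithmetic behind that figure works out roughly as reported (illustrative only):

```python
# Back-of-the-envelope check of the Marriott passport figure quoted above.
passports = 327_000_000   # passport numbers potentially exposed
fee_usd = 110             # standard replacement fee per passport
total = passports * fee_usd
print(f"${total / 1e9:.1f} billion")   # prints "$36.0 billion"
```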

New Risk Imperatives
Other factors influence the consequences of cybercrime. For instance, firms are more heavily leveraged than they were a few years ago. Since 2010, the debt-to-equity ratio for the median S&P 1500 company has nearly doubled. Consequently, according to the WEF, their stability is even more threatened by cybercrime skullduggery.

In response, regulatory frameworks are being tightened up around the globe: witness the General Data Protection Regulation in Europe and the new SEC directives in the US. The authorities want to see better preparation that will mitigate risk, and more transparency after cyberattacks. In a press release, SEC Chairman Jay Clayton urged public companies to “examine their controls and procedures, with not only their securities law disclosure obligations in mind, but also reputational considerations around sales of securities by executives.”

Businesses need to focus on their resilience to cyber events and generally need to put emphasis on prevention and response. Research suggests that only about half (52%) of organizations have a CISO on their payroll, and only 44% say their corporate boards actively participate in their companies’ overall security strategy. In the digital age, this is no longer good enough and needs rethinking.

Because virtually every business is going digital in one way or another, it’s naive to think that today’s cyberattacks primarily affect technology companies. In fact, cybercrime is setting its sights on industries across the board, many of which were left alone in the pre-digital era. Hotels, airlines, and banks, for example, are now squarely in the cybercriminals’ crosshairs.

The upshot is that modern corporate innovation and growth must be balanced against cyber-risk and IT stability. More than ever, business leaders must create strategic plans that pave the road to emerging opportunities but also outline how their companies will ensure business continuity and deal with the complex set of cyber threats blighting the global digital landscape.


Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/why-cyberattacks-are-the-no-1-risk-/a/d-id/1333616?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

SEC Issues Charges in ‘Edgar’ Database Hack

One defendant is still facing charges issued in 2015 for a $30 million hacking and securities fraud scheme.

UPDATED 2:20 P.M. ET: A Ukrainian hacker was charged by US authorities today for offenses related to a hacking and illegal trading scheme; he still faces similar charges brought against him four years ago.

Oleksandr Ieremenko, 26, and Artem Radchenko, 27, both of Kiev, Ukraine, were charged today for their roles in a 2016 attack on the US Securities and Exchange Commission (SEC). The SEC discovered and revealed the attack in September 2017.

Attackers invaded the SEC’s Electronic Data Gathering, Analysis and Retrieval (EDGAR) corporate filing database used by publicly traded companies and money managers. They were thus able to access these businesses’ non-public earnings information. Financial traders, recruited by Radchenko, traded illegally on this privileged inside information, according to the US Department of Justice.

Ieremenko is still at large and facing charges brought against him in 2015 related to a $30 million securities fraud and hacking scheme. In that case, the attackers compromised newswire services including PR Newswire, Marketwired, and Business Wire, accessed thousands of business press releases before publication, and then traded illegally on that privileged insider information.

Read more here and here

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/sec-issues-charges-in-edgar-database-hack/d/d-id/1333655?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Privacy Mistakes That Keep Security Pros on Their Toes

When it comes to privacy, it’s the little things that can lead to big mishaps.

Image Source: Adobe Stock: deagreez


Privacy and security are often thought of as one and the same. While they are related, privacy has become its own discipline, which means security pros need to become more familiar with the subtle types of mistakes that can lead to some dangerous privacy snafus.

With GDPR going live last spring in Europe and the California privacy law becoming effective in 2020, companies should expect privacy to become more of an issue in the years ahead. Colorado and Vermont have passed privacy laws, as has Brazil, and India is well on its way to passing one of its own.

First and foremost, companies have to think of privacy by design, says Mark Bower, general manager and chief revenue officer at Egress Software Technologies.

Privacy by design requires companies to ask the following questions: What type of data are we storing? For what business purposes? Does the data need to be encrypted? How will the data be destroyed when it becomes obsolete, and how long a period will that be? Are there compliance regulations that stipulate data destruction requirements? How will the company protect personally identifiable information, such as credit card and medical data?

“Companies can’t understand risk if they don’t know where the data resides,” says Debra Farber, senior director of privacy strategy at BigID. “Privacy should be by default. Companies want to make sure that personal data is protected.”

Read on for the most common privacy mishaps. 

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md. View Full Bio

Article source: https://www.darkreading.com/endpoint/privacy/7-privacy-mistakes-that-keep-security-pros-on-their-toes/d/d-id/1333643?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Judge: Police Can’t Force Biometric Authentication

Law enforcement cannot order individuals to unlock devices using facial or fingerprint scans, a California judge says.

American law enforcement cannot force people to unlock devices using a facial or fingerprint scan, according to a new ruling intended to protect individuals from intrusive federal searches.

US judges had previously given authorities power to force people to unlock devices using biometric scans, even though they couldn’t force them to share passcodes. The new ruling, which treats biometrics and passcodes alike, has been called “potentially landmark”, Forbes reports.

It comes from the US District Court for the Northern District of California, where a search warrant for an Oakland property was rejected. As part of a Facebook extortion crime investigation, police wanted to access phones on the property with biometric scans. Magistrate judge Kandis Westmore ruled this was “overbroad” as it didn’t specify a person or device.

Even with a warrant, the judge said, government officials could not force people to incriminate themselves by using facial, fingerprint, or iris scans to unlock mobile devices. Passcodes and biometric scans can all be used to log into devices and should be treated the same. If someone cannot be forced to provide a passcode, they also cannot be forced into biometric scans.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/us-judge-police-cant-force-biometric-authentication/d/d-id/1333656?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Online Fraud: Now a Major Application Layer Security Problem

The explosion of consumer-facing online services and applications is making it easier and cheaper for cybercriminals to host malicious content and launch attacks.

Online fraud is a subset of cybercrime that typically takes place at the application layer. Although fraud was commonly associated with scams (for instance, Nigerian fraud), fraudulent transactions, and identity theft in the past, its “potential” has exploded in recent years — thanks to the many new ways to cash in on consumer-facing online services and applications.

Many of these new fraud attacks have already made headlines:

  • Fake reviews and purchases to artificially boost a product or seller’s ranking
  • Fake accounts created to take advantage of sign-up promotions/bonuses
  • Fraudulent listings with counterfeit products or attractive prices to lure buyers off the platform into under-the-table (and potentially unsafe) transactions
  • Bots generating artificial clicks, installations, and app engagement
  • Virtual items in online games traded or resold for profit
  • Fraudulent transactions
  • Fraudulent credit card and bank account openings from stolen and/or fake identities.

The list goes on. But unlike other types of cybercrime that “hack” into a network or system by obtaining unauthorized access, these new fraud attacks can be launched by simply registering user accounts and abusing available product features offered by online services and applications. The online services have become a part of the attack platform. For cybercriminals, why pay for bulletproof hosting when you can freely and anonymously put up content on social networks and peer-to-peer marketplaces?

This shift away from specialized attack infrastructure means that blacklists and reputation lists traditionally used for detection are becoming ineffective. Fraudsters no longer need to maintain dedicated servers for hosting malicious content or launching attacks, and they can afford to switch up their operation frequently. In DataVisor’s recent Fraud Index Report, the median lifetime of a fraudulent IP address is reported to be only 3.5 days. As long as cybercriminals can access the online services and applications — either through anonymous proxies, peer-to-peer community VPNs, or even directly from their home network — the attack is possible.

Attacking at the Application Layer
Attacking at the application layer gives fraudsters a greater chance of blending in with normal users. It is difficult to tell whether an HTTP connection is generated by a human or a script, just as it is difficult to distinguish between a fake user account and a real one.

The application layer, which supports a variety of communication protocols, interfaces, and access by end users, has the widest attack surface. In addition to the application code, vulnerabilities could also exist in access control and web/mobile APIs. Attacks involving authorized users that have already logged in — such as the fraud attacks that leverage user accounts on consumer-facing online services — are the most difficult to prevent and detect.

Depending on the actions and features available on the online service or application, fraudulent accounts can perform a variety of benign actions to stay under the radar. Many lie in wait for weeks, months, or even years before launching the attack. For example, financial fraudsters open multiple credit cards using synthetic identities and accumulate credit history over time, only to cash out their credit limit and disappear. In another example, we have observed fake accounts created on social network sites becoming active after three years to update their profile information with phishing URLs.

These attacks are challenging to detect even for machine learning models. One aspect of this is due to how models “learn” to identify fraudulent and malicious activities. In many popular machine learning applications, such as image recognition or natural language processing, the labels are well-defined and unambiguous; an image of a chicken shows a chicken, not a duck. By giving many examples of “chicken” to the model, we can have pretty good confidence that it will learn to recognize chickens.

However, there is no single definition of fraud or fraudulent behavior. Thus, when applying machine learning to fraud, the labels are noisier. 

Changing Attack Dynamics
A second challenge is due to the dynamic nature of attacks. Without constraints on dedicated attack infrastructure, fraudsters can adapt their operation in a much faster manner to exploit loopholes in the applications. Relying on historical examples of attacks means that the model is always operating based on outdated information, limiting its effectiveness against future attacks.

To deal with sophisticated, fast-evolving online attacks, a robust solution should incorporate multiple layers of defenses. Adopting a strong authentication system, reviewing all API accesses, and performing automated code testing help to establish a solid baseline defense. Organizations must also vet developers and third-party apps, be aware of access given on nonstandard interfaces, and understand the types of attacks happening on their service or application in order to make an educated choice about the type of solution to implement.

To further address abuse involving authorized users, adopt advanced behavior profiling for a holistic analysis of user activities. Online fraud attacks are often performed at scale, involving hundreds to thousands of fraudulent accounts. These “bot” accounts are likely to exhibit behaviors that are very different from those of normal users. Explore technology solutions that focus on data analytics and uncovering new insights rather than the detection of known, recurring attack patterns alone.
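
As a rough illustration of that kind of behavior profiling, here is a minimal sketch (not DataVisor’s actual method, and using made-up account data) that groups accounts by identical signup-window and activity fingerprints and flags unusually large clusters as likely coordinated bots:

```python
# Toy behaviour-profiling sketch: accounts created in a burst that perform
# near-identical action sequences form a suspiciously uniform cluster.
from collections import defaultdict

# Hypothetical per-account activity: (signup_hour, tuple of first actions)
accounts = {
    "u1": (3, ("follow", "like", "like", "post_link")),
    "u2": (3, ("follow", "like", "like", "post_link")),
    "u3": (3, ("follow", "like", "like", "post_link")),
    "u4": (17, ("browse", "like", "comment")),
    "u5": (9, ("post_photo", "comment", "like")),
}

def suspicious_clusters(accounts, min_size=3):
    """Group accounts by identical behaviour fingerprints and return groups
    large enough to suggest scripted, coordinated activity."""
    clusters = defaultdict(list)
    for user, fingerprint in accounts.items():
        clusters[fingerprint].append(user)
    return [users for users in clusters.values() if len(users) >= min_size]

print(suspicious_clusters(accounts))   # [['u1', 'u2', 'u3']]
```

A real system would of course use many more signals and fuzzier matching, but the principle is the same: it is the uniformity of behavior across many accounts, not any single account’s actions, that gives the bots away.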

It’s no longer enough to keep up with online fraud. In fact, if you are just keeping up, you’re already behind.


Ting-Fang Yen is a director of research at DataVisor, a company providing big data security analytics for online services and financial institutions. Her work focuses on network and information security data analysis, where she combines data science with security domain … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/online-fraud-now-a-major-application-layer-security-problem/a/d-id/1333633?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Is fake-news sharing driven by age, not politics?

Who shares fake news?

Following the 2016 US presidential campaign, studies have found that conservatives were more likely to share articles from fake-news domains than were moderates or liberals.

But in a new study of misinformation-sharing on social media during the 2016 campaign, published in Science Advances on Wednesday, researchers say that political leaning doesn’t correlate nearly so strongly with fake-news sharing as does age.

Specifically, it’s old people who are sharing the most fake news.

Researchers at New York and Princeton Universities found that users over 65 shared nearly seven times as many articles from fake news domains as did those in the youngest age group (18-29). The tendency to share fake news steadily increases with age: Facebook users over 65 shared about 2.3 times as many such articles as those in the second-oldest age group (45-65).

Age, in fact, is the best predictor of how Facebook users interact with fake news, above and beyond sex, race, income, education, or how many links they share, the researchers found.

Sharing fake news is actually quite rare

One silver lining: sharing fake news was “quite rare” during the 2016 campaign, the researchers found:

The vast majority of Facebook users in our data did not share any articles from fake news domains in 2016 at all.

This isn’t because survey respondents didn’t share links in general; it’s just that they overwhelmingly chose not to share fake news specifically. Of the respondents who opened up their Facebook profile data to the researchers, 3.4% shared 10 or fewer links of any kind during the period of data collection. Far more (26.1%) shared 10 to 100 links, and even more (61.3%) shared 100 to 1,000 links.

From the report:

Sharing of stories from fake news domains is a much rarer event than sharing links overall.

Across all ages, only 8.5% of study participants shared at least one link from a fake news site.

Their findings did echo those of earlier studies (such as this one) in that conservative respondents were more likely to share articles from fake news-spreading domains.

They found that of those users who identified themselves as being Republicans, 18.1% shared fake news, compared with 3.5% of Democrats. The researchers attributed the finding largely to studies that have shown that fake news overwhelmingly served to promote Trump’s candidacy during the 2016 election.

But again, regardless of political leanings, such sharing increases with age, the researchers found.

Why older people?

The researchers suggested that there are a few factors that could help to explain why older people are more likely to share fake news. For one thing, it could be that those who are over the age of 60 don’t have the level of digital media literacy that younger people do. It’s known as the divide between “digital natives” who grew up with technology and the older “digital immigrants” who’ve had to adopt it. Could it be that digital natives have a better ability to recognize and sidestep dubious content than do older users?

Another theory puts it down to cognitive deterioration with age. As memory goes, so too goes the ability to resist “illusions of truth,” according to such theories. Memory decline is just one of the factors cited by the FBI in a page devoted to fraud against seniors. From that page:

When an elderly victim does report the crime, they often make poor witnesses. Con artists know the effects of age on memory, and they are counting on elderly victims not being able to supply enough detailed information to investigators.

Methodology

The researchers noted that they didn’t rely on users’ self-reporting of their Facebook actions. That’s because self-reported measures of exposure to political media have been shown to be biased, or, in the words of one study, “plagued with error and questions about validity.”

Rather, the New York and Princeton Universities researchers used what they said was a novel dataset combining survey responses and “digital trace data” that “overcomes well-known biases in sample selection and self-reports of online behavior.”

In plain English, that means that starting in November 2016, they asked respondents to share information from their Facebook profiles via a Facebook web application that enabled respondents to select what type of information they were willing to share: fields from their public profile, including religious and political views; their own timeline posts, including external links; and what pages they followed.

Out of a panel of 3,500 people, about 49% of the study participants who used Facebook agreed to share their profile data.

Researchers could then check links posted to participants’ timelines against a list of web domains known to have historically shared fake news, compiled by BuzzFeed News reporter Craig Silverman. The researchers also cross-referenced those links against other lists of fake news stories and domains to see whether the results would be consistent.
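
In code terms, that cross-referencing step might look something like the sketch below, which pulls the host out of each shared link and tests it against a list of flagged domains (the domains here are placeholders, not the actual BuzzFeed News list):

```python
# Rough sketch of checking shared links against a fake-news domain list.
from urllib.parse import urlparse

FAKE_NEWS_DOMAINS = {"example-fake-news.com", "totally-real-headlines.net"}  # placeholders

def shared_fake_news(timeline_links):
    """Return the links whose host matches a known fake-news domain."""
    flagged = []
    for url in timeline_links:
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):   # "www.example-fake-news.com" should still match
            host = host[4:]
        if host in FAKE_NEWS_DOMAINS:
            flagged.append(url)
    return flagged

links = [
    "https://www.example-fake-news.com/shocking-story",
    "https://www.reuters.com/some-article",
]
print(shared_fake_news(links))   # ['https://www.example-fake-news.com/shocking-story']
```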

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KymF2cVqzvw/

Facebook to start fact-checking fake news in the UK

Facebook has hired an independent UK fact-checking organization, Full Fact, to help it rate what users report as fake news.

Since 2016, Facebook has been working worldwide with fact checkers on the issue of misinformation in people’s news feeds. This recent agreement with Full Fact marks the launch of the battle in the UK.

On Friday, Full Fact said in a blog post that, starting this month, it’s going to be checking the veracity of public pictures, videos or stories flagged as potentially bogus by Facebook users.

The charity, which was founded in 2010, will grade content as true, false or a mixture of both. The full rating system offers nine options, including utterly false, factually inaccurate headline, and satire or prank news.

Full Fact is only going to check content presented as fact-based reporting. It’s not going to check satire or opinion, which will be exempt from ratings and labeled as such.

None of this will stop users from sharing or reading content, regardless of whether it’s been rated as inaccurate. That’s in keeping with Facebook’s current approach to the difficult task of balancing its fight against misinformation on one hand with the principles of free speech on the other.

As it said in July 2018, Facebook won’t remove fake news, choosing instead to demote it. Facebook said at the time that demotion translates into an 80% loss of future views and that such punishment extends to Pages and domains that repeatedly share bogus news.

Facebook’s recent history with fake news

In April 2018, Facebook started putting some context around the sources of news stories. That includes all news stories: the sources with good reputations, the junk factories, and the junk-churning bot armies making money from it.

Prior to that, in March 2017, Facebook started slapping “disputed” flags on what its panel of fact-checkers deemed fishy news.

As it happened, these flags just made things worse. The flags did nothing to stop the spread of fake news, instead only causing traffic to some disputed stories to skyrocket as a backlash to what some groups saw as an attempt to bury “the truth”.

In keeping with Facebook’s refusal to remove content marked as false or inaccurate, users will be warned when they’re about to share what’s been deemed to be false. They won’t be prevented from doing so, however. But given that Facebook’s algorithms demote false content, fewer people will see it in the first place.

Sarah Brown, training and news literacy manager at Facebook, said that bringing Full Fact in on fact-checking will help push false news ever further down so that fewer people will see it:

People don’t want to see false news on Facebook, and nor do we. We’re delighted to be working with an organization as reputable and respected as Full Fact to tackle this issue.

Full Fact will, at least initially, be the only UK-based fact checker. Given the enormous volume of Facebook traffic, the charity isn’t promising it can catch everything. It says it’s going to prioritize “misinformation that could damage people’s health or safety, or undermine democratic processes”. That includes…

…everything from dangerous cancer ‘cures’ to false stories spreading after terrorist attacks or fake content about voting processes ahead of elections.

“This isn’t a magic pill,” Full Fact said, describing grindingly slow grunt work in the face of a fake news tsunami:

Factchecking is slow, careful, pretty unglamorous work – and realistically we know we can’t possibly review all the potentially false claims that appear on Facebook every day.

“Facebook will have no control”

Facebook is footing the bill for this: a change from the early days of its fact-checking foray, when it was apparently unwilling to pay for the service of non-partisan fact-checking organizations or volunteer users.

Full Fact says that it plans to publish full funding details here.

Whatever Facebook forks over won’t buy it the rights to tinker with the product, Full Fact said:

Organizations we work with – including Facebook – never contribute to our editorial policy or influence who and what we fact-check. We have rigorous safeguards in place at every level to ensure our neutrality and independence, including fundraising safeguards.

And while Full Fact can’t promise to scrub Facebook entirely clean of gunk, this is at least a step in the right direction, according to Full Fact Director Will Moy. The Guardian quoted Moy:

Fact-checking can take hours, days or weeks, so nobody has time to properly check everything they see online. But it’s important somebody’s doing it because online misinformation, at its worst, can seriously damage people’s safety or health.

There’s no magic pill to instantly cure the problem, but this is a step in the right direction. It will let us give Facebook users the information they need to scrutinize false or misleading stories themselves and hopefully limit their spread – without stopping them sharing anything they want to.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pp8i-G4xHPI/

Blockchain burglar returns some of $1m crypto-swag

It isn’t often that the villains show their soft side, but a blockchain burglar apparently did just that last week. An unidentified thief who stole over $1 million from the Ethereum Classic blockchain has given some of it back.

The thief exploited a weakness, known as a “51% attack”, that exists in Ethereum Classic along with several other cryptocurrencies and enables attackers to rewrite the blockchain and spend cryptocurrency twice. They used the technique to hit several cryptocurrency exchanges with fraudulent transactions.

Then, less than a week later, they returned some of the cash, said affected exchange Gate.io in a statement:

On Jan.10, we found that the recent ETC 51% attacker returned 100k USD value of ETC back to Gate.io.

Cryptocurrencies like Ethereum Classic are based on a proof-of-work algorithm, in which many different computers compete to solve a mathematical problem. The computer that wins the competition gets to seal the last few minutes’ transactions into a block (a little like a page in an accounting ledger).

If the computer that wins the competition tries to falsify those transactions, it will normally be found out because other computers that are also checking the transactions will report the discrepancy.

However, if one person gains access to more than half of the computing power across the whole blockchain, they can falsify transactions and get all of their computers to agree that the fake transactions are real. Because more than half of the blockchain agrees on the transactions, they are written into the blockchain as real.

This gives them effective control of the blockchain, enabling them to rewrite transactions as they see fit. They could pay someone else in cryptocurrency, receive the goods or services, and then rewrite the blockchain’s ledger to eradicate the payment and get their money back.
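
A toy sketch of the arithmetic behind this (greatly simplified, with no real mining, made-up hash-power shares, and nodes that simply follow the longest chain) shows why a majority of hash power is decisive:

```python
# Toy illustration of why >50% of the hash power lets an attacker rewrite
# history: whoever mines faster eventually publishes the longer chain, and
# the network adopts it.
import random

def mine_blocks(hash_share: float, rounds: int) -> int:
    """Count blocks won with a given share of total hash power."""
    return sum(1 for _ in range(rounds) if random.random() < hash_share)

random.seed(1)
ROUNDS = 1000

honest_chain_length   = mine_blocks(0.45, ROUNDS)  # honest miners, 45% of power
attacker_chain_length = mine_blocks(0.55, ROUNDS)  # attacker's private fork, 55%

# The attacker's private fork omits (or reverses) the payment made to the
# exchange. Once the fork is longer, publishing it erases that spend.
if attacker_chain_length > honest_chain_length:
    print("Attacker's fork wins: the earlier payment is rewritten out of history")
else:
    print("Honest chain stays longest: the double spend fails")
```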

This is what happened early this month. Cryptocurrency exchange Coinbase detected anomalies in the Ethereum Classic blockchain as early as 5 January, suggesting several double spends.

The official Ethereum Classic Twitter account confirmed on 7 January that it was working with people in the community after finding that one private mining pool’s hash rate had hit over 50% of the entire blockchain’s capacity.

On 8 January, cryptocurrency exchange Gate.io confirmed that it had suffered a double-spending attack as a result of the 51% situation. Attackers made seven double-spending transactions against the exchange, stealing 40,000 Ethereum Classic tokens, it said, adding that it would swallow the loss on its users’ behalf.

Beijing-based cryptocurrency security team SlowMist subsequently released an analysis of the attack but was still none the wiser about the attacker’s identity.

Other exchanges were also hit.

Overall, around 219,000 tokens were stolen amounting to around $1.1m, CoinDesk said in its analysis.

Market data from CoinMarketCap showed the price of Ethereum Classic falling in reaction to the news.

Then, the thieves gave back some of the cash, but not all, and it’s unclear why they gave any back at all. According to Gate.io:

We were trying to contact the attacker but we haven’t got any reply until now.

We still don’t know the reason. If the attacker didn’t run it for profit, he might be a white [hat – sic] hacker who wanted to remind people the risks in blockchain consensus and hashing power security

The return of some Ethereum Classic tokens is a positive step, but Gate.io said that Ethereum Classic users should still be wary:

Based on our analysis, the hashing power of ETC network is still not strong enough and it’s still possible to rent enough hashing power to launch another 51% attack. Gate.io has raised the ETC confirmation number to 4000 and launched a strict 51% detect for enhanced protection. We also suggest other ETC exchanges to take actions to protect the trader from blockchain rollback/reorg.

Ethereum Classic isn’t the only cryptocurrency to be stymied by a 51% attack.

Monacoin, Bitcoin Gold, Zencash and Litecoin Cash have all been hit with similar attacks in the past according to cryptocurrency-watching site CoinDesk, which suggests that they are becoming more of a problem.

CoinDesk cites research released by NYU computer science academic Joseph Bonneau in 2017 that estimated how much it would cost to launch 51% attacks on top blockchains simply by renting hashing power. It was all too feasible, he suggested.

Ethereum Classic is a separate cryptocurrency to Ethereum, which was not affected by the attack.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MzbjQEqDHw4/