
Trump Dismisses Russian Interference Indictments in Presser with Putin

Russian President Vladimir Putin ‘just said it’s not Russia,’ US President Trump said.

After a highly anticipated and controversial meeting with Russian President Vladimir Putin today in Helsinki, President Donald Trump doubled down on his doubts that Russia interfered with the 2016 US election.

In a joint news conference after their one-on-one meeting, Trump said that he believed Putin’s denial of Russia’s involvement. “So I have great confidence in my intelligence people, but I will tell you that President Putin was extremely strong and powerful in his denial today,” Trump said.

His statements come just days after Special Counsel Robert Mueller’s investigation led to the bombshell indictment of 12 Russian military intelligence officers for hacking into US systems in a widespread, orchestrated election-interference operation aimed at helping Trump’s candidacy over Democratic rival Hillary Clinton.

Said Trump, “I don’t see any reason” for Russia to interfere with the US presidential election.

But Trump’s own intel chief stood by the intelligence community’s findings that Russia meddled in the 2016 election. 

“The role of the Intelligence Community is to provide the best information and fact-based assessments possible for the President and policymakers. We have been clear in our assessments of Russian meddling in the 2016 election and their ongoing, pervasive efforts to undermine our democracy, and we will continue to provide unvarnished and objective intelligence in support of our national security,” said director of national intelligence Daniel R. Coats, in a press statement released after the Trump-Putin press event.


Article source: https://www.darkreading.com/threat-intelligence/trump-dismisses-russian-interference-indictments-in-presser-with-putin/d/d-id/1332306?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Less Than Half of Cyberattacks Detected via Antivirus: SANS

Companies are buying next-gen antivirus and fileless attack detection tools but few have the resources to use them, researchers report.

Businesses are investing in more advanced endpoint security tools but don’t have the means to properly implement and use them, according to a new report from the SANS Institute.

The SANS 2018 Survey on Endpoint Protection and Response polled 277 IT professionals on endpoint security concerns and practices. In this year’s survey, 42% of respondents reported endpoint exploits, down from 53% in 2017. However, the number of those who didn’t know they had been breached jumped from 10% in 2017 to 20% in 2018.

Traditional tools are no longer sufficient to detect cyberattacks, the data shows: Antivirus systems detected endpoint compromise only 47% of the time; other attacks were caught through automated SIEM alerts (32%) and endpoint detection and response platforms (26%).

Most endpoint attacks are intended to exploit users. More than 50% of respondents reported Web drive-by incidents, 53% pointed to social engineering and phishing attacks, and half cited ransomware. Credential theft was used in 40% of compromises reported, researchers state.

The majority (84%) of endpoint breaches involve more than one device, experts report. Desktops and laptops are still the top devices of concern, but attackers are also compromising server endpoints, cloud-based endpoints, SCADA, and other industrial IoT devices. Cloud-based endpoints are increasingly popular, going from just over 40% in 2017 to 60% in 2018.

Given the commonality and effectiveness of user-targeted attacks, it’s worth noting that detection technologies designed to look at user and system behavior, or provide context awareness, were less involved in detecting breaches. Only 23% of breaches were found with attack behavior-modeling and only 11% were detected with behavior analytics.

Businesses aren’t using these technologies as often because they lack the means, SANS reports. Many IT and security pros report investing in next-gen capabilities but not installing them. For example, half have acquired next-gen AV tools but 37% have not implemented them. Forty-nine percent have fileless attack detection tools but 38% haven’t implemented the tech.

When breaches do occur, it seems many businesses can trace them to the source. Nearly 80% of respondents report they can tie a user to endpoints and servers at least half the time (34% always, 45% at least half), which adds an identity when making decisions about user behavior.

Data collection makes a major difference in data breach remediation, but organizations don’t always have access to the data they need. Most (84%) respondents want more network access and user data, 74% want more network security data from firewall/IPS/unified threat management systems, and 69% want better network traffic analysis.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/endpoint/less-than-half-of-cyberattacks-detected-via-antivirus-sans/d/d-id/1332309?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Russian National Vulnerability Database Operation Raises Suspicions

Recorded Future says Russia’s Federal Service for Technical and Export Control has the ability to find and weaponize vulnerabilities under cover of technology inspections.

The official mission of the organization in charge of maintaining Russia’s national vulnerability database gives it legitimate cover for inspecting foreign technologies and products for security vulnerabilities that can later be weaponized.

That’s according to Recorded Future, which on Monday released a report summarizing its analysis of the vulnerability disclosure practices and mission of the Federal Service for Technical and Export Control of Russia (FSTEC), the military organization responsible for BDU, the nation’s official vulnerability database.

The analysis revealed that the FSTEC’s extensive list of responsibilities includes the authority to test and inspect proprietary products and services for issues that could pose a risk to state and critical infrastructure security. That mission is troubling, says Priscilla Moriuchi, director of strategic threat development at Recorded Future.

“The primary threat to Western companies is from the technology licensing process,” Moriuchi says. “During these inspections the Russian military could discover and operationalize vulnerabilities in proprietary products or services,” she says.

The threat from having to work with the FSTEC — and by extension the Russian military — is not to the companies directly or to their intellectual property. Rather, what is concerning is the derivative risk for computer users around the world.

“Russia has demonstrated during at least two incidents in the past year a willingness to exploit western technologies, companies, and accesses in an attempt to obtain the information or communications of their customers,” Moriuchi says.

The two incidents are the April targeting of network devices and the more recent attacks involving VPNFilter. “The [national vulnerability] database provides a legitimate cover under which the Russian government can demand reviews of foreign technologies and products,” she notes.

Recorded Future performed a similar analysis of China’s vulnerability disclosure practices last November. That report concluded that China’s Ministry of State Security likely influences security vulnerability disclosures in the country, especially in the case of high-value security flaws that could be used for surveillance and other offensive purposes.

Russia’s FSTEC publishes only about 10% of the vulnerabilities it knows about, and typically does so about 50 days after the data has been published in the U.S. and 83 days after it appears in China’s NVD, according to Recorded Future.

A majority of the vulnerabilities in the BDU are those that primarily present a threat to Russian state-owned information systems, automated systems for managing technical processes and production, and critical infrastructure facilities. The data is publicly accessible and is designed for use by a wide range of people, including security professionals, operators of critical infrastructure, and developers.

Unlike China’s Ministry of State Security, which has a penchant for delaying or hiding data on vulnerabilities that the state can exploit for surveillance and other offensive purposes, Russia’s FSTEC over-reports on vulnerabilities that have been exploited by Russian state-sponsored threat groups. “Our analysis reveals that the BDU actually publishes 61% of vulnerabilities utilized by Russian military intelligence groups and does not seek to hide these vulnerabilities,” the report states.

The number is noteworthy because it is significantly larger than the 10% of other vulnerabilities that the FSTEC normally discloses. One reason could be to ensure that owners and operators of government and critical infrastructure systems are properly informed of the threats so they can protect against them.

The FSTEC started publishing vulnerability data only in 2014, about 15 years after the US started the practice. Somewhat unsurprisingly, the BDU contains data on just about 11,000 vulnerabilities compared to the 107,901 in the U.S. NVD — though that could also be the result of the FSTEC’s habit of occasionally lumping multiple vulnerabilities under a single identifier. Among the vulnerabilities the organization published fastest were those related to browsers and industrial control systems.

Recorded Future’s analysis showed that the FSTEC reports relatively extensively on vulnerabilities in some technologies while under-reporting flaws in others. For instance, the FSTEC discloses a substantially greater proportion of flaws in Adobe, Linux, Microsoft, and Apple products than it does for content management systems and technologies from IBM and Huawei.

What is unclear, however, is why the FSTEC is even publishing the data, considering just how delayed, state-focused, and sparse it is, Recorded Future noted in its report. In fact, the vulnerability data in the BDU reveals more about Russia’s state information systems and the FSTEC’s mission itself than anything else, the vendor said.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/russian-national-vulnerability-database-operation-raises-suspicions/d/d-id/1332310?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ex-Apple engineer charged with stealing self-driving car secrets

A former Apple engineer who worked on driverless car technology was arrested on his way to start a new job in China with autonomous vehicle start-up Xiaopeng Motors – a Guangzhou-based company also known as XMotors – prosecutors charged in federal court on Monday.

A criminal complaint charged the former employee, Xiaolang Zhang, with stealing trade secrets and accused him of downloading a blueprint related to autonomous cars to a personal laptop before trying to board a last-minute flight.

Zhang was arrested on 7 July after he passed through a security checkpoint at the San Jose airport.

According to the complaint, he was hired at Apple on 7 December 2015 to work on its autonomous car project – R&D that Apple’s kept very hush-hush. His most recent work was on the compute team, designing and testing circuit boards to analyze sensor data.

That role gave him access to all sorts of juicy, and confidential, databases.

According to the complaint, information about the project “is a closely guarded secret that has never been publicly revealed.”

Apple has been cagey about its research, making general comments about its interest in developing self-driving technology but keeping mum about just what, exactly, the company’s working on. According to the complaint, information has even been kept away from most of its employees. Some 5,000 staff, out of more than 135,000, have been “disclosed” on the project, meaning that they’re working on it directly or know something about it. Fewer people, about 2,700 “core employees,” have access to the project’s databases.

From 1 to 28 April 2018, Zhang took paternity leave following the birth of a child. During his leave, he and his family traveled to China. When he got back, he met with his immediate supervisor, as the complaint tells it, and told him that he planned to resign and move back to China in order to be closer to his ailing mother. Zhang allegedly also told his supervisor that he planned to take a job with XMotors: a Chinese start-up in the driverless car space.

Well, that didn’t particularly reassure his supervisor. Zhang’s boss felt that Zhang was being evasive during the meeting and so he requested the presence of Apple security. By the time the meeting wound up, Zhang had handed over all his Apple-owned devices – two iPhones and a MacBook – and was then walked off the campus.

Apple security then immediately revoked Zhang’s remote network access, badge privileges and all other employee access. He was also reminded about Apple’s intellectual property (IP) policy.

This all went down on 30 April. On 1 May, Apple security asked staff overseeing the company’s juicy autonomous vehicle IP databases for a review of Zhang’s historical network user activity, while a security-focused attorney began to review Zhang’s history of building access and activities on the Apple campus. He also requested a forensic exam of Zhang’s devices.

The complaint alleges that in the days just prior to that meeting with his supervisor, Zhang’s network activity spiked – “exponentially.” He allegedly performed a slew of bulk searches and targeted downloading of “copious” data from one database: 581 database rows one day, 28 rows another day. That’s a lot, compared with the 610 rows of user activity Zhang allegedly generated during the entire previous month. On another database, Zhang allegedly got his hands on a whopping 3,390 database rows: more than double the 1,484 rows of activity he generated between July 2017 and March 2018.

In the two days prior to his announcement that he was off to China and XMotors, Zhang allegedly downloaded PDFs with confidential material such as blueprints for prototype cars and their requirements regarding power, low voltage, battery system, drivetrain suspension mounts, etc. CCTV footage from Apple campus security cameras allegedly showed the soon-to-be ex-engineer entering the driverless car software and hardware labs on 28 April – that would be during Zhang’s paternity leave – and then walking out with a computer keyboard, cables and a big box.

Apple security beckoned Zhang back in for another chat on 2 May. The complaint says that Zhang admitted that he was going after a job with XMotors while he was still working at Apple. Initially, he allegedly denied being on the campus during his paternity leave, but Apple security presented proof that he was. Zhang allegedly admitted he was there and had taken two circuit boards and a Linux server from the hardware lab. Zhang’s rationale, according to the complaint: he thought the hardware would come in handy in a new job he was considering within Apple – a transfer he never made.

He also allegedly admitted to being shown a proprietary chip by colleagues while he was on campus that day, as well as transferring data to his wife’s laptop. Apple’s forensics on that laptop are still ongoing, but the company describes 60% of the data that it’s turned up as being “highly problematic.”

The criminal charge is based on only one of the PDF files that forensics found: a 25-page document containing electrical schematics for one of the circuit boards. The court document contained a sample of the big, prominent, full-caps NOTICE OF PROPRIETARY PROPERTY at the top of the document’s table of contents.

The FBI had a chat with Zhang on 27 June. On 7 July, FBI agents found that Zhang had picked up a last-minute, round-trip, solo-traveler ticket to China.

According to Reuters, XMotors said on Wednesday that there was no indication that Zhang communicated sensitive information from Apple. XMotors also said that it was informed of the case late last month and that it’s been working with local authorities on the probe.

Apple issued this statement:

We’re working with authorities on this matter and will do everything possible to make sure this individual and any other individuals involved are held accountable for their actions.

Zhang’s lawyer, a federal public defender appointed by the court, hadn’t been reached as of Thursday. No plea has yet been entered.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/S32vki0aG7w/

USB Restricted Mode in iOS 11.4.1 now available to all iPhone users

The latest version of iOS is now available to all iOS users with eligible devices (iPhone 5s and up). This release not only brings bug fixes, but also includes at least one new feature that might be of interest to security-minded users.

The new feature is called “USB Restricted Mode,” and it lives quietly in the security settings of your iPhone (look for it under “Touch ID & Passcode”). Apple’s description of this new feature toggle:

If you don’t first unlock your password-protected iOS device – or you haven’t unlocked and connected it to a USB accessory within the past hour – your iOS device won’t communicate with the accessory or computer, and in some cases, it might not charge. You might also see an alert asking you to unlock your device to use accessories.

Upon updating to iOS 11.4.1, the default setting for this feature is to not allow USB accessories to work with the iPhone or iPad once the device has been locked for more than an hour.

To understand why this feature now exists, let’s review how USB accessories generally work with iPhones and iPads. When you plug a USB accessory into your iPhone or iPad, that item will not work unless the iDevice is unlocked first and the user answers a prompt on their iDevice to recognize the new USB device.

After completing this prompt successfully, that USB device will be able to work with the iDevice without issue in the future even when the phone is locked.

This is helpful for users (who no doubt wouldn’t want to go through this process every single time they plug in a device) and a neat little backdoor for hackers or anyone else who might want to access a locked iPhone with hacking tools like GrayKey.

Though we don’t know all the internal workings of hacking tools like GrayKey (the makers keep the details for law enforcement’s ears only), it’s purported to benefit from this USB-lock-bypassing behavior.

So by enabling a feature that requires a user to unlock the phone to use any USB accessories again, Apple seem to be making a new attempt to keep both hackers and government agencies out of their users’ iPhones, though it’s not clear if GrayKey would be deterred by this new feature. (You may remember that the heat was on Apple after the San Bernardino mass murder when the US government wanted access to the terrorists’ locked iPhones.)

Perhaps this is a better-than-nothing feature for the security minded; however, it’s already been shown to be rather easily circumvented with a USB accessory. Was this an oversight or is this feature working as intended? No word from Apple just yet, but keep an eye on the next iOS update to see if there’s a fix for this.

On the bug front, one of the flaws fixed in this update may be a curious after-effect of Chinese government censorship. In some versions of iPhones with specific region settings in place, just typing the word “Taiwan” or using the Taiwan flag emoji would cause the phone to completely crash. This flaw was discovered and written up by security researcher Patrick Wardle, who examined the code and found that this phone-crashing behavior was not the intent of the censorship code (it should merely not render the censored emoji or text), and disclosed the flaw to Apple.

The bug was then assigned CVE-2018-4290 and addressed in this patch – not to remove the censorship, but to fix the flaw in the code that was causing it to crash phones in certain configurations. Writes Wardle:

Though [this bug’s] impact was limited to a denial of service (NULL-pointer dereference) it made for an interesting case study of analyzing iOS code …and if Apple hadn’t tried to appease the Chinese government in the first place, there would be no bug!

The full details of the security updates for iOS 11.4.1 can be found in Apple’s article about the security content of iOS 11.4.1. Several of the CVEs are for issues found in WebKit, used by Safari and iOS mail, including denial of service and arbitrary code execution flaws.

Apple released iOS 11.4.1 on 9 July, so if you have an iPhone 5s or iPad Air or newer, you’ll find the update via “Software Updates” in the Settings app.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jwiOWfw5pe0/

Twitter pops a lot of famous people’s follower bubbles

Twitter has wiped out accounts that have been locked due to misbehavior, obliterating an average of about four followers each for us earth-bound mortals and millions for its twinkliest stars.

Vijaya Gadde, head of the company’s legal team, said that the move was taken as part of Twitter’s “ongoing and global effort to build trust and encourage healthy conversation on Twitter.”

In other words, it’s another salvo in the fight against fake news – or in the fight against the type of accounts that most Twitter users hold their noses over when they enter a conversation.

The locked-account purge had its biggest impact on the top Twitter accounts, of course.

The more followers, the bigger the gouge: Gadde said that most accounts would lose four or fewer followers, while the more popular accounts would “experience a more significant drop”.

Musician Katy Perry tops Twitter’s list of 50 most-followed accounts. Perry and Lady Gaga, who’s at No. 6, each lost about 2.5m followers, according to the BBC. Ex-President Barack Obama, at No. 3, lost 2.1m followers.

But the biggest hit was to Twitter itself: according to the BBC, Twitter (No. 16 on the list) lost 7.7m followers.

How does an account get locked?

Gadde said that accounts have been locked over the years when they suddenly start acting differently. Twitter will first reach out to the account owners. Unless they validate the account and reset their passwords, Twitter locks them so nobody can log in.

For the most part, these accounts are created by real people, not by spammers. Twitter says it can tell because spam accounts – also referred to as bots – act spammy from the get-go. They’re “increasingly predictable” by Twitter’s system, she said, and can be automatically shut down.

Twitter’s purge affects only follower counts, not tweets, likes or retweets, Gadde said. That makes sense: locked accounts can’t tweet, like or retweet anything. Nor are they served ads. This particular, follower-focused move was taken because follower lists are “one of the most visible features on our service and often associated with account credibility,” she said.

The locked-account crackdown will also not affect users’ Monthly Active User (MAU) or Daily Active User (DAU) metrics. Those metrics don’t include locked accounts that haven’t reset their passwords in more than a month. However, some of the removed locked accounts do have the potential to affect publicly reported metrics, Gadde said.

This is just the latest step Twitter’s taking to improve itself and ensure that “everyone can have confidence in their followers,” Gadde said.

Fake accounts have seriously eroded that confidence. One recent example was the flawed comment process for net neutrality, in which 2m stolen identities were used to make fake comments …including two identities stolen from senators.

In December, New York Attorney General Eric Schneiderman called the process “deeply corrupted.” In January, he said that the state had launched an investigation into a company that allegedly sold millions of fake followers to social media users.

Here’s hoping the locked-account purge makes such a scenario much less likely to happen again in the future.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4Z9vRW-m8hg/

Facebook refuses to remove fake news, but will demote it

Forget about getting rid of fake news, Facebook said on Thursday. It might be raw sewage, but hey, even raw sewage has a right to flow, right?

In the name of free speech, Facebook said, it’s keeping all the bilge water, be it pumped out by the right or left… though the platform intends to push fakery down deeper into the holding tank by demoting it.

As Facebook said in its tweet, demotion translates into an 80% loss of future views, and the punishment extends to Pages and domains that repeatedly share bogus news.

This latest fake-news spasm comes on the heels of an event Facebook held in New York on Wednesday that blew up in its face. Journalists got to feed on shrimp cocktail, listen to a short presentation, and then engage in a question-and-answer session, all in the name of convincing the press that the social media network has finally reached some kind of beachhead in the war against disinformation.

Facebook’s effort fell apart when CNN reporter Oliver Darcy began to grill Facebook Head of News Feed John Hegeman about its decision to allow Alex Jones’ conspiracy news site InfoWars on its platform.

How, Darcy asked, can the company claim to be serious about tackling the problem of misinformation online while simultaneously allowing InfoWars to maintain a page with nearly one million followers on its platform?

Hegeman’s reply: the company…

…does not take down false news.

CNN quoted Hegeman’s rationalization:

I guess just for being false that doesn’t violate the community standards. [InfoWars hasn’t] violated something that would result in them being taken down.

I think part of the fundamental thing here is that we created Facebook to be a place where different people can have a voice. And different publishers have very different points of view.

InfoWars is the site where conspiracy theorist Alex Jones airs his notions: notions that include labelling as “liars” the grieving families of children gunned down in school shootings such as that at Sandy Hook Elementary School. In YouTube videos, Jones has over the years said that the Sandy Hook shooting has “inside job written all over it,” has called the shooting “synthetic, completely fake, with actors, in my view, manufactured,” has claimed “the whole thing was fake,” said that the massacre was “staged,” called it a “giant hoax,” and suggested that some victims’ parents lied about seeing their dead children.

Sandy Hook is only one of his many focuses: earlier this year, InfoWars smeared student survivors of the Parkland, Florida shooting with baseless attacks, portraying them in one video as actors, just as he’s classified Sandy Hook victims as child actors. Most recently, InfoWars has pushed an unfounded conspiracy theory about how Democrats, “infuriated” by President Trump “bringing America back,” planned to start a civil war on 4 July.

Facebook isn’t the only social media platform that publishes this type of gunk, declaring that it passes muster with regards to community standards. Google, just like Facebook, considers Jones’ YouTube rants to be kosher as far as community standards go.

That, in spite of multiple defamation lawsuits having recently been filed against Jones by Sandy Hook parents. Those parents claim that Jones’s “repeated lies and conspiratorial ravings” have led to death threats, among other trauma. Another lawsuit has been filed against Jones by a man whom InfoWars incorrectly identified as the Parkland school shooter.

A bit of recent history regarding Facebook and its wrangling with fake news: In April, Facebook started putting some context around the sources of news stories. That includes all news stories: the sources with good reputations, the junk factories, and the junk-churning bot-armies making money from it.

You might also recall that in March 2017, Facebook started slapping “disputed” flags on what its panel of fact-checkers deemed fishy news.

As it happened, these flags just made things worse. The flags did nothing to stop the spread of fake news, instead only causing traffic to some disputed stories to skyrocket as a backlash to what some groups saw as an attempt to bury “the truth”.

When Darcy pressed Facebook’s reps with more questions about the company’s tolerance of InfoWars at the press event on Wednesday, Sara Su, a Facebook product specialist for News Feed, said that Facebook is choosing to focus on tackling posts that can be proven beyond a doubt to be demonstrably false:

There’s a ton of stuff – conspiracy theories, misleading claims, cherry picking – that we know can be really problematic, and it bugs me, too. But we need to figure out a way to really define that in a clear way, and then figure out what our policy and our product positions are about that.

Facebook spokeswoman Lauren Svensson followed up with Darcy after the event, telling him that questions about InfoWars hit “on a very real tension” at Facebook, and that demoting fakery seems to strike the right kind of balance:

In other words, we allow people to post it as a form of expression, but we’re not going to show it at the top of News Feed.

That said, while sharing fake news doesn’t violate our Community Standards set of policies, we do have strategies in place to deal with actors who repeatedly share false news. If content from a Page or domain is repeatedly given a ‘false’ rating from our third-party fact-checkers …we remove their monetization and advertising privileges to cut off financial incentives, and dramatically reduce the distribution of all of their Page-level or domain-level content on Facebook.

It’s not as if the platforms aren’t responding at all to outrageous material such as that doled out by InfoWars.

At YouTube, Jones’s channel got its first strike on 23 February for a video that suggested that David Hogg and other student survivors of the Parkland mass shooting were crisis actors. The video, “David Hogg Can’t Remember His Lines In TV Interview,” was removed for violating YouTube’s policies on bullying and harassment.

The second strike was on a video that was also about the Parkland shooting. The consequence of getting two strikes within three months was a two-week suspension during which the account couldn’t post new content. A third strike within three months would mean InfoWars would get banned from YouTube. At the time, InfoWars had more than two million YouTube subscribers.

It is easy to make the argument that Facebook and Google are reluctant to poke the hornets’ nest when it comes to groups of users known to be volatile – that has certainly applied to InfoWars followers in the past – but at the end of the day, we have to accept that the social media companies are still just at the beginning of trying to figure out how to police their massive amounts of user-generated content.

With any set of community guidelines that have to play catch-up with current events – after all, “Sandy Hook parents” aren’t a named category when it comes to protected groups in hate speech guidelines – we’re going to have to suffer the consequences of Facebook, et al., scrambling to make it up as it goes along.

The will is undoubtedly there. But so are the ad dollars. Demoting content, Pages and domains is in service of the truth over reader engagement and marketing numbers. Is it enough to turn the tide?

Readers, your thoughts: does content demotion have any chance of making headway in this battle?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WChWNERn6Rg/

Who is the weakest link in software security?

Study: In the early years of software development, you would often design it, build it, and only then think about how to secure it.

This was arguably fine in the days of monolithic applications and closed networks, when good perimeter-based protection and effective identity and access management would get you a long way towards minimising the risk. In today’s highly connected, API-driven application environments, though, any given software component or service can be invoked and potentially abused in so many different ways.

Add to this the increasing pace of change through iterative “DevOps-style” delivery and ever-faster release cycles, and many understandably assert that security management and assurance should nowadays be an ongoing and embedded part of the development and delivery processes.

But what are the practicalities of this? Do developers – ie, those writing the code – need to take more responsibility for software security? If so, then what do they need in order to step up, without killing their productivity, destroying their morale, and risking them walking off to the competition? Perhaps security is best left to the specialists and operations teams after all?

We know you have a view on this discussion, so let us know what you think in our latest Reg reader study.

CLICK HERE TO HAVE YOUR SAY.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/16/who_is_the_weakest_link_in_software_security/

Time to Yank Cybercrime into the Light

Too many organizations are still operating blindfolded, research finds.

At a time when the public and governments are watching their every move, today’s organizations are up against an unprecedented wave of crime and fraud-related risks that affect their internal and external relationships, regulatory status, and reputation. Unfortunately, not enough companies are truly aware of the fraud threats they face.

According to PricewaterhouseCoopers’ 2018 Global Economic Crime and Fraud (GECF) Survey, a poll of some 7,200 respondents across 123 different countries, 49% say their companies had been victimized by fraud or economic crime, up from 36% in 2016. This uptick can be attributed to a greater global awareness of fraud, more survey responses, and better understanding of what constitutes “fraud.” But every company — no matter how vigilant — can have blind spots.

Some 44% of poll respondents indicate that they intend to increase spending in the next two years. Great — but where? These days, organizations are harnessing some seriously powerful technology and data analytics tools to battle the fraudsters. On top of these tech-based controls, many firms are also expanding whistleblower programs and taking care to keep leadership informed about real and potential breaches.

Despite the increased spending, many organizations are still trying to prevent fraud through a reactive, defensive approach. Only 54% of global organizations indicate that they have completed a general fraud or economic crime risk assessment in the past two years. Less than half had conducted a risk assessment to assess their vulnerability to cybercrime. Even worse, one in 10 performed zero risk assessments in the past two years.

According to PwC’s CEO Survey 2018, a majority (59%) of CEOs agree or strongly agree that organizations are feeling more pressure to hold leaders accountable for any misconduct perpetrated on their watch. That may be why some 71% of CEOs measure the levels of trust between their workers and their organization’s senior leadership.

The Perpetrators
As highlighted in PwC’s GECF report, some 68% of external fraudsters are agents, vendors, shared service providers, and customers. Troublingly, 52% of all frauds are committed by people inside the organization, and, astonishingly, in almost a quarter (24%) of reported internal frauds, senior management are the bad guys.

Cybercrime has grown up. Cybercriminals are estimated to rake in $1.5 trillion in annual cybercrime-related revenues, which means that detecting and warding off threats has necessarily become a core business issue.

No doubt much to their chagrin, 41% of executives surveyed say they spent at least twice as much on investigations and attack prevention as they lost to cybercrime itself. Because today’s bad-guy geeks are as smart as — and sometimes smarter than — the companies they attack, the business world is crying out for a new perspective on the diverse reality of cyber threats and related frauds.

Often, the first indication an organization gets that something major is happening is when they detect a cyber-enabled attack, such as phishing, malware, a distributed denial-of-service attack or a traditional brute-force attack. The increasing frequency, sophistication, and lethality of such assaults are prompting firms to seek ways to beat the bad guys at their own game, before they can do any damage. This is smart, but it also leads inevitably to a deeper look at fraud prevention.

Consequences Can Be Devastating
Over a third of all respondents have been targeted by cyberattacks. These attacks can severely disrupt business processes and lead to substantive losses: 24% of respondents who were attacked suffered asset misappropriation, and 21% were digitally extorted. It can be hard for companies to accurately gauge the bottom-line impact of cyberattacks, but 14% of survey respondents who said cybercrime was the most disruptive fraud said they lost over $1 million as a result. One percent lost over $100 million.

Overall, cybercrime was more than twice as likely as any other fraud to be named the most disruptive and serious economic crime expected to impact organizations in the next two years. Twenty-six percent of respondents said a cyberattack in the next two years would be the most disruptive to their business; 12% said they expected bribery and corruption to be most disruptive; while 11% said the same about asset misappropriation. In reality, cyberattacks have become so widespread that measuring their occurrences and effects is becoming less strategically productive than figuring out how the fraudsters did it.

Invest in People, Not Just Machines
To battle cyber threats in a meaningful way, organizations can harness a universe of sophisticated technologies to protect themselves against fraud. These tools — including machine learning, predictive analytics, and other artificial intelligence (AI) techniques — aim to monitor, analyze, learn, and predict human behavior.

Only 14% of organizations are using AI to protect against threats. The majority continue to depend on manual, old-school processes and tools. In turn, 34% of respondents say their organization’s use of technology to fight fraud and/or economic crime is creating too many false positives. To minimize that rate, it’s critically important to lean much more heavily on analytics and AI.

Beyond the technology, there is the human mind, which is far harder to influence. Research has found that few organizations have fully folded all the relevant risks and threats into their digital strategy. The first way to prevent rationalization is to zero in on the climate that rules employee behavior — the organizational culture. Companies should make full use of surveys, focus groups, and in-depth interviews to assess the strengths and weaknesses of that culture. Consistent training is also key. That way, potential weak cultural spots — ones that may lead a disgruntled employee to exact expensive revenge — can be identified.


Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across …

Article source: https://www.darkreading.com/attacks-breaches/time-to-yank-cybercrime-into-the-light/a/d-id/1332231?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GitHub to Pythonistas: Let us save you from vulnerable code

GitHub’s added Python to the list of programming languages it can auto-scan for known vulnerabilities.

In March, the social code-host added Ruby and JavaScript libraries to the dependency graph service it announced last year.


At the time, GitHub claimed those two languages alone yielded “over four million vulnerabilities in 500,000 repositories”, and said alerting the repositories’ owners resulted in a 30 per cent fix-rate within a week of detection.

Now, Python developers have the same lack of excuses for leaving flawed code unfixed. In this post, GitHub quality engineer Robert Schultheis explained that “a few recent vulnerabilities” are covered in the current version of the scanner.

It’s hard to work out which vulnerabilities, if they’re public, have spurred GitHub to action. Python generates only light traffic in the Mitre CVE (Common Vulnerabilities and Exposures) database: four entries so far this year, and one of those is disputed.

“Over the coming weeks, we will be adding more historical Python vulnerabilities to our database,” he wrote. “Going forward, we will continue to monitor the NVD feed and other sources, and will send alerts on any newly disclosed vulnerabilities in Python packages.”

The Python scanner is enabled by default on public repositories.

Owners of private repositories need to opt into security alerts (in security settings) or give the dependency graph access to the repo (in the “Insights” tab). ®
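For teams with many private repositories, the opt-in described above can also be scripted rather than clicked through. Below is a minimal, hypothetical sketch that enables vulnerability alerts via GitHub’s REST API using Python’s requests library; the owner, repository name, and token are illustrative placeholders, and it assumes the preview "vulnerability-alerts" endpoint is available to your account.

# Hypothetical sketch: enable GitHub security (vulnerability) alerts on a
# private repository via the REST API instead of the security settings UI.
# Assumes: a personal access token with access to the repo, and the preview
# "vulnerability-alerts" endpoint; owner/repo/token below are placeholders.
import requests

OWNER = "example-org"              # illustrative owner
REPO = "example-private-repo"      # illustrative repository
TOKEN = "<personal-access-token>"  # placeholder; never hard-code real tokens

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/vulnerability-alerts",
    headers={
        "Authorization": f"token {TOKEN}",
        # Preview media type required while this endpoint is in preview
        "Accept": "application/vnd.github.dorian-preview+json",
    },
)

# A 204 No Content response indicates alerts are now enabled for the repo
print(resp.status_code)

If the endpoint isn’t available to your account or token, the Settings and Insights routes described above remain the documented path.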


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/16/github_to_pythonistas_let_us_save_you_from_vulnerable_code/