STE WILLIAMS

4 Practical Questions to Ask Before Investing in AI

A pragmatic, risk-based approach can help CISOs plan for an efficient, effective, and economically sound implementation of AI for cybersecurity.

Artificial intelligence (AI) could contribute up to $15.7 trillion to the global economy by 2030, according to PwC. That’s the good news. Meanwhile, Forrester has warned that cybercriminals can weaponize and exploit AI to attack businesses. And we’ve all seen the worrisome headlines about how AI is going to take over our jobs. Toss in references to machine learning, artificial neural networks (ANN), and multilayer ANN (aka deep learning), and it’s difficult to know what to think about AI and how CISOs can assess whether the emerging technology is right for their organizations.

Gartner offers some suggestions on how to fight the FUD, as do Gartner security analysts Dr. Anton Chuvakin and Augusto Barros, who help demystify AI in their blogs (not without a healthy note of sarcasm). In this piece, we will cover four practical questions a CISO should consider when investing in AI-based products and solutions for cybersecurity.

Question 1: Do you have a risk-based, coherent, and long-term cybersecurity strategy?
Investing in AI without a crystal-clear, well-established, and mature cybersecurity program is like pouring money down the drain. You may fix one single problem but create two new ones or, even worse, overlook more dangerous and urgent issues.

A holistic inventory of your digital assets (i.e., software, hardware, data, users, and licenses) is the indispensable first step of any cybersecurity strategy. In the era of cloud containers, the proliferation of IoT, outsourcing, and decentralization, it is challenging to maintain an up-to-date and comprehensive inventory. However, most of your efforts and concomitant cybersecurity spending will likely be in vain if you omit this crucial step.
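The shape of such an inventory matters less than keeping it current. As a minimal, hypothetical sketch (the fields and category names below are illustrative, not a standard schema), even a flat list of records with a "last verified" date lets you surface the stale entries that undermine everything built on top of them:

```python
from dataclasses import dataclass

# A minimal sketch of an asset-inventory record; the fields and
# categories below are illustrative, not a standard schema.
@dataclass
class Asset:
    asset_id: str
    category: str          # e.g. "software", "hardware", "data", "user", "license"
    owner: str             # accountable team or person
    location: str          # e.g. "on-prem", "aws-eu-west-1", "vendor-hosted"
    last_verified: str     # ISO date of the last inventory check

inventory = [
    Asset("crm-01", "software", "sales-it", "vendor-hosted", "2019-01-15"),
    Asset("db-eu-1", "data", "data-eng", "aws-eu-west-1", "2018-11-02"),
]

# Flag records not re-verified since a cutoff -- stale entries are the
# gaps an attacker (or an AI tool fed bad data) will exploit.
stale = [a.asset_id for a in inventory if a.last_verified < "2019-01-01"]
print(stale)  # ['db-eu-1']
```

Whatever the tooling, the principle is the same: an AI product can only reason about assets you know you have.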

Every company should maintain a long-term, risk-based cybersecurity strategy with measurable objectives and consistent intermediary milestones; mitigating isolated risks or rooting out individual threats will not bring much long-term success. Cybersecurity teams should have a well-defined scope of tasks and responsibilities paired with the authority and resources required to attain the goals. This does not mean you should pencil in implausibly picture-perfect goals, but rather that you should agree with your board on its risk appetite and ensure incremental implementation of the corporate cybersecurity strategy in accordance with it.

Question 2: Can a holistic AI benchmark prove ROI and other measurable benefits?
The primary rule of machine learning, a subset of AI, says to avoid using machine learning whenever possible. Joking aside, machine learning, capable of solving highly sophisticated problems that may have an indefinite number of inputs and thus outputs, is often prone to unreliability and unpredictability. It can also be quite expensive, with a return on investment years away – by which time the entire business model of a company could be obsolete.

For example, training datasets (discussed in the next question) may be costly and time-consuming to obtain, structure, and maintain. And, of course, the more nontrivial and intricate the task, the more burdensome and costly it is to build, train, and maintain an AI model free from false positives and false negatives. In addition, businesses may face a vicious cycle when AI-based technology visibly reduces costs but requires disproportionately high investment for maintenance, which often exceeds the saved costs.

Finally, AI may be unsuitable for some tasks and processes where a decision requires a traceable explanation – for example, to prevent discrimination or to comply with the law. Therefore, make sure you have a holistic estimate of whether implementation of AI will be economically practical in both short- and long-term scenarios.
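The economics can be sanity-checked with back-of-the-envelope arithmetic before any vendor conversation. The figures below are hypothetical placeholders, not real pricing, but the calculation illustrates both the normal payback case and the "vicious cycle" case described above, where maintenance consumes the savings:

```python
# A back-of-the-envelope break-even sketch for an AI security product.
# All figures are hypothetical placeholders, not vendor pricing.
def breakeven_months(initial_cost, monthly_maintenance, monthly_savings):
    """Months until cumulative savings exceed cumulative spend.

    Returns None when maintenance eats the savings (the 'vicious
    cycle' case) and the investment never pays back.
    """
    net_monthly = monthly_savings - monthly_maintenance
    if net_monthly <= 0:
        return None
    # Round up: payback completes within this many whole months.
    return -(-initial_cost // net_monthly)

print(breakeven_months(120_000, 5_000, 15_000))   # 12 -> pays back in a year
print(breakeven_months(120_000, 16_000, 15_000))  # None -> never pays back
```

If the break-even horizon exceeds the expected lifetime of the business model the product supports, the investment fails on Question 2 regardless of the technology's merits.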

Question 3. How much will it cost to maintain an up-to-date and effective AI product?
Cash is king in financial markets. In the business of AI, the royal regalia rightly belongs to the training datasets used to teach a machine-learning model to perform different tasks.

The source, reliability, and sufficient volume of the training datasets are the primary issues for most AI products; after all, AI systems are only as good as the data we put into them. Often, a security product may require a considerable training period on-premises, assuming, among other things, that you have a riskless network segment that will serve as an example of the normal state of affairs for training purposes. A generic model, trained outside of your company, may simply be inadaptable for your processes and IT architecture without some complementary training in your network. Thus, make sure that training and the related time commitment are settled prior to product acquisition.
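Why does on-premises training on a "riskless" segment matter? Because anomaly detection is only as good as the baseline of normal behavior it learns. Real products use far richer models, but this deliberately simplified sketch (the traffic numbers are invented) shows the principle: a baseline learned somewhere else is a baseline for someone else's network.

```python
import statistics

# Hypothetical sketch: learn a traffic baseline from a 'clean' network
# segment, then flag deviations. Real products use far richer models;
# the point is that the baseline must come from YOUR environment.
clean_baseline = [1020, 980, 1005, 995, 1010, 990]  # e.g. DNS queries/minute

mean = statistics.mean(clean_baseline)
std = statistics.stdev(clean_baseline)

def is_anomalous(observed, threshold=3.0):
    # Flag anything more than `threshold` standard deviations from
    # the baseline learned on-premises.
    return abs(observed - mean) / std > threshold

print(is_anomalous(1000))  # False: within the normal range
print(is_anomalous(4500))  # True: far outside the trained baseline
```

A generic model shipped with a product is, in effect, a `clean_baseline` gathered on someone else's network, which is why complementary on-site training time should be negotiated before purchase.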

For most purposes in cybersecurity, AI products require regular updates to stay in line with emerging threats and attack vectors, or simply with novelties in your corporate network. Therefore, inquire how frequently updates are required, how long they will take to run, and who will manage the process. This may spare you the bitter surprise of supplementary maintenance fees.

Question 4. Who will bear the legal and privacy risks?
Machine learning may be a huge privacy peril. GDPR financial penalties are just the tip of the iceberg; groups and individuals whose data is unlawfully stored or processed may have a cause of action against your company and claim damages. Additionally, many other applicable laws and regulations that may trigger penalties beyond GDPR's 4% revenue cap must be considered. Also keep in mind that most training datasets inevitably contain a considerable volume of PII, probably gathered without the necessary consent or another valid basis. Worse, even if the PII is lawfully collected and processed, a data subject's request to exercise one of the rights granted under the GDPR, such as the right of access or the right of erasure, can be unfeasible to fulfill and the PII itself non-extractable.
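One partial mitigation sometimes proposed is keeping a subject-to-record index at ingestion time, so an erasure request can at least identify which training records to drop (which in turn forces a retrain). The sketch below is hypothetical; the record layout and IDs are invented. Note that if no such index was kept when the dataset was assembled, the erasure request may be technically unfulfillable, which is exactly the risk described above.

```python
# Hedged sketch of one mitigation: keep a subject-to-record index so a
# GDPR erasure request can identify which training records to drop.
# Dropping them invalidates the trained model, forcing a retrain.
training_records = [
    {"subject_id": "u-101", "features": [0.2, 0.7]},
    {"subject_id": "u-102", "features": [0.9, 0.1]},
    {"subject_id": "u-101", "features": [0.4, 0.6]},
]

def erase_subject(records, subject_id):
    """Drop all records for a subject; the caller must retrain afterwards."""
    kept = [r for r in records if r["subject_id"] != subject_id]
    return kept, len(records) - len(kept)

training_records, removed = erase_subject(training_records, "u-101")
print(removed)                # 2 records removed -> model must be retrained
print(len(training_records))  # 1
```

The retraining cost this implies feeds directly back into the maintenance estimates of Question 3.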

Nearly 8,000 AI-related patents were filed in the United States between 2010 and 2018, 18% of which came from the cybersecurity industry. Hewlett Packard Enterprise warns about the legal and business risks related to unlicensed usage of patented AI technologies. So it might be a good idea to shift the legal risks of infringement to your vendors, for example, by adding an indemnification clause to your contract.

Few executives realize that their employers may be liable for multimillion-dollar damages for secondary infringement if a technology they use infringes an existing patent. The body of intellectual property law is complicated, and many issues are still unsettled in some jurisdictions, bringing an additional layer of uncertainty about the possible outcomes of litigation. Therefore, make sure you talk to corporate counsel or a licensed attorney to get legal advice on how to minimize your legal risks.

Last but not least, ascertain that your own data will not be transferred anywhere for “threat intelligence” or “training” purposes, whether legitimate or not.



Ilia Kolochenko is a Swiss application security expert and entrepreneur. Starting his career as a penetration tester, Ilia founded High-Tech Bridge to incarnate his application security ideas. Ilia invented the concept of hybrid security assessment for Web applications that … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/4-practical-questions-to-ask-before-investing-in-ai--/a/d-id/1333772?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Consumers Care About Security

New RSA Security survey shows a generation gap in concerns over cybersecurity and privacy.

Consumer concern about cybersecurity and privacy is very real but not evenly distributed, a new report shows: while passwords and financial information are worrying for everyone, concern about other information varies widely depending on the individual’s age, gender, and national origin.

The RSA Data Privacy Security Survey 2019 of more than 6,000 adults contains results that could be concerning for many modern organizations, including the revelation that less than half (48%) of consumers believe there are ethical ways companies can use their data, while 57% would blame the company above anyone else — even a hacker — in the event of a data incident.

Survey respondents, who came from across western Europe and the US, provided answers that are as contradictory as they are diverse. For example, even though it has been widely demonstrated that personalization is effective in bringing consumers to a site on a repeat basis, only 17% say they see tailored ads as an ethical use of their personal data.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/consumers-care-about-security---sometimes/d/d-id/1333805?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Tackles Gmail Spam with Tensorflow

Tensorflow, Google’s open-source machine learning framework, has been used to block 100 million spam messages.

Google reports Gmail is blocking 100 million extra spam emails per day following the implementation of Tensorflow, its open source, machine-learning framework, to supplement existing spam detection.

Machine learning isn’t new to Gmail: Google has long been using machine-learning models and rule-based filters to detect spam, and its current protections have reportedly prevented more than 99.9% of spam, phishing, and malware from landing in Gmail inboxes. Today’s attackers seek new ways to hit Gmail’s 1.5 billion users and 5 million business clients with advanced threats.

Considering the size of Gmail's user base, 100 million extra messages doesn't seem like much. However, since Gmail already blocks so much, the last remaining threats are the toughest to identify.

Enter TensorFlow, an open source software library that developers can use to build artificial intelligence (AI) tools. It was developed by researchers and engineers from the Google Brain team within Google's AI division in 2015, and is used by companies including Google, Intel, SAP, Airbnb, and Qualcomm.

“We’re now blocking spam categories that used to be very hard to detect,” said Neil Kumaran, product manager for counter-abuse technology, in a blog post on the news.

TensorFlow protections complement Google's machine-learning and rule-based protections to try to block the last 0.1% of spam emails from getting through. It supplements current detection by finding image-based messages, emails with hidden embedded content, and messages from newly created domains that may try to hide a low volume of spam emails within legitimate traffic.

Unlike rule-based spam filters, machine-learning models hunt for patterns in unwanted emails that people may not catch. Every email has thousands of defining signals, each of which can help determine whether it’s legitimate. TensorFlow helps weed through the chaos and spot spammy emails that seem real, as well as emails that have spam-like qualities but are authentic.
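A deliberately toy illustration of signal-based scoring may help here. The signals below echo the categories mentioned above (image-based bodies, hidden content, new domains), but the weights and the threshold are invented for the example; production systems learn thousands of signals rather than hand-coding four:

```python
# Toy illustration of signal-based spam scoring. The signals and
# weights are invented; real systems learn thousands of them.
SIGNAL_WEIGHTS = {
    "image_only_body": 2.5,
    "hidden_embedded_content": 3.0,
    "newly_created_domain": 1.5,
    "sender_previously_seen": -2.0,  # negative: evidence of legitimacy
}

def spam_score(signals):
    """Sum the weights of whichever signals the email exhibits."""
    return sum(SIGNAL_WEIGHTS[s] for s in signals)

email = ["image_only_body", "newly_created_domain"]
score = spam_score(email)
print(score, score > 3.0)  # 4.0 True -> flagged as spam at this threshold
```

What a framework like TensorFlow adds is learning those weights from labeled examples, and doing so per user, rather than relying on a fixed table like the one above.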

Kumaran says TensorFlow also helps with personalizing spam protections for each user. The same email could be considered spam to one person but important information to another.

Applying machine learning at scale can be complex and time-consuming. Google is aiming to simplify the process with TensorFlow, which also adds the flexibility to train and experiment with different models at the same time in order to choose the most effective, instead of doing so one at a time.

Still, Gmail security will continue to pose a major challenge for Google. A new report shows how attackers are abusing “dots don’t matter,” a longstanding Gmail security feature, to create fraudulent accounts on websites and use variations of the same email address.

Confidential Computing: Google Buckles Down on Asylo
Google reports it’s investing in confidential computing, which aims to secure applications and data in use, even from privileged access and cloud providers. In addition to today’s Gmail news, Google has published an update on Asylo, an open source framework it introduced in May 2018 to simplify the process of creating and using enclaves on Google Cloud and other platforms.

The adoption of confidential computing has been slow going due to dependence on specific hardware, complexity around deployment, and lack of development tools to create and run applications in these environments. Asylo makes it easier to build applications that run in trusted execution environments (TEEs) with different platforms – for example, Intel SGX.

Google anticipates that in the future Asylo will be integrated into developer pipelines, and that users will be able to launch Asylo apps directly from commercial marketplaces. However, confidential computing is still an emerging technology, and enclaves lack established design practices.

To accelerate its use, Google is starting a Confidential Computing Challenge, a contest in which developers can create new use cases. Applicants have until April 1 to submit essays describing a novel use case for the tech.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/google-tackles-gmail-spam-with-tensorflow/d/d-id/1333807?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Some Airline Flight Online Check-in Links Expose Passenger Data

Several airlines send unencrypted links to passengers for flight check-in that could be intercepted by attackers to view passenger and other data, researchers found.

Several major airlines are putting passenger data at risk by sending unencrypted links for performing online check-ins to their flights.

Opportunistic attackers can intercept the links to view and, in some cases, to change a passenger’s flight booking details and to print their boarding passes, according to security vendor Wandera.

Data at risk includes passenger names, boarding pass and flight details, passport and travel document data, email addresses, phone numbers, and other information.

Researchers from Wandera recently investigated e-ticketing systems in use by over 40 global airlines in the US, Europe, and Asia Pacific region. The company initiated the investigation after observing one airline sending passenger details belonging to a company customer in unencrypted fashion.

Wandera’s sleuthing showed multiple airlines are sending insecure links for passenger check-in. The links typically direct passengers to an airline site where they are logged in automatically to check in for their flight and to make changes to their booking if needed.

In a report Wednesday, Wandera listed eight airlines in total that it says are putting different types of passenger data at risk via unencrypted links. The list only includes airlines that Wandera says had an opportunity to respond after being notified about the vulnerability.

Among them are Southwest in the US; KLM, Transavia and Vueling in Europe; and Jetstar in Australia.

In an emailed statement, a Jetstar spokesman said the company has no evidence of customers’ booking details or data being misused by unauthorized parties via the booking link. “To ensure our customers’ information remains protected we have multiple layers of security in place and are continuously implementing further cyber safeguards for emails, itineraries and our systems,” the statement noted. “Sensitive customer information such as payment details [is] not accessible through a customer’s booking link.”

None of the other airlines mentioned above responded immediately to a Dark Reading request for comment. The request for comment was sent after office hours in the case of the European airline companies.

Wi-Fi Attack

The data at risk differs by airline, with some e-ticketing systems providing access to a lot more data than others. One airline’s check-in link (identified in Wandera’s report simply as Airline 8) for instance provides access only to the passenger’s last name and booking reference number. Links from other carriers provide access to full names, phone numbers, seat assignments, passport details, nationality, gender, date of birth, and full home address.

In order to intercept a vulnerable check-in link, an attacker would need to be on the same Wi-Fi network as the potential victim. Even so, Wandera’s vice president of product management, Michael Covington, believes the vulnerability is significant. “The threat is a real problem for travelers because of the amount of sensitive information that is inadequately protected from hackers,” he says.

An attacker who manages to intercept a link can impersonate the passenger at any time — before or after the actual check-in process begins — to make changes on the traveler’s account or to obtain a valid boarding pass, he says.

In addition to passenger details, an attacker with access to an unencrypted check-in link would in some cases potentially be able to view information on all the companions associated with a traveler on the same booking, including family and work colleagues. “This isn’t just about changing a passenger’s seating assignment, it’s about disrupting their entire booking,” Covington says.

Most exploits of this vulnerability will likely be opportunistic because it requires an attacker to be on the same network as the victim, he says. But targeted attacks cannot be ruled out: “Our research does show that most people have a fairly consistent pattern they follow each day,” he says. “Public Wi-Fi access points in cities, airports, and coffee shops make it fairly easy to listen in on the network sessions of a targeted individual.”

Covington says the response for the most part has been “minimal” from airlines Wandera has notified about the issue. Some, including Southwest and Jetstar, have asked for additional details and confirmed that fixes are in progress. Wandera has also notified the TSA and the European Aviation Safety Agency, but both have indicated that this issue is outside their jurisdiction, Covington says.

He theorizes the reason why several airlines are using unencrypted links is because they want to make online check-in easy. “The entire problem goes away if they simply made the e-mail/SMS links one-time use” or encrypt the links, he notes.
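The one-time-use fix Covington describes is straightforward to sketch. The sketch below is a hypothetical illustration (the secret, URL shape, and in-memory token store are all placeholders, not anything an airline actually runs): sign the link payload with an HMAC, give it an expiry, and record redeemed tokens so a replayed link is rejected.

```python
import hashlib
import hmac
import secrets
import time

# Hedged sketch of a signed, single-use check-in link. The SECRET,
# URL shape, and in-memory store are illustrative placeholders.
SECRET = b"server-side-secret"
used_tokens = set()  # in production: a shared store with expiry

def make_link(booking_ref):
    nonce = secrets.token_urlsafe(8)          # no '.' in this alphabet
    expires = int(time.time()) + 3600         # valid for one hour
    payload = f"{booking_ref}.{nonce}.{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://airline.example/checkin?t={payload}.{sig}"

def redeem(token):
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                          # tampered or forged
    if int(payload.rsplit(".", 1)[1]) < time.time():
        return False                          # expired
    if token in used_tokens:
        return False                          # replayed: single-use only
    used_tokens.add(token)
    return True

token = make_link("ABC123").split("t=")[1]
print(redeem(token))  # True on first use
print(redeem(token))  # False on replay -- an intercepted link is useless
```

With either fix — single use or TLS on the link — the window for a Wi-Fi eavesdropper closes.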


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/some-airline-flight-online-check-in-links-expose-passenger-data-/d/d-id/1333806?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

HelpSystems Buys Core Security Assets to Grow Infosec Portfolio

Acquisition will enable it to provide threat detection, pen testing, and other security tools to customers.

IT security firm HelpSystems has purchased assets of Core Security from SecureAuth in an effort to increase its cybersecurity offerings. Terms of the deal were not disclosed.

The technologies acquired include tools for identity governance and administration, penetration testing, threat detection, and vulnerability management. HelpSystems users will benefit from a broader set of security tools they can use to defend against internal and external threats, remain compliant with industry regulations, and accelerate operational efficiencies.

More than 900 businesses already use Core Security’s applications; now HelpSystems’ customers can consolidate their security tools with a single vendor.

In a statement, HelpSystems CEO Chris Heim noted how companies lack resources and tools they need to protect their assets around the clock. “We’re now in a better position to fully support their initiatives across any platform with a multi-pronged approach to cybersecurity that gives busy IT/security professionals peace of mind,” he said.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/helpsystems-buys-core-security-assets-to-grow-infosec-portfolio/d/d-id/1333809?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Digital signs left wide open with default password

Security researcher Drew Green has pried open an internet-connected digital signage system thanks to a default admin password on its web interface: an easily changeable password that let him into the interface, from where he stumbled onto a chain of vulnerabilities that could allow a malicious attacker to upload whatever unsavories they’d like to display on people’s signage screens.

On Friday, 90 days after Green says he disclosed the vulnerabilities to the digital signage system maker, he published the specifics.

He had pulled apart the signage system for a client during a full-scope penetration test, and this system happened to be on the network. He couldn’t find anything else to dig into, so Green sank his hooks into the signage system, named Carousel, which comes from Tightrope Media Systems (TRMS) and which his client was running on a TRMS-supplied device that Green says is “essentially an x86 Windows 10 PC.”

As Green understands it, his client had a television in the lobby that was hooked up to the system in order to display information about the company: for example, when interns graduated college; names and pictures of new hires; and awards the company had received. The systems can also play audio, videos, or images: a good way to give customers their first impression when they’re visiting your company.

Or, on the other hand, a good way to sear visitors’ eyeballs if a hacker figures out how to upload whatever unsavories they like.

Poking around online, Green came across a vulnerability (CVE-2018-14573) on his client’s version of the system that allowed him to read system files. He tried to read protected files, such as the SQL database, but found that he couldn’t. What he could do, though, was to email a backed-up file to himself.

It wasn’t the exact database he was after, though, just a secondary database… one that lacked user authentication details. So Green backed out and found another way to jimmy open the system: namely, an interface that allows users to upload “bulletins,” which are the items that get displayed on the digital signage system.

It accepted ZIP files, but it spat out what Green tried to feed it. He could, however, export one of the system’s existing ZIP files to take a peek at how it liked its files structured. Using that insight, he stuck in two malicious .ASPX files and tried to upload the ZIP file, but no dice: while he could upload the boobytrapped files, he couldn’t locate them in the system.

Until, that is, he found that when files are inserted into the ZIP archives, their path separator was getting flipped around: where you’d expect a standard backslash character (\), he saw that it had been changed to a forward slash (/).
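The quirk is easy to reproduce: ZIP entry names conventionally use forward slashes, so a Windows-style path has to be flipped for the extracting application to place the file where expected. This sketch uses placeholder names and harmless content, not the actual exploit payload, and performs in one line what Green did with a hex editor:

```python
import io
import zipfile

# Sketch of the separator quirk: ZIP entry names use forward slashes,
# so a Windows-style path must be flipped before the server-side
# extractor will place the file where expected. Names and contents
# here are illustrative, not the actual exploit payload.
windows_style = "bulletins\\payload.aspx"
zip_style = windows_style.replace("\\", "/")   # the hex-edit, in one line

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(zip_style, "<!-- placeholder content -->")

with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())  # ['bulletins/payload.aspx']
```

With the separator fixed, the archive matched the structure the Carousel importer expected.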

It can’t possibly be that simple

Green switched the character with a hex editor. His thought:

Surely this will not work.

Surely, it did.

That simple edit greased the wheels of his malicious files: into the Carousel system they went, and then onto the main bulletin listing, from whence they could be executed via a web shell.

Green discovered another vulnerability, CVE-2018-18931, that allowed him to jack up privileges on a user account to that of a local administrator. To exploit the bug, he’d need to restart the system, but basic accounts can’t do that. So instead, he sent a command to force a server reboot, and that did the trick.

After the system came back up, I ran a command to view the local users and administrators on the system and found that my account had been created and was now a local admin!

Green notified TRMS of the vulnerabilities in early November. The company responded on 13 November, telling him that it believed the bugs were fixed and asking for his client’s version number, in case the client was on an older, unpatched version.

However, TRMS didn’t ask for specifics about the bugs at the time. But on Tuesday, four days after Green published his findings, TRMS reached out to thank him for his work and for helping the company to secure the digital signage system.

On Monday, TRMS posted a knowledgebase article detailing the workarounds for mitigating the vulnerabilities that Green found: CVE-2018-14573, CVE-2018-18929, CVE-2018-18930, and CVE-2018-18931.

A patch will ship for all customers later this week, TRMS told Green.

How serious is a pwned sign?

Green used the Shodan search engine to get an idea of how many installations of the Carousel product are exposed on the public internet. The answer: a lot. Some belonged to municipalities or institutions of higher education, so one imagines that vulnerable Carousel systems might be in use in areas where they’re exposed to sizeable numbers of people.

Green didn’t attempt to access any of them. Doing so could have gotten him into moral and legal hot water. Thus, he said he can’t speak to the level of security or exposure other Carousel systems may possess.

But in general, when we’re speaking about pwned signage, I came across recent, related news, leading me to the conclusion that it can get…

Downright PewDiePie-esque!

For example, on Monday morning, somebody hacked an electronic road traffic sign in Missouri to flash two alternating messages: “I hate Donald Trump” and “I love PewDiePie.”

Well, that wasn’t very civic-minded. Not good for keeping drivers’ attention focused on the task at hand. Though, arguably, it’s not as distracting as the porn that a hacker broadcast on an Indonesian billboard a few years back.

Regardless of whether your signage is on the scale of a bulletin board or a PC monitor, and regardless of what distraction hackers choose to inflict on viewers – propaganda? Links to malware-laced sites? Bogus emergency alerts or driving instructions? – you just don’t want your digital signage to flash junk.

Thankfully, TRMS jumped on this when it realized the seriousness of the vulnerabilities. Users, please do keep an eye out for the patch this week, and act accordingly.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vcWPNxMPVgw/

Just two hacker groups are behind 60% of stolen cryptocurrency

We may not know the names of those who steal cryptocurrency from online exchanges, but we now know that most of the thefts are down to just two groups – and one of them isn’t even in it for the money alone.

A new report from blockchain investigation company Chainalysis reveals that just two criminal groups are responsible for around 60% of all cryptocurrency stolen from exchanges.

Cryptocurrency exchanges are prime targets for cybercriminals. People trading Bitcoin and other virtual currencies do so using exchanges, and many tend to leave their funds in their accounts on those exchanges rather than withdrawing them to a secure account under their control. This makes it more convenient for them to make trades quickly without having to keep redepositing funds.

Large amounts of these funds often reside in an exchange’s hot wallet, which is connected to the blockchain and therefore online. That makes exchanges prime targets for online attacks. Chainalysis, which uses forensic techniques to find connections between cryptocurrency addresses, analysed some of those thefts to find out where the funds ended up. It may not know who owns the addresses, but using its forensic techniques it can determine whether the addresses are owned by the same people.
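Chainalysis has not published its methods, but one widely known building block for this kind of analysis is the common-input-ownership heuristic: addresses spent together as inputs to one transaction are presumed to share an owner and are merged into a cluster, typically with a union-find structure. The transactions below are made up for illustration:

```python
# Minimal sketch of address clustering via the common-input-ownership
# heuristic: inputs spent together are merged into one cluster with
# union-find. The transactions below are made up for illustration.
parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path compression
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# Each transaction lists the input addresses it spends from.
transactions = [
    ["addr1", "addr2"],
    ["addr2", "addr3"],
    ["addr9"],
]
for inputs in transactions:
    for addr in inputs:
        find(addr)                     # register the address
    for addr in inputs[1:]:
        union(inputs[0], addr)

cluster = {a for a in parent if find(a) == find("addr1")}
print(sorted(cluster))  # ['addr1', 'addr2', 'addr3'] -- one likely owner
```

Run at blockchain scale, and combined with other heuristics, this is how thousands of theft-linked addresses can collapse into a handful of groups like Alpha and Beta.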

In its Crypto Crime Report, released last week, Chainalysis found that two groups, which it calls Alpha and Beta, were responsible for stealing around $1 billion in funds from exchanges.

Each group had different endgames, the company said. Alpha is quick to route its stolen funds through a large number of addresses – up to 15,000 in some cases – to cover its tracks. On average, the group sold three quarters of its ill-gotten gains via other exchanges within a month.

The Chainalysis report describes Alpha as “a giant, tightly controlled organization partly driven by non-monetary goals.” A spokesperson told Naked Security:

There’s one key indicator that Alpha wasn’t driven entirely by monetary goals: they had an extremely high average number of transfers, and for each transfer they had to pay a fee. And when that number of transfers is in the range of 15,000 for one hack, it adds up.

Alpha’s motive seemed to have been causing chaos and confusion, according to Chainalysis, whereas Beta was all about the money. The latter group would leave coins dormant for up to 18 months before selling them, using fewer transactions to cloak its activities.

Stolen cryptocurrencies flow to other exchanges, where criminals sell them for other currencies. With exchange hacks from Blackwallet through to CoinRail plaguing the cryptocurrency space, many investors must understandably be nervous about having anything to do with cryptotrading.

Chainalysis said:

One of the reasons hackers and bad actors use cryptocurrency for criminal activity is because it’s a relatively nascent technology in financial services with a reputation for anonymity.

The company hopes to work with exchanges to warn them about incoming funds from illicit addresses. It told us:

These insights, we hope, will help the industry work together to protect themselves against bad actors and ultimately build trust in cryptocurrency and help it become a more mainstream way of transferring value.

Exchange theft isn’t the only source of stolen coins. Ether, the token that operates on the smart contract-capable Ethereum blockchain, is a popular target for scammers. They tend to steal it through Ponzi schemes, fraudulent initial coin offerings (ICOs) and phishing scams. The value of stolen Ether is relatively low compared to the value of other stolen cryptocurrencies, but is still growing. It grew to $36m in 2018 from $17m in 2017, the report said.

Ethereum thefts are coming to resemble cryptocurrency exchange thefts in one key way: A small number of perpetrators seem to be responsible for some high-value heists. According to the Chainalysis report:

…the number of scams declined through 2018, although those that remained were bigger, more sophisticated, and vastly more lucrative.

Apparently in the world of cryptocurrency crime, the laws of power distribution are alive and well.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/X7JpZG50MkA/

Firefox 66 will silence autoplaying web audio

Quieter web browsing is finally within reach for users of Mozilla’s Firefox.

It’s been on the to-do list for a while, but a new blog by the company has confirmed that from Firefox 66 for desktop and Firefox for Android, due on 19 March, media autoplay of video or audio will be blocked on websites by default.

According to Mozilla’s developer blog, this means:

We only allow a site to play audio or video aloud via the HTMLMediaElement API once a web page has had user interaction to initiate the audio, such as the user clicking on a ‘play’ button.

Until the user does something to initiate a video or audio stream, the only thing that will be possible is muted autoplay.
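For site developers, the practical consequence is that HTMLMediaElement.play() returns a promise, and under a blocking policy like this it rejects with a NotAllowedError until the user has interacted with the page. A minimal sketch of how a page might detect blocking and fall back to muted playback (which remains permitted); the attemptAutoplay helper is our illustrative name, not a browser API:

```javascript
// Sketch: detect autoplay blocking and fall back to muted autoplay.
// HTMLMediaElement.play() returns a Promise that rejects with a
// NotAllowedError when the page has had no user interaction.
function attemptAutoplay(video) {
  return video.play().then(
    () => 'playing', // autoplay with sound was permitted
    (err) => {
      if (err.name !== 'NotAllowedError') throw err; // some other failure
      video.muted = true; // retry silently: muted autoplay stays allowed
      return video.play().then(() => 'muted');
    }
  );
}
```

A page could then show an unmute button when the helper resolves to 'muted', restoring sound only after a genuine user click.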

If you find it annoying when videos start of their own accord, this will come as welcome news. But what about use cases where autoplay is desirable?

Currently, it is possible to achieve autoplay blocking by toggling a setting from about:config (type that into your Firefox address bar), but that is a global setting and is either on or off.

Under the new regime, there are several options: allowing autoplay once on a particular website, whitelisting sites so that autoplay is always permitted there, or allowing or blocking autoplay globally for all websites.

Audio conundrum

When it comes to media autoplay blocking, version 66 seems to be the number to aim for – Google implemented a similar default setting with Chrome 66 in April last year.

Apple has had default blocking since June 2017, while Microsoft offered the ability for users to turn off autoplay last summer (i.e. it’s an option rather than a default).

One thing Firefox doesn’t yet block is audio that is enabled through the JavaScript Web Audio API used by many older games and web apps.

Stated Mozilla:

We expect to ship with autoplay Web Audio content blocking enabled by default sometime in 2019.

It’s an area where browser makers have to tread carefully, as Google found out to its cost when it enabled Web Audio API blocking by default in Chrome 66 and was assailed with complaints about broken software.
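When Web Audio blocking does arrive, the common mitigation is to resume a suspended AudioContext from the first user gesture: a blocked context sits in the 'suspended' state, and calling resume() inside a gesture handler starts it. A hedged sketch, where resumeOnGesture is our illustrative helper (not part of the Web Audio API) and ctx stands in for a browser AudioContext:

```javascript
// Sketch: resume a suspended AudioContext on the first user click.
// `ctx` is expected to expose `state` and `resume()` like an AudioContext;
// `target` is an event target such as `document`.
function resumeOnGesture(ctx, target) {
  const handler = () => {
    if (ctx.state === 'suspended') {
      ctx.resume(); // allowed here because we're inside a user gesture
    }
    target.removeEventListener('click', handler); // one-shot handler
  };
  target.addEventListener('click', handler);
}
```

Older games and web apps that start audio on page load, rather than behind a gesture like this, are exactly the software that broke when Chrome tried the same change.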

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pBuq92jE0KM/

Jack’d dating app is showing users’ intimate pics to strangers

Dating/hook-up app Jack’d is publicly sharing, without permission, photos that users think they’re sharing privately.

The Android version of the app has been downloaded 110,562 times from Google’s Play store, and it’s also available on iOS.

Jack’d is designed to help gay, bi and curious guys to connect, chat, share, and meet on a worldwide basis. That includes enabling them to swap private and public photos.

But as it turns out, what are supposed to be “private” photos… aren’t.

Unfortunately, as the Register reported on Tuesday, anyone with a web browser who knows where to look can access any Jack’d user’s photos, be they private or public – all without authentication or even the need to sign in to the app. Nor are there any limits in place: anyone can download the entire image database for whatever mischief they want to get into, be it blackmail or outing somebody in a country where homosexuality is illegal and/or gays are harassed.

The finding comes from researcher Oliver Hough, who told the Register that he reported the security bug to the Jack’d programming team three months ago. Whoever’s behind the app hasn’t yet supplied a fix for the security glitch, which the Register has confirmed.

Given the sensitive nature of the photos that are up for grabs to one and all, the publication chose to publish its report – without giving out many details – rather than leave users’ content in danger while waiting for the Jack’d team to respond.

The thin silver lining

On the just-about-plus side, there’s apparently no easy way to connect photos to specific individuals’ profiles. Hough said that it might be possible to make educated guesses, though, depending on how slick a given attacker is.

This isn’t Hough’s first discovery of touchy content being left out to bake in the sun. He was the researcher who discovered another big, wide-open, no-password-required database a few months ago: in November, he reported that he’d found that a popular massage-booking app called Urban had spilled the beans on 309,000 customer profiles, including comments from their masseurs or masseuses on how creepy their customers are.

Kill your Jack’d photos

If the reports are accurate, the safest thing for users at this point is to delete their photos until the issue is fixed.

Given how sensitive the information is that gets trusted to mobile dating apps, it might also be wise to abstain from sharing too much. All too often, the apps spill highly personal data.

Besides Jack’d, Grindr is an example: as of September, the premium gay dating app was still exposing the precise location of its more than 3.6 million active users, in addition to their body types, sexual preferences, relationship status, and HIV status, after five years of controversy over the app’s oversharing.

The oversharing of that data can put gay men at risk of being stalked, or of being arrested and imprisoned by repressive governments. As of September, anybody could still obtain the exact locations of millions of cruising men, in spite of what Grindr claimed last April.

Please warn Jack’d users

As of Tuesday night, Jack’d parent company Online Buddies hadn’t responded to the Register’s repeated requests, and mine, for an explanation of its public sharing of private content.

Readers, we always ask that you share articles you find useful. But in this case, there’s a particularly pressing need, given that the issue apparently isn’t being acknowledged or addressed at this point. If you know of any Jack’d users, please do warn them that they’re at risk of having their intimate photos intercepted.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aAeYZAYpfcc/

London’s Met police confess: We made just one successful collar in latest facial recog trial

London cops’ use of facial recognition tech last week resulted in only one person being charged, while another was handed a £90 on-the-spot fine after trying to avoid the cams.

The Metropolitan Police is in the midst of what it is calling a trial of automated facial recognition (AFR) technology, although it has been using the kit since 2016.

Last week it used the kit in Romford, Essex, parking a van outside the station between 10am and 6pm. The cameras scan the faces of passersby and check them against a watch list that is supposed to be freshly drawn up for each deployment.

Forces have argued the public expects them to use emerging tech to improve policing, but critics railed against the fact it is being tested in live environments without a legal framework and in the face of evidence it has a 98 per cent false positive rate.

And it seems unlikely that last week’s deployment has shifted that rate very much, with it leading to just three arrests, of which only one resulted in the person being charged.

That man, Scott Russell, was arrested on suspicion of breach of a non-molestation order and then charged and sentenced to 11 weeks’ imprisonment.

Meanwhile, the Met said a 15-year-old boy had been arrested on suspicion of robbery, but then “assessed as no longer wanted for the offence and released with no further action”. The Met didn’t comment when asked if this person should have been included on the watch-list in the first place.

A third man, aged 28, was arrested on suspicion of false imprisonment and kidnapping but was also released with no further action.

Despite the fact only one of the three was charged, the Met’s Ivan Balhatchet said in a statement that the “use of the equipment at Romford Town Centre resulted in several arrests for violent offences”.

A press release also set out five other arrests that took place during the deployment, but a Met spokesperson confirmed that these were not a direct result of AFR, but “proactive arrests as part of the wider operation”.

Three of these have been released “under investigation”, while the remaining two – arrested on suspicion of drug possession – were dealt with via a community resolution.

Perhaps more controversially, the Met also handed a man a £90 penalty during the deployment. Civil rights group Big Brother Watch tweeted from the scene that he had been approached by police after covering his face to avoid being caught on camera, after which he “got annoyed” and was handed the fine.

The Met’s account of the event was that the man “was seen acting suspiciously” and “became aggressive and made threats” to officers after they stopped him. “He was issued with a penalty notice for disorder as a result.”

Last week’s deployment was due to be the final one before a full analysis, but snow meant that the second day had to be rescheduled because footfall would be lower than is required by the tests.

The Met has yet to provide details of where and when this will be – info about the Romford trial was only released at 4pm the day before.

Meanwhile, rights group Liberty today announced that the Cardiff Administrative Court had granted it permission to proceed with its legal challenge against South Wales Police’s use of AFR. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/06/met_police_cop_to_just_one_successful_arrest_during_latest_facial_recog_trial/