STE WILLIAMS

Google Chrome Now Automatically Alerts Users on Compromised Passwords

A series of security enhancements seek to protect users from phishing and warn them when credentials have been compromised.

Passwords are a fact of security life for most users, and Google has just announced a series of new password protections in its Chrome browser on all platforms.

The first is an immediate alert if the credentials you enter on a website or app have been compromised. Based on the Password Checkup technology introduced in February, the feature is now part of the Safe Browsing enhancements announced on December 10.

Safe Browsing also includes a list of known phishing sites, which Google says is updated every 30 minutes. And if the user is logged into their Google account while browsing, the suite also makes use of predictive phishing algorithms to warn of sites that might have slipped through the protective net of the known phishing list.

For those using shared devices, the notification of the account in use is now larger and more obvious, with a goal of making accidental account use less likely.

The enhanced suite of protections, developed at the Google Safety Engineering Center (GSEC) based in Munich, Germany, is being rolled out to users over the next few weeks.

For more, read here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Security 101: What Is a Man-in-the-Middle Attack?”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/application-security/google-chrome-now-automatically-alerts-users-on-compromised-passwords/d/d-id/1336583?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

5 Tips for Keeping Your Security Team on Target

In nearly every security environment, competing priorities are a constant battleground. Here’s how to keep the focus on what’s important.

When I sit down to write an article, I encounter any number of distractions. Each distraction seems to want nothing more than to keep me from writing. But my distractibility is not limited to writing. I also encounter a near-constant stream of distractions when I sit down to do most any task.

What does this have to do with security? In security, as we go about the business of protecting our organizations, most of us face a near-constant stream of distractions. If we allow those distractions to drive our day-to-day work, we can very quickly lose focus, which in turn weakens our security postures by preventing us from tackling the items that are most relevant and important.

In this piece, I’d like to share five steps for keeping your security program on target and on track:

Step 1: Build a framework. I’ve noticed over the course of my career that the people who seem to be busy and overwhelmed all of the time are the same ones who are extremely disorganized. Although getting and staying organized requires an investment in time, in the long run, the investment will pay huge dividends. This is particularly the case with respect to evaluating what activities your security organization should engage in. Building a framework to formalize how the security team encounters new ideas and possibilities, evaluates them, and decides whether and how to approach them is essential to staying on target because it reduces the chance that the organization will be led astray by more attractive distractions.

Step 2: Develop good processes and procedures. It is far too easy to get sidetracked: A security analyst spends far too much time on dead-end leads during analysis and investigation, or a member of the governance, risk, and compliance team is overly focused on a specific control or policy whose impact is marginal. Perhaps the vulnerability team wastes time uncovering vulnerabilities without considering the necessary context that would allow them to prioritize and address the most important ones with IT. Whatever the example, having mature processes and procedures is a great way to avoid many of these potential time traps and ensures that the security program stays focused on what’s important.

Step 3: Maintain a strategic direction. To ensure that your security program is on course, it generally helps to have a well-defined vision and strategic direction. This statement may sound obvious, but far too many security teams ignore these essential guiding forces. If you take a step back and think about it, it seems foolish to expect your team members to choose the right path when they lack the fundamental criteria against which to evaluate each potential direction. The board, executives, and security leadership should all have a vision for how they want the organization to defend itself against the most concerning risks. That vision should be documented and communicated, along with the organization’s strategy to realize the vision. These documents should be readily accessible and in the foreground of each team member’s thinking. This ensures that members of the security team will have a strong foundation upon which to evaluate data points as they encounter them and make the appropriate decisions.

Step 4: Stick to goals and priorities. Vision and strategy are great strategic-level tools, but they don’t help us stay on track at the operational and tactical levels, where each functional area needs well-defined goals and priorities to chart and maintain a course of action. It’s important to take the time to set goals and priorities for each functional area in line with the security organization’s broader vision and strategy. Those goals and priorities should then be documented in detail and used as criteria for decision-making and prioritization of day-to-day activities. Before any decision is made, each data point should be evaluated against specific criteria. Will going in this particular direction help us achieve our goals? Is this activity a good use of time, and is it in line with our priorities? Does the endeavor help improve the security posture of the organization? If the answer to any of these questions is no, what’s being proposed is likely not worth the effort.

Step 5: Objectively evaluate impact. In nearly every security environment, competing priorities are a constant. With limited human resources, budget, and technical capabilities, each potential undertaking needs to be evaluated against its potential impact to the organization. If a task, assignment, or project seems like a great idea and a wise use of resources, ask yourself if that activity will directly impact the organization in a positive manner. Will the undertaking improve the security posture of the organization? If so, how can that impact be measured? At what cost will that impact come? If the answers to any of these questions seem uncertain, it’s likely that you’ll need further analysis before making a decision to engage.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Criminals Hide Fraud Behind the Green Lock Icon.”

Josh (Twitter: @ananalytical) is an experienced information security leader who works with enterprises to mature and improve their enterprise security programs.  Previously, Josh served as VP, CTO – Emerging Technologies at FireEye and as Chief Security Officer for …

Article source: https://www.darkreading.com/operations/5-tips-for-keeping-your-security-team-on-target-/a/d-id/1336566?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Next Security Silicon Valley: Coming to a City Near You?

The high cost of doing business in California’s San Francisco Bay Area is just one factor driving infosec companies – established players and startups alike – to pursue their fortunes elsewhere. Here’s where many are going.

(Image: archjeff/Adobe Stock)

A rich bounty of technology talent, nearby centers of cutting-edge technical research, perpetually pleasant weather, and plenty of money helped make California’s San Francisco Bay Area an IT and cybersecurity mecca. Yet one glance at current real estate prices in Silicon Valley may send one running toward rainy weather and lower rent.

The cost of doing business is just one factor driving cybersecurity companies to pursue their fortunes elsewhere. And as infosec businesses in other places in the world catch fire, setting down stakes in the Bay Area grows not only less appealing, but less necessary.

“There is a general trend of saying ‘big cities are too expensive,'” says Hank Thomas, CEO of American cybersecurity venture capital firm Strategic Cyber Ventures (SCV). That’s pushing startups and major players alike to set up shop in second-tier cities instead.

Thomas also says he’s beginning to see a new entrepreneurial spirit: More infosec professionals are confidently striking out on their own to launch startups, when only four or five years ago that was less common. Individuals with backgrounds in military or government defense, for example, or those with backgrounds in hardware are creating their own security products companies — more so than security services — and establishing headquarters in their own backyards.

“The key ingredients, in my opinion, for success of cybersecurity, which Silicon Valley has, are an abundant supply of venture capital money, entrepreneurs who are risk-takers, and a society which fosters this culture,” says Umesh Padval, partner of VC firm Thomvest Ventures. “Silicon Valley is very attractive due to its open culture to attract the best people from around the world, as the area has great universities.”

However, international students’ enrollment in US universities at both the undergraduate and graduate levels has been decreasing over the past three years, particularly from Saudi Arabia, South Korea, Iran, Mexico, the United Kingdom, and, at the graduate level, India. The changes are attributed to a mix of American universities’ increasing costs, improving programs in home countries, and changing American policies on immigration and international policy, according to the Institute of International Education.

So are there other places that do fit the description that made Silicon Valley appetizing?

Let’s take a look at where new cybersecurity startups are spinning up, established firms are laying down new offices, venture capital money is pouring in, universities are generating research and creating security degree programs, and cities are investing in initiatives to support the development of the security industry.

(Continued on next page)

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that, she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad …

Article source: https://www.darkreading.com/edge/theedge/the-next-security-silicon-valley-coming-to-a-city-near-you/b/d-id/1336586?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ad industry groups ask that the CCPA keep its mitts off their cookies

Five ad industry groups have asked California Attorney General Xavier Becerra to change stipulations about cookie-blocking in the state’s impending, far-reaching, almost-GDPR-but-not-quite privacy law, which goes into effect in the new year.

It’s for the sake of consumer choice, they said.

Initially, the language in their letter seemed to be requesting a ban on privacy tools such as extensions that block ads and tracking scripts, but the comments turned out to be asking for something a bit more nuanced than that: MediaPost reporter Wendy Davis later said that the groups clarified, saying that they only want the AG to prohibit browsers from blocking the industry’s opt-out cookies – AdChoices – as opposed to all cookies.

The five groups submitted 13 pages worth of comments to the Office of the Attorney General (OAG) on Friday – the last day that comments were still being accepted in the rule-making run-up to the enactment of the California Consumer Privacy Act (CCPA).

Last month, it looked like that law would become the de facto privacy law of the land – a scenario that has tech businesses fretting over the prospect of what compliance will cost – given the failure of Congress to enact a federal privacy law. Having said that, a last-ditch effort at the national level cropped up in the last days of November, when Senator Maria Cantwell introduced the Consumer Online Privacy Rights Act (COPRA).

And having said that, experts say that there’s zero chance of bipartisan agreement on that bill, or any bill at the federal level.

The five ad industry groups that want the CCPA tweaked are the American Association of Advertising Agencies (4As), the Internet Advertising Bureau (IAB), The Association of National Advertisers (ANA), the American Advertising Federation (AAF), and the Network Advertising Initiative (NAI). They said in their letter that their requested tweaks are meant to protect consumer choice.

They say that people like the ad-supported internet and either wouldn’t want to or couldn’t afford to access an internet where they have to pay subscription fees:

If a subscription-based model replaced the ad-based model, many consumers would likely not be able to afford access to, or would be reluctant to utilize, all of the information, products, and services they rely on today and that will become available in the future.

But what about all of us who use adblockers and/or set our browsers to block cookies?

The ad big boys say that what the CCPA needs is to enable opt-out mechanisms for businesses that don’t collect personally identifiable information (PII):

Many businesses in the online ecosystem may maintain personal information that does not identify a consumer on its own, for example, IP addresses, mobile advertising identifiers, cookie IDs, and other online identifiers.

For businesses that maintain this non-identifying information, webforms may not work to facilitate consumer requests to opt out, because the consumer’s submission of identifying information such as a name, email address, or postal address may not be easily matched to non-personally identifying information the business does not maintain.

The CCPA, as written, would actually force companies to re-identify consumers’ data by using techniques the industry avoids in order to “enhance consumer privacy,” they said. So how about this instead: how about we let businesses offer a way for consumers to opt out of the sale of their PII?

The OAG has indicated that it may, in fact, issue a button or logo that would enable consumers to do just that.

What the law now says:

If a business collects personal information from consumers online, the business shall treat user-enabled privacy controls, such as a browser plugin or privacy setting or other mechanism, that communicate or signal the consumer’s choice to opt-out of the sale of their personal information as a valid request [under the law].

Consumers’ rights under CCPA can be grouped into these general categories:

  1. Businesses must inform consumers of their intent to collect personal information.
  2. Consumers have the right to know what personal information a company has collected, where the data came from, how it will be used, and with whom it’s shared.
  3. Consumers have the right to prevent businesses from selling their personal information to third parties.
  4. Consumers can request that businesses remove their personal information.
  5. Businesses are prohibited from charging consumers different prices or refusing service because the consumer exercised their privacy rights.

The ad groups say that if you let consumers block cookies, you let them block cookies that actually let them set privacy preferences:

…intermediaries, such as browser and operating systems, can impede consumers’ ability to exercise choices via the internet that may block digital technologies (e.g. cookies, JavaScripts, and device identifiers) that consumers can rely on to communicate their opt out preferences.

…which is the exact opposite of what the CCPA is supposed to be about, they continued:

This result obstructs consumer control over data by inhibiting consumers’ ability to communicate preferences directly to particular businesses and express choices in the marketplace. The OAG should by regulation prohibit such intermediaries from interfering in this manner.
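The technical crux of the dispute is that the industry’s opt-out signal itself lives in a cookie. Here is a minimal Python sketch of that general mechanism, assuming a hypothetical cookie name "optout" and ad-network domain "adnetwork.example" – an illustration, not any particular network’s implementation.

from http.cookies import SimpleCookie

def build_optout_cookie():
    """Return a Set-Cookie value recording an opt-out preference (illustrative only)."""
    cookie = SimpleCookie()
    cookie["optout"] = "1"                              # the preference itself
    cookie["optout"]["domain"] = "adnetwork.example"    # hypothetical ad-network domain
    cookie["optout"]["path"] = "/"
    cookie["optout"]["max-age"] = 5 * 365 * 24 * 3600   # keep the choice for ~5 years
    return cookie["optout"].OutputString()

def is_opted_out(cookie_header):
    """Server-side check of the request's Cookie header before selling/sharing data."""
    jar = SimpleCookie()
    jar.load(cookie_header)
    return "optout" in jar and jar["optout"].value == "1"

# If the browser (or an extension) blocks or clears the cookie, the header arrives
# without "optout" and the recorded preference disappears with it -- the scenario
# the ad groups want the OAG to stop intermediaries from causing.
print(is_opted_out("optout=1; session=abc"))  # True  -> honour the opt-out
print(is_opted_out("session=abc"))            # False -> preference not visible

Whether to address that by restricting what browsers may block, or by moving the signal out of cookies entirely (for instance, the browser-level controls the law already recognises as valid opt-out requests), is exactly the question the letter puts to the OAG.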

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SmRFSPB5ipA/

FTC warns Christmas buyers that smart toys are a security risk

Thinking of giving a young person an internet-connected smart toy this Christmas?

If so, the US Federal Trade Commission (FTC) wants you to think very carefully about the hidden and serious security risks you might be handing over with it.

It would be easy to dismiss such advice as glaringly obvious, but the FTC puts its finger on three capabilities that often spell trouble. These are:

  • If the toy has a camera or microphone, what control do owners have over how this operates and where any data is stored?
  • Does the toy send emails or connect to social media?
  • What control do adults have over the device’s management and security?

The FTC advice also reminds buyers to pay attention to bundled services:

  • What sort of privacy and consent policy does the service provider have regarding the toy’s usage (especially if it’s for someone under the age of 13) and the data it generates? And is any data shared?
  • How easy is it to delete personal account data?

We’d add one of our own to this list:

  • Does the vendor have a history of patching known security problems?

There’s a mountain of evidence that many toys that have some or all of the above capabilities will fail on several counts.

‘Smart’ often isn’t

On past evidence, many products are hastily cobbled together at a software level, with the result that both device and online account security will be terrible. Very few will be patched for weaknesses.

A particularly bad example of the woes of this sector is the sad case of the SMA M2 kids’ smartwatch.

Thousands bought these watches for kids to use as safety trackers when out and about, until test organisation AV-Test discovered that hackers could exploit weaknesses to access accounts and find out kids’ names and current locations, as well as see pictures of what they look like.

This wasn’t simply a device security problem but a child safety disaster. But security problems like this usually only come to light later, after the product becomes mainstream.

This is just one example of a problem that has beset the whole toy industry: cheap toys built around kindergarten security designs. Because they’re made and sold cheaply, and the industry is poorly regulated, there is no incentive to improve security.

What to do

How do buyers know whether the smart toy they’re thinking of buying has poor security?

First, run a search on the model and manufacturer to see whether they’ve had security problems in the past.

Next, pay attention to the privacy policy because this, at least, is something that should make explicit any data collection involved with its use.

If this mentions sharing data with third parties, our advice is to walk away. Sharing or selling of children’s data might also contravene data protection regulations such as the US Children’s Online Privacy Protection Act (COPPA).

Keep children safe by spending some time researching the privacy implications of smart toys before buying.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/t82KjRHVgzo/

DoItForState domain name thief gets 14 years for pistol-whipping plot

Remember the social media influencer who decided that the best way to get the valuable domain doitforstate.com would be by hiring his cousin to “influence” the rightful owner’s head with a pistol-whipping?

He’s been sentenced to 14 years in federal prison on one count of conspiracy to interfere with commerce by force, threats, and violence, the US Attorney’s Office for the Northern District of Iowa announced on Monday.

The epic, violent domain transfer #FAIL involved not only pistol-whipping, but also tasering, and demanding at gunpoint the transfer of doitforstate.com – a site devoted to content concerned with the beer-guzzling and butt-ogling of college students.

The domain-demanding burglar, Sherman Hopkins, Jr., of Cedar Rapids, Iowa, had been hired by his cousin to pull off the domain-transfer-by-force.

Hopkins – who got shot multiple times in the chest during his botched domain grab – was sentenced in 2018 to the maximum of 20 years in prison. The cousin who hired him, 27-year-old Rossi “Polo” Lorathio Adams II, also from Cedar Rapids, was the one who was sentenced on Monday.

‘State Snaps’ and its lust for ‘Do It For State’

As prosecutors described during the trials of the cousins, Adams founded a social media company called “State Snaps” while he was a student at Iowa State University in 2015. Similar to Do It For State, State Snaps – and its Snapchat, Instagram and Twitter feeds – showed great gusto for boob-, butt-, beer-, setting-things-on-fire-, drug- and arrows-shot-into-the-groin-related content, as well as for depictions of drinking beer from women’s butts.

Adams had over a million followers on his social media sites at one time. In 2015, Iowa State University administrators tried to get Snapchat and Instagram to take this stuff down… which they did, but it just resurfaced with references to Iowa State stripped out.

In 2015, a Des Moines area television station aired a news segment in which Adams, who would only identify himself as “Polo,” said that it was all for fun. Don’t like it? Don’t watch it, he said. He’d continue to run State Snaps, he said, regardless of the dismay of Iowa State University administrators and the policies of the social media platforms.

The State Snaps domain, still up and running and replete with multiple derriere-as-beer-stein videos, is DoIt4State. Both Adams’s followers and those of DoItForState.com – a domain that hasn’t returned a site since 2015 – used the slogan “Do It For State!”

For two years, between 2015 and 2017, Adams tried to purchase the rival internet domain from the guy who owned it: Cedar Rapids resident Ethan Deyo. He’s the one who wound up getting pistol-whipped in the attempted domain robbery.

Deyo wouldn’t budge, even after Adams threatened one of his friends with gun emojis after the friend used the domain to promote concerts.

Prosecutors say that in June 2017, Adams enlisted his cousin, Hopkins, to break into Deyo’s home and force him at gunpoint to transfer doitforstate.com to Adams. Hopkins, a convicted felon, was living in a homeless shelter at the time.

On 21 June 2017, Adams drove Hopkins to Deyo’s house and gave Hopkins a demand note with instructions for transferring the domain to Adams’ GoDaddy account. When Hopkins broke into Deyo’s home in Cedar Rapids, Iowa, he was carrying a mobile phone, a stolen gun, and a taser. He was clad in burglar wear: a hat, pantyhose on his head, and dark sunglasses.

Deyo was upstairs when he heard Hopkins break in. He looked over the railing and saw that Hopkins had a gun. Hopkins shouted at Deyo, who ran into a bedroom upstairs and shut the door, leaning against it to keep Hopkins out. Hopkins went upstairs and kicked it open.

Hopkins then grabbed Deyo and forced him into the home office, where he made him turn on his computer and connect to the internet. Then, Hopkins pulled out the instructions on how to switch GoDaddy accounts for a domain name that Adams had given him. He held a gun to Deyo’s head and told him to follow the directions, taking his victim’s mobile phone and throwing it away so he couldn’t call for help.

Deyo said he needed a mailing address and phone number to make the transfer go through. Hopkins responded by pistol-whipping him in the head. He also tased him. In the struggle, Deyo was shot in the leg, but he managed to get the gun and shoot Hopkins in the chest.

It wasn’t clear at the time just who, exactly, Hopkins was trying to transfer the domain to, but Adams’s conviction – and history of trying to get the domain from Deyo – clears that all up. Hopkins was, of course, trying to get Deyo to transfer the domain from his own GoDaddy account to one that belonged to Adams.

Besides his sentence of 168 months of prison time, Adams was also ordered to pay nearly $9,000 in restitution, prosecution costs that amounted to $3,957.45, and $22,000 in attorney fees. The DOJ said that Adams had court-appointed counsel during trial, but he didn’t need it: the Court found that he was earning “significant” money while the case was pending. There’s no parole in the federal prison system, so he’ll have the full 14 years to mull his violent internet business strategies.

He’ll hopefully be spending some of that time – and the three years of supervised release that will follow – thanking his lucky stars that neither his cousin nor his target died during his ill-conceived domain grab.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_zSeajgNmp8/

Windows 10 Mobile receives its last security patches

If you’re one of the tiny contingent still using Windows 10 Mobile, 10 December 2019 is probably a day you’ve been dreading for nearly a year.

As announced by Microsoft in January 2019, it’s the end-of-life date for version 1709 of the OS, which means that November’s Build 15254.597 (KB4522811) was its last ever software update and therefore its last set of security patches.

After this date, users are on their own, warming themselves in the fading heat of a dying star which began life with some fanfare what seems like a long time ago but was in fact only 2015.

It’s a death that’s been well-rehearsed by Microsoft – Windows 10 Mobile version 1703 users reached this end-of-life moment earlier this year, on 11 June.

From what we can tell, no new Windows 10 Mobile devices were released after early 2016, which means affected devices running version 1709 will be among the following models:

  • Microsoft Lumia 550
  • Microsoft Lumia 650
  • Microsoft Lumia 950/950 XL
  • HP Elite x3 (Verizon, Telstra)
  • Wileyfox Pro
  • Alcatel IDOL 4S
  • Alcatel IDOL 4S Pro
  • Alcatel OneTouch Fierce XL
  • Softbank 503LV
  • VAIO Phone Biz
  • MouseComputer MADOSMA Q601
  • Trinity NuAns Neo

Bad news too for anyone still running the unsupported (as of 11 July 2017) Windows Phone 8.1, which sees the end of its app store support on 16 December 2019. No feature updates, no security fixes and now no software of any kind.

Security fixes

Build 15254.597 fixes some Intel chip issues plus a small pile of other flaws Microsoft doesn’t identify in detail, some of which were included in previous updates:

  • Intel Processor Machine Check Error vulnerability (CVE-2018-12207).
  • Protections against the Intel Transactional Synchronization Extensions (TSX) Transaction Asynchronous Abort vulnerability (CVE-2019-11135).
  • Security fixes for Microsoft Scripting Engine, Internet Explorer, Windows App Platform and Frameworks, Microsoft Graphics Component, Windows Input and Composition, Microsoft Edge, Windows Fundamentals, Windows Cryptography, Windows Virtualization, Windows Linux, Windows Kernel, Windows Datacenter Networking, Windows Peripherals, and the Microsoft JET Database Engine.

Safe to say, if you run this OS, you’ll want the update, which should happen automatically.

Ironically, not many Microsoft employees will download this update because it seems that not many people inside Microsoft use Windows 10 Mobile. That includes figurehead Bill Gates himself, who in 2017 admitted he used an unspecified Android smartphone.

Phrases like ‘end of an era’ are easy to throw around but this does feel like one. Microsoft’s dream of a Windows for mobile devices is finally past tense.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/w-H0FgJ8FqU/

Beware of bad Santas this Xmas: Piles of insecure smart toys fill retailers’ shelves

It seems to come around quicker every year – the failure of so-called smart toys to meet the most basic of security requirements. Which? has discovered a bunch of sack fillers that dirtbags can use to chat to your kids this Christmas.

Back in 2017, the consumer group found toys with security problems relating to network connections, apps or other interactive features. The results of its latest round of testing show manufacturers are struggling to improve standards.

Working with security researchers NCC Group, Which? found a karaoke machine that could transmit audio from anyone passing within Bluetooth range because of its unsecured connection. It found walkie-talkies from VTech which anyone with their own set of similar equipment could connect to over a 200-metre range. It also found a Mattel-backed games portal which appeared to be unmoderated, allowing users to upload their own games with content inappropriate for children.


Ken Munro, security researcher with consultancy Pen Test Partners, said that although there was no evidence the vulnerabilities revealed by Which? had been used by nefarious characters to contact children, parents should still beware of toys that do not meet minimum standards.

“The reason we don’t hear of these attacks is they are local: it would be one parent at a time. Is it still worrying? Yes, I don’t like the idea of this thing being unsecured,” he said.

Earlier this year, the UK’s Department for Digital, Culture, Media and Sport launched an industry consultation on the back of a 2018 report which advocated removing the burden on consumers to securely configure their devices and instead ensuring that strong security is built into IoT devices and services by design.

In 2017, Which? and German counterpart Stiftung Warentest raised concerns about i-Que Robot, which also offered an unsecured Bluetooth connection. Munro said he was not surprised manufacturers had struggled to demonstrate any improvement in security awareness or standards.

“We’ve seen much worse vulnerabilities involving kids’ tracking watches, whereby a hacker can remotely track thousands of kids in real time,” he said.

While UK regulation is still in the works, and adverse publicity has little effect, incoming legislation in California is more likely to force manufacturers to build security into product design from the outset.

From 1 January 2020, Senate Bill 327 will make reasonable security mandatory for consumer products in California. Given it is such a large market, and the home of both the global technology and media industries, the legislation is set to change smart toy manufacturing, Munro said. “I think that will have a huge influence on manufacturing. If you want to sell stuff in California, it’s got to be safe. That will trickle down, so UK production improves as well.”

In response to the Which? study, a spokesperson for VTech said consumers should be assured that the VTech KidiGear Walkie Talkies, which use industry-standard AES encryption to communicate, are safe.

“Pairing… cannot be initiated by a single device. Both devices have to start pairing at the same time within a short 30-second window in order to connect. Additionally, if already linked to its paired handset, pairing with an additional, external handset is not possible,” a spokesperson said.

Meanwhile, Sphero, maker of the Sphero Mini interactive toy also implicated in the Which? study, said that the feature highlighted related to the Sphero Edu app, which was meant to be used in classrooms or in the home with teacher or parent supervision.

The Register has offered Singing Machine and Mattel the opportunity to comment, but neither firm has so far responded. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/11/top_toys_still_toppled_by_security_testing/

Google Chrome will check for breached credentials every time you sign in anywhere

A new feature in Google’s Chrome browser will warn you, every time you type credentials into any website, if your username and password match a known combination in a data breach.

This credential check is “gradually rolling out for everyone signed into Chrome” as part of the Safe Browsing option, according to the announcement.

The potential worry here is that sending your credentials to Google for checking could itself be a security risk. The technology used was announced nine months ago, when the Password Checkup extension was introduced. At the time it was described as an “early experiment”. The way it works is as follows:

  1. Google maintains a database of breached usernames and passwords, hashed and encrypted. In other words, the username/password combinations are not stored, only the encrypted hash.
  2. When you type in your credentials, the browser sends a hashed and encrypted copy of the credentials to Google, where the key used for encryption is private to the user. In addition, it sends a “hash prefix” of the account details, not the full details.
  3. Google searches the breach database for all credentials matching the hash prefix and sends the results back to the browser. These are encrypted with a key known only to Google. In addition, Google encrypts your credentials with this same key – so it is now doubly encrypted.
  4. The final check is local. Chrome decrypts the credentials using your private key, yielding a copy encrypted only with Google’s key. This is then compared to the values in the database. If a match is found, an alert is raised.

(Diagram: the process by which Google checks credentials against a database of breached usernames and passwords)

The idea is that your credentials are never sent to Google in a form it can read, and that details of other people’s breached credentials are never sent to you in a form you can read. The procedure, we are told, “reflects the work of a large group of Google engineers and research scientists”.
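For readers who want to see the shape of the hash-prefix idea, here is a deliberately simplified Python sketch, written under stated assumptions rather than as Google’s actual implementation: it models only the k-anonymity portion of the exchange, omits the slow hashing, blinding and double encryption described above, and uses a toy breach list and a four-character prefix purely for illustration.

import hashlib

def credential_hash(username, password):
    """Hash a username/password pair (the real service uses a slow, salted hash)."""
    return hashlib.sha256(f"{username}:{password}".encode()).hexdigest()

# Toy "breach database" held server-side: full hashes bucketed by a short prefix,
# so the server never learns exactly which credential is being checked.
BREACHED = {"alice@example.com": "hunter2", "bob@example.com": "letmein"}
SERVER_DB = {}
for user, pwd in BREACHED.items():
    digest = credential_hash(user, pwd)
    SERVER_DB.setdefault(digest[:4], set()).add(digest)

def server_lookup(prefix):
    """The server sees only the prefix and returns every breached hash in that bucket."""
    return SERVER_DB.get(prefix, set())

def client_check(username, password):
    """The browser sends the prefix, then performs the full comparison locally."""
    digest = credential_hash(username, password)
    candidates = server_lookup(digest[:4])   # only digest[:4] ever leaves the client
    return digest in candidates              # the final match happens on the client

print(client_check("alice@example.com", "hunter2"))        # True  -> raise an alert
print(client_check("alice@example.com", "correct-horse"))  # False -> no alert

In the real feature, the bucket contents come back encrypted with a key only Google holds, and the comparison happens on the blinded values, so neither side ever sees the other’s plaintext.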

Even though users may still feel uncomfortable enabling this kind of check, the risk is likely lower than that of being unaware that your credentials have been stolen. The bigger snag, perhaps, is that you have to sign into Chrome, with all that implies in terms of giving the data-grabbing giant more information about your digital life.

In addition, Google says it is improving its phishing site protection, with 30 per cent more cases being spotted. A further protection is that if you use Chrome’s password manager, you will be alerted if you enter credentials stored there into a suspected phishing site.

What if someone else signs into Chrome on a shared computer and you inadvertently save your password into their profile? If this can happen, you probably already have some security issues to worry about, but Google is trying to make it less likely with a more prominent indication of the current profile. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/11/google_chrome_will_check_for_breached_credentials_every_time_you_sign_in_anywhere/

Big Changes Are Coming to Security Analytics & Operations

New ESG research points to fundamental problems, a need for scalable security data pipelines, and a migration to the public cloud.

ESG recently completed a new research project focused on security analytics and operations. As part of this project, ESG surveyed 406 IT and security professionals working at midmarket and enterprise organizations in North America across all industries. Based on the research results, we came to the following conclusions:

Security analytics and operations continue to grow more difficult. 
Nearly two-thirds (63%) of survey respondents claim that security analytics and operations are more difficult today than they were two years ago. This increasing difficulty is being driven by external changes and internal challenges. From an external perspective, 41% of security pros say that security analytics and operations are more difficult now due to rapid evolution in the threat landscape, and 30% claim that things are more difficult because of the growing attack surface. 

Security teams have no choice but to keep up with these dynamic external trends. On the internal side, 35% of respondents report that security analytics and operations are more difficult today because they collect more security data than they did two years ago, 34% say that the volume of security alerts has increased over the past two years, and 29% complain that it is difficult to keep up with the volume and complexity of security operations tasks. Security analytics/operations progress depends upon addressing all these external and internal issues.

The security data pipeline dilemma: More data, more problems
Just under one-third (32%) of organizations collect substantially more data to support cybersecurity analytics and operations today than they did two years ago, while 44% collect somewhat more security data. Furthermore, 52% of organizations retain this data online for longer periods of time than they did in the past. The volume of real-time and historical security data creates massive data repositories that are costly and difficult to manage. Security analysts commonly offer a complaint worthy of Yogi Berra: “We have so much security data that we can’t find anything we’re looking for.”

Traditional on-premises SIEM is an incomplete solution. 
A full 70% of organizations continue to anchor their security analytics and operations with security information and event management (SIEM) systems. Despite this central role, security operations center (SOC) teams now surround the SIEM with additional tools for threat detection/response, investigations/query, threat intelligence analysis, and process automation/orchestration. This raises the question: If SIEM is essential to security analytics and operations, why do organizations need so many tools? 

The research reveals that while SIEM is good at discovering known threats and generating security and compliance reports, it’s not well suited for detecting unknown threats or other security operations use cases. What’s more, 23% of security pros say that SIEM platforms require lots of personnel training and experience, and 21% believe that SIEM requires constant tuning and operational overhead to be useful. SIEM isn’t going away, but it needs help.

Staffing and skills shortages remain ubiquitous. 
Three-quarters of survey respondents agree that the cybersecurity skills shortage has affected security analytics and operations at their organizations. Can’t CISOs simply hire their way out of this situation? It’s not that easy: 70% of security pros say that it is extremely difficult or somewhat difficult to recruit and hire SOC personnel. Organizations are addressing the skills gap by turning to managed services. Seventy-four percent of organizations use managed security services (for security analytics and operations) today, and 90% plan on increasing their use of managed security services in the future. When it comes to the SOC, it seems that no one can go it alone anymore. 

Security analytics and operations technologies are migrating to the public cloud. 
In the past, CISOs preferred the hands-on control of on-premises security analytics and operations technology, but this is no longer true. The research indicates that 41% of organizations prefer cloud-based security analytics and operations technologies while another 17% are willing to look at cloud-based security analytics and operations technology options on a case-by-case basis. 

Why move to the cloud? The most obvious reason is to avoid the cost and complexity of an on-premises security analytics and operations infrastructure (i.e., deployment and ongoing operations of data collectors/processors, load balancers, servers, storage devices, etc.). Interestingly, some progressive organizations believe that scalable, burstable cloud-based processing and storage resources can provide analytics opportunities they simply can’t achieve with homegrown on-premises efforts. This is particularly true with the application of machine learning algorithms on massive security data sets.

Based upon this research, ESG has four recommendations for CISOs and security professionals:

  1. CISOs must address SOC deficiencies with long-term and comprehensive strategies that can improve security efficacy, bolster operational efficiency, and support business objectives. Tactical tweaks won’t do.
  2. Large organizations should understand that security analytics and operations is a big data application. This demands that security teams have appropriate data management skills so they can build and operate security data pipelines at scale; a minimal sketch of one such pipeline stage appears after this list.
  3. CISOs must plan for cloud migration so they can create a security operations and analytics platform architecture (SOAPA) that helps them prevent, detect, and respond to security incidents across hybrid IT infrastructure. “Lift-and-shift” should be viewed as a starting, not an ending, point. 
  4. To address the scale and scope of security operations along with the ongoing cybersecurity skills shortage, SOC managers must lean on artificial intelligence, security process automation, and managed services moving forward. Once again, CISOs need a detailed plan on how these elements will augment the SOC staff, supplement and improve SOC processes, and better safeguard critical business assets. 
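To make the second recommendation concrete, here is a minimal, hypothetical Python sketch of one stage in such a security data pipeline: normalising raw events into a common schema and discarding obvious noise before they reach costly storage and analytics. The field names and the noise rule are assumptions made for the example, not a reference to any particular product.

import json
from datetime import datetime, timezone

def normalise(raw):
    """Parse one raw JSON event into a minimal common schema; return None for noise."""
    event = json.loads(raw)
    if event.get("severity", "info") == "debug":        # noise filter: drop debug chatter
        return None
    return {
        "ts": event.get("timestamp") or datetime.now(timezone.utc).isoformat(),
        "source": event.get("host", "unknown"),
        "type": event.get("event_type", "unknown"),
        "detail": event.get("message", ""),
    }

raw_events = [
    '{"timestamp": "2019-12-11T10:00:00Z", "host": "fw01", '
    '"event_type": "deny", "message": "blocked 10.0.0.5", "severity": "warn"}',
    '{"host": "app02", "event_type": "heartbeat", "message": "ok", "severity": "debug"}',
]

# Only the firewall deny survives; the debug heartbeat never reaches storage.
pipeline_output = [e for e in (normalise(r) for r in raw_events) if e is not None]
print(pipeline_output)

The design point is simply that filtering and normalisation early in the pipeline are what keep “more data” from automatically becoming “more problems.”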

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Criminals Hide Fraud Behind the Green Lock Icon.”

Jon Oltsik is an ESG senior principal analyst, an ESG fellow, and the founder of the firm’s cybersecurity service. With over 30 years of technology industry experience, Jon is widely recognized as an expert in all aspects of cybersecurity and is often called upon to help …

Article source: https://www.darkreading.com/cloud/big-changes-are-coming-to-security-analytics-and-operations/a/d-id/1336565?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple