
7 Ways to Hang Up on Voice Fraud

Criminals are coming at us from all directions, including our phones. Don’t answer that next call without reading these tips first.

Image Source: Adobe Stock: penguiiin

Whether landline or mobile, for work purposes or personal use, phones are part of our everyday lives. Criminals know this, too, so it’s little wonder why voice fraud has been running at an all-time high.

According to Pindrop, 90 voice fraud attacks occur every minute. Last year, the fraud rate was one in every 685 calls, the highest rate in five years.

Exacerbating the problem, knowledge-based authentication (KBA) information, such as date of birth and street address info, has become readily available to fraudsters, says Chris Halaschek, vice president of product at Pindrop. “We have found that in roughly 60% of the cases, fraudsters can answer the KBAs,” Halaschek says.

How can you ensure a fraudster doesn’t ruin your next call? The following tips can help consumers and businesses stay safe.

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md. View Full Bio

Article source: https://www.darkreading.com/application-security/7-ways-to-hang-up-on-voice-fraud---/d/d-id/1336427?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Be a More Thoughtful & Safe Digital Citizen

Don’t be a Billy … or Jennie … or Betty.

Source: StaySafeOnline.org

What security-related videos have made you laugh? Let us know! Send them to [email protected].

Beyond the Edge content is curated by Dark Reading editors and created by external sources, credited for their work. View Full Bio

Article source: https://www.darkreading.com/edge/theedge/how-to-be-a-more-thoughtful-and-safe-digital-citizen/b/d-id/1336476?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Practical Principles for Security Metrics

A proactive approach to cybersecurity requires the right tools, not more tools.

There are several key market forces affecting the cyber landscape that regularly make the headlines: a shortage of security personnel, a huge rise in the number of security tools, and a growing attack surface due to the move to bring-your-own-device policies and the cloud. However, another market force is changing the nature of the industry: increasing pressure to adhere to numerous regulations such as the General Data Protection Regulation, the SHIELD Act, the California Consumer Privacy Act, and the more-recent MAS cyber hygiene notices.

Auditors and regulators expect us to show that reasonable security measures are in place to protect customers’ personal data and business-critical applications, at any point in time. And this is where we struggle — to demonstrate that due care was taken. The trend we see is that organizations are investing in a lot of tools to manage risks. This is shown by a recent study, conducted by Forrester Research, which surveyed more than 250 senior security decision-makers in North America and Europe.

The report outlined that organizations are using multiple technologies to identify and mitigate risk, including security analytics platforms; vulnerability management; governance, risk, and compliance platforms; and vendor risk management platforms. But multiple tools can compound the issues around reporting — reports must be collated and organized manually, taking the team away from “doing security” and reducing the likely frequency of report updates, which means stakeholders do not have one version of the truth.

To alleviate the disconnect, as a sector we recognize that we need to move to continuous and accurate cyber-risk reporting, fueled by automated data collection and collation. The starting point for this is an agreement on what security metrics should be measured and how. There are several practical principles that we can use to make metrics more business-focused, accurate, and measurable as we move into an era where accuracy and relevance are king.

The starting point is an agreement on which questions need to be answered to make the business more secure and what data is available to help inform the answers. The metrics must be able to stand up to scrutiny. We also need to make sure we know what to do with an answer to the original question. I liken it to The Hitchhiker’s Guide to the Galaxy — if I told you the answer to the meaning of life, the universe, and everything was 42, what would you do with that information? If we don’t know what to do with any given metric, then we need to go back to the beginning.

The next practical principle is to always aim for simplicity. A complex metric, one that is hard to interpret, may be less effective than a couple of simple ones! If the audience for a metric doesn’t get the message it’s intended to convey, the metric has failed no matter how “smart” it might be. Simple stats that are well-executed and easy to explain win over black-box analyses every day of the week. And don’t forget we need to add business context — business-focused metrics resonate with the board and business stakeholders as they enable them to drive action.

How many metrics do we need? An effective approach is to align metrics to industry-accepted security frameworks. Aligning to a framework gives an indication of how well a metrics program covers the breadth of security areas and whether there are any gaps that need filling. Frameworks can help provide a familiar structure for a metrics program and naturally provide higher levels at which we can summarize analysis and provide an effective overview for business stakeholders.

Now it’s time to collect data and build metrics. A high-quality inventory is the foundation for trusted metrics. Try to combine multiple datasets to get the most complete and accurate picture of assets possible and classify them as accurately as possible, asking questions such as: Is this server Internet-facing? Does this database support a critical app? Which business line owns this? This enables metrics to have that all-important business context and helps with prioritization. Being able to show metrics for the infrastructure supporting business-critical applications is invaluable to get buy-in from the business.

Also, it’s key to verify rather than trust. We don’t want to add inaccuracies into metrics by assuming we know some of the facts already — e.g., “they told me antivirus was deployed on all my devices.” And if we can’t measure something, it shouldn’t be in our metrics program! Bear in mind, of course, there are more or less accurate ways to measure — an approximate measurement is fine as a starting point, but a guess is not.

Once we have verified, we need to verify again. Use the type of metric to assess an ideal frequency and then measure as close to that as is feasible for the organization — for example, if the vulnerability scanner is run once a week, we don’t need to update/verify data and create metrics on these daily.

Finally, never forget that whether the metrics are for the board, a business line, a regulator, or an auditor, the key is knowing the accuracy, timeliness, and limitations of the measurements. A good illustration is patching time on our servers. We need to make sure we know the percentage of servers that aren’t covered by our scanner. After all, “90% of server vulnerabilities fixed within service-level agreement” becomes decidedly less impressive if we know that only 50% of servers are being scanned.
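A quick back-of-the-envelope sketch makes that caveat concrete; the numbers below are the illustrative ones from the example above, not real measurements:

```python
# Worked example of the scanner-coverage caveat (illustrative numbers only).
sla_fix_rate = 0.90   # "90% of server vulnerabilities fixed within SLA"
scan_coverage = 0.50  # only 50% of servers are actually scanned

# The headline metric only speaks for the scanned population, so the
# fleet-wide fix rate we can actually demonstrate is much lower.
demonstrable_fix_rate = sla_fix_rate * scan_coverage
print(f"Demonstrable fleet-wide fix rate: {demonstrable_fix_rate:.0%}")
```

In other words, the impressive-sounding 90% collapses to a demonstrable 45% once coverage is factored in.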

The key takeaway here is that a proactive approach to cybersecurity requires the right tools, not more tools — just as a metrics program is much more effective with simple, accurate metrics rather than a host of numbers that may be wrong, as well as out of date.

Related Content:

 

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Gamification Is Adding a Spoonful of Sugar to Security Training.”

Nik Whitfield is the founder and CEO at Panaseer. He founded the company with the mission to make organizations cybersecurity risk-intelligent. His  team created the Panaseer Platform to automate the breadth and depth of visibility required to take control of … View Full Bio

Article source: https://www.darkreading.com/risk/practical-principles-for-security-metrics/a/d-id/1336428?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Get Prepared for Privacy Legislation

All the various pieces of legislation, both in the US and worldwide, can feel overwhelming. But getting privacy basics right is a solid foundation.

On January 1, 2020, California will step on the world stage of privacy when the California Consumer Privacy Act (CCPA) takes effect. It follows the European Union’s General Data Protection Regulation (GDPR) and other regional legislative controls that are designed to protect the personal data of consumers.

The CCPA legislation may apply to your business if it’s a for-profit entity and collects or processes the personal information of Californian residents. And one of the following needs to apply: the business has an annual gross revenue in excess of $25 million; the business annually trades the personal information of 50,000 or more consumers, households, or devices; or the business derives 50% or more of its revenue from selling consumers’ personal information.
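As a rough illustration of those thresholds, the applicability test can be sketched as a simple predicate. The function name and parameters are hypothetical, and this mirrors only the article's summary of the criteria, not the statute's full text; seek legal advice for a real determination:

```python
def ccpa_may_apply(is_for_profit, handles_ca_personal_info,
                   annual_revenue_usd, records_traded_per_year,
                   revenue_share_from_selling_pi):
    """Sketch of the CCPA applicability test as summarized above.

    A business must be for-profit and handle Californian residents'
    personal information, and then meet at least one of three thresholds.
    """
    if not (is_for_profit and handles_ca_personal_info):
        return False
    return (annual_revenue_usd > 25_000_000
            or records_traded_per_year >= 50_000
            or revenue_share_from_selling_pi >= 0.50)
```

For example, a for-profit company with $30 million in revenue would meet the first threshold, while a small nonprofit would fall outside the sketch entirely.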

There may be requirements to update privacy policies, third-party, and service provider contracts and any other terms that cover the collection, retention, and protection of personal data held by the company. It’s important that all businesses review their status and establish if they need to comply with the legislation.

As with other privacy legislation, the fines for noncompliance could be significant. Under GDPR, we have recently witnessed British Airways being fined $230 million and Marriott Hotel Group $123 million for data breaches. The CCPA legislation allows for fines by the attorney general of up to $7,500 per incident and gives individual consumers the right to file a lawsuit for up to $750 each. I expect once the legislation takes effect that we will see some considerable legal action, with class action lawsuits filed against companies that suffer data breaches.

Complying with the legislation requires companies to adopt policies for the collection and retention of the data. It also requires companies to provide “reasonable security” to protect the personal data. The word “reasonable” can be interpreted in many different ways and the extent of what is deemed reasonable will not be clear until we see the legal cases being brought once the legislation takes effect.

To assist companies in taking proactive steps to address the requirement for reasonable security, companies could leverage the requirements of other legislation such as GDPR. The following are starting points for companies that need to comply; note, however, that these are just starting points, and I recommend seeking professional and legal advice to ensure compliance.

Privacy Basics: Single Point of Responsibility
Appoint a data protection officer (DPO). This is a requirement under GDPR and is an essential position for any company holding personal data. A DPO’s main tasks should include understanding where the data is, the business purpose of why it was collected and is being retained, controlling access to the data, and deleting data no longer required. Consumers may request a copy of their data or require it to be deleted, and a dedicated DPO facilitates a single point of contact to deal with such requests.

Having this single point of responsibility is essential so that businesses can operate in a professional and compliant environment. The DPO can organize regular risk assessments, penetration tests, and security policies. While these do not provide a 100% guarantee that no breach will happen, they are steps that provide evidence that the issues have been taken seriously in the organization, giving it a defensible position should the need arise.

Limiting access to the data will reduce the risk of an inadvertent breach and reduce the number of employees that will need specialized training in the management and handling of personal data. It does not, however, reduce the need for all employees to be subjected to cybersecurity awareness training.

The data should be considered the crown jewels of the organization. Without customer data, or with a lost reputation due to a data breach, the company is likely to suffer financially. Treating the personal data of consumers like the crown jewels and placing it in its own protected segment of the network will create barriers and layers that can thwart the attempts of cybercriminals to gain access. We deal with layers in everyday life: The important possessions we own personally may be in a home safe or a safety deposit box. We then secure our homes with alarms and several locks on doors. With every layer, we add more complexity for the burglar.

Securing personal information should include, at a minimum, encryption and multifactor authentication. When asked, “What should I encrypt?” the answer is everything. Encryption is no longer the resource overhead it once was and by having whole-device encryption on devices, the risk of data being breached due to a stolen or lost device is reduced. Encryption of the personal data being held, including the hashing of customer passwords, will also demonstrate that significant steps have been taken to secure the data.
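As a minimal sketch of the password-hashing advice, here is one common approach using only Python's standard library. The function names and the iteration count are illustrative choices, not a compliance recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using salted PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    """Recompute the hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected_digest)
```

The random per-user salt means two users with the same password store different digests, and the constant-time comparison avoids leaking information through timing.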

It would seem unprofessional not to include the following recommendations:

  • Make sure all devices connected have been patched and updated.
  • All devices should be protected with an up-to-date endpoint anti-malware product.
  • All devices and data should have a backup and recovery policy that is tested from time to time.

The need to have privacy legislation is just another step in the evolution of the digital and connected revolution that is transforming humanity. As consumers, we need to engage in understanding the value of our data, who we allow to collect it, and what we allow them to do with it. Without this engagement or legislation to protect us, we allow companies wishing to monetize by using our personal information to have free rein.


Tony Anscombe is the Global Security Evangelist for ESET. With over 20 years of security industry experience, Anscombe is an established author, blogger, and speaker on the current threat landscape, security technologies, and products, data protection, privacy and trust, and … View Full Bio

Article source: https://www.darkreading.com/endpoint/privacy/how-to-get-prepared-for-privacy-legislation/a/d-id/1336418?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Free Emulator Challenges Apple’s Control of iOS

An open-source tool gives researchers and jailbreakers a free option for researching vulnerabilities in the operating system – and gives Apple a new headache.

A security researcher at Black Hat Europe in London next week plans to release an open source low-level emulator that can run a version of Apple’s mobile operating system.

The project, based on the open-source machine emulator QEMU, will allow security researchers to have more access to iOS processes and operations, an advantage when searching for vulnerabilities and systems weaknesses, says Jonathan Afek, the leader of the emulation project and a security team research manager at dynamic-testing provider HCL AppScan.

Afek plans to demonstrate the iOS kernel running on a QEMU virtual machine as well as show ways of using the setup to search for vulnerabilities.

“Apple iPhones are quite secure, but all platforms have a lot of vulnerabilities in them,” he says. “I think this platform, quote-unquote, in its current stage, it will make life easier for researchers and make the iPhone more secure by allowing security researchers to investigate vulnerabilities before they are exposed by others.”

Apple is unlikely to agree. The project is the latest attempt to provide interested researchers with a platform that could be used by reverse engineers to look for vulnerabilities as well as those aiming to jailbreak their phones.

Yet, like most aspects of its iOS ecosystem, Apple keeps tight control of who can run its operating system and in what ways. Apple considers any non-approved use of its iOS operating system to be an infringement of its intellectual property. 

An Apple Lawsuit

In August, Apple sued mobile device virtualization company Corellium for offering a service based on a similar platform it had developed — albeit, one that is far more mature than Afek’s open-source version.

“Although Corellium paints itself as providing a research tool for those trying to discover security vulnerabilities and other flaws in Apple’s software, Corellium’s true goal is profiting off its blatant infringement,” Apple stated in its lawsuit. “Far from assisting in fixing vulnerabilities, Corellium encourages its users to sell any discovered information on the open market to the highest bidder.”

Since January 2018, Corellium has offered the tool as a service to bug hunters and security researchers to emulate an iPhone running any version of the operating system. The company argues that allowing researchers to work on an emulated iOS is helpful for the entire community of users.

“We founded Corellium to equip the mobile community with the scalable, efficient, and innovative tools they need to push the mobile ecosystem forward,” Amanda Gorton, CEO of Corellium, wrote in a statement regarding the lawsuit by Apple. “By combining the fidelity of native architecture with the advantages of a virtual resource, our pioneering platform empowers security experts, software developers, and mobile testers to do their work better than they could before — whether that’s testing an app, conducting training, or working for our national defense.”

Finding vulnerabilities in Apple devices can be lucrative. In January 2019, for example, exploit and surveillance software firm Zerodium doubled the bounty, to $2 million, that the company pays to researchers who privately offer a previously unknown exploit that can compromise an iPhone with no user interaction. The company provides such exploits to its clients to test their own systems and attack targeted devices.

Increasing interest in the security of its devices prompted Apple to increase what it pays to researchers as part of its bug bounty program. Finding a vulnerability that allows a program to get around the secure boot firmware can earn a bounty of $200,000 as of May 2019, according to the company’s iOS Security whitepaper. In the past, only vetted researchers could participate in the bounty program, but the company has reportedly since opened up the process to anyone.

Meanwhile, the iOS emulator that HCL AppScan’s Afek developed is still very much a work in progress. The platform cannot run the latest version of iOS nor emulate the latest hardware, he says.

“It is in the very early stages — it only runs iOS 12.1 for iPhone 6,” Afek says. “I’m currently working on additional features and support for newer iOS versions. It will be a challenge, but not a really big challenge — just a little bit of work to support the newer features.”

Afek’s Black Hat Europe presentation will be on Dec. 4. Apple did not return an e-mail request seeking comment on the new platform.


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/mobile/new-free-emulator-challenges-apples-control-of-ios/d/d-id/1336478?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Details Its Responses to Cyber Attacks, Disinformation

Government groups continue to attack user credentials and distribute disinformation, according to a new blog post from Google’s Threat Analysis Group.

In the second quarter of 2019, Google sent more than 12,000 warnings to users targeted by state-sponsored phishing campaigns. That number is similar to the third quarter phishing numbers of the previous two years. The information on phishing is part of a new blog post on protecting users from government-backed hacking and disinformation from Google’s Threat Analysis Group.

In the post, Google described threats previously detailed as originating with the Russian “Sandworm” group, and steps the company has taken to defend users from the attacks. Those attacks are part of larger campaigns Google has found combining disinformation tactics with active phishing and malware attacks seeking account credentials from high-value users including journalists, human rights activists, and political campaigns.

For more, read here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/google-details-its-responses-to-cyber-attacks-disinformation-/d/d-id/1336480?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Analysis of Jira Bug Stresses Impact of SSRF in Public Cloud

More than 3,100 Jira instances are still vulnerable to a server-side request forgery vulnerability patched in August.

Thousands of Jira instances remain vulnerable to server-side request forgery (SSRF), a Web application vulnerability that redirects malicious requests to resources restricted to a server. The extent of this exposure underscores the impact of SSRF on applications in the public cloud.

SSRF poses a threat to cloud services due to the use of the metadata API, which allows applications to access configurations, logs, credentials, and other information in the underlying cloud infrastructure. While the metadata API can only be accessed locally, an SSRF flaw makes it reachable from the Internet and could enable lateral movement and network reconnaissance.

The root cause of SSRF is that a Web application needs to fetch resources from another domain to fulfill a request, but the input URL isn’t properly sanitized, which lets attackers manipulate the destination. One of the reasons attackers like to find SSRF vulnerabilities is that they can pivot and see what’s behind the firewall. The implications of this bug vary depending on the environment, though SSRF could have potentially far-reaching effects for many companies: This is the same type of vulnerability that enabled this summer’s Capital One data breach.

To understand the effects of SSRF in the public cloud, researchers with Palo Alto Networks’ Unit 42 investigated known Jira SSRF flaw CVE-2019-8451 and analyzed its impact on six public cloud service providers (CSPs). This vulnerability, which can be exploited without authentication, was disclosed back in August. NVD shows CVE-2019-8451 was introduced in Jira v7.6, which was released in November 2017. Unit 42 found it affects versions back to v4.3, released in March 2011.

“What was especially significant was what the attackers had access to and how long the vulnerability had been out there,” says Jen Miller-Osborn, deputy director of threat intelligence at Unit 42. The fact an exploit would have worked as far back as 2011 “was concerning.”

A patch was immediately released for the vulnerability in August. However, software like Jira, which is closely integrated with business processes, is rarely the first to receive an update.

“Sometimes, with things like this, it’s in part just because you can’t have downtime,” Miller-Osborn explains. System administrators would often rather delay the patch than disrupt business operations, putting critical systems at risk. “This highlights the danger the organization is accepting if they’re making the decision not to patch,” she says.

Calculating Exposure: Who Is Most at Risk?
Researchers started with a Shodan search, which revealed 25,000 Jira instances are currently exposed to the Internet. They chose six CSPs with the highest number of Jira deployments. The goal was to determine how many Jira instances were vulnerable to CVE-2019-8451 in the public cloud, the exploitability of those instances, and the number of hosts leaking metadata, they say.

The researchers’ vulnerability scanner found 7,002 Jira instances exposed to the Internet in these six selected public clouds. Forty-five percent of the 7,002 instances (3,152) are vulnerable to this CVE because they haven’t been patched or updated. More than half (56%) of these vulnerable hosts (1,779) leak cloud infrastructure metadata, the Unit 42 team explains in a blog post.

This leaked metadata ranges from source code and credentials to internal network configuration, and vulnerable firms include technology, media, and industrial companies. DigitalOcean customers have the highest rate of metadata leak (93%), followed by customers of Google Cloud (80%), Alibaba (71%), AWS (68%), and Hetzner (21%).

Microsoft Azure is the only CSP with zero metadata leak, researchers report, because its strict header requirement in metadata API effectively blocks all SSRF requests. It’s worth noting GCP also enforces header requirements in its latest metadata API (v1); however, attackers can still access metadata using legacy APIs if legacy API endpoints (v0.1 and v1beta1) aren’t disabled.
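The header gating works roughly as follows: a request relayed through a typical SSRF bug carries only the attacker-supplied URL, not custom headers, so a server-side check for the documented `Metadata-Flavor: Google` header filters it out. The function below is a hypothetical illustration of that rule, not GCP's actual implementation:

```python
def v1_metadata_request_allowed(headers):
    """Allow a metadata request only if it carries the required header."""
    # HTTP header names are case-insensitive, so normalize before checking.
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("metadata-flavor") == "Google"
```

A plain SSRF-relayed GET has an empty header dictionary here and is rejected, while a legitimate local client that sets the header gets through.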

What Admins Can Do
To fix the problem of SSRF, developers should validate the format and pattern of user input before passing it to the application logic. For admins who only install and manage Web apps, there are a few preventive measures to take to lessen the damage of an SSRF flaw.

One of these is domain whitelisting. Most apps only need to communicate with a select few domains, including database and API gateways. Creating a whitelist for approved domains that an application is allowed to communicate with can cut down on services an attacker can hit.
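A minimal sketch of such an allowlist check in Python follows; the host names are hypothetical placeholders, and a real deployment would list its own database hosts and API gateways:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of destinations the app legitimately talks to.
ALLOWED_HOSTS = {"db.internal.example.com", "api-gateway.example.com"}

def outbound_url_allowed(url):
    """Reject any outbound fetch whose destination host is not allowlisted."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return False
    return parts.hostname in ALLOWED_HOSTS
```

With a check like this in front of the fetch logic, an attacker-supplied URL pointing at an internal address simply never gets requested.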

The metadata API can be protected by CSPs, but if this kind of protection is not available, there are additional steps businesses can take to reduce the risk of metadata leak. One is enabling CSP metadata API security: some CSPs have configurable options to secure the metadata API. Companies can also create firewall rules inside VMs to block the IP of the metadata API.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/analysis-of-jira-bug-stresses-impact-of-ssrf-in-public-cloud-/d/d-id/1336479?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

SQL Injection Errors No Longer the Top Software Security Issue

In newly updated Common Weakness Enumeration (CWE), SQL injection now ranks sixth.

SQL injection errors are no longer considered the most severe or prevalent software security issue.

Replacing it at the top of the Common Weakness Enumeration (CWE) list of most dangerous software errors is “Improper Restriction of Operations within the Bounds of a Memory Buffer.” Cross-site scripting (XSS) errors rank second on the list, followed by improper input validation, information exposure, and out-of-bounds read errors. SQL injection flaws are now ranked sixth on the list of most severe security vulnerabilities.

The Department of Homeland Security’s Systems Engineering and Development Institute, operated by The MITRE Corp., this week released an updated top 25 CWE listing of software errors. The update is the first in eight years and ranks security vulnerabilities based on prevalence and severity.

The CWE team looked at a dataset of some 25,000 Common Vulnerabilities and Exposures (CVE) entries from the past two years and focused on security weaknesses in software that are both common and have the potential to cause significant harm. Issues that have a low impact or are rarely exploited were filtered out.

In the past, the compilers of the CWE list used a more subjective approach based on personal interviews and surveys of industry experts.

“We shifted to a data-driven approach because it enables a more consistent and repeatable analysis that reflects the issues we are seeing in the real world,” Chris Levendis, CWE project leader, said in a statement Wednesday. “We will continue to mature the methodology as we move forward.”

Lists like the CWE and the Open Web Application Security Project (OWASP)’s top software security vulnerabilities are designed to raise awareness of common security missteps among developers. The goal is to help developers improve software quality and to assist them and testers in verifying security issues in their code. But over the years, the entries in these lists have changed little, suggesting that developers are repeatedly making the same mistakes.

“This highlights the unfortunate reality that despite many efforts, security is not being embedded effectively enough within the developer community, or in enterprise assurance frameworks,” says Javvad Malik, security awareness advocate at KnowBe4. “It’s not that we are unaware of how to identify and remedy the issues or prevent them from occurring in the first place; there appears to be a culture where getting software shipped outweighs the security requirements,” he notes.

According to Ilia Kolochenko, founder and CEO of web security company ImmuniWeb, the new list and risk-ranking approaches make a lot of sense overall. However, some of the entries in the list are likely to cause some controversy, Kolochenko says.

Cross-site scripting errors, for example, while common, are not particularly easy to exploit. “Successful exploitation of an XSS, unless it’s a stored one, always requires at least a modicum of social engineering and interaction with a victim,” he says.

One could potentially make similar comments about all other entries, arguing about the prevalence of the vulnerabilities in business-critical systems, ease of detection and exploitation, costs of prevention, and remediation time. “We will unlikely have a unanimous opinion on all of them,” Kolochenko says. “This is why it’s good to have different classifications and ratings that, once consolidated, provide a comprehensive overview of the modern-day vulnerability landscape.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/sql-injection-errors-no-longer-the-top-software-security-issue/d/d-id/1336481?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Firefox gets tough on tracking tricks that sneakily sap your privacy

We just did an informal survey around the office – we asked 10 people in various departments, technical and non-technical, to say the first thing that came into their head when we said, “Browser tracking.”

(No one heard anyone else’s answer, in case you’re wondering how independent each reply might have been.)

All 10 said, “Cookies.”

That’s not surprising, because many websites these days pop up a warning to say they make use of cookies for tracking you across visits – the theory seems to be that you can’t then later complain you didn’t know.

Cookies, therefore, are a well-documented part of online tracking, and the phrase “web cookie” can be considered everyday terminology now, rather than jargon – we encounter it all the time and have become used to it.

Indeed, some sites openly and visibly allow you to choose to accept or reject their cookies…

…although there’s an amusing irony that the most reliable way for a website to remember that you don’t want cookies set is to set a cookie to tell it not to set any more cookies.

Cookies are browser database entries unique to a website. Your browser sends back a site’s existing cookie entries with every future request to that site.

In fact, cookies were specifically designed to track you between visits – without them, you wouldn’t be able to set preferences such as currency and language. For example, this site sets a cookie called nakedsecurity-hide-newsletter after you sign up, so we can tell that there’s no point in showing you the signup box next time.

But cookies are also easily misused. In programming jargon, cookies allow ‘stateful behaviour’, which is shorthand for a website keeping track of whether you’re paying a second, or third, or fourth visit, and therefore tying together what you do this time with how you behaved before.
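As a concrete illustration of that round trip, here’s a sketch using Python’s standard http.cookies module, with one script playing both halves of the server’s side of the exchange (the cookie name mirrors the real one mentioned above; the rest is illustrative):

```python
from http.cookies import SimpleCookie

# First visit: the server asks the browser to remember a preference.
outgoing = SimpleCookie()
outgoing["nakedsecurity-hide-newsletter"] = "1"
outgoing["nakedsecurity-hide-newsletter"]["path"] = "/"
print(outgoing.output())  # the Set-Cookie: header sent to the browser

# Later visit: the browser echoes the cookie back in its request,
# and the server parses it to recognise the returning visitor.
incoming = SimpleCookie()
incoming.load("nakedsecurity-hide-newsletter=1")
morsel = incoming.get("nakedsecurity-hide-newsletter")
if morsel is not None and morsel.value == "1":
    print("Already signed up - no need to show the signup box")
```

That echo-it-back-every-time behaviour is precisely the ‘stateful behaviour’ described above: harmless when it remembers your language, less so when it remembers everything else.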

Because cookies are so closely connected with tracking, an increasing number of us are using privacy tools that limit the cookies our browser will remember, or that flush all our cookies periodically, or that warn us about the sort of cookies that are commonly associated with online tracking and targeted marketing.

Cookie usage is also carefully regulated by your browser, to stop privacy and security violations.

For example, while you are browsing on nakedsecurity.sophos.com, your browser will prevent our website from seeing any cookies set by other sites, and vice versa, so one site can’t read out any secrets set by another – this is known, for obvious reasons, as the same origin policy.
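That origin check boils down to comparing the (scheme, host, port) triple of two URLs. Here’s a simplified sketch (real browsers add further nuances, and cookie scoping in particular follows its own domain-and-path rules, but the comparison captures the spirit of the isolation):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str) -> tuple:
    """The (scheme, host, port) triple that defines a web origin."""
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname,
            parts.port or DEFAULT_PORTS.get(parts.scheme))

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

# Two pages on this site share an origin...
print(same_origin("https://nakedsecurity.sophos.com/page1",
                  "https://nakedsecurity.sophos.com/page2"))  # True
# ...but a different site does not, so it can't read our secrets.
print(same_origin("https://nakedsecurity.sophos.com/",
                  "https://example.com/"))                    # False
```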

Sadly, web marketing companies have pretty much based their business model on keeping track of you for as long as possible, in as much detail as possible.

For them, the same origin policy gets in the way of tracking you between sites, and regular cookie purges prevent them tracking you on the same site for years or months rather than just days or hours.

It’s you again!

So, a minority of web marketers spend time hunting for brand new ways to detect that it’s you again, even if you tell your browser to dump all officially stored data that’s there to track you, and even if it’s pretty obvious that you don’t want to be tracked.

You’ll hear these tricks called by many different names, such as “supercookies”, “cookie respawning”, “evercookies”, “undeletable cookies” and “browser fingerprinting”, and they often rely on collecting a whole raft of apparently incidental details about your browser – data points that give away very little on their own, but that, when combined, may end up identifying you with surprising accuracy.

For example, websites often include JavaScript code to check the size of your browser window, so they can decide which visual style to use – something you can see on this site by dragging the window narrower and narrower. (Note how the visual layout changes subtly when it gets down to 768 pixels wide.)

But what if a website records your current window size for nefarious purposes, such as tracking you?

If you just happen to resize your browser window to unusual dimensions such as 1306×637 pixels, you’ll present that very same weird screen size again when you refresh the page, even if you clear your cookies in between.

The website operator won’t be sure it’s still you – but they can make a pretty good guess.

Worse still, they may be able to combine that apparently innocent detail with a bunch of other circumstantial evidence to lump you in with an ever-decreasing number of ‘viable suspects’.

Other browser characteristics that fingerprinting tricksters have abused include details such as: whether you have an external monitor plugged in; which fonts you have installed; how much battery power you have left; which operating system and browser you’re using; what timezone you’re in; the exact pixel layout your browser chooses when rendering characters; and more.

Fingerprinting tricks might even include minutiae such as the precision of the timer functions inside your browser and the accuracy of the mathematical formulae used in your browser’s JavaScript engine.

With enough apparently harmless discriminators, an unscrupulous web tracking company may be able to put you into a bucket of 1,000,000 possible users – wait, 10,000 – wait, 1000 – wait, 63 – wait, 7 – wait, only ONE POSSIBLE USER MEETS ALL THE CRITERIA COLLECTED!

In other words, browser data points that would be individually unimportant may combine to give you a browser fingerprint that is unique, or perhaps puts you into a very small bucket of possible users.
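That narrowing-down process can be quantified: each detail contributes some bits of identifying information, and independent details add up. The bucket sizes below are made-up illustrative figures, not measured data:

```python
import math

# Illustrative, invented figures: how many users (out of a million)
# share each possible value of a given browser detail.
POPULATION = 1_000_000
bucket_sizes = {
    "timezone":         50_000,  # lots of people share your timezone
    "window size":       5_000,
    "installed fonts":   2_000,  # font lists are far more distinctive
    "canvas rendering":    500,
}

bits = 0.0
for detail, shared_by in bucket_sizes.items():
    # Assuming the details are statistically independent, each one
    # contributes log2(population / bucket size) bits of identifying info.
    contribution = math.log2(POPULATION / shared_by)
    bits += contribution
    candidates = POPULATION / 2 ** bits
    print(f"{detail:>16}: +{contribution:4.1f} bits, "
          f"roughly {max(candidates, 1):,.0f} candidates left")

# Around 20 bits is enough to single out one user in a million.
```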

A cat-and-mouse game

The result, as with so many aspects of cybersecurity, has been a cat-and-mouse game between the browser makers and the browser fingerprinters.

Firefox, in particular, has been vocal about the anti-fingerprinting code it’s been building into its browser in recent years.

Some of these countermeasures have involved throwing out features that, no matter how useful, were primarily being used for evil, not for good – such as getting rid of the navigator.getBattery() function that allowed rogue websites to track the precise battery state of your computer, a data value that tends to change predictably over time.

Other countermeasures include deliberately reducing the precision of system data, for instance by adding random inaccuracies to it, or replacing it with a one-size-fits-all value.

Examples from Mozilla’s own list include:

  • Canvas image extraction is blocked.
  • Absolute screen coordinates are obscured.
  • Window dimensions are rounded to a multiple of 200×100.
  • Only specific system fonts are allowed.
  • Time precision is reduced to 100ms, with up to 100ms of jitter.
  • The keyboard layout is spoofed.
  • The locale is spoofed to ‘en-US’.
  • The date input field and date picker panel are spoofed to ‘en-US’.
  • Timezone is spoofed to ‘UTC’.
  • All device sensors are disabled.
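The first two numeric countermeasures in that list are easy to picture in code. The sketch below illustrates the idea only – it is not Firefox’s actual implementation, and it assumes dimensions are rounded down:

```python
import random

def round_window_size(width: int, height: int) -> tuple:
    """Round reported window dimensions down to a multiple of 200x100,
    so an unusual size stops being a distinguishing detail."""
    return (max(width - width % 200, 200), max(height - height % 100, 100))

def coarse_timestamp_ms(real_time_ms: float, precision_ms: int = 100) -> float:
    """Clamp timer precision to 100ms, then add up to 100ms of jitter."""
    coarse = real_time_ms - real_time_ms % precision_ms
    return coarse + random.uniform(0, precision_ms)

# The unusual 1306x637 window from the example above now looks like
# everyone else's:
print(round_window_size(1306, 637))  # (1200, 600)
```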

The downside of all this, of course, is that any websites that make legitimate and positive use of these details – for example to improve the accessibility of the site or boost the performance and playability of online games – are out of luck.

The upside is that every browser detail that gets “de-precisioned” is a setback for the Bad Guys, and thus a privacy win for the rest of us.

For those reasons, Firefox’s latest tranche of fingerprinter blocking tools are easy to turn on…

…but they’re not yet on by default, just in case hitting back at the crooks has an annoyingly negative effect on the rest of us.

Coming soon

However, alert observers have spotted that Mozilla is planning to change that soon:

We are enabling fingerprinting blocking in the Standard mode of 72. We will revisit this decision based on the results of [our ongoing monitoring program], and may revert the change during the beta cycle for 72.

Simply put, by making us all look a bit less individual online, browsers can help to frustrate web tracking companies that are determined to keep tabs on us even when we clearly want to stay private.

Sometimes, it pays to lose your individuality and just be one of the crowd!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0A78KcSO4d8/

Police arrest alleged Chuckling Squad member who hijacked @Jack Dorsey

Police have arrested an alleged member of The Chuckling Squad: the hacking group behind the recent SIM-swap and hijacking of Twitter founder and CEO Jack Dorsey’s @Jack account.

Joseph Cox, writing for Motherboard, reported on Saturday that a Chuckling Squad leader – who goes by the handle Debug – told them that the individual was arrested about two weeks prior. Motherboard withheld their name, because they’re a minor.

Debug told Motherboard that the minor – whom they identified as a “he” – was a SIM-swapping aficionado whom the group kicked out in October:

He was a member of Chuckling Squad but not anymore. He was an active member for us by providing celebs/public figure [phone] numbers and helped us hack them.

The arrest was confirmed by the Santa Clara County District Attorney’s Office in California, which manages the Regional Enforcement Allied Computer Team (REACT) and which emailed this statement to Motherboard:

We applaud the efforts of all the law enforcement agencies involved in this arrest. REACT continues to work with and assist our law enforcement partners in any way we can. We hope this arrest serves as a reminder to the public that people who engage in these crimes will be caught, arrested and prosecuted.

Dorsey’s high-profile, high-value account – he’s got more than 4 million followers – was taken over in late August 2019 by hackers who used their brief access to go on a joyride to Nasty Town, tweeting out a racist/anti-semitic/bomb-hoaxing exhaust cloud.

A week later, Twitter temporarily yanked the ability to tweet via SMS – one of the possible ways that Dorsey’s account got taken over.

In a successful SIM-swap attack, hackers persuade a mobile phone provider to transfer a victim’s phone number to the hacker’s SIM card, giving the hacker access to the victim’s calls and messages.

At the time, Twitter said that it was suspending the ability to tweet via text due to vulnerabilities that mobile carriers need to address, and due to its reliance on having a linked phone number for two-factor authentication (2FA) – something it said it’s working to improve.

It’s not too surprising to hear that the alleged Chuckling Squad member arrested for helping to take over @Jack was a minor, given that celebrity account takeovers – unless they involve a ransom demand – aren’t necessarily worth big bucks.

Rather, they’re about lulz and/or bragging rights. Here’s Debug again, speaking about the former Chuckling Squadder who got arrested:

He would be weird. Swatting celebrities for a follow back.

But SIM-swap attacks are also big business, with hacking groups going after high rollers in the cryptocurrency scene, taking over their accounts so they can clean out their victims’ wallets.

Debug told Motherboard that the arrested kid was also allegedly behind other attacks, including one on Santa Clara County Deputy District Attorney Erin West. The hacker sent over a screenshot of a text message they said the individual sent to West, which included the hashtag “#FreeJoelOrtiz” – a reference to a SIM-swapper that West convicted.

Ortiz was one of two alleged SIM swappers busted last week for draining fat cryptocurrency wallets and hijacking high-value (“OG,” or “original gangster”) social media accounts. Accused of stealing more than $5 million in Bitcoin, Ortiz copped a plea and was sentenced to 10 years in prison.

How it’s done

As we’ve explained, SIM-swap fraud, also known as phone-porting fraud, works because phone numbers are actually tied to the phone’s SIM card – in fact, SIM is short for subscriber identity module, a special system-on-a-chip card that securely stores the cryptographic secret that identifies your phone number to the network.

Most mobile phone shops out there can issue and activate replacement SIM cards quickly, causing your old SIM to go dead and the new SIM card to take over your phone number… and your telephonic identity.

That comes in handy when you get a new phone or lose your phone: your phone carrier will be happy to sell you a new phone, with a new SIM, that has your old number.

But if a SIM-swap scammer can get enough information about you, they can just pretend they’re you and then social-engineer that swap of your phone number to a new SIM card that’s under their control.

By stealing your phone number, the crooks start receiving your text messages along with your phone calls, and if you’ve set up SMS-based two-factor authentication (2FA), the crooks now have access to your 2FA codes – at least, until you notice that your phone has gone dead, and manage to convince your account providers that somebody else has hijacked your account.

Of course, it takes time to discover that you’ve been SIM-swapped, and it takes time to notify your provider and explain it all. Crooks take advantage of that lag time to rifle through your accounts. Doing so gives them the ability to do many things, none of them good. We recently saw a victim who had his sex tapes whisked out from under him – after which the crook tried to sextort him, threatening to release the material if he didn’t pay up. We’ve seen bank account balances melt, and we’ve seen Bitcoin wallets drained.

What to do?

Here’s our advice on how to avoid a SIM swap:

  • Set up a PIN or password on your cellular account. This could help protect your account from crooks trying to make unauthorized changes. Check your provider’s website for instructions on how to do it, or just call so they can walk you through it.
  • Real companies don’t ask for passwords or verification codes. If somebody calls, says they’re one of your financial companies or your phone service provider, and asks for your password or verification code, get off that call: they’re a scammer. If you need to talk to your cellular provider or financial institution, look up the phone number, on the back of your card or on a legitimate website, and call them yourself.
  • Watch out for phishing emails or fake websites that crooks use to acquire your usernames and passwords in the first place. Generally speaking, SIM-swap crooks need access to your text messages as a last step, meaning that they’ve already figured out your account number, username, password and so on.
  • Avoid obvious answers to account security questions. Consider using a password manager to generate absurd and unguessable answers to the sort of questions that crooks might otherwise work out from your social media accounts. The crooks might guess that your first car was a Toyota, but they’re much less likely to figure out that it was an 87X4TNETENNBA.
  • Use an on-access (real-time) anti-virus and keep it up-to-date. One common way for crooks to figure out usernames and passwords is by means of keylogger malware, which lies low until you visit specific web pages such as your bank’s login page, then springs into action to record what you type while you’re logging on. A good real-time anti-virus will help you to block dangerous web links, infected email attachments and malicious downloads.
  • Be suspicious if your phone drops back to “emergency calls only” unexpectedly. Check with friends or colleagues on the same network to see if they’re also having problems. If you need to, borrow a friend’s phone to contact your mobile provider to ask for help. Be prepared to attend a shop or service center in person if you can, and take ID and other evidence with you to back yourself up.
  • Consider switching from SMS-based 2FA codes to codes generated by an authenticator app. This means the crooks have to steal your phone and figure out your lock code in order to access the app that generates your unique sequence of login codes.

Having said that, Naked Security’s Paul Ducklin advises that we shouldn’t think of switching from SMS to app-based authentication as a panacea:

Malware on your phone may be able to coerce the authenticator app into generating the next token without you realizing it – and canny scammers may even phone you up and try to trick you into reading out your next logon code, often pretending they’re doing some sort of “fraud check”.
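For the curious, authenticator apps typically implement TOTP (RFC 6238): each code is an HMAC of the current 30-second time step, keyed with a secret shared at setup, so no SMS ever needs to be sent. A minimal sketch using only Python’s standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time: float = None,
         digits: int = 6, step: int = 30) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 the current 30-second time step
    with the shared secret, then dynamically truncate to a short code."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238's own test vector: the base32 form of the secret
# "12345678901234567890" at time 59s gives the 8-digit code 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

Because the code depends only on the secret and the clock, a crook who hijacks your phone number gets nothing – which is exactly why the attack surface shifts to the device itself, as the caveat above explains.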

If you’ve already been SIM-jacked…

  • Contact your cellular service provider immediately to take back control of your phone number. Then, change your account passwords.
  • Check your credit card, bank, and other financial statements for unauthorized charges or changes. If you see any, report them.
  • If you think somebody’s already got your information, such as your taxpayer ID or your payment card or bank account number, people in the US can go to IdentityTheft.gov for specific steps to take. If you’re in the UK, check out tips and resources from the Information Commissioner’s Office (ICO) and/or Action Fraud.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LWmwsxXvjmg/