
DNS-over-HTTPS is coming to Windows 10

For fans of DNS-over-HTTPS (DoH) privacy, it must feel like a dam of resistance is starting to break.

Mozilla Firefox and Cloudflare were the earliest adopters of this controversial new way to make DNS queries private by encrypting them, followed not long after by the weight of Google, which embedded DoH into Chrome as a non-default setting.

This week an even bigger name joined the party – Windows 10 – which Microsoft has announced will integrate the ability to use DoH, and eventually also its close cousin DNS-over-TLS (DoT), into its networking client.

It looks like game over for the opponents of DoH, predominantly ISPs which have expressed a nest of worries – some rather self-serving (we can’t monetise DNS traffic we can’t see) and others which perhaps deserve a hearing (how do we filter out bad domains?).

Things got so hyperbolic that last summer the UK ISP Association (ISPA) even shortlisted Mozilla for an “Internet Villain” award to punish its enthusiasm for DoH before backing down after a public backlash.

Earlier this month, Mozilla retaliated, accusing ISPs of misrepresenting the technical arguments around encrypted DNS.

HTTPS piggybacking

We’ve already covered how DoH and DoT work in previous articles, but the gist is that they encrypt the queries a computer makes to DNS servers in a way that means intermediaries such as ISPs and governments can’t easily see which websites are being visited.

Another way to think of it is that DoH extends the benefits of HTTPS security to DNS traffic. While not perfectly private (data still leaks via things like Server Name Indication), it’s better than sending DNS queries in the clear.

In fact, DoT has some technical advantages over DoH, but it uses a dedicated port (853) that must be open on routers and firewalls. DoH travels over the same port (443) as ordinary HTTPS, making it indistinguishable from regular web browsing traffic, whereas DoT runs in its own lane, which makes it easier to block or filter and means users have more settings to configure to make it work.

Because DoH piggybacks HTTPS, it just works out of the box – as long as the client software supports it, that is. That’s why Windows 10 integration, whenever that appears, is important.
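
To make that piggybacking concrete, here is a minimal sketch of a DoH lookup in Python. It uses Cloudflare’s public JSON endpoint purely as an illustration: the URL, Accept header, and response fields are Cloudflare’s documented JSON API rather than the binary DNS wire format most clients use, and nothing here reflects how the forthcoming Windows client will work.

    # Minimal DoH lookup over HTTPS using Cloudflare's public JSON endpoint.
    # An on-path observer sees only a TLS connection to the resolver,
    # not the hostname being looked up.
    import json
    import urllib.request

    def doh_lookup(hostname, record_type="A"):
        url = ("https://cloudflare-dns.com/dns-query"
               f"?name={hostname}&type={record_type}")
        # The Accept header asks for the JSON format instead of the
        # binary application/dns-message wire format.
        req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
        with urllib.request.urlopen(req) as resp:
            answer = json.load(resp)
        return [record["data"] for record in answer.get("Answer", [])]

    print(doh_lookup("example.com"))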

Re-centralisation

Given that DoH support is already available in Firefox (which uses Cloudflare resolution) and Google’s Chrome (which uses its own DNS), what does Windows 10 integration add?

The answer is that it might help re-decentralise the provision of encrypted DNS.

Today, the unencrypted DNS system is highly decentralised, which is good for stability (no single point of failure) and for some aspects of security (DNS filtering is used to block malevolent sites). Anyone who doubts the importance of avoiding single points of failure might consider the Dyn DDoS attack of 2016, which caused major internet outages by targeting only one provider.

Even users who switch from their ISP’s DNS resolution to public alternatives such as Google’s 8.8.8.8/8.8.4.4 for performance reasons now have plenty of choice.

But if DoH or DoT ends up being turned on by default in browsers, DNS resolution could quickly shrink to a small number of providers, which might in time end up being bad for privacy.

According to Microsoft, the integration of encrypted DNS inside Windows is a way to resist this and hang on to the benefits of decentralisation:

There is an assumption by many that DNS encryption requires DNS centralization. This is only true if encrypted DNS adoption isn’t universal. To keep the DNS decentralized, it will be important for client operating systems (such as Windows) and internet service providers alike to widely adopt encrypted DNS.

However, having decided to embrace encrypted DNS, Microsoft admits there are still technical issues to iron out.

For example, Windows won’t override the DNS settings chosen by the user or admin, while still being guided by some privacy ground rules (roughly sketched in code after the list):

  1. Where a chosen DNS resolver offers encrypted DNS, Windows will opt for it over any unencrypted alternative by default.
  2. If encrypted DNS is disrupted, Windows won’t silently fall back to a non-encrypted server.
  3. Enabling encrypted DNS will be as simple as possible to avoid the problem that only experts end up using it.
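
Microsoft hasn’t published implementation details, but the first two rules amount to a simple selection policy. A rough Python sketch of that policy, using an entirely hypothetical Resolver interface (not any real Windows API), might look like this:

    # Rough sketch of the ground rules above; the resolver object and its
    # methods are hypothetical stand-ins, not a real Windows interface.
    class EncryptedDnsDisrupted(Exception):
        """Raised instead of silently downgrading to plaintext DNS (rule 2)."""

    def resolve(hostname, resolver):
        """Query the resolver chosen by the user or admin; never override it."""
        if resolver.supports_encryption():          # rule 1: prefer encrypted DNS
            try:
                return resolver.query_encrypted(hostname)
            except ConnectionError as err:
                # Rule 2: if the encrypted path is disrupted, fail loudly rather
                # than quietly retrying the same query over unencrypted port 53.
                raise EncryptedDnsDisrupted(str(err)) from err
        # The chosen resolver offers no encrypted transport: honour the
        # configured choice rather than overriding it.
        return resolver.query_plaintext(hostname)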

Given that encrypted DNS has emerged from the IETF, ISPs must already know they are fighting a losing battle.

Although unfolding gradually, the shift to a more private online world appears to be underway whether its opponents like it or not. The battle now is to be on the inside of this change or risk being locked out forever.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KwCTrsw95Kk/

Bon sang! French hospital contracts 6,000 PC-locking ransomware infection

A French hospital has suffered a ransomware attack that reportedly caused the lockdown of 6,000 computers.

Rouen’s Centre Hospitalier Universitaire (CHU) reverted to pen and paper instead of computerised record-keeping during last week’s attack, according to Le Monde.

The attack, which took place on Friday, November 15 at around 1900 local time, “rendered most business applications inaccessible, but also infected some of the workstations,” according to a hospital statement (en français).

“Many services operated in degraded mode and hospital staff were confronted with disruptions, particularly in regards to computerised prescriptions, reports or admissions management, which instead had to be operated in a degraded state, [or had to be transmitted via] telephone or paper,” continued the official statement.

Hospital managers reported a person named only as “X” to the Paris prosecutor’s office, alleging she or he had committed “fraudulent access to an automated data processing system and attempted extortion”.

The BBC added that the hospital had vowed not to pay the ransom.

Zulfikar Ramzan, chief technical officer of RSA Security, blamed “digital transformation” for the rising popularity of ransomware, elaborating: “While this has brought with it many benefits, organisations have become reliant on these digital technologies; loss of data can be a critical issue, making ransoming that data a much more profitable business… Unfortunately, this means we are seeing a lot of hits against organisations where data is critical – such as hospitals.”

Cesar Cerrudo, chief techie of rival biz IOActive, opined: “Sadly, the targeting of hospitals with ransomware is a growing trend; earlier this year seven hospitals in Australia were also impacted by ransomware. Hospitals are becoming a major target as despite new technology adoption being high, there is often a lack of cyber security knowledge, even though health data can be a very lucrative area for cybercriminals. This makes busy hospital staff the perfect targets.”

Last year an American hospital paid a $60,000 ransom to end an infection and get its files back. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/21/french_hospital_rouen_ransomware/

Anatomy of a BEC Scam

A look at the characteristics of real-world business email compromise attacks – and what makes them tick.

They typically land in no more than 25 inboxes in an organization — on a weekday first thing in the morning, posing as an urgent or time-sensitive email from a co-worker or executive. Business email compromise (BEC) scams represent just a small fraction of spear-phishing attacks overall, but these lucrative campaigns contain a few telltale traits.

Barracuda Networks analyzed the characteristics and trends of 1.5 million spear-phishing emails — of which BEC made up just 7% — to determine the key methods scammers are using in their BEC campaigns. Don’t let the tiny percentage fool you: BEC scams caused $26 billion in losses to businesses in the past four years, according to the FBI.

Some 91% of BEC attacks occur on weekdays, a tactic to blend in with the workday and appear more legitimate, the Barracuda study found. Attackers, on average, target up to six employees, and some 94.5% of all BEC attacks target fewer than 25 people in an organization. They do their homework on their targets, too, using the real names of human resources, finance, and other executives, as well as those of the targeted employees.

The BEC emails often are written with a sense of urgency in order to rush the recipient into doing the attacker’s bidding, with 85% marked as urgent, 59% requesting help, and 26% inquiring about availability, according to Barracuda’s findings. And while users click on one in 10 spear-phishing emails, BEC emails are three times more likely to be opened. That doesn’t necessarily mean the target fell for the message or followed the scammer’s request, though, notes Asaf Cidon, a Barracuda adviser and professor of electrical engineering and computer science at Columbia University.

“We can’t tell whether they went into a website and gave up their credentials,” he says, or took other actions. The bottom line is when attackers impersonate someone in a position of authority or who appears legitimate, they get three times the click rate on the email, he says. 

Cidon says some attackers are making an extra effort to create very personalized messages, unlike mass phishing email campaigns. “BECs are probably going after larger amounts of money, not just trying to compromise single credentials. They are trying to extract a wire transfer out of an organization, [for example], so they are willing to do more research and spend more time” on their targets, he says.

Barracuda’s study jibes with what other researchers have found in their BEC studies. “Successful BEC attacks are usually quite simple and mimic requests that could be reasonably expected to come from an employee’s executive or supervisor,” notes Crane Hassold, head of Agari’s cyber intelligence division.

He says wire transfer or payroll attacks usually target just one or two employees, typically in the finance or human resources department. But gift card BEC scams, where the attacker poses as a supervisor requesting the victim purchase and send him or her gift cards, often are sent to dozens of employees in an organization, he notes.

Barracuda saw the most BECs on Mondays, and Agari saw the most on Tuesdays (one out of four), with scams dwindling for the rest of the week. The emails most often arrive in the morning, with 9 a.m. as the bewitching hour since that’s when most employees are first getting to their desks. Some 47% of BEC attacks are sent from Gmail accounts, and just 3% of BEC attacks come with a rigged URL or attachment. About 8% of BEC scams involve payroll requests, according to the security firm’s report.

While most of the attacks originate from Nigeria, they now also come out of Ghana, Malaysia, and the United Arab Emirates, notes Agari’s Hassold. 

The best ways to beat back BECs: multifactor authentication, which limits the damage when user credentials do get stolen, and the usual mantra of educating users about the scams and how to spot one, including confirming the sender’s email address. Barracuda also recommends setting specific policies for financial transactions, banning email requests for any financial transactions, and adopting DMARC authentication, as well as machine learning technology, to protect the organization’s domain from being spoofed.
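
On the DMARC point, a domain advertises its policy in a TXT record at _dmarc.<domain>. A minimal way to check whether a domain publishes one, sketched here with the third-party dnspython package (an illustrative assumption, not something Barracuda’s report prescribes), looks like this:

    # Quick check for a published DMARC policy (record layout per RFC 7489).
    import dns.resolver  # third-party: pip install dnspython

    def dmarc_policy(domain):
        """Return the domain's raw DMARC record, or None if nothing is published."""
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        for rdata in answers:
            txt = b"".join(rdata.strings).decode()
            if txt.lower().startswith("v=dmarc1"):
                return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:..."
        return None

    print(dmarc_policy("example.com"))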

But even with all of the best practices, there’s no way to guarantee a user won’t get duped by a BEC email. “There’s no single silver bullet,” Cidon says.

Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/attacks-breaches/anatomy-of-a-bec-scam/d/d-id/1336425?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The ‘Department of No’: Why CISOs Need to Cultivate a Middle Way

A chief information security officer’s job inherently involves conflict, but a go-along-to-get-along approach carries its own vulnerabilities and risks.

Most of us are likely to agree that if we want to continue to evolve to be our best selves, we need some form of conflict or challenge. If we want to be stronger, we lift more weights or add more repetitions. If we want improved brain function, we solve puzzles or learn new skills. We may restructure our diets or diversify our exercise regimes, but, in any event, such activity almost always requires a change in behavior, a commitment to discipline, and a flexibility of approach to achieve optimal results.

And yet as security practitioners, many of us discard this type of training when we walk through the doors at work. We stay in a known, comfortable place. We ignore the independence and creativity of our own thinking and — almost as if by default — we transform into yes men and women, agreeing with our management teams and our boards about the right ways to handle risk and cyber threats.

Ironically, to much of the rest of the organization, we transform into what I call the Department of No, a group of well-intentioned but risk-averse executives who develop complex policies that restrict employee behaviors in a misguided attempt to reduce risk levels. Our go-along-to-get-along approaches, whether positive or negative and whether we realize it or not, reveal inherent biases and predisposed behaviors that may seem benign in themselves but that carry new vulnerabilities (and therefore new risks) into the workplace.

The truth is, a CISO’s job inherently involves conflict. We strive to strike a balance between things like cost and quality or security and usability, knowing that we’re basically making trade-offs, reducing one part of the equation to give the other more weight, and those trade-offs typically show us where our bias lies. Such bias, resulting from our backgrounds, our training, or whatever else, makes us inclined toward certain assumptions and contributes to our potential misperception of risk and to unintentional increased vulnerability. It’s hardly a path that enables us to do our best work.

Fragmented organizational responsibility is another inherent conflict. One department may be responsible for FedRAMP certification, another for SOC standards, and still others for privacy, information security, and compliance. Risk and control responsibilities may therefore be siloed in both decision-making and outcomes. When each department requires its own audits, controls, policies, and priorities, separating bias and working toward a common framework becomes increasingly challenging, making it easier for us to stay within our respective teams and again, perhaps unintentionally, weaken our organizations by working at cross-purposes.

We all view risk in our own way, like light shining through a prism. Depending on the angles we use, we see different refractions and reflections of light. The color and intensity of light changes as it traverses the prism into a spectrum of dispersed or mixed colors. Our evaluation of risk and the controls we use to mitigate vulnerabilities are just as diverse — diversity that is healthy if it is recognized and managed, but divisive and unnecessarily conflicting if not. The end result leaves wedges between organizations that should be working together to optimize the spectrum of information risk.

Disagreement Is Not Disloyalty
Getting there requires the same commitment to discipline and flexibility of approach we bring to other areas of our lives. It requires us to pose high-contrast questions that foster constructive conversations and ensure we are open to exploring all available possibilities. Too often, especially as we rise through the ranks of an organization, we censor ourselves and agree with our CEOs and our boards because we don’t want to be perceived as disloyal.

But loyalty is often simply another form of bias. Despite what we have been taught to believe, disagreement does not equal disloyalty. In fact, I believe the reverse is true: Disagreement can be the highest form of loyalty, although that loyalty may be toward our customers or shareholders or even society at large if not to our management teams.

We cannot be so flexible that we lose sight of our duty to protect the right things at the right times in the right order. Nor can we be so rigid that our attempts to challenge a harmful status quo create equally ossified and restrictive ways of thinking. In other words, too much “yes” is dangerous, too much “no” is dangerous, but constructive conflict is essential to ensure contrasting opinions thrive and the truly serious issues at hand are met with the best approaches to solving them successfully.

We know we cannot eliminate risk entirely, but we can make good choices and strive continually toward optimization by:

1. Ensuring the cyber safety of people first — whether employees, customers, contractors, partners, or shareholders

2. Understanding and safeguarding the data relevant and necessary to keep people safe

3. Implementing a holistic framework of overarching governance that protects the long-term health of the business by putting controls in place that solve for the whole and not the sum of its parts

Independence and objectivity are key to our success and credibility. As CISOs and risk professionals, we need to cultivate the mettle necessary to do the right thing rather than allowing bad decisions to occur on our watch because we want to appear loyal.

Conflict is OK. Tension is OK. Seen through the right lens and managed toward positive outcomes, tension and conflict allow opposing ideas to flourish and be discussed, evaluated, and discarded in turn, increasing the chance that the decisions we ultimately make will provide the best overall protection to our organizations.

It might be trite in this day and age to say “if you see something, say something,” but in fact that’s precisely what we should be doing. If we can’t go to our management teams, we must go to our boards. But we can’t be afraid to stand our ground, even if it means putting our own jobs at risk to save our organizations. We owe it to the larger constituencies that depend on us — customers, shareholders, communities — to remain objective and foster dialogue that frees us from the tyranny of “yes” or “no” and allows us to keep asking “how.”

Malcolm Harkins is the chief security and trust officer for Cymatic. He is responsible for enabling business growth through trusted infrastructure, systems, and business processes, including all aspects of information risk and security, as well as security and privacy policy. …

Article source: https://www.darkreading.com/threat-intelligence/the-department-of-no-why-cisos-need-to-cultivate-a-middle-way/a/d-id/1336391?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Top Nontechnical Degrees for Cybersecurity

A computer science degree isn’t the only path into a cybersecurity career.

The gap between the supply of trained cybersecurity professionals and the demand for them is on the order of 4 million people worldwide, and yet not nearly enough students are in computer science and cybersecurity university programs around the world to bridge that gap. One solution, some say, is to look beyond the traditional computer science/cybersecurity academic pipeline for entry-level professionals.

In fact, “About 58% of cybersecurity professionals come from fields outside technology,” says Wesley Simpson, COO of (ISC)². “Cast a big net. We need people from all different backgrounds and degrees.”

The big question is: Which degree programs are worthy of consideration?

For practical reasons, Simpson points to the liberal arts. The frequent stories of cybersecurity teams not getting management support for the tools and personnel they need come down to not effectively telling the cybersecurity story, he says. That’s where liberal arts grads can help.

“The liberal arts people are better at telling the story, crafting the story, and talking to all the people they need to talk with to build the story,” Simpson says. “We need people from all over the spectrum to tell the story.”

Beyond the budget story, individuals from different academic and personal backgrounds can bring critical new perspectives to cybersecurity, which is “key to forming a concrete and inclusive analysis,” says Harrison Van Riper, strategy and research analyst at Digital Shadows. “Whether you’re conducting an investigation of a threat actor or performing incident response, it’s important to understand all of the different views and perspectives that may be impacted.” 

Dan Basile, executive director of the security operations center at Texas A&M University, agrees. “We all need a greater diversity of thought and background, in addition to traditional diversity concerns, in order to attack the complex problems we face,” he explains. “All nontechnical majors have something that is of value to the cybersecurity field.” 

So with general agreement that a wider net is part of the solution, which nontechnical degrees should cybersecurity managers look to for their future staffing needs? We asked a number of cybersecurity professionals for their thoughts, and we received a variety of responses. The six degrees on this list were at the top of the collective heap.

We’re also curious: Is your academic background something other than computer science? Let us know where you came from — and what you think about the idea that cybersecurity teams should look beyond computer science and security programs for their future hires.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/careers-and-people/6-top-nontechnical-degrees-for-cybersecurity/d/d-id/1336419?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Increases Top Android Hacking Prize to $1M

Google expands its Android Security Rewards program and multiplies its top cash prize from $200,000 to $1 million.

An expansion of Google’s Android Security Rewards (ASR) program includes a new top prize of $1 million, a massive increase from the previous top prize of $200,000, Google reported today. Researchers could earn even more for exploits found in Android developer preview versions.

The ASR program launched in 2015 to reward researchers who find and report vulnerabilities in the Android ecosystem. Over four years, it has awarded more than $4 million for 1,800 reports. Payouts exceeded $1.5 million in the past year alone; the top reward in 2019 was $161,337.

Now, the program is expanding and increasing the earning potential for white-hat hackers. Google is promising a top prize of $1 million for a “full chain remote code execution exploit with persistence, which compromises the Titan M secure element on Pixel devices,” Jessica Lin of the Android Security Team explains in a blog post. Titan M stores credentials on Pixel phones.

While the $1 million prize is for the Titan exploit alone, Google is adding more categories of exploits to its awards program, including those involving data exfiltration and lock-screen bypass. Depending on the exploit category, a researcher could earn up to $500,000.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/google-increases-top-android-hacking-prize-to-$1m/d/d-id/1336431?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

3 Fundamentals for Better Security and IT Management

Nail these security fundamentals, and your organization will be well-positioned to succeed next year and in the years to come.

As 2019 draws to a close, we’ll see plenty of discussion of the year’s major security incidents, but few will focus on the foundational missteps that plague most organizations. These disruptions aren’t a mystery; in many cases, organizations still make the mistake of implementing new tool after new tool without understanding the nature of their hardware and software assets, where they sit, and what applications and systems are running on them. Throwing more tools at problems of visibility and control will leave any security and IT management strategy inherently flawed.

Let’s cut through the clutter. Here’s what organizations can do now, and throughout the coming year, to ensure that strong security and IT operations fundamentals are locked in.

1. Address Gaps in Visibility
IT teams simply can’t protect what they can’t see. Good IT hygiene begins with an accurate, up-to-date, and contextual inventory of an organization’s endpoints, including servers, laptops, virtual machines, and cloud instances on the network. But that’s just the beginning, and a mass of tools — from asset discovery solutions and security information and event management systems to configuration management databases and beyond — still leads to visibility gaps.

The reason is that a collection of point tools doesn’t help organizations see the bigger picture — in other words, to have full visibility. Each product and tool has its own view of the IT environment. Individual tools may offer data that is relatively timely, contextual, or complete. But when IT teams look at this data in aggregate, visibility gaps begin to form.

Here’s an example. IT teams might have a tool that gets endpoint detection and response (EDR) telemetry up to the cloud every five minutes from all of their systems — but not their unmanaged hosts. They may get vulnerability scan results back once a week for Payment Card Industry (PCI) systems, but only once a month for workstations. Their asset discovery solution might scan for unmanaged and managed assets, but only in the data center and only once a day. And if they need a new set of data that they didn’t anticipate and that falls outside the scope of their existing tooling’s hard-coded capabilities, there’s no easy way to get it. Consequently, stitching all this asynchronous data together to garner usable insights becomes so difficult as to be almost impossible.
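
As a toy illustration of that stitching problem, consider merging per-source “last seen” timestamps and flagging assets with no fresh record. Every feed name, host, and timestamp below is invented for the sketch:

    # Toy illustration of asynchronous feeds: merge the freshest record per
    # asset and flag anything no source has seen within the allowed window.
    from datetime import datetime, timedelta

    feeds = {  # hypothetical last-seen timestamps per data source
        "edr":        {"laptop-01": datetime(2019, 11, 21, 9, 55)},
        "vuln_scan":  {"laptop-01": datetime(2019, 11, 14, 2, 0),
                       "pci-db-01": datetime(2019, 11, 14, 2, 0)},
        "asset_disc": {"pci-db-01": datetime(2019, 11, 20, 23, 0),
                       "printer-7": datetime(2019, 11, 20, 23, 0)},
    }

    def stale_assets(feeds, now, max_age=timedelta(days=1)):
        """Return assets whose freshest record across all feeds is older than max_age."""
        freshest = {}
        for records in feeds.values():
            for asset, seen in records.items():
                freshest[asset] = max(seen, freshest.get(asset, seen))
        return {asset: now - seen for asset, seen in freshest.items()
                if now - seen > max_age}

    print(stale_assets(feeds, now=datetime(2019, 11, 22, 9, 0)))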

If this lack of visibility isn’t rectified, IT teams will continue to suffer the consequences. They may continue to think they are more protected than they are, exposing themselves to vulnerabilities that should — and could — have been prevented.

One way for IT teams to address this lack of visibility is by using a unified endpoint management platform. [Editor’s note: The author’s company, Tanium, is one of a number of companies that provide such a service.] With a single source of endpoint data, those glaring visibility gaps start to close.

2. Declutter and Consolidate the IT Environment
Collections of point tools aren’t just a challenge for visibility; they’re also adding needless complexity. A Forrester survey found that, on average, organizations today use 20 or more tools from more than 10 different vendors to secure and operate their environments. And many large enterprises have 40 to 50 point solutions — a staggering number.

This cluttered environment makes it a big challenge to implement good IT hygiene habits, because each tool offers different data and different degrees of visibility. In addition, tools individually are expensive to learn, deploy, and upgrade. They often have short shelf lives because they were built for their time, usually for a specific use case, and not exactly future-proofed.

The good news is that it isn’t difficult to pare down the volume of tools. IT teams need to first identify the capabilities and deliverables their organizations need to implement, regardless of their technology and tools. Then they should go through each tool individually and catalog its capabilities. And finally, they should create a Venn diagram to see where overlap exists between these tools. Auditing your estate like this can be cumbersome, but the overlaps are the opportunities for consolidation so that IT teams can operate with fewer tools and more visibility moving forward.
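
Once the per-tool capability catalog exists, the overlap analysis itself is easy to mechanise. A toy sketch, with invented tool names and capability labels standing in for a real estate:

    # Toy version of the consolidation audit: list each pair of tools and
    # the capabilities they duplicate (the candidates for consolidation).
    from itertools import combinations

    capabilities = {  # hypothetical catalog built during the audit
        "EDR tool":        {"endpoint telemetry", "threat detection", "remote response"},
        "Vuln scanner":    {"vulnerability scanning", "asset discovery"},
        "Asset discovery": {"asset discovery", "endpoint telemetry"},
    }

    def overlaps(catalog):
        """Yield each pair of tools together with the capabilities they share."""
        for (a, caps_a), (b, caps_b) in combinations(catalog.items(), 2):
            shared = caps_a & caps_b
            if shared:
                yield a, b, shared

    for a, b, shared in overlaps(capabilities):
        print(f"{a} and {b} both provide: {', '.join(sorted(shared))}")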

3. Remove IT Operations and Security Team Silos
You can’t enforce IT hygiene and cybersecurity best practices if your teams aren’t working together. Existing point tools reinforce the silos we see crop up between IT operations and security teams instead of enabling the collaboration that isn’t just a nice-to-have, but crucial for better business outcomes. As organizations look to build and strengthen their security fundamentals, IT operations and security teams should unite around a common set of actionable data for true visibility and control over all of their computing devices. This will enable them to prevent, adapt, and respond in real time to any technical disruption or cyber threat.

Without security fundamentals firmly in place, IT teams will start the new year behind. Heading into 2020, they should be able to address visibility gaps, strategically reduce the number of IT tools in use, and bring together IT operations and security teams.

Make 2020 a fresh start. If teams can focus on nailing their basic security fundamentals, they will be well-positioned to succeed not just this coming year, but in the years to come.

Chris Hallenbeck is a security professional with years of experience as a technical lead and cybersecurity expert. In his current role as CISO for the Americas at Tanium, he focuses largely on helping Tanium’s customers ensure that the technology powering their business can …

Article source: https://www.darkreading.com/vulnerabilities---threats/3-fundamentals-for-better-security-and-it-management-/a/d-id/1336389?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

UK tax collectors warn contractors about being ripped-off – and not by HMRC for a change

The UK’s tax authorities have issued an official warning to contractors to watch out for self-assessment scams – and they don’t mean IR35 for a change.

According to Her Majesty’s Revenue and Customs (HMRC), fraudsters see self-employed individuals as a good target and are pulling off a wide variety of cons around purported tax rebates. The problem peaks between now and the tax deadline of January 31, hence the warning on Wednesday.

“Over the last year, HMRC received nearly 900,000 reports from the public about suspicious HMRC contact – phone calls, texts or emails,” a release from the Revenue reads. “More than 100,000 of these were phone scams, while over 620,000 reports from the public were about bogus tax rebates.”

The biggest scam targeting the millions of self-assessment filers is a phone call from someone claiming to be from the Inland Revenue and offering a tax refund. Also prevalent is an email or text message containing a link that also claims to offer a refund. That link will “take customers to a false page, where their bank details and money will be stolen,” the British government warns.

Watch out for heavy-handed tactics too, the authorities warn, with another scam featuring someone threatening the individual with arrest or imprisonment if they don’t immediately pay a bogus tax bill.

Here’s how you know if it’s a scam: the revenue service will never directly contact you asking for bank details or passwords or PINs. It also won’t send links in texts or emails or send attachments. So if you receive any of them, they’re not legit.

Self-employed people are being targeted because they typically have to file their own tax return and send funds to the government themselves. Most people pay taxes automatically through their employer and so never have to go through the hassle and confusion of figuring out the over-complicated UK tax system.

By contrast, fake rebate scams and aggressive demands to pay bogus tax bills are prevalent in the US, where most people are still required to submit their own tax return.

Copycat phishers

Another thing to keep an eye out for, HMRC has warned, is copycat websites that have addresses similar to the real Revenue website. It advises that people “always type in the full online address www.gov.uk/hmrc to obtain the correct link.”

Which is not very realistic but then it doesn’t want to advise people to use Google because scammers can buy misleading ads on Google to lead netizens to the wrong site.

HMRC does have a customer protection team that shuts down such scams, apparently, and it has various reporting tools and hotlines. You can email phishing@hmrc.gov.uk or text 60599. Or use Action Fraud’s online reporting tool.

But if all else fails, you can use this simple guide: if someone contacts you offering a tax rebate, it’s just not going to be true because in the rare event you do have one, you are going to have to grip it with both hands and pull it out the clenched fist of the Inland Revenue.

They are not going to give you money without you working for it, my friend. And they definitely aren’t going to call you to tell you that they owe you money. It’s not Chris Tarrant, it’s the flaming taxman. Get a grip.

Likewise, if you are contacted by someone demanding you pay your tax bill and they give you the option to do so quickly and simply, then it is also an obvious scam: because HMRC is utterly incapable of making anything that simple.

Now, if you have to go through a series of mindbogglingly confusing, contradictory pages filled with turgid text and are forced to spend several hours trying to figure out if you need to fill in that box or not, and if so with what information, then – congratulations – you have found HMRC’s real self-assessment platform and all is well. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/21/uk_tax_fraud/

Amnesty slams Facebook, Google over ‘pervasive surveillance’ business model

Amnesty International says the “pervasive surveillance” practiced by Facebook and Google represents a threat to human rights, a claim the two companies dispute.

On Thursday in the UK, the advocacy group, known for documenting torture and ethnic cleansing, published a report taking the two ad giants to task for business models that depend on harvesting personal data.

“Google and Facebook dominate our modern lives – amassing unparalleled power over the digital world by harvesting and monetizing the personal data of billions of people,” said Kumi Naidoo, Secretary General of Amnesty International, in a statement.

“Their insidious control of our digital lives undermines the very essence of privacy and is one of the defining human rights challenges of our era.”

The organization’s report, “Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights,” explores how the companies do business, how that business affects privacy and influences society, how their market power limits accountability, and what can be done to change things.

“The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse,” the report says.

“Firstly, an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination.”

Free expression, however, may pose problems when it’s free of oversight, as Facebook demonstrated by failing to police hate speech in Myanmar.

The report acknowledges that other large companies participate in “surveillance capitalism,” notably Amazon and Microsoft, as well as data brokers and telecom companies. But it singles out Facebook and Google for their dominance of specific markets.

Excluding China – which represents a separate surveillance story – Facebook has about 70 per cent of social media users and 75 per cent of mobile messaging users (thanks to WhatsApp), based on third-party statistics cited in the report. Google answers 90 per cent of internet searches, owns YouTube, the second largest search service and leading video platform, dominates the browser market with Chrome, and oversees the largest mobile operating system, Android, with some 2.5bn monthly active devices.

Between them, the two companies collect 60 per cent of online ad revenue and account for 90 per cent of the growth in the digital ad market. And the two firms keep trying to expand their offerings with data-dependent projects like Facebook’s Libra and Google’s interest in Fitbit and health stats, not to mention its plan to provide checking accounts tied to Google Pay.

The report cites various privacy scandals in which Facebook and Google have been involved, like Cambridge Analytica’s misuse of Facebook data, Google’s geolocation tracking that persisted even when disabled, and YouTube’s algorithmic promoting of false and incendiary commentary. It also discusses some specific harms arising from behavioral ad targeting like discrimination.

Amnesty International concludes that further government regulation, with a focus on human rights, is necessary.

“Governments must take positive steps to reduce the harms of the surveillance-based business model – to adopt digital public policies that have the objective of universal access and enjoyment of human rights at their core, to reduce or eliminate pervasive private surveillance, and to enact reforms, including structural ones, sufficient to restore confidence and trust in the internet,” the report says.

In a statement emailed to The Register, a Facebook spokesperson said the company fundamentally disagreed with Amnesty’s findings, a position the company addresses more fully in a letter included in the report.

“Facebook enables people all over the world to connect in ways that protect privacy, including in less developed countries through tools like Free Basics,” the company spokesperson said. “Our business model is how groups like Amnesty International – who currently run ads on Facebook – reach supporters, raise money, and advance their mission.”

Asked to comment, Google didn’t directly address the report. Instead, it acknowledged that data is important and noted that the company has been improving the mechanisms available to deal with data.

“We recognize that people trust us with their information and that we have a responsibility to protect it,” a company spokesperson said in an emailed statement. “Over the past 18 months we have made significant changes and built tools to give people more control over their information.” ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/21/amnesty_facebook_google/

Orange is the new green: Nigeria scammer bags $1m while operating behind bars

A convicted fraudster housed in a maximum security prison in Nigeria managed to pull off a $1m (£775,000) online scam from behind bars.

The African nation’s Economic and Financial Crimes Commission this week said Hope Olusegun Aroke, who is serving a 24-year stretch for a previous fraud conviction in 2015, used a contraband cellphone and a network of accomplices outside prison to scam foreigners out of money online.

“The immediate riddle that confronted the EFCC was how it was possible for the convict to continue to ply his ignoble trade of internet fraud from prison,” the commission said.

“Preliminary investigation revealed that the convict, against established standard practice, had access to internet and mobile phone in the Correctional Centre where he is supposed to be serving his jail term.”

Using that phone and internet connection, it is believed that Aroke was able to mastermind the scams (the EFCC does not say exactly what the caper involved) as well as register two bank accounts, shift money between those accounts and one used by his wife, and purchase $60,690 of real estate and a new Lexus RX 350 for his spouse.

Aroke appears to be something of a savant when it comes to internet fraud, even for a country like Nigeria, where internet scams are, or at least were, big business.

The commission says his 24-year sentence for fraud came after he was found posing as a computer science student at a Malaysian university. While pretending to be studying in Kuala Lumpur, Aroke oversaw a massive criminal operation spread out between Asia and Africa. He was ordered to serve two consecutive 12-year terms.

While the EFCC has yet to figure out exactly how Aroke was able to get the phone, internet connection, and bank token he used to set up and maintain the fraud operation, they note he had earlier been able to get out of the prison for a brief period to get medical treatment.

It was found that when he left the prison to be treated for the unspecified illness, Aroke first stayed at the hospital, but later moved to a hotel where he was able to meet with his wife and family, as well as attend parties.

As you might imagine, this is not the sort of thing typically afforded to an inmate at a maximum security prison, and the EFCC is investigating that matter as well. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/21/nigerian_fraudster_prison/