STE WILLIAMS

Bulgarian phishing gang member who lived with his parents jailed for part in £40m fraud ring

A Bulgarian phishing criminal who created fake versions of legitimate companies’ websites as part of a £40m fraud has been jailed.

Svetoslav Donchev, 37, was extradited to stand trial at Southwark Crown Court in London, where he pleaded guilty to five counts of fraud. He was sentenced this afternoon.

Donchev, originally from the city of Pleven, created what prosecutors described as “phishing protection scripts” to be embedded on cloned websites of legitimate companies. The clones were then hosted on a server controlled by other criminals who would lure victims in by sending phishing emails.

“Donchev would not steal the bank details himself,” said specialist prosecutor Sarah Jennings. “He would supply the tools for others to do so. He may have felt removed from the scams themselves, but the thorough police investigation and this successful prosecution shows that we will track down all those involved in online scams, extradite them if they are abroad, and put them before the UK courts.”

The phishing emails included common lures such as suggesting that accounts needed to be verified or that victims were due a refund.

“When the victim then completed the forms [on the cloned website] by inputting their personal financial information, the details were either logged or sent by email to the cybercriminal,” the Crown Prosecution Service said in a statement. They were then used or resold, allegedly on the dark web.

Donchev was also said to have provided counter-detection software intended to help cloned websites evade common browser-based fraud detection systems.

Police were led to Donchev when they discovered an email address among files from recently convicted hacker Grant West. When Donchev was arrested at home in Pleven, where he lived with his parents, investigators from the UK and Bulgaria found folders on his computer revealing the extent of his criminal activities.

They established that he had created website scripts for up to 53 UK-based companies, or companies with a UK footprint. Cops estimated there were potentially half a million victims of his criminal activity, with the fraud totalling £41.6m.

Donchev pleaded guilty to: making articles for use in fraud; supplying articles for use in fraud; encouraging the commission of an offence; concealing/disguising/transferring/removing criminal property (in that he received Bitcoin as payment for his criminal deeds); and acquiring criminal property, namely 1.5257417 Bitcoin. His sentence was a mixture of concurrent and consecutive prison terms totalling nine years.

Under current sentencing laws Donchev will spend a maximum of four-and-a-half years in prison. He is likely to be released on licence after around three years. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/20/bulgarian_svetoslav_donchev_jailed_9_years_phishing_fraud/

A Beginner’s Guide to Microsegmentation

In a world in which the data center perimeter has all but evaporated, traditional segmentation is no longer enough. Enter microsegmentation. Here’s what organizations need to do to maximize the benefits of this improved security architecture.


By layering software-defined networking (SDN) and greater virtualization into one of security architecture’s most fundamental techniques, microsegmentation makes it possible to build out common-sense security boundaries in a world without perimeters.

Here’s what security experts say about how organizations can best reap the benefits of microsegmentation.

What Is Microsegmentation?
The practice of network segmentation has long been a favored way to isolate valuable, well-protected systems. By bulkheading sensitive areas of the network away from less-valuable and less-hardened areas, security architects lean on segmentation to thwart attackers from moving laterally and escalating privileges across networks. The idea is not only to reduce the blast radius of successful attacks, but also to give security strategists the freedom to spend the most money protecting the riskiest systems, without worrying about what happens when attackers gain a foothold in low-level systems.

The growing problem with traditional segmentation is that it does best at controlling what network architects call North-South traffic flows: the client-server interactions traveling in and out of the data center. That’s problematic in our hybrid-cloud world, where the data center perimeter has all but evaporated and some 75% to 80% of enterprise traffic flows East-West, or server-to-server, between applications.

“As we enter the era of digital transformations, cloud-first strategies, and hybrid enterprises, having the ability to create smaller zones of control for securing the data has become paramount,” says Tim Woods, vice president of technology alliances for Firemon. “It started with additional segmentation — think smaller and many more zones of control — but with greater adoption of virtualization, that segmentation can now extend all the way down to the individual workloads.”  

SDN and technologies like containers and serverless functions have been the real game-changers here, making it more affordable and technically feasible to break down workload assets, services, and applications into their own microsegments.

“In the past, segmentation required rerouting hardware — a very manual, expensive process,” says Ratinder Paul Singh Ahuja, founder and chief R&D officer at ShieldX. “Today, it is software-defined, which means it can be done easily and with automation as cloud environments constantly morph.”
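The software-defined approach Ahuja describes can be made concrete with a small sketch. The tier names and rule format below are purely illustrative, not any vendor’s actual policy schema; they show the default-deny, explicit-allow model that microsegmentation tooling implements:

```python
# Illustrative sketch of microsegmentation policy as software: traffic
# between workload tiers is denied unless explicitly allowed. Tier names
# and rules are hypothetical examples, not a real product's schema.

ALLOW_RULES = {
    ("web", "app", 8443),   # web tier may call the app tier over TLS
    ("app", "db", 5432),    # app tier may reach the database
}

def is_allowed(src_tier: str, dst_tier: str, port: int) -> bool:
    """Default-deny check: a flow is permitted only if explicitly listed."""
    return (src_tier, dst_tier, port) in ALLOW_RULES

print(is_allowed("web", "app", 8443))  # True
print(is_allowed("web", "db", 5432))   # False: web must go through the app tier
```

Because the policy is data rather than hardware configuration, adding or removing a microsegment becomes a rule-set update, which is what makes automation feasible as cloud environments morph.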

Start by Mapping Data Flows and Architecture Thoroughly 
Security experts overwhelmingly agree that visibility issues are the biggest obstacles standing in the way of successful microsegmentation deployments. The more granular the segments, the better the IT organization needs to understand exactly how data flows and how systems, applications, and services communicate with one another.

“You not only need to know what flows are going through your route gateways, but you also need to see down to the individual host, whether physical or virtualized,” says Jarrod Stenberg, director and chief information security architect at Entrust Datacard. “You must have the infrastructure and tooling in place to get this information, or your implementation is likely to fail.”

This is why any successful microsegmentation effort needs to start with a thorough discovery and mapping process. As part of that, organizations should either dig up or develop thorough documentation of their applications, says Stenberg. That documentation will be needed to support all future microsegmentation policy decisions and to ensure each application keeps working the way it is supposed to.

“This level of detail may require working closely with vendors or performing detailed analysis to determine where the microsegments should be placed and how to do so in a manner that will not cause production outages,” says Damon Small, director of security consulting at NCC Group.
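As a rough illustration of what that discovery step produces, flow records, however they are exported (NetFlow, VPC flow logs, or host agents, for example), can be aggregated into a map of which hosts talk to which. A minimal sketch, assuming the records are already available as (source, destination, port) tuples; the addresses are made up:

```python
from collections import defaultdict

# Minimal sketch of flow-map discovery. Assumes flow records have already
# been exported as (source, destination, port) tuples; addresses are
# illustrative only.
flows = [
    ("10.0.1.5", "10.0.2.9", 5432),
    ("10.0.1.5", "10.0.2.9", 5432),   # duplicate records collapse into one edge
    ("10.0.1.7", "10.0.3.4", 443),
]

talk_map = defaultdict(set)
for src, dst, port in flows:
    talk_map[src].add((dst, port))    # who talks to whom, on which port

for src in sorted(talk_map):
    print(src, "->", sorted(talk_map[src]))
```

The resulting map is the raw material for every later decision about where segment boundaries can be drawn without breaking production traffic.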

Use Threat Modeling To Define Use Cases
Once an organization has put the mechanisms in place to achieve visibility into data flows, that understanding can inform risk assessment and threat modeling. This will, in turn, help the organization decide where to start and how granular to go with microsegments.

“With that understanding, you can then start identifying the risks in your environments, also known as your ‘blast radius.’ How far can an attacker go within your network if it is breached? Is a critical asset, such as a user database, within that blast radius?” says Keith Stewart, senior vice president of product and strategy at vArmour. “Once you can identify the high-risk areas, you can then start putting microsegmentation controls in place to address those risks.”
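Stewart’s blast-radius question is essentially a reachability problem: given observed or permitted communication paths, which hosts can an attacker reach from a given foothold? A minimal sketch over a hypothetical topology (the hostnames and graph are invented for illustration):

```python
from collections import deque

# Hypothetical topology: which hosts can open connections to which.
# In practice this graph would come from observed flow data.
reach = {
    "web1": {"app1"},
    "app1": {"db1", "cache1"},
    "db1": set(),
    "cache1": set(),
}

def blast_radius(start: str) -> set:
    """Breadth-first search: everything an attacker on `start` can reach."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in reach.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(blast_radius("web1")))  # ['app1', 'cache1', 'db1']
```

Critical assets that show up in the reachable set of many footholds are natural candidates for the first microsegmentation controls.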

But not before you’ve established a detailed plan for action. Because microsegmentation is done with such granular access controls, it requires a significant level of due diligence and attention to detail to pull off, says Dave Lewis, global advisory CISO for Cisco’s Duo Security. 

“The need for proper planning for moving to microsegmentation cannot be understated,” he says. “It is important to know what, in fact, you need to segment.”

One thing to keep in mind is that microsegmentation can be achieved in a lot of different technical manners and with varying degrees of complexity, says Marc Laliberte, senior security analyst at WatchGuard Technologies.

“Part of your rollout plan should involve scoping your threat model to determine what form of microsegmentation is appropriate to you,” he says. “Your security investment should be based off of the risks your organization and its applications face, and the potential damages from a successful attack.” 

Balance Control with Business Needs
Throughout the threat modeling, the strategists behind a microsegmentation push need to keep business interests top-of-mind when designing the microsegments. 

“When operating at scale, it is important to develop a segmentation scheme that meets security needs but also provides the necessary access [for applications and processes to work seamlessly],” says Ted Wagner, CISO at SAP NS2. This means the scheme can’t be designed or implemented in a bubble — it’ll need to be vetted by a lot of interested parties, he explains.

Microsegmentation success requires that security reach out to stakeholders from across business and IT to gain an intimate understanding of how all of the moving application and business-process pieces work together from the get-go.

“It’s key to build a diverse team of business owners, network architects, IT security personnel, and application architects to implement the process,” says Scott Stevens, SVP of global systems engineering at Palo Alto Networks. 

Building out a well-rounded team can also help organizations set expectations up front and side-step the kind of political problems that could kill a project before it gets off the ground. 

“The major obstacles to implementing microsegmentation can and will be associated with communication to the business. Far too often in the past we would hear, ‘It must have been the firewall’ when something went wrong,” Lewis says. “Imagine, if you will, a world where microsegmentation is now the target of internal business unit vitriol.” 


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/edge/theedge/a-beginners-guide-to-microsegmentation/b/d-id/1335849?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

A Safer IoT Future Must Be a Joint Effort

We’re just at the beginning of an important conversation about the future of our homes and cities, which must involve both consumers and many players in the industry

Though much celebrated, the Internet of Things (IoT) has nevertheless opened a Pandora’s box of new challenges in Internet security and data privacy. The need for some sort of oversight seems long overdue. But who should be responsible for ensuring safer IoT devices? Can manufacturers be trusted to provide effective safeguards on their own? Or will government be required to step in?

On March 11, 2019, members of the US Congress suggested a partial answer when they put forth the Internet of Things (IoT) Cybersecurity Improvement Act of 2019, which aims to “leverage Federal Government procurement power to encourage increased cybersecurity for Internet of Things devices.” The legislation does not mandate standards, but if passed it would require manufacturers to provide devices that are inherently more secure by design (in other words, constructed with internal security features and password protection) to be eligible for lucrative government contracts.

It’s been a slow process. IoT devices, after all, have been around for almost 10 years, and US government agencies were aware of security issues at least by 2015. Nevertheless, many cybersecurity experts had suggested that legislation would be a “legal nightmare” and that the only solution was self-regulation. It was a similar situation in the UK: the government had previously stated its preference that the industry self-regulate, with some regulation where necessary. But it, too, is now in the process of enacting laws aimed at better securing and protecting the data collected by connected devices.

What changed? There are still significant problems with many IoT devices currently available. Cybersecurity experts are also concerned about the longevity of these devices and the ability of manufacturers to provide security updates in a timely manner. A lightbulb may be short lived, but the lifespan of the average refrigerator is between 14 and 17 years. Will the manufacturer even still be in business then?

For these reasons, I think it’s also inevitable that future legislation will go much further and establish basic security standards for all devices sold in the US, similar to California’s IoT security law SB-327, which prohibits the use of easily hacked default passwords. Nevertheless, while government legislation has the potential to influence manufacturers and suppliers of IoT devices, it’s important to look at the big picture.
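SB-327’s ban on easily hacked default passwords amounts to a check that a manufacturer could automate before shipping. A minimal sketch; the default-password list and length rule below are illustrative examples, not the statute’s actual requirements:

```python
# Illustrative sketch of an SB-327-style check: reject devices shipped
# with a password on a known-default list or one trivially derived per
# unit. The list and rules are examples, not the law's text.
KNOWN_DEFAULTS = {"admin", "password", "12345", "root", "default"}

def password_acceptable(password: str, device_serial: str) -> bool:
    if password.lower() in KNOWN_DEFAULTS:
        return False                  # easily hacked shared default
    if password == device_serial:     # unique per unit, but printed on the box
        return False
    return len(password) >= 8         # minimum-strength floor

print(password_acceptable("admin", "SN1234"))     # False
print(password_acceptable("Tq9!kf2m", "SN1234"))  # True
```

The law’s intent is simply that each device either ships with a unique, non-derivable password or forces the user to set one on first use.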

At this time, we’re just at the beginning of an important conversation about the future of our homes and cities, which must also involve many other players in the industry, such as network operators, service providers, cybersecurity professionals, educators, and consumer groups.

While the US Congress is focused on state-level security, privacy concerns of consumers must also be taken into consideration. According to a recent report from Consumers International and the Internet Society, 77% of respondents said data privacy and security are key contributors to their device-buying decisions. More than a quarter of respondents (28%) who haven’t yet purchased a smart device said they will not buy one due to privacy and security misgivings.

Manufacturers are well aware of these concerns. Indeed, to protect their own reputations and businesses, they may go beyond any future government guidelines because a major security breach could be disastrous for them. Samsung, for example, recently revealed — completely on its own initiative —  that some of its televisions have vulnerabilities and provided scanning information online. Consumers, after all, do need to stay informed and take some responsibility for their home network safety. But that smart TV is just the beginning.

According to Gartner, by the end of this year, globally, around 14 billion IoT devices will be connecting to the Internet, and that number is predicted to grow to 25 billion devices by 2021. As a number of recent reports have shown, just one vulnerable IoT device can jeopardize an entire home network and threaten a person’s privacy and personal security. Where infrastructure is concerned, the security and trustworthiness of an organization or even a public utility may be at risk. If we’re to avoid another disaster like that which affected Ukraine, when Russian hackers were able to shut down portions of its power grid, we must work together to ensure everyone’s concerns are being heard, from cybersecurity experts to city planners, especially with the further development of 5G networks.

For that reason, smart city conferences focused on IoT security for industry and citizens have begun to appear. In 2018, Tel Aviv hosted its first cybersecurity conference for “smart cities”, attracting more than 7,000 people, including 80 delegations from municipalities around the world.

Bringing together governmental representatives, cybersecurity professionals, tech giants, consumers, and researchers is definitely a step in the right direction. At SAM Seamless Network, for example, we have partnered with Internet service providers, gateway and IoT manufacturers, global device suppliers, and antivirus companies to learn and share knowledge. We also participate in many working groups to influence the market at a higher level. In the end, securing IoT devices must be a joint effort.

Manufacturers must make IoT devices with the highest possible security measures built in, and make it easy for consumers to change passwords and update firmware. Consumers, for their part, must be prepared to learn how they can protect themselves. Internet service providers can protect the gateways to home networks. Governments must think and plan ahead, using the best data from all available sources, and with the input of consumers and vendors.

By working together, government, industry, SMBs, and consumers will enjoy all the benefits smart, secure IoT devices can offer, including more efficient homes and safer, more productive smart cities.


Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “The 20 Worst Metrics in Cybersecurity.”

Sivan is Co-Founder and CEO of SAM Seamless Network, a software-only cybersecurity platform that provides security for unmanaged networks and IoT devices for homes and SMBs. Prior to founding SAM, Sivan worked at Comsec Global, overseeing cyber projects and …

Article source: https://www.darkreading.com/risk/a-safer-iot-future-must-be-a-joint-effort/a/d-id/1335813?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

WeWork’s Wi-Fi Exposed Files, Credentials, Emails

For years, sensitive documents and corporate data have been easily viewable on the coworking space’s open network.

WeWork’s weak Wi-Fi security has left sensitive data accessible on its open network for years. That may have compromised both organizations that work in WeWork spaces and those that have never entered a WeWork but do business with companies that use its offices.

A Fast Company report back in August highlighted poor security practices by the real estate firm, which rents out coworking spaces to mostly small businesses. A new report from CNET takes a deeper dive into the extent of WeWork’s oversight and implications for its customers.

The findings come from Teemu Airamo, who according to CNET evaluated WeWork’s Wi-Fi security in 2015 before moving his digital media company into one of its Manhattan offices. He was able to view hundreds of other companies’ financial records and devices on the building’s network; upon alerting the community manager, he found out WeWork already knew this was possible. Attempts to bring the Wi-Fi security issue through the ranks to WeWork’s upper management proved fruitless.

WeWork Wi-Fi is protected by a password, which CNET says appears in plaintext on the WeWork app. Multiple WeWorks in New York, and some in California, share the same password.

Over the past four years, Airamo has continued to run regular Wi-Fi scans on the WeWork network. His scans show 658 devices exposing an “astronomical amount” of data: financial records, business transactions, client databases, emails from other companies, driver’s license and passport scans, job applications, contracts, lawsuits, banking credentials, and health data.



Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/risk/weworks-wi-fi-exposed-files-credentials-emails/d/d-id/1335865?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ransomware Strikes 49 School Districts & Colleges in 2019

The education sector has seen 10 new victims in the past nine days alone, underscoring a consistent trend throughout 2019.

Education is a hot target for ransomware: Nearly 50 school districts and colleges have been hit in 2019 so far, and more than 500 individual K-12 schools have potentially been compromised.

Cloud security firm Armor has been tracking publicly disclosed ransomware attacks since January 2019. Of the 182 total victim organizations this year, 49 have been educational institutions. This makes education the second-largest pool of victims by industry, following municipalities at 70 victims, and ahead of third-place healthcare, which reported 27 victims.

Ransomware “creates a sense of urgency,” says Chris Hinkley, head of Armor’s Threat Resistance Unit (TRU). In schools, municipalities, and other public-facing institutions with infrastructure critical to their communities, the pressure to stay up and running after an incident is high. Criminals know they can’t afford to shut down — and may be more likely to pay up. Whether a school pays depends on its backups, breadth of impact, and number of networks affected.

“When those organizations are down, especially a school, you’re losing out on a lot of money, but you’re also impacting a huge amount of people: teachers, administrators, and most importantly, the students,” he adds. When New York’s Monroe-Woodbury Central School District was hit with ransomware this month, it was forced to delay the start of its school year. The district won’t have access to computers, Wi-Fi, or smart boards until recovery is complete.

Many government organizations, especially schools, are “going to be behind the curve, relatively speaking,” when it comes to new and protective technologies, says Hinkley. They likely will run older operating systems or fall behind on patching, simply because they lack the manpower and expertise needed to stay current. The prevalence of vulnerable software and infrastructure in education makes it easier for attackers to get onto schools’ infrastructure.

Victim schools and districts span the United States, TRU reports: The most recent victim districts were in Missouri, Pennsylvania, Ohio, Nebraska, Illinois, and Florida. Connecticut has the highest concentration of ransomware targets, with seven districts and 104 schools affected.

“Most of the victims, I believe, are targets of opportunity,” says Hinkley. An attacker may have known and contacted a student, for example, or found a vulnerability on the school’s network. It’s still unknown how many of these intruders planted ransomware in targets’ environments.

Back-to-School Shopping
Crowder College of Neosho, Missouri, reported a ransomware attack on September 11. Investigators found evidence indicating the attacker had been inside the school’s systems since November 2018.

While it has not been confirmed how Crowder’s intruder gained access, Hinkley suggests they could have purchased the malware, the unauthorized access, or both on the black market. “It’s something we’re seeing a lot of,” he says.

Researchers who produced Armor’s “Black Market Report” found ransomware sold on the Dark Web as a standalone product, as well as ransomware-as-a-service, making it easy for novices to jump into the game. Many sellers of ransomware-as-a-service do the work: They provide the malware and a panel for the customer to enter a ransom message; it then generates a unique wallet address for each victim. The buyer simply has to get it onto their target system of choice.

“It’s removing a lot of the technical expertise that was previously required to carry out one of these attacks,” Hinkley says. Cybercriminals also sell credentials to Remote Desktop Protocol servers, researchers found, and this is a common vector for multiple ransomware families.

Many of the attacks against districts and individual schools have used Ryuk ransomware, which is also commonly seen in campaigns against municipalities. It’s typically preceded by Emotet and TrickBot Trojans, which lay the foundation for networkwide compromise, TRU reports. Hinkley points out that the ransomware of choice usually depends on the deployment: Some ransomware is meant to be distributed by attackers inside the target infrastructure, he says; some is meant to be executed via social engineering of the end user.

Ransom Is Rising
The security industry has long pushed back against paying ransomware operators, for fear of motivating further attacks. Unfortunately, some schools are left with no other choice. New York’s Rockville Centre School District recently paid $88,000 following a ransomware campaign.

Demands are getting higher: The attacker who hit Crowder College demanded $1.6 million in ransom; it’s not confirmed whether the school plans to pay. Monroe College in New York, which was hit with ransomware in July, received a $2 million ransom demand, the first million-dollar ransom TRU had seen aimed at an educational institution.

Hinkley hypothesizes the rise in ransom demands could be linked to cyber insurance, as the financial risk of an attack is off-loaded onto a third party. While cyber insurance was not created for ransomware, this appears to be one of the more prominent uses for insurance coverage.

Homework for Schools and Districts
The top preparation and recovery step that schools should take is creating multiple backups of their critical data, applications, and application platforms. It’s not enough to simply back up the data, Hinkley points out; schools should also be testing their backups to ensure they’re ready to go.

“I’ve also seen organizations that have had robust backup plans, but they didn’t test them, so the backup didn’t restore,” he explains. “Testing those backups is equally as important.” Schools should also practice detection and response mechanisms to recover from an incident.
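Hinkley’s point, that an untested backup is an unproven backup, can be partially automated: record and compare checksums so a corrupted or incomplete copy is caught before it is needed. A minimal sketch using only the Python standard library; a real restore test would also exercise the restore procedure itself, not just compare copies:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum a file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(original: Path, backup_copy: Path) -> bool:
    """A restore test in miniature: the copy must match byte for byte."""
    return sha256_of(original) == sha256_of(backup_copy)
```

Run on a schedule against a sample restore, a check like this turns “we have backups” into “we have backups that work.”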

On top of that, Hinkley advises strong vulnerability management: Understand the assets in your infrastructure and what impact they have on the organization, and manage software updates.

 

Training is also essential. Software and hardware aside, schools are an easy target because of the people. Hundreds of kids are using machines and likely have a more relaxed approach to cybersecurity because they simply don’t know any better. Educating everyone — students, teachers, administrators — is essential for protecting a school from the effects of ransomware.



Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/threat-intelligence/ransomware-strikes-49-school-districts-and-colleges-in-2019/d/d-id/1335872?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook Libra rejected by France as “dangerous”

Facebook’s original motto: “Move fast and break things.”

France’s finance minister Bruno Le Maire: Non merci, not our global economy, you don’t.

Last week, on Thursday, 13 September, Le Maire said in a speech at the OECD Global Blockchain Policy Forum 2019 – a digital currency conference – that he sees Facebook’s forthcoming Libra cryptocurrency as a “danger to consumers”, a “systemic risk” and a threat to France itself:

Our monetary sovereignty is at stake.

The existential threat seen by France and other governments: that Facebook’s more than 2 billion users, investors, consumers, and the broader global economy could become entangled with a risky currency so large that governments would be forced to bail it out. Governments, including France last week and the US, the EU and the UK earlier in the month, have said that they might have no choice but to bail out Libra if it goes off the rails, since it would be too big to fail.

Facebook’s 29-page white paper says that the cryptocurrency, created by the Switzerland-based Libra Association, will run on open source blockchain software developed by Facebook. Financial institutions including Mastercard, PayPal, and Visa will support it and doubtless profit from the venture, although Facebook is taking a leadership role for now.

Le Maire said that besides the risk of countries having to bail out the currency if it goes under, other risks include harder-to-track money laundering and terrorism financing. He urged Facebook to look at creating a separate “public digital currency” instead. What he told conference attendees:

I want to be very clear. In these conditions we cannot authorize Libra’s development on European soil.

As far as Facebook’s Libra plans go, that’s not good. Le Maire’s statement came just a month after data protection officials from the US, EU and UK issued a joint statement outlining their concerns about Libra. From that statement:

While Facebook and [its Libra-focused subsidiary] Calibra have made broad public statements about privacy, they have failed to specifically address the information handling practices that will be in place to secure and protect personal information.

Facebook: Trust us; we can do this better than you can!

Facebook’s response came on Monday: In essence, it said, you’re wrong.

David Marcus, co-creator of the Libra program and Facebook’s exec in charge of the blockchain-based currency that governments love to hate, tweeted a thread full of lessons about money and national sovereignty.

Libra is designed to be a better payment network and system running on top of existing currencies, and delivering meaningful value to consumers all around the world.

Libra will be backed 1:1 by a basket of strong currencies. This means that for any unit of Libra to exist, there must be the equivalent value in its reserve.

In other words, we won’t be messing with your ability to create new money, he continued:

As such there’s no new money creation, which will strictly remain the province of sovereign Nations.

But, if it makes you feel better, you can come regulate us, he said. We want you to:

We also believe strong regulatory oversight preventing the Libra Association from deviating from its full 1:1 backing commitment is desirable.

Of course, Facebook can’t launch its currency without the permission of lawmakers, banks, and regulators.

We will continue to engage with Central Banks, Regulators, and lawmakers to ensure we address their concerns through Libra’s design and operations.

Perhaps Facebook should come up with a new draft of that 29-page white paper it put out that addresses the criticisms, concerns and flat-out “not on our soil, bub” rejections it’s getting.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XH0fbN6_glY/

Report: Use of AI surveillance is growing around the world

When you think of nations using artificial intelligence (AI)-enhanced surveillance technologies, China probably comes to mind: the place where facial recognition is used to ration toilet paper, to name and shame jaywalkers, and to outfit police with glasses to help them find suspects.

It’s not just China, of course. According to a report from the Carnegie Endowment for International Peace, the use of AI surveillance technologies is spreading faster, to a wider range of countries, than experts have commonly understood.

The report found that at least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes, including smart city/safe city platforms, now in use by 56 countries; facial recognition systems, being used by 64 countries; and smart policing, now used by law enforcement in 52 countries.

The report’s author, Steven Feldstein, told AP that he was surprised by how many democratic governments in Europe and elsewhere – just over half – are installing AI surveillance such as facial recognition, automated border controls and algorithmic tools to predict when crimes might occur:

I thought it would be most centered in the Gulf States or countries in China’s orbit.

The report didn’t differentiate between lawful uses of AI surveillance, those that violate human rights, or those that fall into what Feldstein called the “murky middle ground.”

Smart city technologies are an example of how murky things can get. In Quayside, the smart city being developed on Toronto’s eastern waterfront, good intentions come in the form of public-serving sensors meant to “disrupt everything”, from traffic congestion and healthcare to housing, zoning regulations, greenhouse-gas emissions and more. But Quayside has also been called a privacy dystopia in the making.

The purpose of the research is to show how new surveillance technologies such as these are transforming the way that governments are monitoring and tracking us. It tackles these questions…

  • Which countries are adopting AI surveillance technology?
  • What specific types of AI surveillance are governments deploying?
  • Which countries and companies are supplying this technology?

The Carnegie Endowment for International Peace presents the answers in the first-ever compilation of such data, which it’s calling the AI Global Surveillance (AIGS) index: a “country-by-country snapshot of AI tech surveillance”, mostly concerned with data pulled in from 2017 to 2019. Here’s the full index.

Some highlights:

China doesn’t just use a lot of AI surveillance. It’s also a big exporter of the technologies. The research found that Chinese companies – particularly Huawei, Hikvision, Dahua, and ZTE – supply AI surveillance technology to 63 countries. Huawei alone is responsible for providing AI surveillance technology to at least 50 countries worldwide. “No other company comes close,” the report says. The next largest non-Chinese supplier is Japan’s NEC, which supplies AI surveillance tech to 14 countries.

Chinese vendors often sweeten their product pitches with offers of soft loans to encourage governments to buy. That works particularly well in countries with underdeveloped technology infrastructures, including Kenya, Laos, Mongolia, Uganda, and Uzbekistan, which likely wouldn’t be able to get the technology otherwise. From the report:

This raises troubling questions about the extent to which the Chinese government is subsidizing the purchase of advanced repressive technology.

US companies are also active in worldwide exports. 32 countries are getting their AI surveillance technologies from the US. The most significant exporters are IBM, selling to 11 countries; Palantir, selling to 9; and Cisco, selling to 6.

Other companies based in liberal democracies are proliferating the technologies. France, Germany, Israel, and Japan aren’t “taking adequate steps to monitor and control the spread of sophisticated technologies linked to a range of violations,” the report found.

Liberal democracies are major users of AI surveillance. The AIGS shows that 51% of advanced democracies deploy AI surveillance systems. In contrast, 37% of closed autocratic states, 41% of electoral autocratic/competitive autocratic states, and 41% of electoral democracies/illiberal democracies deploy AI surveillance technology. Governments in “full” democracies are deploying a range of surveillance technology, from safe city platforms to facial recognition cameras, the research found. That doesn’t mean that they’re abusing these systems; whether or not governments use it for “repressive purposes” depends on “the quality of their governance.”

For example:

Is there an existing pattern of human rights violations? Are there strong rule of law traditions and independent institutions of accountability? That should provide a measure of reassurance for citizens residing in democratic states.

That doesn’t mean that “advanced” democracies aren’t struggling to balance security interests with civil liberties protections, though. The research cites a few examples of where civil liberties are losing out in that equation in such democracies as the US and France:

  • A 2016 investigation revealed that Baltimore police had secretly deployed aerial drones to carry out daily surveillance over the city’s residents. Photos were snapped every second over the course of 10-hour flights. Baltimore police also deployed facial recognition cameras to monitor and arrest protesters, particularly during 2018 riots in the city.
  • A slew of companies are providing advanced surveillance equipment for use at the US-Mexico border, including dozens of towers in Arizona to spot people as far as 7.5 miles away, as the Guardian reported in June 2018. Other towers in use feature laser-enhanced cameras, radar and a communications system that scans a 2-mile radius to detect motion. The captured images are analyzed with AI to pick out humans from wildlife and other moving objects. It’s unclear whether these surveillance uses are legal or necessary.
  • In France, the port city of Marseille is running the Big Data of Public Tranquility project: a program aimed at reducing crime via a vast public surveillance network featuring an intelligence operations center and nearly 1,000 intelligent closed-circuit television (CCTV) cameras – a number that’s going to double by 2020.
  • In 2017, Huawei “gifted” a showcase surveillance system to the northern French town of Valenciennes to demonstrate what’s being called a “safe city” model. It included upgraded high-definition CCTV surveillance and an intelligent command center powered by algorithms to detect unusual movements and crowd formations.

Autocratic and semi-autocratic governments are more prone to abuse these technologies, including those in China, Russia, and Saudi Arabia. Other governments with “dismal human rights records” are also exploiting AI surveillance to carry out repression in more limited ways, but all governments are at risk of unlawful exploitation of the technology “to obtain certain political objectives.”

Military spending strongly correlates to AI surveillance spending. 40 of the world’s top 50 military spending countries also use AI surveillance technology.

Such countries include full democracies, dictatorial regimes, and everything in between: richer countries such as France, Germany, Japan, South Korea, as well as poorer states such as Pakistan and Oman. This isn’t too surprising, Feldstein writes:

If a country takes its security seriously and is willing to invest considerable resources in maintaining robust military-security capabilities, then it should come as little surprise that the country will seek the latest AI tools.

The motivations for why European democracies acquire AI surveillance (controlling migration, tracking terrorist threats) may differ from Egypt or Kazakhstan’s interests (keeping a lid on internal dissent, cracking down on activist movements before they reach critical mass), but the instruments are remarkably similar.

State surveillance isn’t inherently unlawful

Surveillance isn’t necessarily rooted in governments’ desire to repress their citizens, the report points out. It can play a vital role in preventing terrorism, for example, and can enable authorities to monitor other threats.

But technology has also ushered in new ways to carry out surveillance, and that’s caused the amount of transactional data – i.e., metadata – to burgeon, whether it be emails, location identification, web-tracking, or other online activities.

The report quotes former UN special rapporteur Frank La Rue from a 2013 surveillance report:

Communications data are storable, accessible and searchable, and their disclosure to and use by State authorities are largely unregulated. Analysis of this data can be both highly revelatory and invasive, particularly when data is combined and aggregated.

As such, States are increasingly drawing on communications data to support law enforcement or national security investigations. States are also compelling the preservation and retention of communication data to enable them to conduct historical surveillance.

Feldstein says that it goes without saying that such intrusions “profoundly affect an individual’s right to privacy – to not be subjected to what the Office of the UN High Commissioner for Human Rights (OHCHR) called ‘arbitrary or unlawful interference with his or her privacy, family, home or correspondence.’ [and]… likewise may infringe upon an individual’s right to freedom of association and expression.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mWK0Hr3l6MU/

IBM’s new 53-qubit quantum ‘mainframe’ is live in the cloud

IBM has boosted its growing stable of quantum computers with a new 53-quantum bit (qubit) device, the most powerful ever offered for commercial use.

Google announced a more powerful 72-qubit ‘Bristlecone’ model last year, but that was for its internal techies only. IBM’s, by contrast, feels significant because it can be used by absolutely anyone who can find a use for such a computer.

The new and still-to-be-named computer will sit in the company’s Quantum Computation Center in Poughkeepsie, New York State, which has recently turned into a hotbed for commercial development.

The facility also houses an array of older quantum computers, including five with 20 qubits (including the first Q System One launched in January), four with 5 qubits, and one with 14 qubits.

The involvement of Poughkeepsie is no coincidence – this is the heritage site where IBM built many of the mainframes that made its name synonymous with business computing.

Might quantum computers be on course to be the mainframes of the 21st century?

Lab coats

Readers will doubtless know that the number of qubits is a rough measure of the amount of work a quantum computer can do (read our detailed backgrounder on how quantum computers work for more on this), which loosely parallels the number of bits in a classical computer.

It’s not a perfect analogy, but what matters is that the more qubits you have, the more work you can do (IBM favours a different measure called ‘quantum volume’ which takes into account things such as connectivity and ‘gate set’ performance, algorithm errors, and the efficiency of software and compilers).
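One way to see why the analogy is only loose: describing the general state of n qubits takes 2^n complex amplitudes, whereas n classical bits are always in exactly one of 2^n states. A quick sketch of that arithmetic (illustrative only):

```python
# Why qubit count is only a rough analogy to bit count: the state of n qubits
# is described by 2**n complex amplitudes, so each added qubit doubles the
# size of the state space a classical simulator would have to track.

def amplitudes_needed(n_qubits: int) -> int:
    """Number of complex amplitudes in the general state of n qubits."""
    return 2 ** n_qubits

for n in (5, 14, 20, 53):
    print(f"{n} qubits -> {amplitudes_needed(n):,} amplitudes")
```

At 53 qubits that is roughly nine quadrillion amplitudes, which is part of why devices of this size are so hard to simulate classically.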

But the real significance of IBM’s new system isn’t the number of qubits it has so much as the claimed ease with which customers can get access to them.

This issue has long been a problem for quantum computers, which even today rely on physicists to run them. Then there’s the need to keep the qubit hardware mounted on a data plane at an incredibly cold temperature of around 0.02 Kelvin – just two-hundredths of a degree above absolute zero (-273.15°C).

Don’t take our word for it – IBM has an image of the sort of cooling system that makes this possible on its website as well as video of the Q System One qubit device itself.

Cloud service

Here’s the interesting part – customers don’t need to get their hands dirty with any of this because IBM’s new quantum computer will be offered as a cloud service.

This makes sense. Quantum devices are complex, specialised bits of kit that it might take organisations years and huge sums of money to master. Accessing them as a service is a simple way to benefit from their theoretical advantages now without worrying about how they work.

Despite their exotic reputation, it seems that now is finally the right moment for quantum devices in a growing number of niches, including research and development, and intriguing if rarified areas of financial engineering such as options pricing.

At some point in the future, that list might also include tamper-proof cryptography and new ways to crack cryptographic algorithms. Quantum computers’ ability to perform calculations in parallel is a potential threat to many of the cryptographic algorithms we rely on, as Paul Ducklin explains in his article Post-Quantum Cryptography (and why we’re getting it):

…many people are worried that quantum computers, if they really work as claimed and can be scaled up to have a lot more processing power and qubit memory than they do today, could successfully take on problems that we currently regard as “computationally unfeasible” to solve.

The most obvious example is cracking encryption.

In other words, if reliable quantum computers with a reasonable amount of memory ever become a reality – we don’t know whether that’s actually likely, or even possible, but some experts think it is – then anything encrypted with today’s strongest algorithms might suddenly become easy to crack.
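To put rough numbers on that threat: by standard results, Grover’s algorithm roughly halves the effective key strength of symmetric ciphers, while Shor’s algorithm would break RSA and elliptic-curve cryptography outright on a sufficiently large machine. A back-of-envelope sketch of the symmetric case:

```python
# Back-of-envelope view of the quantum threat to symmetric crypto (standard
# results, not new analysis): Grover's search algorithm gives a quadratic
# speed-up over brute force, which roughly halves the effective bit-security.

def grover_effective_bits(key_bits: int) -> int:
    """Approximate security of a key against a Grover-equipped attacker."""
    return key_bits // 2

assert grover_effective_bits(128) == 64   # AES-128 drops to a ~2**64 work factor
assert grover_effective_bits(256) == 128  # AES-256 retains a comfortable margin
```

This is why post-quantum guidance typically recommends doubling symmetric key sizes, while public-key algorithms need replacing altogether.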

In truth, the current applications for quantum computing emphasise that large companies are still very much at the experimental phase, feeling their way to understanding which sorts of problems quantum computers might be good at, and which are better left to today’s computers.

But customers must start somewhere, just as IBM must if it is to find a way to start making money from quantum computing after decades of lab testing.

Might quantum be a new paradigm that supplants today’s computers?

For that to happen, they’d have to reach ‘quantum advantage’: the hypothesised point at which quantum devices deliver a ‘superpolynomial’ speed-up, turning out answers that would be practically impossible to compute using classical computers.

The catch – those answers would almost certainly be to problems nobody has yet found a way to ask. So, don’t throw away your microprocessor laptop just yet.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/NcZHCu0FsAw/

Server-squashing zero-day published for phpMyAdmin tool

A researcher has just published a zero-day security bug in one of the web’s most popular database administration software packages.

The bug makes it possible for an attacker to delete a configured server entry by hijacking a user’s session in phpMyAdmin, a 21-year-old open-source tool used to manage MySQL and MariaDB databases.

The flaw is a classic cross-site request forgery (CSRF). It’s a long-used attack in which an attacker can force a logged-in user’s browser to perform malicious actions such as changing their account details. A browser request includes any details associated with the site, such as the user’s session cookie, making it difficult to distinguish between the real request and a forged one.
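A toy model makes the problem concrete: if a server authorises state-changing actions on the session cookie alone, it cannot tell a genuine request from one that an attacker’s page tricked the victim’s browser into sending. The code below is hypothetical, not phpMyAdmin’s:

```python
# Toy model of why CSRF works: this handler authorises purely on the session
# cookie, which the browser attaches to *any* request to the site - including
# one triggered invisibly by an attacker's page. (Illustrative only.)

SESSIONS = {"cookie123": "alice"}  # hypothetical server-side session store

def handle_delete(request: dict) -> str:
    """Process a destructive request using only the session cookie."""
    user = SESSIONS.get(request.get("cookie"))
    if user is None:
        return "401 Unauthorized"
    return f"deleted resource for {user}"

# A legitimate request and a forged one look identical to this server:
legit  = {"cookie": "cookie123", "origin": "victim clicked a real button"}
forged = {"cookie": "cookie123", "origin": "hidden form on evil.example"}
assert handle_delete(legit) == handle_delete(forged) == "deleted resource for alice"
```

Because the browser supplies the cookie automatically, the forged request succeeds even though the victim never intended to send it.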

The bug report on the Full Disclosure mailing list says that an attack would have to target phpMyAdmin’s setup page. The CVE listing for the bug gives it a medium severity rating.

According to the Full Disclosure listing, an attacker can create a fake hyperlink containing the malicious request. It mentions that the CSRF attack is possible because of an incorrectly used HTTP method.

The researcher who discovered it, Manuel Garcia, explained to us:

The post/get requests are not validated. To avoid the CSRF attacks you need to implement a token.

Using tokens is a common protection against CSRF bugs, as OWASP explains in its CSRF prevention cheat sheet. In his Full Disclosure bug report, Garcia recommends that a token variable be validated in each call, adding that other phpMyAdmin requests already do this. So the call made from the setup page is an anomaly.
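A minimal sketch of that defence, assuming a per-session HMAC-derived token (this is illustrative Python, not phpMyAdmin’s actual implementation): the server embeds a secret token in its own forms and rejects any state-changing request that doesn’t echo it back. An attacker’s page can’t read the token, so forged requests fail.

```python
# Minimal sketch of the synchronizer-token defence against CSRF (hypothetical
# code): each session gets a token derived from a server-side key; every
# state-changing request must echo it back, and it is compared in constant time.

import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # secret known only to the server

def issue_token(session_id: str) -> str:
    """Derive the CSRF token the server embeds in its own forms."""
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid(session_id: str, submitted_token: str) -> bool:
    """Validate the token echoed back with a request, in constant time."""
    expected = issue_token(session_id)
    return hmac.compare_digest(expected, submitted_token)

token = issue_token("session-abc")
assert is_valid("session-abc", token)        # genuine form submission passes
assert not is_valid("session-abc", "a" * 64) # forged request lacks the token
```

With a check like this on every call, the setup-page request Garcia flagged would no longer stand out as an unprotected anomaly.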

The bug hasn’t been patched and affects version 4.9.0.1 of phpMyAdmin at the time of writing. Garcia said that he told phpMyAdmin about it on 13 June and followed up on 16 July. When a patch hadn’t appeared on 13 September, exactly three months after initial submission, he published it. So he seems to have followed responsible disclosure guidelines.

phpMyAdmin had acknowledged the bug and explained to Garcia that it would inform him when the bug was fixed. Project co-ordinator Isaac Bennetch told us:

We discussed this report internally and felt it was better included as part of a bug fix release rather than issuing a security hotfix. We consider the attack vector quite small and the possible damage that could be done to be of an inconvenient nature rather than a security concern.

What to do?

Bennetch added last night that the team will fix the bug that Garcia discovered in the release of version 4.9.1, which will be available “in less than a day”.

Until then, administrators can protect themselves by logging out of their accounts after they’ve completed their work. They might also want to look at isolating their browsing activities, perhaps using a different browser that they never use to log into other online services.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XxpMh0dUMaQ/

Nice work if you can grift it: Two blokes accused of swindling $10m from the elderly with bogus virus infection alerts

Two Americans used bogus virus-infection alerts to bilk $10m out of PC owners, it is alleged.

Romana Leyva, 35, of Las Vegas, Nevada, and Ariful Haque, 33, of Bellerose, New York, were each charged this week with one count of wire fraud and conspiracy to commit wire fraud. Each count carries a maximum of 20 years in the clink.

According to prosecutors in southern New York, Leyva and Haque masterminded a classic tech-support scam that warned netizens their computers were infected with malware that didn’t actually exist, and would need a costly, and yet entirely unnecessary, repair.

We all know this type of scam: phony “system alert” pop-up ads in web browsers that try to scare punters into believing their machine is riddled with spyware, complete with a phone number to call for “tech support” or a repair service that costs an arm and a leg – and doesn’t actually do anything useful.

“In at least some instances, the pop-up threatened victims that, if they restarted or shut down their computer, it could ’cause serious damage to the system’ including ‘complete data loss’,” the prosecution wrote in its court [PDF] paperwork.

“In an attempt to give the false appearance of legitimacy, in some instances the pop-up included, without authorization, the corporate logo of a well-known, legitimate technology company.”


While not particularly novel or remarkable in its tactics, the alleged scam was ridiculously effective, netting the duo an estimated eight figures in ill-gotten gains, it is claimed.

This may have been, in part, due to the target audience of the ads. Prosecutors claimed Leyva and Haque deliberately aimed their bogus pop-ups and adverts at elderly netizens who were more likely to know little about their machines and thus be prone to falling for the tech support fraud.

In addition to scamming the marks for one-time support costs, it is alleged the duo also collected recurring payments by signing victims up for subscription services. In some cases, they were said to have crossed into outright bank fraud: telling victims the support company had folded and asking for account details in order to deposit a “refund” that, of course, turned into a withdrawal.

“The conspirators allegedly caused pop-up windows to appear on victims’ computers – pop-up windows that claimed, falsely, that a virus had infected the victim’s computer,” said US attorney Geoffrey Berman.

“Through this and other misrepresentations, this fraud scheme deceived thousands of victims, including some of society’s most vulnerable members, into paying a total of more than $10 million.”

Both men have been cuffed, charged, made their initial court appearance, and are now awaiting trial. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/20/tech_support_charges/