STE WILLIAMS

T-Mobile US hacked, Monero wallet app infected, public info records on 1.2bn people leak from database…

Roundup Time for another roundup of all the security news that’s fit to print and that we haven’t covered yet.

T-Mobile US says hackers broke into customer info

T-Mobile US prepaid account holders got some unwelcome news this week when their wireless carrier admitted on Friday it was compromised by miscreants who would have been able to ogle customers’ personal information.

Exposed details include name, billing address, account number, and mobile plan types. T-Mobile notes that, at least, no bank card info was exposed.

“Our cybersecurity team recently discovered and shut down unauthorized access to some customer information, including yours, and promptly reported it to authorities,” it says. “No financial data (including credit card information) or social security numbers were involved, and no passwords were compromised.”

Given the rise in SIM-swapping attacks, however, those details could still be extremely useful to a criminal who is looking to con support staff into switching an account over to a new SIM card, thus giving them control of the number and all connected accounts.

Bad Binder breakdown

Over on the Google security blog, exploit-tracker Maddie Stone has written a detailed dissection of an Android security hole known as Bad Binder, aka CVE-2019-2215. It is a use-after-free flaw in the kernel that allows an attacker to escape the Chrome sandbox and take over the target device.

“The bug is a local privilege escalation vulnerability that allows for a full compromise of a vulnerable device,” Stone said. “If chained with a browser renderer exploit, this bug could fully compromise a device through a malicious website.”

Indeed, it was exploited by the NSO Group’s Pegasus mobile spyware in the wild to hijack gadgets, we understand.

Louisiana hit by ransomware outbreak

Just days after holding a hotly contested election, the US state of Louisiana fell victim to a ransomware attack that led Governor John Bel Edwards to activate the state cybersecurity response team and temporarily shut down some government services, including the state department of motor vehicles.

It looks like the damage was not too serious, and there was no connection to the election results.

Malware sneaks into official Monero wallet build

A brief but serious compromise of the Monero crypto-coin website resulted in official downloads of the project’s wallet app being infected with malicious code.

The legit builds were switched out for the shady versions, which remained available for 35 minutes on Monday before being taken down. Users who downloaded and installed the software are advised to check the hashes of their applications against the hashes on the site, and fetch a new, clean copy of the software if needed. Although the attack was fairly brief, it’s massively concerning that crooks were able to perform the switch in the first place: the malicious code was seemingly designed to siphon money from the wallets.

Also, the compromise was noticed when the hashes of the dodgy builds did not match the hashes published on the site. Always. Check. The. Hashes.
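
For the command-line averse, checking a download against a published hash takes only a few lines. Below is a minimal Python sketch; the filename and expected digest are placeholders rather than real Monero release values, so substitute the ones shown on the official site.

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Return the hex SHA-256 digest of a file, read in 1 MB chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    downloaded = "monero-wallet-cli.tar.bz2"   # placeholder: whatever file you fetched
    published = "paste-the-hash-from-the-official-site-here"
    actual = sha256_of(downloaded)
    print("OK" if actual == published else f"MISMATCH: {actual} - do not install")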

Roboto botnet examined

The team at Qihoo 360 Netlab has posted a deep dive into a peer-to-peer botnet known as Roboto. The malware seems to have DDoS capabilities, but the team can’t say for sure exactly what its purpose is yet.

1.2 billion people’s records exposed

The team at DataViper has discovered yet another inadvertently wide-open, public-facing Elasticsearch instance filled with personal information, this one purportedly holding records on 1.2 billion people. Don’t get too worked up, though: all of the information looks to have already been public. It’s still not nice having it all under one roof for miscreants to mine. There were no password protections on the database, which has since been taken down.

The silo included things like names, email addresses, phone numbers, and LinkedIn and Facebook profile details.

Ryuk bites vets

Some 400 veterinarian offices in America were hit with ransomware after the network of National Veterinary Associates was infected by Ryuk. The biz itself has recovered, but some individual customers are still working to get their files back.

US Senate green-lights $250m electric grid security fund

The US Senate’s Natural Resources Committee has advanced a bill that would earmark a quarter of a billion dollars for electric grid security.

The proposed Protecting Resources on the Electric Grid with Cybersecurity Technology law will now head to the floor for a full vote. If passed by both chambers of Congress, it would run from 2020 through 2024.

OnePlus warns of breach

Smartphone builder OnePlus has notified punters it has once again been relieved of some of their personal details. No payment card data nor social security numbers nor passwords were lifted by miscreants who broke into the outfit’s systems, apparently, though the company is still expecting some of the info to be weaponized.

“We can confirm that all payment information, passwords and accounts are safe, but certain users’ name, contact number, email and shipping address may have been exposed,” OnePlus told folks. “Impacted users may receive spam and phishing emails as a result of this incident.”

Intel warns of VM crashes

A recently-posted technical document from Intel warns that a number of its more recent processors are subject to a vulnerability that can be triggered by malicious virtual machines and operating systems. If exploited, CVE-2018-12207 could allow a guest OS on a host server to crash the underlying physical machine.

Uber unveils in-car recording scheme

Uber says it is beginning trials of a program that will allow riders and drivers to record their conversations. According to the Washington Post, the first pilot programs will be run in Latin America, and if they are successful will move to the US, where they will no doubt face scrutiny over privacy and data security concerns.

Sandworm crew spied on Android phones

The Russian government’s Sandworm hacker gang targeted foreign officials using Android malware that masqueraded as a pair of Korean-language apps and later a Ukrainian-language app, according to Google eggheads. All three strains of the spyware slipped into the official Play store before being spotted and removed. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/23/security_roundup_nov22/

RDP loves company: Kaspersky finds 37 security holes in VNC remote desktop software

VNC remote desktop software has no shortage of potentially serious memory-corruption vulnerabilities, you’ll no doubt be shocked to hear.

This is all according to a report [PDF] from a team at Kaspersky Lab, which has uncovered and reported more than three dozen CVE-listed security holes, some allowing for remote code execution.

VNC, or Virtual Network Computing, is an open protocol used to remotely access and administer systems. Much like with the BlueKeep flaw in Microsoft’s RDP service, miscreants can exploit these holes in VNC to potentially commandeer internet or network-facing computers.

Kaspersky says that, based on its best estimates from Shodan searches, about 600,000 public-facing machines offer VNC access, as do around a third of industrial control devices.
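
For what it’s worth, that sort of headline number is easy to sanity-check yourself if you hold a Shodan API key. The sketch below, assuming the official shodan Python package, simply counts hosts answering on VNC’s default port 5900; it is a crude proxy and will not reproduce Kaspersky’s exact methodology.

    import shodan

    API_KEY = "YOUR_SHODAN_API_KEY"              # placeholder
    api = shodan.Shodan(API_KEY)

    # count() returns aggregate totals without paging through individual results
    result = api.count("port:5900")
    print(f"Hosts exposing VNC's default port 5900: {result['total']}")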

“According to our estimates, most ICS vendors implement remote administration tools for their products based on VNC rather than any other system,” said Kaspersky researcher Pavel Cheremushkin earlier today. “This made an analysis of VNC security a high-priority task for us.”

The team focused its efforts on four specific targets: LibVNC, UltraVNC, TightVNC 1.x, and TurboVNC, four popular VNC libraries or tools whose open-source licenses allowed for code to be studied. RealVNC forbids reverse engineering, we’re told.


The investigation kicked up a total of 37 CVE-listed memory corruption flaws: 10 in LibVNC, four in TightVNC, one in TurboVNC, and 22 in UltraVNC. All have now been patched, save for the bugs in TightVNC 1.x which were present in a no-longer supported version: you should be using version 2.x anyway.

The bulk of the vulnerabilities were found in the client-side of the software, which should come as good news for admins. This, Cheremushkin says, is in large part because of the way the VNC protocol works, putting most of the code on the client. There were some exploitable server-side bugs, though.

“Attackers would without doubt prefer remote code execution on the server. However, most vulnerabilities are found in the system’s client component,” the researcher noted.

“In part, this is because the client component includes code designed to decode data sent by the server in all sorts of formats. It is while writing data decoding components that developers often make errors resulting in memory corruption vulnerabilities.”
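
To make that concrete, here is an illustrative Python sketch (not taken from any of the affected codebases) of the kind of bounds check whose absence typically turns a decoder into a memory-corruption bug: the client must not trust server-supplied sizes when parsing an RFB-style rectangle update.

    import struct

    MAX_WIDTH, MAX_HEIGHT, BYTES_PER_PIXEL = 8192, 8192, 4

    def decode_rect(data: bytes):
        """Parse x, y, width, height (big-endian uint16s) followed by raw pixels."""
        if len(data) < 8:
            raise ValueError("truncated rectangle header")
        x, y, w, h = struct.unpack(">HHHH", data[:8])
        if w > MAX_WIDTH or h > MAX_HEIGHT:
            raise ValueError(f"implausible rectangle size {w}x{h}")
        expected = w * h * BYTES_PER_PIXEL
        if expected > len(data) - 8:             # the server lied about how much pixel data follows
            raise ValueError("declared pixel data longer than the packet")
        return x, y, w, h, data[8:8 + expected]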

The Kaspersky report comes just as the clamor over the BlueKeep flaw has begun to die down. That bug recently made headlines after reports of (extremely limited) active exploits surfaced.

Admins can protect themselves from RDP and VNC exploitation by updating their software (or migrating off, in the case of TightVNC) and using network filters to lock down access. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/23/kaspersky_vnc_bugs/

Why cryptocoin scams work, and how to avoid them

Fascinated by cryptocurrencies? Wishing you’d got in on the ground floor for the Bitcoin boom of 2017?

Many people would answer “Yes” to both those questions – and with good reason.

After all, the dramatic roller coaster ride that the Bitcoin (BTC) price has been through from 2017 onwards is kind of unimportant to anyone who mined their own bitcoins in the early days.

Ten years ago, bitcoins were almost worthless, with one historical chart claiming that a user going by the name SmokeTooMuch tried to sell BTC 10,000 for just $50 back in 2010, but couldn’t find a buyer.

Only in 2011 did one bitcoin go above $1, so if you have even a tiny stash of BTC from before that date, the very worst value multiplier you would have seen in the past two years would still be more than 3,000-fold (that’s 300,000% if you prefer percentages).

In other words, the currency’s recent volatility in flapping between a nadir of just over $3,000 and a zenith of just under $20,000 since December 2017 simply doesn’t matter to anyone with BTC 10,000 from back in 2010.

That’s not the difference between rich and poor, it’s the difference between rich and Richie Rich rich.

Simply put, people who got into BTC at the very start and held onto their bitcoins are, in theory at least, extra-super wealthy now as a result.

(The publisher of the system that makes Bitcoin work, the still-anonymous Satoshi Nakamoto, is claimed by one analyst to have mined about one million bitcoins in those heady, early days; all of them apparently remain unspent.)

Enter the ICO

So it’s not surprising that confidence tricksters – crooks with the gift of the gab, and an apparent fluency in the jargon of cryptocoins and blockchains – have found that promising “a brand new cryptocurrency that you can join at the very start” can be a great way to defraud well-meaning people of their hard-earned savings.

Cybercrooks of this sort often pitch what’s called an Initial Coin Offering, or ICO.

That’s a newly-minted term that’s meant to mirror the terminology IPO, short for Initial Public Offering, which stock markets use to describe a private company going public by putting up shares for sale on an open market.

IPOs can give investors a chance to realise rapid gains, for example by selling quickly if immediate demand for the new shares is high, or to make money in the long term by holding onto their early shares in a company that’s already well known.

But even IPOs by big, popular companies don’t guarantee that your investment will go up, and that’s in a market ecosystem that, in most countries, is fairly strictly regulated.

Not just anyone can set up an IPO; there are strict rules about what positives you are allowed to claim about your company, and which potential negatives you are obliged to disclose up front; there are controls on what you can say to the media during the lead up to the IPO, and who can say it, and when… and much more.

In contrast to the rules around IPOs, in many countries, ICOs are either scarcely regulated or not regulated at all.

Loosely speaking, someone who wants to “market” an ICO can promise the world – and can do so without needing any existing products, or prototypes, or stock, or patents, or intellectual property, or indeed anything much at all except a cool-sounding name for their new cryptocoins and a groovy-looking website.

Sadly, that makes it surprisingly easy for a cybercrook to invite “investments” – for example by using a bunch of fake testimonials and some judiciously chosen (and perhaps actually accurate) graphs showing how other cryptocurrency values have shot up to the apparently enormous benefit of those who joined in early on.

Building a pyramid

A wily cybercriminal might run a website that shows their new “currency” steadily gaining in value, based on some sort of unspecified “mining and trading” activity, perhaps with “real time transaction logs”.

The crook might even make regular “dividend” payments to early investors to “prove” that the product is doing well.

For example, you might log in and see a page showing that your initial $10,000 investment is already worth $47,578, say – and you might even be encouraged to “withdraw” some of your “gains”, possibly subject to some sort of investment period limit that restricts you getting it all at once.

Of course, if you’ve put in $10,000 and the crook permits you to cash out, say, $178.56 of “dividend” right now, after just a few weeks, it might feel as though you are living the dream…

…but in the unregulated world of ICOs and cryptocurrency investments, there may be few or no legal safeguards to ensure that the $178.56 you’ve extracted are genuine earnings, rather than just a tiny percentage of your own money back.

Some early adopters might actually get paid back more than they put in – so their delighted and very public claims that “they genuinely made money” might indeed be true, so far as they can tell.

But there may be no legal or operational safeguards by which you can be sure that those lucky few actually made their money because of a genuine increase in value of the cryptocurrency they think they bought.

For all you know, those lucky few might simply have been paid directly out of the money put in by subsequent investors, meaning that the product that they thought they had funded, and that had allegedly grown in value, didn’t exist at all.

That’s a classic pyramid or Ponzi scheme, named after an early perpetrator of the scam called Carlo Pietro Giovanni Guglielmo Tebaldo Ponzi, better known as Charles Ponzi.
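
A toy model makes the mechanics obvious. The numbers below are invented; the point is simply that the “dividends” are paid out of the same pot the victims filled, so they prove nothing about any underlying investment.

    def run_ponzi(deposits_per_round, dividend_rate=0.02):
        """Pay each round's 'dividend' straight out of the pooled deposits."""
        pot, total_invested = 0.0, 0.0
        for rnd, deposit in enumerate(deposits_per_round):
            pot += deposit
            total_invested += deposit
            payout = total_invested * dividend_rate   # looks like earnings, is just recycled cash
            pot -= payout
            print(f"round {rnd}: new money {deposit:8.2f}  dividends {payout:8.2f}  pot left {pot:10.2f}")

    run_ponzi([10_000, 20_000, 40_000, 30_000, 10_000, 0])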

A more recent perpetrator is Bernard Lawrence Madoff, who made off with billions of dollars in his own Ponzi scheme before getting a whopping 150-year prison sentence in 2009. According to Wikipedia, Bernie Madoff’s release date is in 2139, assuming 20 years off for good behaviour, and assuming he lives to be more than 200.

So, what can be done to discourage ICO scammers from stealing money from innocent but trusting victims in this comparatively simple yet high-tech-sounding fraud?

One thing is to find, arrest, convict and imprison those who practise this sort of deceit, and the good news is that the US Department of Justice (DOJ) is willing and able to do so.

Indeed, the DOJ this week announced the imprisonment of one Maksim Zaslavskiy for 18 months, with US Attorney Richard P. Donoghue stating that it was “an old-fashioned fraud camouflaged as cutting-edge technology.” The DOJ explained Zaslavskiy’s scam:

In July 2017, Zaslavskiy marketed RECoin as “The First Ever Cryptocurrency Backed by Real Estate,” and subsequently Diamond as an “exclusive and tokenized membership pool” hedged by diamonds. In reality, Zaslavskiy bought neither real estate nor diamonds, and the certificates he sent to investors were worthless. Zaslavskiy also falsely advertised that REcoin had a “team of lawyers, professionals, brokers and accountants” who would invest the proceeds from the REcoin ICO in real estate, and that 2.8 million REcoin tokens had been sold.

Caveat emptor?

Reading back this straightforward description, it feels as though anyone investing in Zaslavskiy’s schemes ought to have seen through them at once, given that there wasn’t anything to rely upon except unsubstantiated statements from the crook himself.

But before you criticise the victims of this sort of crime for what might seem like a mixture of gullibility and short-sightedness, remember that successful cryptocurrencies such as Bitcoin are essentially backed by nothing but their blockchains – distributed digital ledgers that are maintained by a network of users who pay for the electrical power needed to perform what amount to verification or validation calculations to “approve” transactions into those blockchains.

With that in mind, the promise of a cryptocurrency that uses the same cryptographic technology for its digital transaction ledger, yet is allegedly backed by the actual purchase of real estate using the money of investors, is an understandably alluring one.

After all, if Bitcoin can (and has) made early adopters rich without any real estate in the equation at all, why shouldn’t a technologically similar scheme that includes some sort of real-world “value backstop” be an even better investment?

Hey, even if the real estate doesn’t go up in value much, or even at all, surely you’re already better off than just buying Bitcoin, because there’s at least something behind it? Not to mention that this time, you get in on the ground floor, just like Mr SmokeTooMuch did with his BTC 10,000 back in 2010.

What to do?

We’re not investment advisors, so we can’t comment on the value, or otherwise, of cryptocurrency investments.

The problem with the RECoin scam that netted Zaslavskiy an 18-month prison term is that it wasn’t an investment at all – it was just a tower of lies, given technological zing through its modern-sounding, blockchain-based, cryptocoin-flavoured description.

So, remember:

  • Beware any online schemes that make promises that a properly regulated investment would not be allowed to do. Investment regulations generally exist to keep the lid on wild and unachievable claims, so be sceptical of any scheme that sets out to sidestep that sort of control in unregulated areas.
  • Don’t be taken in by cryptocoin jargon and a smart-looking website. Anyone can set up a believable-looking website with what look like real-time graphs, community endorsement and an online commenting system that seems to be awash with upvotes and positivity. Open source website and blogging tools make it cheap and easy to create professional-looking content – but those tools can’t stop a crook feeding them with fake data.
  • Consider asking someone with an IT background whom you know and trust for advice. Find someone who isn’t already part of the scheme and doesn’t show any particular interest in it. Be wary of advice or endorsement from people who are (or claim to be) already part of the scheme. They could be paid shills, or fake personas, or they could be early winners who’ve been paid with money Ponzied from later investors.
  • If it sounds too good to be true, it probably is. That advice applies whether it’s an ICO, a special online offer, a new online service, a survey to win a prize, or even just the good old lure of “free stuff”. Take your time to understand what you’re signing up for.

If in doubt, don’t give it out, and that definitely includes your money!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PJnGzgGV8k4/

When You Know Too Much: Protecting Security Data from Security People

As security tools gather growing amounts of intelligence, experts explain how companies can protect this data from rogue insiders and other threats.


Modern security tools are growing increasingly capable, scanning millions of devices and gathering intelligence on billions of events each day. While the idea is to piece together more information for threat intelligence, it also raises the question of how all this data is secured.

“There’s so much more data today, more than there has ever been,” says Rebecca Herold, founder and CEO of The Privacy Professor consultancy. “And organizations never delete it, so they’re always adding more, with more devices and more applications.”

Further, she adds, there are several more locations where information is collected, stored, and accessed. Many companies lack control over employee-owned devices, which may be used to access key data.

Malicious insiders are a real and growing threat to companies, especially those that hold vast amounts of sensitive data. Twitter and Trend Micro are two examples on a long – and growing – list of organizations whose insiders have abused legitimate access to enterprise systems and information.

With sensitive data streaming in, it is imperative that security companies reconsider how they store it and who can reach it.

For many businesses, this demands a closer look at the IT department, which Herold says is often given too much access to data, even in the largest firms. IT pros who develop and test new applications are often given full access to production data for testing.

“This is a huge risk in a couple of big ways,” she notes. When you give developers and coders access to production data, you’re letting them see some pretty sensitive information and bring it into potentially risky situations. “Oftentimes, what is being done with those applications could leak the data, depending on what the system or app they’re building does,” Herold says.

Inappropriately sharing data with unauthorized entities creates a vulnerability, but that isn’t the only consequence. It also violates a growing number of data protection laws and regulations that say companies can only use personal data for the purposes for which it’s collected. Using data to test new applications and updates generally isn’t one of these purposes, she adds.

Herold also points out how it’s “still a pretty common practice,” especially among IT and development teams, to share a single user ID and password for each system. They can use these credentials to log in, make changes, tweak data, or remove it. The problem is, if something happens to the data, there is no way to know who was behind malicious activity.

“When you have multiple people using the same user ID, you completely remove the accountability for those using that ID,” she explains. Without a clear tie between a person and specific user ID, it’s hard to ascertain whether someone used that ID to steal key information. Failing to implement controls could make it easier for an insider to get away with data theft.

Those who can access sensitive data should have their access monitored, says Herold, and using individual IDs can help keep track of employees obtaining certain types of data or sharing it outside the organization. Data backups are one area that insiders will take advantage of, but one that organizations don’t often consider when they’re thinking about which data to protect.

“I’ve seen so many organizations who have strong controls on their data that they use for production, for their daily work activities, but then their backups are pretty much left wide open,” Herold says. Access to backup data often isn’t strictly restricted, leaving many employees in a position to obtain corporate secrets or personal information.


Article source: https://www.darkreading.com/theedge/when-you-know-too-much-protecting-security-data-from-security-people/b/d-id/1336435?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Target Seeks $74M in Data Breach Reimbursement from Insurance Company

The funds would cover some of the money Target paid to reimburse financial institutions for credit card replacement after the 2013 breach.

Target has paid banks $138 million to settle claims made by banks stemming from the retailer’s massive 2013 data breach. Of that total, $74 million has not been reimbursed by the company’s insurers, and now Target has gone to court to force insurance companies to pay up.

The numbers come from a lawsuit filed this week against ACE American Insurance Co., now part of the Chubb Corp. In the suit, Target says that it has been in discussions with ACE for more than a year, but has reached no resolution on claims that ACE has so far refused to recognize.

Target paid the banks to reimburse expenses the financial institutions incurred in canceling and reissuing physical payment cards. In total, Target says that it had about $292 million in expenses related to its data breach, about $90 million of which have been offset by insurance.

For more, read here.


Article source: https://www.darkreading.com/attacks-breaches/target-seeks-$74m-in-data-breach-reimbursement-from-insurance-company/d/d-id/1336443?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Researchers Explore How Mental Health Is Tracked Online

An analysis of popular mental health-related websites revealed a vast number of trackers, many of which are used for targeted advertising.

Researchers who analyzed a collection of mental health-related websites found that the vast majority embed “an impressive number” of trackers, mostly used for marketing purposes. More than a quarter embed third parties engaged in programmatic advertising and Real Time Bidding (RTB).

Eliot Bendinelli, a technologist with UK non-profit Privacy International, says the organization wanted data protection agencies to take action because it believed there was a fundamental problem with the tracking industry. Its project began with an investigation into sales in the field of ad tech companies, credit rating agencies, ad blockers, and related organizations, he says.

“We were building a case, and basically we think what they’re doing is unlawful,” Bendinelli continues. While waiting for agencies to act, the research team wanted to find an example of how tracking is taking place on Web pages where people go to read and share sensitive data.

“We wanted a concrete example of how tracking is happening on websites where you think you are safe, and where you are looking up or exchanging data that is sensitive and personal,” he adds. They chose sites related to mental health because, as Bendinelli puts it, people may research mental health conditions online because they aren’t yet ready to discuss it in person.

The World Health Organization reports 25% of the European population suffers from depression or anxiety each year, and about half of major depression cases go untreated. This means every day, millions of people are researching depression online, whether it’s to seek help or support someone in their lives who could be suffering from a mental health condition.

At the same time, the Internet’s current business model relies on targeted advertisement to make money, tracking individuals and using their personal data to build accurate ad profiles.

To learn how these trackers follow people to mental health websites, Bendinelli and his research team analyzed 136 popular depression-related websites in France, Germany, and the UK. They used a VPN so the search results would be country-specific and began by searching terms related to depression in each of the countries’ languages. They collected websites listed on the first page of search results and using a tool called Webxray, analyzed data including the number of trackers on each website, number of cookies dropped, and other analytics data.
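
The general idea is reproducible with far blunter tools than Webxray. The sketch below, assuming the requests and beautifulsoup4 packages, just lists the third-party domains a page loads scripts from; it misses cookies, pixels and dynamically injected trackers, so treat it as a lower bound.

    from urllib.parse import urlparse
    import requests
    from bs4 import BeautifulSoup

    def third_party_script_domains(url: str) -> set:
        """Return the set of external hosts that serve <script src=...> on a page."""
        first_party = urlparse(url).netloc
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        hosts = set()
        for tag in soup.find_all("script", src=True):
            host = urlparse(tag["src"]).netloc
            if host and host != first_party:
                hosts.add(host)
        return hosts

    print(third_party_script_domains("https://example.org"))   # substitute a site you may test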

What they found “unfortunately, was not surprising,” Bendinelli says. “Google is pretty much everywhere,” was a key takeaway, and he wasn’t shocked to find 97% of websites analyzed had some form of Google tracker. Ninety percent had Doubleclick, which is a business owned by Google that lets online advertisers and publishers display advertisements on their websites. Right behind Google, in terms of numbers, were Facebook and Amazon trackers, he adds.

“These trackers’ sole purpose was to do targeted advertising,” says Bendinelli.

Real Time Bidding

More than a quarter of webpages analyzed embed third parties who engage in programmatic advertising and Real Time Bidding, a process by which ad impressions are bought and sold. When someone visits a website, a “bid request” is sent to an ad exchange. This request contains various types of data such as demographical data, location data, and browser history. The ad exchange sends this data to advertisers, who bid for the ad impression as it’s presented to the site visitor. The one who bids highest will win the impression and have its ad served to users.
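
To give a feel for how much context leaves the browser in that instant, here is a deliberately simplified bid request, loosely modeled on OpenRTB-style fields; the field names and values are illustrative rather than a faithful copy of any exchange’s format.

    import json, uuid

    bid_request = {
        "id": str(uuid.uuid4()),
        "site": {"domain": "example-depression-help.org", "page": "/self-test"},
        "device": {
            "ua": "Mozilla/5.0 (...)",                     # fingerprintable user-agent
            "ip": "203.0.113.7",                           # documentation-range address
            "geo": {"country": "DEU", "city": "Berlin"},
        },
        "user": {"id": "cookie-or-device-id", "yob": 1987, "gender": "F"},
        "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    }
    print(json.dumps(bid_request, indent=2))   # this is what hundreds of bidders may receive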

The problem, Bendinelli explains, is many companies are learning information the website visitors may not necessarily want to share. “It could be shared with literally hundreds of companies,” he says of bid request data. While it isn’t sold, it can be used for targeted ads.

Also concerning was the researchers’ discovery that a small subset of websites offering depression tests would directly or indirectly share responses and results with third parties. Of the nine testing websites they found, two were storing test results and one was sending responses to another company not mentioned anywhere on the site, Bendinelli explains.

Researchers followed up with the websites they had flagged. One completely removed the test. “They realized they were doing something wrong,” he notes.

Bendinelli and Frederike Kaltheuner, tech policy fellow with the Mozilla Foundation, will present more of these research findings at the Black Hat Europe 2019 conference in a briefing entitled “Is Your Mental Health for Sale?”


Article source: https://www.darkreading.com/threat-intelligence/researchers-explore-how-mental-health-is-tracked-online/d/d-id/1336444?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Iran’s APT33 sharpens focus on industrial control systems

Iran’s elite hacking group is upping its game, according to new evidence delivered at a cybersecurity conference this week. The country’s APT33 cyberattack unit is evolving from simply scrubbing data on its victims’ networks and now wants to take over its targets’ physical infrastructure by manipulating industrial control systems (ICS), say reports.

APT33, also known by the names Holmium, Refined Kitten, or Elfin, has focused heavily on destroying its victims’ data in the past. Now, though, the group has changed tack, according to Ned Moran, principal program manager at Microsoft, who spoke at the CYBERWARCON conference in Arlington, Virginia on Thursday. Moran, who is also a fellow with the University of Toronto’s Citizen Lab, works on identifying and disrupting state-sponsored attackers in the Middle East.

The APT33 group is closely associated with the Shamoon malware, which wipes data from its targets’ systems. Experts have also warned of other tools in the group’s arsenal, including a data destruction tool called StoneDrill and a piece of backdoor software called TURNEDUP.

Moran said that APT33 used to use ‘password spraying’ attacks, in which it would try a few common passwords on accounts across lots of organizations. More recently, though, it has refined its efforts, ‘sharpening the spear’ by attacking ten times as many accounts per organisation while shrinking the number of organisations it targets. It has also focused heavily on ICS manufacturers, suppliers and maintainers, Moran said.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kTGBTd_US1E/

Google plans to take Android back to ‘mainline’ Linux kernel

Better late than never, momentum seems to be building inside Google to radically overhaul Android’s tortured relationship with its precious Linux-based kernel.

It’s a big job and has been a long time coming, arguably since the mobile operating system was unveiled in 2007.

The company hasn’t made any firm announcements on this but journalists this week noticed a low-key video posted to YouTube of a presentation given by Android Kernel Team chief Sandeep Patil, at September’s Linux Plumbers Conference.

The problem? The development model that underpins how Android uses the Linux kernel leads to a lot of complexity that slows updates, raises costs, and makes life difficult for both Google and the device makers downstream in all sorts of ways.

Today, the Linux kernel used by an Android device can be slightly different for every maker, model and at different moments in time.

Device makers start with the LTS (Long Term Support) kernel, before ‘out of tree’ Android common kernel customisations are added. According to Patil, there are a lot of these – as of February 2018, it was adding 355 changes, 32,266 insertions, and 1,546 deletions on top of LTS 4.14.0, and even that is said to be an improvement over the past.

Then major system on chip (SoC) companies such as Qualcomm add numerous hardware customisations before, finally, manufacturers add even more vendor and device-specific software.

These layers of customisation require that each device use a single Linux kernel as its starting point, which has knock-on effects for all the subsequent modifications. Over time, that can lead to Google and device makers supporting several forks of Linux at the same time across numerous devices.

It’s one reason why Android devices have a defined shelf life but it also makes it much more time-consuming to apply security patches.

The new Android

Google’s alternative seems to be to get the mainstream Linux kernel to do as much of the heavy lifting as possible, reducing or even removing the need for the Android kernel modifications.

Essentially, this would take Android back into the Linux kernel fold it once seemed happy to abandon in search of improvements that turned out to come at a price.

In this new and improved world, each generation of devices would then use the same kernel supplied by Google with their own modifications turned into modules applied on top.

That regularises the way the kernel and the manufacturer code relate to one another, with the kernel now the defining component.

It makes for a superb diagram – one kernel and lots of modules on top of this. It also chimes perfectly with Google’s largely successful 2018 Project Treble API initiative to speed up device patching.

Can it really be that easy to fix more than a decade of slow software rot?

Reportedly, Tom Gall, director of the Linaro Consumer Group, showed off a Xiaomi smartphone running Android on top of the mainline Linux kernel, which shows that the goal is achievable in principle.

Doing the same for the forest of other Android devices will take a lot longer. If this is a fix, it isn’t a quick one.

There might be another problem – a completely new operating system that Google is said to be working on, called Project Fuchsia – which it’s claimed could replace Android and possibly the Chrome OS.

This wouldn’t just integrate the development layers of Android but would work across multiple devices, including desktop computers.

At worst, this implies that Android’s days are numbered. Or perhaps they aren’t, and Google is just toying with us while stoking some competition within its development teams.

For sure, Android won’t go away quickly. There’s too much invested in it right now and it does a decent job despite all the pitfalls. If Google devs have their way, it won’t go away quietly either.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jG_BXBWSvEQ/

The 5-Step Methodology for Spotting Malicious Bot Activity on Your Network

Bot detection over IP networks isn’t easy, but it’s becoming a fundamental part of network security practice.

With the rise of security breaches using malware, ransomware, and other remote access hacking tools, identifying malicious bots operating on your network has become an essential component to protecting your organization. Bots are often the source of malware, which makes identifying and removing them critical.

But that’s easier said than done. Every operating environment has its share of “good” bots, such as software updaters, that are important for good operation. Distinguishing between malicious bots and good bots is challenging. No one variable provides for easy bot classification. Open source feeds and community rules purporting to identify bots are of little help; they contain far too many false positives. In the end, security analysts wind up fighting alert fatigue from analyzing and chasing down all of the irrelevant security alerts triggered by good bots.

At Cato, we faced a similar problem in protecting our customers’ networks. To solve the problem, we developed a new, multidimensional approach that identifies 72% more malicious incidents than would have been possible using open source feeds or community rules alone. Best of all, you can implement a similar strategy on your network.

Your tools will be the stock-in-trade of any network engineer: access to your network, a way to capture traffic, like a tap sensor, and enough disk space to store a week’s worth of packets. The idea is to gradually narrow the field from sessions generated by people to those sessions likely to indicate a risk to your network. You’ll need to:

  • Separate bots from people
  • Distinguish between browsers and other clients
  • Distinguish between bots within browsers
  • Analyze the payload
  • Determine a target’s risk

Let’s dive into each of those steps.

Separate Bots from People
The first step is to distinguish between bots (good and bad) and humans. We do this by identifying those machines repeatedly communicating with a target. Statistically, the more uniform these communications, the greater the chance that they are generated by a bot.
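
As a rough sketch of what “uniform” can mean in practice, the snippet below (names and thresholds are ours, not Cato’s) flags a client-to-target pair whose inter-session gaps are nearly constant, classic beaconing behaviour.

    from statistics import mean, pstdev

    def looks_like_beaconing(timestamps, min_sessions=10, max_cv=0.1):
        """Flag a (client, target) pair whose session spacing is nearly constant."""
        if len(timestamps) < min_sessions:
            return False
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        avg = mean(gaps)
        if avg <= 0:
            return False
        cv = pstdev(gaps) / avg        # coefficient of variation: 0 means perfectly periodic
        return cv <= max_cv

    # a client phoning home roughly every 60 seconds trips the check
    print(looks_like_beaconing([i * 60 + (i % 3) for i in range(20)]))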

Distinguish Between Browsers and Other Clients
Having isolated the bots, you then need to look at the initiating client. Typically, “good” bots exist within browsers, while “bad” ones operate outside of the browser.

Operating systems have different types of clients and libraries generating traffic. For example, “Chrome,” “WinInet,” and “Java Runtime Environment” are all different client types. At first, client traffic may look the same, but there are some ways to distinguish between clients and enrich our context.

Start by looking at application-layer headers. Because most firewall configurations allow HTTP and TLS to any address, many bots use these protocols to communicate with their targets. You can spot bots operating outside of browsers by clustering groups of client-configured HTTP and TLS features.

Every HTTP session has a set of request headers defining the request and how the server should handle it. These headers, their order, and their values are set when composing the HTTP request. Similarly, TLS session attributes, such as cipher suites, extensions list, ALPN (Application-Layer Protocol Negotiation), and elliptic curves, are established in the initial TLS packet, the “client hello” packet, which is unencrypted. Clustering the different sequences of HTTP and TLS attributes will likely indicate different bots. 

Doing so, for example, will allow you to spot TLS traffic with unusual cipher suites. That is a strong sign the traffic is being generated outside of the browser — a very non-humanlike approach and hence a good indicator of bot traffic.
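
One way to operationalise that clustering, sketched below with made-up values, is to reduce each session’s client-controlled attributes (the ordered HTTP header names, or the cipher suites offered in the client hello) to a fingerprint, much in the spirit of JA3, and then group sessions by fingerprint.

    import hashlib

    def client_fingerprint(header_names, cipher_suites=()):
        """Hash the ordered header names and cipher-suite list into one label."""
        material = ",".join(header_names) + "|" + "-".join(map(str, cipher_suites))
        return hashlib.md5(material.encode()).hexdigest()

    browser = client_fingerprint(
        ["Host", "Connection", "User-Agent", "Accept", "Accept-Encoding", "Accept-Language"],
        (4865, 4866, 4867, 49195),
    )
    bot = client_fingerprint(["Host", "User-Agent"], (47, 53))
    print(browser, bot, browser != bot)   # different orderings land in different clusters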

Distinguish Between Bots within Browsers
Another method for identifying malicious bots is to look at specific information contained in HTTP headers. Internet browsers usually present a clear, standard set of headers. In a normal browsing session, clicking on a link within a browser generates a “referrer” header that will be included in the next request for that URL. Bot traffic will usually not have a “referrer” header — or worse, it will be forged. Identifying bots that look the same in every traffic flow likely indicates maliciousness.

User-agent is the best-known string representing the program initiating a request. Various sources, such as fingerbank.org, match user-agent values with known program versions. Using this information can help identify abnormal bots. For example, most recent browsers use the “Mozilla 5.0” string in the user-agent field. Seeing a lower version of Mozilla or its complete absence indicates an abnormal bot user-agent string. No trustworthy browser will create traffic without a user-agent value.
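
Both heuristics are easy to encode. The checks below are illustrative and deliberately crude: each one alone is a weak signal, but together with the earlier steps they help separate in-browser bots from everything else. (Note the header is spelled Referer on the wire.)

    def suspicious_headers(headers: dict) -> list:
        """Return human-readable findings for one HTTP request's headers."""
        findings = []
        ua = headers.get("User-Agent", "")
        if not ua:
            findings.append("no User-Agent at all")
        elif not ua.startswith("Mozilla/5.0"):
            findings.append(f"non-standard User-Agent: {ua!r}")
        if "Referer" not in headers:
            findings.append("no Referer on what should be a navigated request")
        return findings

    print(suspicious_headers({"Host": "example.com", "User-Agent": "python-requests/2.22"}))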

Analyze the Payload
Having said that, we don’t want to limit our search for “bad” bots only to the HTTP and TLS protocols. We have also observed known malware samples using proprietary, unknown protocols over well-known ports — traffic that can be flagged using application identification.

In addition, the traffic direction (inbound or outbound) has significant value here. Devices that are connected directly to the Internet are constantly exposed to scanning operations, and therefore these bots should be treated as inbound scanners. On the other hand, outbound scanning activity indicates a device infected with a scanning bot. This could be harmful to the target being scanned and puts the organization’s IP address reputation at risk.
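
Here is a minimal sketch of that directionality test, assuming you can export flow records as (source, destination, destination port) tuples; the address range and threshold are placeholders to adapt to your own network.

    from collections import defaultdict
    from ipaddress import ip_address, ip_network

    INTERNAL = ip_network("10.0.0.0/8")            # placeholder: your own address plan

    def scan_suspects(flows, threshold=100):
        """Yield hosts touching an unusually large number of (destination, port) pairs."""
        touched = defaultdict(set)
        for src, dst, dport in flows:
            touched[src].add((dst, dport))
        for src, targets in touched.items():
            if len(targets) >= threshold:
                direction = "outbound" if ip_address(src) in INTERNAL else "inbound"
                yield src, direction, len(targets)

    flows = [("10.0.0.5", "198.51.100.1", port) for port in range(1, 201)]
    print(list(scan_suspects(flows)))              # -> [('10.0.0.5', 'outbound', 200)]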

Determine a Target’s Risk
Until now, we’ve looked for bot indicators in the frequency of client-server communications and in the type of clients. Now, let’s pull in another dimension — the destination or target. To determine malicious targets, consider two factors: target reputation and target popularity.

Target reputation calculates the likelihood of a domain being malicious based on the experience gathered from many flows. Reputation is determined either by third-party services or through self-calculation by noting whenever users report a target as malicious.

All too often, though, simple sources for determining a target’s reputation, such as URL reputation feeds, are insufficient on their own. Every month, millions of new domains are registered. With so many new domains, domain reputation mechanisms lack sufficient context to categorize them properly, delivering a high rate of false positives.
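
One pragmatic response is to blend several weak signals rather than rely on a single feed. The scoring function below is only a sketch: reputation_lookup() is a stand-in for whichever feed or service you use, and the age and popularity weights are arbitrary.

    from datetime import datetime, timezone

    def reputation_lookup(domain: str) -> float:
        """Placeholder: return 0.0 (clean) to 1.0 (known bad) from your own feed."""
        return 0.0

    def target_risk(domain: str, first_seen: datetime, visits_across_org: int) -> float:
        age_days = (datetime.now(timezone.utc) - first_seen).days
        newness = 1.0 if age_days < 30 else 0.2 if age_days < 180 else 0.0
        rarity = 1.0 if visits_across_org <= 2 else 0.3 if visits_across_org <= 20 else 0.0
        return 0.5 * reputation_lookup(domain) + 0.3 * newness + 0.2 * rarity

    print(target_risk("newly-registered-example.xyz",
                      first_seen=datetime(2019, 11, 1, tzinfo=timezone.utc),
                      visits_across_org=1))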

Bot detection over IP networks is not an easy task, but it’s becoming a fundamental part of network security practice and malware hunting specifically. By combining the five techniques we’ve presented here, you can detect malicious bots more efficiently.

For detailed graphics and practical examples on applying this methodology, go here.


Article source: https://www.darkreading.com/vulnerabilities---threats/the-5-step-methodology-for-spotting-malicious-bot-activity-on-your-network/a/d-id/1336408?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple