STE WILLIAMS

Facebook brings Messenger to kids as young as 6

Did you know that up until yesterday, there was an unmet need for a messaging app that lets children “connect with people they love” but which also “has the level of control parents want”?

Facebook’s newly announced Messenger Kids app, for the age 6-12 clutch of Facebook users-to-be, won’t have ads. Nor will your kids’ information be used for marketing, Facebook promises.

There won’t be any in-app purchases, either. Whew! That will be a relief for people who’ve, say, joined class action lawsuits to sue Facebook after their tots spent hundreds of dollars on in-game purchases for Ninja Saga… you know, the “What? Virtual currency costs real money?” kind of thing.

No siree, no gunslinging chickens or picnic-lugging bears, courtesy of your online-game-loving offspring, will drain your bank account via Messenger Kids.

In fact, Messenger Kids has been designed to be compliant with the Children’s Online Privacy Protection Act (COPPA). Congress enacted the legislation in 1998 with the express goal of protecting children’s privacy while they’re online. COPPA prohibits developers of child-focused apps, or any third parties working with such app developers, from obtaining the personal information of children 12 and younger without first obtaining verifiable parental consent.

Facebook was careful enough to work with parents, families, US parenting experts, and associations like the National Parent-Teacher Association (PTA) (which does not endorse any of this, mind you, Facebook points out).

So why are people ruffled about Messenger Kids? I mean, really, as Gizmodo asked, what could possibly go wrong?

…or, more specifically, in the words of Gizmodo’s Melanie Ehrenkranz, what could possibly go wrong when the land of “rampant harassment, misinformation, and foreign election interference” comes for your kids?

In an email to Gizmodo, a Facebook spokesperson said that the company launched the app because…

…many of us at Facebook are parents ourselves, and it seems we weren’t alone when we realized that our kids were getting online earlier and earlier.

Gizmodo reports that Facebook cited an external study from Dubit that found that 93% of six- to 12-year-olds in the US have access to tablets or smartphones. That’s a lot of kids, doing a lot of chatting, without parents’ oversight.

Messenger Kids accounts have to be set up by a parent or carer. All they need to do is download the Messenger Kids app onto their child’s iThing (it’s only out as an iOS preview at this point, and only in the US), authenticate with their own Facebook credentials, trust Facebook when it says that this will neither create a Facebook account for their child nor give them access to their parent’s account, and hand over the device so the child can chat with whatever family or friends the parent approves.

That’s when the Snapchat-ish fun begins: the app provides playful masks, puppy ears and noses galore, emojis and sound effects to “bring conversations to life.”

Speaking of Snapchat, how many kids with access to an iThing will actually use a parent-controlled app rather than, well, chatting with whoever they want to, without Mom or Dad’s say-so, on Snapchat? Or YouTube? Or Instagram? Or Musical.ly? Or, for that matter, Facebook itself, sans parental control?

True, those apps require that users be at least 13. But since when has that stopped kids? It’s been estimated that millions of pre-teens log in to Facebook every day.

So maybe getting them out into the open is a good thing, yes? They’re already there, so maybe Facebook’s helping by moving kids to where parents can vet who they’re talking to, yes? Facebook’s also promising to block children from sharing nudity and sexual or violent content. The company also plans to have a dedicated moderation team to respond to flagged content.

That sounds great, in theory. We’ll have to see how it works out in practice. My eyeballs are still seared from seeing what’s been happening on YouTube Kids recently, what with videos targeted at kids featuring a) cartoon characters that turn into monsters and try to feed each other to alligators, b) a Claymation Spiderman urinating on Elsa of “Frozen”, or c) Nick Jr. characters in a strip club.

YouTube had been relying on human-free, automatic filters that are supposed to strip out any content that isn’t child-friendly before it can be streamed on the supposedly kid-safe YouTube Kids site. Not exactly what you’d call fool-proof, that!

Maybe humans at Facebook will do better with Messenger Kids.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MUyYj6bwVpc/

High schooler hacks his way to a higher GPA

You’d think students smart enough to hack into their school’s IT system and change their grades wouldn’t need to hack into their school’s IT system and change their grades.

But, of course, smarts don’t automatically mean good grades. And in the hyper-competitive world of elite college admissions, good grades are frequently not good enough.

In this latest student hack, a 16-year-old senior at Tenafly High School, New Jersey, is being charged in juvenile court for allegedly breaching the school’s system, raising several of his grades (which then raised his overall GPA) and sending out college applications with the doctored transcripts.

The student isn’t being named, but NorthJersey.com reported that school officials discovered the breach, suspended the student and rescinded the transcripts.

And the incident also launched another discussion about the pressure to succeed.

Ashley Kipiani, who has tutored high school students for more than 15 years, told NorthJersey.com that the pressure to cheat “is higher today as students aspire for a perfect grade point average, AP credits and a ticket into a top college.”

Given those incentives, it should not be a surprise that Tenafly is just one of many high schools and colleges targeted by students looking to hike their grades. Recent years are littered with similar stories:

  • The FBI arrested Trevor Graves, 22, a former University of Iowa wrestler, at the end of October and charged him with planting hardware keyloggers on several school computers. He allegedly compromised the information of 250 students, faculty and staff and changed his grades more than 90 times between March 2015 and November 2016.
  • Chase Arthur Hughes, 19, was arrested in September 2016, after allegedly using a professor’s account to access sensitive information, including employment history, credit, financial and medical information. He was accused of changing grades in two separate classes at Kennesaw State University, including bumping some students’ grades from an “F” to an “A” and another’s from a “C” to an “A”. For himself, police say, he changed a “B” to an “A.”
  • Roy Sun was sentenced to three months in jail in March 2014 after he was convicted of altering his grades – some from an F to an A – while he was a senior at Purdue University. Authorities said he and an accomplice, Mitsutoshi Shirasaki, broke into professors’ offices, installed keyloggers and then waited to hack into the university computer system until 10 minutes before professors’ deadline to submit their grades for the semester.

There are other past examples, of course, and there will surely be more. Business Insider reported in August that students don’t even have to do the hacking themselves.

(They) can access the Dark Web to hire a hacker to change their grades, attack their school’s network with a DDoS, buy drugs and more.

Still, one could argue that these hackers weren’t all that smart if they didn’t cover their tracks well enough to avoid being caught. In the Purdue case, authorities said the hackers changed professors’ passwords, failed to mask their IP addresses and weren’t “subtle” about the grade changes.

A large part of the problem, school and university officials have been admitting for years, is that academic systems are designed to be open, and are therefore less secure. At a 2014 SANS Security Leadership Summit in Boston, a panel of higher education IT officials said they try to keep things “reasonably safe,” but can’t be “dictators” about security.

Fitchburg State University information security officer (ISO) Sherry Horeanopoulos:

We work in an environment that is designed to be wide open and unguarded. Professors and students need access to resources that span the globe. So how do you take a top-down approach in a bottom-up environment?

Of course, it would help a lot simply to use basic security hygiene. In the case of the University of Iowa hack, the school didn’t use two-factor authentication (2FA) for its student management system, so the credentials captured by the keyloggers were all Graves needed to access teachers’ accounts.
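
For the curious, here’s roughly what that second factor buys you. Below is a minimal sketch of a time-based one-time password (TOTP, per RFC 6238) in Python; the secret is a placeholder, and a real deployment would use a vetted library and server-side checks rather than hand-rolled code:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Derive the current time-based one-time password (RFC 6238)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period           # 30-second time step
        msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Hypothetical shared secret: a keylogged password alone is useless
    # without the device that holds this key.
    print(totp("JBSWY3DPEHPK3PXP"))

Even if a keylogger captures the password, an attacker still needs the current six-digit code, which changes every 30 seconds.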

Indeed, using 2FA is no more “dictatorial” than locking office doors. It’s simple prudence.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LnI3_uWodxM/

Politicians boast about sharing passwords, bask in blissful ignorance

Britain’s Houses of Parliament must be a pretty stressful place to be a computer security admin.

For starters, it’s a given that you’ll find yourself defending the House’s 650 MPs, 800 Lords, and 2,000 or so other staff from daily state-sponsored cyberattacks, such as the one that led to the compromise of dozens of MPs’ email accounts in June.

Not easy.

Then there is the large and frankly risky porn habit of some of Parliament’s public servants, which amounted to a reported 110,000 attempted accesses to X-rated sites in 2016 (itself a marked reduction on previous years).

Apart from being rather sleazy for the mother lode of democracy, porn sites are like malware flypaper, so that’s not good either.

Rounding out the misery list is the lax personal behaviour of the MPs themselves, which this week we learned runs to sharing precious account passwords with their staff willy-nilly.

Ironically, news of this behaviour emerged from comments made by MP Nadine Dorries, who was defending fellow Conservative First Secretary of State Damian Green from recent accusations that he downloaded porn to his computer in 2009.

She tweeted that her staff, including interns, log on to her computer with her login every day.

The reasoning being that if porn was accessed from Green’s PC while he was apparently logged into email and other accounts, this did not necessarily mean he was personally responsible.

Before anyone could dismiss Dorries’ remark as a one-off, fellow MP Nick Boles tweeted his agreement, admitting that he often forgets his password and has to ask his staff what it is.

But perhaps it is Dorries’ next tweet that deserves more attention: in it, she downplayed the risk on the grounds that her office’s email account could hardly be of interest to anyone.

No need to worry, then – who beyond Dorries’ office could possibly be interested in something as trifling as an email account and its measly credentials?

By now, Parliamentary IT staff reading these exchanges were probably feeling the need to head for darkened rooms for a long lie down.

Then the Information Commissioner’s Office (ICO) intervened on their behalf:

We would remind MPs and others of their obligations under the Data Protection Act to keep personal data secure.

It also pointed out that section 2.7.2 of the official data protection advice for MPs and staff (2010) clearly states:

Keep personal information secure and introduce office practices to ensure that security measures are followed. Take particular care when sharing information or sending it off-site.

Might some of this be unfair to Dorries and password-sharing MPs in her situation?

It could be countered that the problem is not simply what she is owning up to – MPs have a legitimate, if limited, need to share credentials after all – but her lack of awareness that there are safer ways to achieve this by, for instance, using an online password manager.

Sharing passwords (or using delegated access) in a formal way also preserves accountability because it allows behaviour to be tied to the real person accessing an account. MPs should never be able to hide online behaviour behind the excuse that someone else was using an account on their behalf.
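
To illustrate the point (this is a toy sketch, not a description of Parliament’s actual systems, and all the names are made up): formal delegation means the audit trail records the person actually acting, not just the shared account.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Delegation:
        account_owner: str   # the MP whose account it is
        delegate: str        # the staffer actually acting

    def audit(event: str, d: Delegation) -> str:
        # The log line names the delegate as well as the account,
        # so "someone else was using my login" is never a blind spot.
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        return f"{stamp} account={d.account_owner} actor={d.delegate} event={event}"

    print(audit("read_inbox", Delegation("mp_example", "staffer_example")))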

Parliamentary IT earlier this year championed its first cybersecurity awareness month designed to help MPs and staff “brush up their existing knowledge and learn new skills.”

All very worthy, but if recent cyberattacks and Dorries’ tweets tell us one thing, it’s that the model of leaving security up to busy politicians is ineffective to say the very least.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aHYXerUqnVU/

Once again, UK doesn’t rule out buying F-35A fighter jets

The United Kingdom is edging ever closer to buying F-35As, instead of the B model needed to fly from the Navy’s new aircraft carriers, as a senior officer once again refused to rule out a future F-35A purchase.

Lieutenant General Mark Poffley, deputy chief of the defence staff for military capability, told MPs “I don’t think we can rule that out” when asked if the British armed forces would buy F-35As as well as F-35Bs.

Last year defence procurement minister Harriett Baldwin MP similarly refused to rule out an F-35A purchase.

This matters because if the Royal Air Force secures a purchase of the non-navalised F-35A variant, it could leave Britain’s future flagships without enough fighter jets to deliver their intended effect.

Squadrons, numbers, deployments

To ever so slightly oversimplify things, the basic idea behind the two new Queen Elizabeth-class aircraft carriers is that they can rock up off a hostile country’s coast and use their F-35B air wings to impress, scare, shoot down and potentially even bomb the uppity natives into submission. For non-hostile countries, the carriers rock up and become one of the world’s greatest floating cocktail bars with an awesome (and moveable) view, complete with a hangar that easily beats most London nightclubs for floorspace. In the British military argot, all of this is called “carrier enabled power projection”.

The astute reader will rapidly realise that the entire thing is based around there being enough F-35Bs aboard the carriers to project the power, in a warfighting scenario.

The basic F-35B deployment aboard the carrier will consist of one squadron, possibly two at a pinch. One squadron is 12 jets. Unlike the RAF’s ground-based operating model where small detachments from squadrons fly to a nearby airfield for combat operations, an entire F-35B squadron will have to deploy onto the carrier (singular – the rough idea is that one carrier will be deployed at sea while the other is alongside at home for crew training).

Roughly, you need around four squadrons in total to sustain one squadron at sea: one squadron aboard the Big Grey War Canoe™; one squadron that has just come off the Big Grey War Canoe™; one squadron at home on leave; and one squadron working up ready to take its turn aboard the aircraft carrier. The maths is not precise; in the modern armed forces, aircraft are pooled instead of being issued to particular squadrons for their exclusive use, while experienced personnel whose skills are in short supply may be unlucky and end up with back-to-back deployments.

Incidentally, the same four-owned-for-one-operational ratio is the model used for Britain’s nuclear deterrent submarines.

What has this got to do with the RAF turning some of Britain’s planned purchase of F-35Bs into F-35As, then?

Break out the abacus

The UK has long publicly committed itself to buying 138 F-35Bs. We know that the UK intends to use around 63 aircraft in its operational fleet at any one time, leaving the rest in reserve. That gives you five usable squadrons, not counting the permanent R&D jets based in America, which will never leave that country.

Of those five squadrons, one will be the operational conversion unit (ie, the training squadron). That leaves four squadrons … see where this is going?

Working on the assumption that the MoD has decided it will have no more than those 63 aircraft to fly at any one point, a purchase of F-35As would eat into the number of aircraft available for working up the carrier air wing. To make an RAF F-35A unit viable you’d need about 20 or so aircraft, allowing for testbed jets in America, operational conversion aircraft, and the 12 actually needed by the frontline squadron.

That would leave the F-35B fleet short by two squadrons’ worth of aircraft. Suddenly, absent a massively unlikely cash injection to operate another two squadrons of F-35Bs, your neat and predictable four-squadron model drops to a two-squadron one. You also need lots more trained and skilled personnel to fly two separate fleets; the F-35B is the short takeoff and vertical landing (STOVL) model, optimised for short-field (and carrier) flying, whereas the F-35A is a conventional land-based aircraft.
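
The arithmetic is simple enough to sanity-check yourself. Here’s a back-of-the-envelope sketch in Python using only the figures quoted above (the 20-aircraft F-35A overhead is this article’s estimate, not an official number):

    # Figures as quoted in the article; rough by design.
    OPERATIONAL_FLEET = 63    # aircraft flyable at any one time
    SQUADRON_SIZE = 12
    OCU_SQUADRONS = 1         # the training (operational conversion) unit

    usable_squadrons = OPERATIONAL_FLEET // SQUADRON_SIZE            # 5
    frontline_all_b = usable_squadrons - OCU_SQUADRONS               # 4

    F35A_UNIT_OVERHEAD = 20   # testbeds + conversion jets + 12 frontline F-35As
    remaining_b = OPERATIONAL_FLEET - F35A_UNIT_OVERHEAD             # 43
    frontline_with_a = remaining_b // SQUADRON_SIZE - OCU_SQUADRONS  # 2

    print(f"Frontline F-35B squadrons, all-B fleet: {frontline_all_b}")
    print(f"Frontline F-35B squadrons after an F-35A buy: {frontline_with_a}")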

In terms of what the F-35A can do, it’s not a million miles from the Eurofighter Typhoon, though its communications fit is far more advanced and, being 20 years newer, it’ll be around for longer. Perhaps the MoD’s intention is to buy F-35As at the very end of the F-35 purchase run, though this is pure guesswork.

In short, then, buying F-35As would lead to increased costs and less eventual capability. Which raises the obvious question: why on Earth is the MoD repeatedly not ruling this out? ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/05/uk_f35a_buy_still_on_cards/

Brit bank Barclays’ Kaspersky Lab diss: It’s cyber balkanisation, hiss infosec bods

Analysis Barclays has stopped offering free Kaspersky Lab products to new users in a move that shows, like Best Buy, commercial firms can be swayed by governmental stances on dealing with the Russian software firm.


As El Reg reported yesterday, the UK high street bank replaced its Kaspersky download micro-site with a notice stating that this software “isn’t available at the moment”.

A Barclays Bank spokesman explained that although it was no longer offering Kaspersky’s security software to new customers, it wasn’t asking existing users to uninstall it either. Consumers should continue to use anti-malware, Barclays advises.

The move, of course, followed advice from the UK’s National Cyber Security Centre warning against the use of Russian antivirus software on government computers – but only those which process SECRET (and above) documents…

A spokesperson for the banking group said: “Barclays treats the security of our customers very seriously. Even though this new [NCSC] guidance isn’t directed at members of the public, we have taken the decision to withdraw the offer of Kaspersky software from our customer website.”

We don’t know whether or not it plans to reinstate the free security software offer using a different provider.

Kaspersky Lab has yet to respond to requests from El Reg to comment on Barclays’ decision to drop offers of its software, but founder Eugene Kaspersky has commented on the NCSC advisory.

“There is *no* ban for KL products in the UK. We are in touch with @NCSC regarding our Transparency Initiative and I am sure we will find the way to work together,” he said on Twitter.

US government cyber assurance agencies including the Department of Homeland Security have been advising against the use of Kaspersky’s technically well-regarded security software on government systems since September. US electronics chain Best Buy pulled Kaspersky products from its shelves amid concerns over supposed ties – strongly denied by the security firm – between Kaspersky Lab and the Russian government.

Some industry pundits see the developments as the start of a new era of so-called cyber balkanisation.

“Cyber balkanisation” (aka the splinternet) is a term coined by Cass Sunstein in 2001; it describes the division of the internet into smaller, competing “factions” as a result of factors such as technology, commerce, politics, nationalism, religion and interests.

It’s a long way from the lofty goal that accompanied the inception of the internet as a network to connect the computers of academics together for collaboration, innovation and information sharing.

Cyber balkanisation is the reverse, and it threatens to destabilise the World Wide Web as we know it.

The latest Kaspersky / Israel / United States spying furore has highlighted just how close a split-up may be. Putin’s apparent threat for Russia and its chums to set up their own internet doesn’t ease people’s concerns, either.

With nations pointing the finger at one another and accusations of spying being flung across cyber space, distrust pervades the industry and suspicion blurs the lines, according to managed security services firm SecureData.

Charl van der Walt, chief security strategy officer at SecureData, comments:

“In a reality where nations are in conflict, it’s a real, hard fact that Kaspersky is a Russian company, employing Russian people, paying Russian taxes and subject to the power of the Russian government. Therefore it would be grossly irresponsible of any nation who sees Russia as a threat to allow any government department or significant business entity to be using any Russian technology, let alone something so deeply embedded as an antivirus engine.”

“It won’t stop with Russia either. China is already in the firing line and governments will increasingly see its technologies and systems being shunned.

“This is why Cyber Balkanisation seems like such an inevitable tragedy: If the West starts rejecting Russian and Chinese technologies, why wouldn’t those two countries do the same? Inevitably pro-western allies like Israel and Western Europe will be painted with the same brush. All the way up and down the computing stack, governments and companies will be forced to choose vendors with which they feel a political alliance,” he adds. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/05/barclays_kaspersky/

Improve Signal-to-Noise Ratio with ‘Content Curation’: 5 Steps

By intelligently managing signatures, correlation rules, filters and searches, you can see where your security architecture falls down, and how your tools can better defend the network.

It’s a chaotic world for a security professional. The media is a flurry of messages about ransomware attacks and the latest malware. So-called “cyberthreat intelligence” comes in feeds that are a firehose of information, more often distracting than helpful. Unfortunately, as leaders we sometimes, without intending to, focus on detection and alerts that prove irrelevant, and we unknowingly squander budget, time and occasionally the long-term success of our organizations when we succumb to threats that our security operations centers (SOCs) should detect.

It’s time to get back to basics and remember the purpose of our tools and defenses: to protect the company mission. Yet many security teams focus on protecting assets and processes, under the mistaken belief that collecting an arsenal of data will help them do that. The problem here is twofold. First, more data doesn’t automatically give you more intelligence. If it’s more of the right data, then great, but frequently that’s not the case. Second, it is a widely held fallacy that security is the act of protecting IT systems from harm. In reality, IT is disposable; it’s the business mission that we actually want to protect.

Consider the following:

  • Outdated information and the false positives it yields. Let’s say you’ve got outdated indicators ringing alarm bells for a site that was compromised but has since been cleaned up. These historical indicators can send your team down rabbit holes, generating the kind of noise that consumes analyst processing power that could be better used to assess valid events (a simple curation pass, sketched after this list, can age such indicators out).
  • Wasted effort on intel that requires tools you don’t have. If you’re getting file hashes for malicious files but don’t have the tech to see if the file hashes traverse your network or get written to one of your endpoints, what was accomplished? There’s also the wear and tear on your technology. No security tool has unlimited processing power. Each bit of content you load takes some resources and those resources are finite. Fill up with worthless content and you won’t have room for the good (read: bad) stuff.
  • Too much focus on irrelevant information. There’s no point in chasing every malware outbreak that comes down the pike, or expending effort on commodity, consumer-oriented malware floating around the Internet. The team’s time and skill should be dedicated to threats targeting your business. Consider malware that’s trying to steal Facebook credentials. To the extent that it’s affecting the company’s social media team, you might mitigate that risk, but if you try to protect every employee accessing personal Facebook accounts, that’s not a good allocation of resources.
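
As a concrete, if simplified, illustration of the first point above, a curation pass can age out indicators past a shelf life before they ever reach the SIEM. The field names and the 30-day cutoff are illustrative assumptions, not any particular product’s schema:

    from datetime import datetime, timedelta, timezone

    MAX_AGE = timedelta(days=30)   # illustrative shelf life for an indicator

    def still_fresh(indicator: dict, now: datetime) -> bool:
        """Keep an indicator only if it was seen recently enough to act on."""
        last_seen = datetime.fromisoformat(indicator["last_seen"])
        return now - last_seen <= MAX_AGE

    feed = [
        {"value": "cleaned-up-site.example", "last_seen": "2017-06-01T00:00:00+00:00"},
        {"value": "active-c2.example",       "last_seen": "2017-12-01T00:00:00+00:00"},
    ]
    now = datetime(2017, 12, 5, tzinfo=timezone.utc)
    print([i["value"] for i in feed if still_fresh(i, now)])  # only the fresh one survives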

Turning Data Dross into ‘Content’ Gold
The first step toward calming the chaos is to intelligently manage the “content” you are deploying into your security architecture. In this context, content refers to the signatures, correlation rules, filters, searches, and other security data that you create to enable detection or bring focus to activity that may indicate an attack or compromise. Dealing with a mass of data in its entirety is like searching for a needle in a haystack. But curating that content and turning it into useful insights can help determine where your security architecture is falling down and how your tools can better protect and defend the network.

Here are five steps to improve your signal-to-noise ratio with content curation:

1. Let use cases drive your SOC. Organize your monitoring, detection, and hunting activities around actual attack patterns and methods or objectives. Use cases, such as email monitoring, provide structure and focus to SOC detection activities. Under each use case are scenarios that describe more specific attacks or exploitation actions, for example, spear-phishing by impersonating high-profile users. Use cases are selected and developed based on the risk profile and threat model of the organization.

2. Prioritize your content by relevance. You can’t watch every feed for every alert. Your content needs to be connected back to the use case and the meaning it has for your company. Not sure where to start? Purge anything outdated, review and tag content to a use case, collect analyst feedback on the rest, and use that feedback to decide whether content is yielding value. Content should be aligned to the most critical threats to your environment and linked back to the threat-intel reporting and use case.
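
Here is one way that prioritization might look in practice: each rule carries a use-case tag and a tally of analyst verdicts, and anything untagged or consistently noisy gets flagged for purging. The rule names, fields, and the 25% precision threshold are assumptions for illustration only:

    # Hypothetical content records for two detection rules.
    rules = [
        {"name": "spearphish-exec-impersonation", "use_case": "email-monitoring",
         "true_positives": 14, "false_positives": 3},
        {"name": "stale-commodity-malware",       "use_case": None,
         "true_positives": 0,  "false_positives": 41},
    ]

    def verdict(rule: dict) -> str:
        if rule["use_case"] is None:
            return "purge: not tied to any use case"
        total = rule["true_positives"] + rule["false_positives"]
        precision = rule["true_positives"] / total if total else 0.0
        return "keep" if precision >= 0.25 else "review: mostly noise"

    for r in rules:
        print(f'{r["name"]}: {verdict(r)}')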

3. Find the context. Identifying malicious activity alone doesn’t mean much. You have to find the larger story around it, connecting the activity to threat-intel reporting and understanding the nature and objectives of the attack: what is the target, and what risk does it pose to the business? Often teams want to move fast on their data without first analyzing and vetting it, but in doing so they decrease the effectiveness of that data. There’s no shortage of feeds that can net your organization a load of indicators. However, if you act on data without context, you may limit your visibility into other related problems or the underlying source of the problems.

4. Empower the CISO. Too often CISOs lack access to the CIO’s trove of valuable data that security teams ultimately need if they want to start creating defensive security content. IT and security have to work hand-in-hand, with IT providing security the visibility needed to enable security content to effectively protect the network, all the while working together to understand assets on the network and how they’re connected.

5. Take a proactive stance. Imagine it’s flu season. Do you stock up on decongestants and Kleenex, or do you go out and get a flu shot? The same principle applies to cybersecurity. Detecting exploitation is great, but proactive and preventative strategies are even better. Connecting threat intel to vulnerability data allows you to assess your attack surface before an attack occurs. If you receive actionable information and you know you’re vulnerable in a specific area, proactively reduce that attack surface.
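
One simplified way to act on that advice is to intersect the intel feed with your own asset inventory, so patching effort lands where you are actually exposed. Everything below (host names, the inventory shape) is made up for illustration:

    # Hypothetical inputs: CVEs the intel feed says are actively exploited,
    # and the vulnerabilities your scanner found on each host.
    exploited = {"CVE-2017-11043", "CVE-2016-4429"}
    inventory = {
        "web-frontend": {"CVE-2016-4429"},
        "hr-portal":    {"CVE-2014-0160"},
    }

    # The intersection is the attack surface worth reducing first.
    for host, vulns in inventory.items():
        urgent = vulns & exploited
        if urgent:
            print(f"{host}: patch now ({', '.join(sorted(urgent))})")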

Any organization that wants to streamline its overworked security architecture and employees must curate its intelligence content. By efficiently managing data with an approach that makes smarter use of the team’s time, tools, and expertise, SOC leaders can get better value from their tools and mount a stronger defense against cyber attacks.


Justin Monti has nearly 20 years of IT and information security experience in the private and public sector. Mr. Monti currently serves as chief technology officer of MKACyber, where he oversees technical security services delivery, including security architecture, managed …

Article source: https://www.darkreading.com/analytics/improve-signal-to-noise-ratio-with-content-curation-5-steps/a/d-id/1330505?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google prepares 47 Android bug fixes, ten of them rated Critical

Google has teased 47 Android patches for Nexus and Pixel devices.

Among the critical bugs in the Android Security Bulletin, five concern the media framework, one is system-level, four hit Qualcomm components. The worst, Google said, is one of the media framework bugs, not yet fully disclosed, but it “could enable a remote attacker using a specially crafted file to execute arbitrary code within the context of a privileged process”.

Two of the media framework bugs only affect Android 6.0 (31 per cent of active devices), one affects only Android 8.0 (0.3 per cent), one affects all versions between 7.0 and 8.0 (20.9 per cent), and the most widespread is in all versions after 6.0 (nearly 52 per cent of devices).

Google hasn’t yet gone public with the nature of these bugs, nor has it divulged the system-level bug that affects Android 7.0 onwards, beyond saying that “a proximate attacker” could “execute arbitrary code” (in other words, vulnerable versions could be attacked over-the-air, either via WiFi, the cellular modem, or Bluetooth).

Three out of the four bugs inherited from Qualcomm have already been revealed to the public. In CVE-2017-11043, there’s an integer overflow in the numap process (part of the WiFi code); in CVE-2016-3706 and CVE-2016-4429, there’s a stack overflow in a UDP RPC component. All three could be remotely exploitable.

A Qualcomm closed-source component is vulnerable to the yet-to-be-disclosed CVE-2017-6211.

37 of the bugs are rated “High”; five of those are also Qualcomm-specific, and one is an upstream fix in the Linux kernel that takes care of a privilege escalation bug.

Other vendors in the naughty corner include MediaTek and Nvidia, with three vulnerabilities each.

Source code patches will land within 48 hours, Pixel and Nexus firmware images are due December 5, US time, and the rest of the world can, as usual, wait for patches to wend their tired way down through vendors and carriers to land as an over-the-air update. Eventually. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/05/android_december_security_bulletin/

Turns out Leakbase can keep a secret: It has shut down with zero info

Stolen-creds-for-sale site Leakbase has gone dark and started redirecting to Troy Hunt’s HaveIBeenPwned.

Since it published only three tweets relating to the shutdown, Leakbase left plenty of room for speculation about the reason for its disappearance.

Brian Krebs associated the shutdown with the break-up of the Hansa “darkweb” operation in July.

That claim was enough to wake Leakbase for one last tweet a short time ago, disputing the connection.

Troy Hunt said he wasn’t surprised by the closure and told The Register that anyone running a leaked-credential database risks unwanted attention from law enforcement unless they’re very careful about how they handle the data they hold. That alone, he opined, could have been Leakbase’s undoing.

Like LeakedSource, which was shuttered in January this year, Leakbase let customers buy data sourced from breaches.

Hunt said luring customers to help them “use that data to disadvantage the victims of a breach” was always a high-risk model.

“I was contacted by a trusted source last week saying they would be going offline,” Hunt said, and Krebs got in touch “48 hours ago telling me it was redirecting”. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/05/leakbase_closes/

SEC’s cyber-cops cyber-file cyber-first cyber-fraud cyber-charges

The SEC’s new online crime unit says it has frozen the assets behind what officials believe to be a fraudulent cryptocurrency offering.

The US securities watchdog claims Canada-based PlexCorps and its owners, Dominic Lacroix and Sabrina Paradis-Royer, are violating anti-fraud statutes by promising US investors impossible returns on investments in their initial coin offering (ICO) operation.

According to the commission’s complaint [PDF], filed in the Eastern New York (Brooklyn) District Court, the PlexCorps ICO had drummed up $15m in investments for its PlexCoin currency, promising returns of 1,354 per cent on investment within 30 days.

The SEC says that Lacroix defrauded investors by hiding his own dubious financial history in Canada and presenting the PlexCoin launch as a means of funding the launch of PlexCorps as a global cryptocurrency operation.

Rather, they charge, Lacroix was simply planning to pocket the money:

Contrary to these false representations, and as Lacroix knew or recklessly disregarded, PlexCorps and the PlexCoin Token are a scam because: (a) there is no PlexCorps team, other than a handful of Lacroix’s employees in Quebec working on the project, and no group of experts working across the globe; (b) the reason that PlexCorps did not disclose the identity of its principal executive—Lacroix—was because Lacroix was a known recidivist securities law violator in Canada; (c) the proceeds from the PlexCoin ICO were not destined for business development but instead were intended to fund Lacroix and Paradis-Royer’s expenses including home decor projects; and (d) there was no reasonable basis to project returns on investment in Defendants’ scam.

On Monday, a judge granted the SEC’s order to freeze the assets of PlexCorps, Lacroix, and Paradis-Royer pending the outcome of the case. The freeze will effectively halt the planned ICO.

The case is the first to be undertaken by the SEC’s new Cyber Unit division. The group, founded in September, aims to take action against securities fraud operations that take place entirely online, such as ICO fraud, corporate hacking, and cases of financial misinformation spread through social media.

“This first Cyber Unit case hits all of the characteristics of a full-fledged cyber scam and is exactly the kind of misconduct the unit will be pursuing,” Cyber Unit chief Robert Cohen said of the suit.

“We acted quickly to protect retail investors from this initial coin offering’s false promises.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/05/secs_cybercops_cyberfile_cyberfirst_cyberfraud_cybercharges/

International team takes down virus-spewing Andromeda botnet

Police and private companies have taken down a massive botnet used to move malware onto compromised PCs.

The Andromeda botnet, also known as Gamarue, is thought to have spanned over two million PCs and distributed over 80 types of malware onto infected machines. It was shut down on November 29 in a combined operation by Europol, the FBI, security vendor ESET and Microsoft.

A suspect thought to be associated with the botnet was arrested in Belarus.

“This is another example of international law enforcement working together with industry partners to tackle the most significant cyber criminals and the dedicated infrastructure they use to distribute malware on a global scale,” said Steven Wilson, the Head of Europol’s European Cybercrime Centre.

“The clear message is that public-private partnerships can impact these criminals and make the internet safer for all of us.”

The Andromeda takedown was made possible by last year’s operations to close the Avalanche botnet. During that effort German police found important information about Andromeda on one of the computers seized during the anti-Avalanche operation and passed the details on to Europol.


Traffic from Andromeda-infected PCs has now been disrupted, with the authorities taking control of 1,500 malicious domains employed by the malware. Microsoft noted that these domains were contacted by over two million IP addresses in 223 countries and territories.

The Andromeda malware first appeared in September 2011 and was detected and blocked on over a million PCs last month. The code’s primary purpose was to harvest credentials but the malware’s highly modular design allowed operators to add in their own custom modules for things like web page content theft or spam campaigns.

Researchers at Microsoft and ESET spent 18 months following the Wauchos malware used to build the botnet to identify its command and control mechanisms. They then moved, with police help, to take control of the domains used to control the botnet, in the hope that it can’t be restarted.

“In the past, Wauchos has been the most detected malware family amongst ESET users, so when we were approached by Microsoft to take part in a joint disruption effort against it, to better protect our users and the general public at large, it was a no-brainer to agree,” said Jean-Ian Boutin, senior malware researcher at ESET.

“This particular threat has been around for several years now and it is constantly reinventing itself – which can make it hard to monitor. But by using ESET Threat Intelligence and by working collaboratively with Microsoft researchers, we have been able to keep track of changes in the malware’s behavior and consequently provide actionable data which has proven invaluable in these takedown efforts.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/05/international_team_takes_down_virusspewing_andromeda_botnet/