STE WILLIAMS

Mobile spyware maker mSpy leaks millions of records – AGAIN

It’s one thing to slip spyware onto somebody’s phone so you can surreptitiously intercept text messages, call logs, emails, location tracking, calendar information and record conversations – that kind of privacy-spurning stuff.

It’s another thing entirely to be the company that makes and markets the software… and – the coup de GAH! – to suffer a breach that exposes not only the private data of the buggers, but that of the buggees… Twice. In three years.

Yes, we’re talking about mSpy. The “ultimate tracking software” runs on mobile phones and tablets, including iPhones and Androids. The company claims that it helps more than a million paying customers spy on the mobile devices of their kids and partners.

(Is it illegal? Well, mumble mumble, totally legal if you tell the target… which of course you’ll do, right… well, anyway, it’s your problem.)

The most recent breach, first reported by security journalist Brian Krebs on Tuesday, involves what he says is millions of sensitive records published online, “including passwords, call logs, text messages, contacts, notes and location data secretly collected from phones running the stealthy spyware.”

The open database was discovered by security researcher Nitish Shah.

It’s since been taken offline, but while it was flapping open, anyone could query what Krebs said were “up-to-the-minute mSpy records for both customer transactions at mSpy’s site and for mobile phone data collected by mSpy’s software,” all accessible without requiring user authentication.

That includes usernames, passwords and the private encryption keys of each mSpy customer who logged in to the mSpy site or purchased an mSpy license over the past six months. Shah said that with the private key, anyone could track and view details of a mobile device running the software.

But wait, there’s more, Krebs reports:

In addition, the database included the Apple iCloud username and authentication token of mobile devices running mSpy, and what appear to be references to iCloud backup files. Anyone who stumbled upon this database also would have been able to browse the WhatsApp and Facebook messages uploaded from mobile devices equipped with mSpy.

That means that someone could have spied on an indeterminate number of kids, besides others under mSpy surveillance, given that some parents install mSpy in order to keep track of their children.

One of the testimonials from mSpy’s site:

Why did I decide to use mSpy? Simple, I am not gonna sit and wait for something to happen. I read about Amanda Todd and other kids. Seriously, my son’s safety costs way more than $30.

Unfortunately, when you collect this type of private information, you create the opposite of keeping kids safe. You entrust a company with your child’s details, stored in a database that’s a plum target for scumbags such as trolls, stalkers and child predators. The last thing in the world that any parent would want is for such people to have access to their children’s social media messages or account details, let alone be able to track their whereabouts and eavesdrop on their conversations. But that, unfortunately, is the risk you run when you install spyware: you risk letting anybody on the web spy on your lover or child.

Shah said he was ignored when he tried to report the breach to mSpy. Krebs had better luck: after he contacted the company on 30 August, he got this reply from mSpy’s chief security officer, who identified himself only as “Andrew”:

We have been working hard to secure our system from any possible leaks, attacks, and private information disclosure. All our customers’ accounts are securely encrypted and the data is being wiped out once in a short period of time. Thanks to you we have prevented this possible breach and from what we could discover the data you are talking about could be some amount of customers’ emails and possibly some other data. However, we could only find that there were only a few points of access and activity with the data.

Krebs notes that some of those “points of access” are his and Shah’s. They were both able to see their own activity on the site in real-time via the exposed database.

The first time that someone tore a hole in mSpy and published its database on the dark web was in 2015.

At the time, for more than a week, mSpy denied the breach, in spite of customers confirming that their information was involved. It finally acknowledged to the BBC that yes, the breach had occurred.

It blamed blackmailers and said it was doubling up on security. Yet Krebs reports that more than two weeks after news of that first breach broke, the company still hadn’t disabled links to “countless” screenshots on its servers that were lifted from mobile devices running mSpy.

Would you really trust this company enough to put its software on your loved ones’ phones? No, neither would we.

To protect against someone doing it to you, make sure to secure your phone with a passcode that you don’t share with anyone: it can help to prevent spyware like this from sneaking onto your phone. Read our 10 tips for securing your smartphone for more advice on protecting your mobile data.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/g7kW6BtycUQ/

HTTPS crypto-shame: TV Licensing website pulled offline

The UK’s TV Licensing agency has taken its website offline “as a precaution” after being blasted for running transactional pages that were not sent over HTTPS.

The publicly funded outfit had been criticised for inviting folk to submit sensitive data over unencrypted links. Just a few hours after proclaiming “we will soon migrate our entire website to HTTPS”, it announced the site was coming down.

The telly taxman had been maintaining a secure version of its website, but it also ran an HTTP branch – which crucially didn’t redirect over to HTTPS, even for forms handling sensitive personal information.

Not secure! Google warning for UK’s TV Licensing website

Following changes it made in late July, Google Chrome began clearly marking the HTTP version of the website as “Not Secure”. Despite the HTTPS-everywhere push, the authority supports a website where data is exchanged without encryption and even goes out of its way to ensure this version appears first in search engine listings.

Yesterday, the form for submitting a name and email address through the site – step one for applying for a TV licence – was not secure. The form for home addresses wasn’t either. Worse still, as of Wednesday 5 September, the form for submitting bank details for setting up a direct debit was also insecure.

Techie Mark Cook let tvlicensing.co.uk know via Twitter. He sent screenshots of the insecure connection process and later blogged about his concerns.

“It’s HTTP through the whole thing. Name, address, email, and bank details,” infosec consultant Scott Helme sighed on Twitter. “They do card payments over HTTPS but only because it’s an external provider.”

After some prodding, TV Licensing told Cook that all was well, advising him to ignore any warning from Chrome. “Our website is secure and security certificates are up to date. Pages where customers enter data are HTTPS. Non-HTTPS pages are safe to use despite messages from some browsers (e.g. Chrome) that say they are not.”

TV Licensing told El Reg that herding consumers towards unencrypted transactional pages was a slip-up it was correcting:

We take security very seriously which is why we use encryption for all payment transactions. However, an issue has been brought to our attention over the recent level of security on transactional pages which were previously fully secure via HTTPS, and as a precaution, we have taken the website offline until this is resolved and are working urgently to fix it. We’ve identified that this issue has happened very recently, and we’re not aware of anyone’s data being compromised.

TV or not TV, that is the question

One common misconception is that secure HTTPS connections are only required on pages that are highly sensitive, such as those used for processing payments. The UK’s National Cyber Security Centre advises that all websites should use HTTPS, “even if they don’t include private content, sign-in pages, or credit card details”, as Cook pointed out.

Running an unencrypted site means hackers might be able to snoop on traffic or inject code into its pages, perhaps via a man-in-the-middle attack or similar.

TV Licensing does have a secure version of its website, “it’s just that you need to manually type in the ‘s’ after the http, which is of course ridiculous,” Cook said. “The TV Licensing website specifically tells search engines to use the insecure version over the secure version by using a canonical tag.”

Cook’s concerns could be addressed by TV Licensing dropping its odd search engine preferences alongside adding a few lines of code to redirect all HTTP requests to HTTPS.
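Those “few lines of code” really are few. Here’s a minimal sketch of such a redirect using only Python’s standard library – an illustration of the principle, not TV Licensing’s actual stack (the handler and function names are invented):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def https_location(host: str, path: str) -> str:
    """Build the HTTPS URL a plain-HTTP request should be redirected to."""
    return f"https://{host}{path}"

class RedirectToHTTPS(BaseHTTPRequestHandler):
    """Answer every plain-HTTP request with a 301 pointing at HTTPS."""
    def do_GET(self):
        self.send_response(301)  # permanent, so browsers cache the redirect
        self.send_header("Location",
                         https_location(self.headers.get("Host", ""), self.path))
        self.end_headers()

    do_POST = do_GET  # forms carrying personal data should never ride plain HTTP

# To serve: HTTPServer(("", 80), RedirectToHTTPS).serve_forever()
```

In production this would normally live in the web server or load balancer config rather than application code, ideally followed by an HSTS header so browsers stop attempting plain HTTP at all.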

TV Licensing customer support said on Wednesday, soon after replying to Cook, that it would “soon migrate our entire website to HTTPS”, acknowledging the arguments of Cook and others. Just hours later, the site was taken offline and is still not back up, though the error page is on HTTPS.

Cook said: “I’m really glad that tvlicensing are taking steps to make their website more secure. My post was written after tvlicensing.co.uk had publicly responded on Twitter, saying their website was secure… [which] suggested at the time they were going to take no action. It’s good to hear that it was a temporary glitch, but it would be reassuring to know exactly what time frame this was over and what tvlicensing.co.uk’s estimate of potentially affected customers is.”

Infosecurity consultant Paul Moore commented: “There really are no words for such ineptitude, but at least they’re moving to HTTPS everywhere as a result of this.” ®

Bootnote

A shout-out to readers Paul R and William B who expressed concerns that the connection to TV Licensing’s website was untrusted late last year. Their hackles were raised by warnings generated through Firefox, related to TV licence renewal emails. The Google Chrome warnings – along with Cook’s blog post – appear to have brought a long-simmering issue to the boil.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/06/tv_licensing_https_fail/

Understanding & Solving the Information-Sharing Challenge

Why cybersecurity threat feeds from intel-sharing groups diminish in value and become just another source of noise. (And what to do about it.)

Cybersecurity information sharing is not a new topic. In fact, we’ve been talking about it for years. We know we should share information and we expect others to as well. We even see pockets of success, typically among peers who are in the same industry and have a personal or long-term business relationship. They have established a level of trust that allows them to feel comfortable exchanging information that is truly useful.

However, when we try to scale that type of exchange through government and industry groups that exist to promote and facilitate information sharing, we’re less successful. At a corporate level, because of real or perceived liabilities, organizations often aren’t as willing to share as individuals are, so information sharing on a broader scale, in a way that really benefits larger communities of defenders, hasn’t taken off. The quantity of active participants and the quality of information shared simply are not there to allow many of these exchanges to work as effectively as intended.

Quality and Quantity: A Cycle of Diminishing Value
Many organizations treat information sharing as another check box. They want to be part of an industry-specific Information Sharing and Analysis Center (ISAC) or a government sharing group, such as the Department of Homeland Security’s Automated Indicator Sharing capability or the UK’s Cyber Security Information Sharing Partnership. But they haven’t set up an internal program to identify the type of information their organization can share and how they will share it. Instead, they are focused on receiving information that others share. Eventually, and because sharing groups have guidelines they enforce, organizations will begin to share. But this raises the issue of quality.

As group membership grows, trust weakens, and many organizations are less comfortable sharing information that they have personally found to be of value — for example, from a breach they faced. Instead, organizations tend to share indicators of compromise such as IP addresses and domains. Information sharing becomes automated, with little or no context and sometimes regurgitated from another source. Without context, other participants don’t know if the information is relevant to their organization and should be prioritized. This creates a waning interest in the sharing group as members become overwhelmed with quantity and lack of quality. The threat feed from this intelligence-sharing group diminishes in value and becomes another source of noise.

Groups that can overcome the quality hurdle and find ways to share rich, contextual threat intelligence within communities of interest often rely on the largest members to initially fill the queue with shared intelligence. The hope is that as time goes on, the smaller companies will begin to share as well. This rarely happens, though. Only the more progressive, smaller companies with more developed threat operations programs are able to share high-value information, with the remainder acting primarily as consumers. As a feeling of inequality spreads, the entire sharing construct eventually falls apart.

Breaking the Cycle: 3 Steps
But it isn’t all gloom and doom. In fact, there are three areas where we can focus to strengthen information sharing and allow it to deliver value at scale as intended.

Step 1. Establish information sharing and consumption programs.
Organizations need to understand what they can share from a legal and compliance perspective. This will allow them to strike a balance so they don’t overreact and shut down sharing but also don’t inadvertently share something that is proprietary or protected under privacy laws. With clear guidelines, security teams can do better at providing high-quality information with context and relevance. They also need to understand what they are going to consume and how they will use it. This will ensure they’re doing their part to derive value from the intelligence they receive and not suffer from data overload and waste valuable resources.

Step 2. Monitor for quality.
As information-sharing groups have grown, a surge in automated sharing of tactical information has become their downfall. Sharing groups must monitor information for quality. It must be curated to ensure there is value in passing it along to other members, either as “known bad” or packaged with context so that recipients can determine relevancy within their own environments.
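To make the curation idea concrete, here’s a hedged sketch of a gatekeeper a sharing group might run over its inbound feed. The field names (“indicator”, “context”, “known_bad”) are invented for illustration and don’t correspond to any real ISAC schema:

```python
# Illustrative curation filter for a sharing group's inbound indicator feed.
# An entry passes if it is explicitly curated as known-bad, or carries
# enough context for recipients to judge relevance on their own networks.

REQUIRED_CONTEXT = {"threat_type", "first_seen", "confidence"}

def worth_sharing(entry):
    """Decide whether an indicator adds value or is just noise."""
    if entry.get("known_bad"):
        return True
    return REQUIRED_CONTEXT.issubset(entry.get("context", {}))

def curate(feed):
    """Keep only the entries worth passing along to members."""
    return [e for e in feed if worth_sharing(e)]

feed = [
    {"indicator": "203.0.113.9", "context": {}},           # bare IP: noise
    {"indicator": "evil.example", "known_bad": True},      # curated known-bad
    {"indicator": "198.51.100.7",
     "context": {"threat_type": "C2", "first_seen": "2018-09-01",
                 "confidence": "high"}},                   # contextual
]
print([e["indicator"] for e in curate(feed)])  # the bare IP is dropped
```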

Step 3. Devise ways for all to participate.
The writing is on the wall: Measuring success by numbers isn’t the path to more effective information sharing. To maintain quality and balance quantity, we need to consider forming subgroups with trust built into them. At the same time, smaller organizations also need access to high-value threat information. We must accept that at least initially, they may not be able to contribute much information and will mostly be consumers.

A two-pronged approach can help to address their needs. First, smaller organizations should join or create their own industry-specific sharing community and then actively participate in sharing contextual, relevant intelligence that they have seen on their network. In turn, this will help larger industry sharing groups be more successful at protecting the industry as a whole — including the smaller companies that are part of their ecosystem. Second, small organizations that contract with managed security service providers (MSSPs) should rely on their providers to offer such intelligence. This community defense model is often part of the promise MSSPs make to their customers, so smaller companies should make sure their vendor is delivering.

As we break the cycle of diminishing value by getting a handle on the quantity/quality challenge, information exchanges will begin to thrive. Finally, we’ll be able to do less talking and more sharing.


Black Hat Europe returns to London Dec 3-6 2018  with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

As Senior VP of Strategy of ThreatQuotient, Jonathan Couch utilizes his 20+ years of experience in information security, information warfare, and intelligence collection to focus on the development of people, process, and technology within client organizations to assist in … View Full Bio

Article source: https://www.darkreading.com/risk/understanding-and-solving-the-information-sharing-challenge-/a/d-id/1332717?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US to Charge North Korea for Sony Breach, WannaCry

The DoJ plans to charge North Korean threat actors for their involvement in two major cyberattacks, US officials report.

The Department of Justice is preparing to charge North Korean hackers for the 2014 Sony cyberattack and WannaCry, the May 2017 global ransomware campaign, US officials report.

A New York Times report on the indictment states the US government has long had the suspect, North Korean spy Pak Jin-hyok, on its radar. Intelligence officials believe Pak worked with North Korea’s Reconnaissance General Bureau, the country’s equivalent of the CIA and the same organization believed to be responsible for both WannaCry and the Bangladesh bank thefts.

The indictment was delayed, the New York Times continues, because much of the incriminating information officials wanted to leverage against Pak was classified and could not be used.

In a separate report, Reuters implies the DoJ will charge multiple North Korean hackers for both the Sony and WannaCry attacks. It also states the charges are part of a US government strategy to prevent future cyberattacks by publicly identifying the alleged threat actors.

Read more details here.

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/us-to-charge-north-korea-for-sony-breach-wannacry/d/d-id/1332748?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The SOC Gets a Makeover

Today’s security operations center is all about reducing the number of alerts with emerging technologies – and enhancing old-school human collaboration. Here’s how some real-world SOCs are evolving.

Blame it on the success of the SIEM. For many security operations center (SOC) managers, the security information and event management system was both a blessing and a curse: It was a way to consolidate and correlate security alerts from firewalls, routers, IDS/IPS, antivirus software, and servers, for example, into a centralized console. But with the recent wave of new security tools, threat intelligence feeds, and constantly mutating threats, SOCs are drowning in anywhere from thousands to a million security alerts daily. 

“A lot of companies have tool fatigue right now. There are a lot of tools that are partially implemented and not getting the care and feeding they need,” says DJ Goldsworthy, director of security operations and threat management at Aflac.

The flood of alerts and out-of-tune tools, compounded by the industry’s persistent talent gap and high turnover rate for junior-level SOC analysts, have forced some organizations to rethink and retool how they organize and run their SOCs.

In many cases, the evolution is being spurred by another tool: the new generation of security orchestration and automation platforms, whose automated playbooks take over some of the manual work of clicking through each and every alert, looking for that deadly needle in the haystack.
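In spirit, an orchestration playbook is just a mapping from alert type to an ordered list of response steps that run without an analyst clicking through each alert. A toy sketch, with alert types and actions invented for illustration:

```python
# Toy security-orchestration playbook: route each alert type through an
# ordered list of automated steps. All names here are illustrative.

def enrich_with_threat_intel(alert):
    alert.setdefault("steps", []).append("enriched")
    return alert

def quarantine_host(alert):
    alert.setdefault("steps", []).append("quarantined")
    return alert

def open_ticket(alert):
    alert.setdefault("steps", []).append("ticketed")
    return alert

PLAYBOOKS = {
    "phishing": [enrich_with_threat_intel, open_ticket],
    "malware":  [enrich_with_threat_intel, quarantine_host, open_ticket],
}

def run_playbook(alert):
    """Execute every step registered for the alert's type; unknown
    types fall through untouched, for a human analyst to pick up."""
    for step in PLAYBOOKS.get(alert.get("type"), []):
        alert = step(alert)
    return alert

handled = run_playbook({"type": "malware", "host": "wkstn-042"})
print(handled["steps"])  # ['enriched', 'quarantined', 'ticketed']
```

Real platforms such as the ones discussed below add branching, approvals and audit trails, but the core idea is this routing table.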

SOC operators are reorganizing their people power, too, collapsing the virtual hierarchical walls between Tier 1, 2, and 3 SOC analysts and fostering a more cooperative and collaborative team operation where analysts work together to analyze and troubleshoot an event – and don’t simply pass the baton up the chain after completing their task.

Josh Maberry, director of security operations for managed security services provider Critical Start, says once security alerts from multiple tools started flowing into the SIEM way too fast and furiously, the floodgates opened, and SOC analysts became overwhelmed. “There’s only so much you can manually get to in one day, only so much you can see,” says Maberry, whose firm built its own event orchestration platform, which it spun off as sister company Advanced Threat Analytics. “So [SOC] analysts began to drown … The whole thing became an events-to-bodies ratio, and that’s no way to win.”

Organizations struggle to staff their SOC operations today, both because of that losing events ratio and because there just aren’t enough candidates to fill the jobs. Some 80% of organizations don’t have enough analysts to run their SOC, according to a new report from security event orchestration vendor Demisto. It takes eight months, on average, to train SOC analysts for readiness; meantime, there’s a two-year turnover for one-quarter of all SOC analysts, the study shows. The fallout: it takes organizations an average of 4.35 days to resolve a security incident.

More than half either don’t have incident response process playbooks in place, or they do but don’t bother updating them, the study found. “One relatively overlooked side effect of the alert fatigue and day-to-day firefighting that security teams face is the stagnancy of security processes. When analysts are strapped for time, they find it difficult to capture the gaps in current processes and update them on an ongoing basis,” the report states.

Taking Back the SOC
One of the larger SOC operations is that of security vendor Symantec, with six SOC locations worldwide. Symantec’s Herndon, Va., site houses both the company’s internal SOC as well as a SOC that manages security event and response services for its customers. A team of more than 500 security professionals makes up Symantec’s global SOC operation, which handles more than 150 billion security logs per day.

Symantec SOC in Herndon, Va. --- Courtesy of Symantec

Symantec’s internal SOC analysts are designated by level of experience and seniority (think: tiers), but they often work as a team when a security event occurs. Tony Martinez, cybersecurity operations lead for the Joint Security Operations Center (JSOC) in Herndon, says all SOC analysts – even Tier 1 analysts – are encouraged to handle an incident “end to end,” meaning from detection to resolution/response. “They don’t have to just throw the ticket over the fence” to a senior analyst, he says. “They ask a senior analyst to assist them.”

Symantec’s JSOC recently added Splunk’s Phantom security orchestration and automation platform to consolidate security tools and alerts. On the managed security services side of the house, SOC analysts sit in close proximity to foster more collaboration during their shifts, and Tier 1/entry-level SOC analysts undergo three months of intense training plus a timed “queue” test that simulates incoming incidents in the SOC. Not only do they have to solve each issue correctly in the queue test, but they also must explain why they chose a specific answer, says Steve Meckl, director of managed security services at the Symantec SOC.

Junior SOC analysts ultimately get to take on more regular calls with customers and attend on-site customer visits for face-to-face meetings.

“We don’t work in silos at all,” Meckl says. All SOC analysts get to work on a problem from start to finish, with the junior analysts getting input from a senior one.

This more advanced and hands-on role for entry-level SOC analysts is becoming a trend: the Tier 1 SOC analyst role is expected to evolve into more of a Tier 2 role, in which analysts can investigate a flagged alert themselves rather than passing it over to a Tier 2 SOC analyst.

In most SOCs, Tier 3 analysts are the more skilled analysts who can investigate a threat or malware more deeply and do forensics. As more of the Tier 1 analysts’ work gets picked up by automation (think: security orchestration and automation tools), Tier 3 gains the bandwidth to conduct more proactive operations like threat hunting, while Tiers 1 and 2 take on more of the investigative duties.

Brian Genz, a senior engineer at Northwestern Mutual and an expert in security orchestration and automation, response, and threat hunting, says the insurance company decided to fashion its SOC as slimmer and more collaborative to better thwart rapidly evolving threats.

Northwestern Mutual’s implementation of a new security orchestration and automation tool has helped shape that transformation, with automated playbooks for handling events as they come in. “Our junior SOC analysts are becoming more engaged in the what and why now – not just ‘close this ticket to hit your metrics,'” Genz says.

Genz says the insurer, which uses an MSSP for around-the-clock security ops, actually considers its SOC an IR analyst operation. “So we don’t typically use the term ‘SOC,'” he says. “We try to put people in the shoes of an incident response analyst.”

Homegrown
Sometimes it takes a little homegrown technology to streamline SOC operations. Take Aflac, which in its SOC runs a centralized SIEM with a behavioral analytics platform that handles a terabyte of security log data each day. The system is streamlined with a proprietary risk algorithm created by Aflac that aggregates and filters alerts. Aflac has reduced its alert count by about 70% with this combination of automation, analytics, and risk scoring.
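Aflac hasn’t published its algorithm, but the general aggregate-then-score shape of such filtering can be sketched as follows. The weights, threshold and field names below are invented for illustration, not Aflac’s:

```python
from collections import defaultdict

# Invented severity weights; any real risk algorithm would be proprietary.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7}

def aggregate(alerts):
    """Collapse raw alerts into one bucket per (host, rule) pair,
    keeping a count instead of a separate line for every firing."""
    buckets = defaultdict(lambda: {"count": 0, "severity": "low"})
    for a in alerts:
        b = buckets[(a["host"], a["rule"])]
        b["count"] += 1
        if SEVERITY_WEIGHT[a["severity"]] > SEVERITY_WEIGHT[b["severity"]]:
            b["severity"] = a["severity"]
    return buckets

def risk_score(bucket):
    return SEVERITY_WEIGHT[bucket["severity"]] * bucket["count"]

def triage(alerts, threshold=6):
    """Surface only aggregated buckets whose score clears the threshold."""
    return {k: b for k, b in aggregate(alerts).items()
            if risk_score(b) >= threshold}

raw = [{"host": "web01", "rule": "brute-force", "severity": "medium"}] * 3 \
    + [{"host": "db02", "rule": "port-scan", "severity": "low"}]
print(list(triage(raw)))  # only the repeated brute-force bucket survives
```

Even this crude version shows why aggregation plus scoring cuts alert volume: repeated low-value firings collapse into one record, and one-off noise never clears the bar.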

It has also changed the role of Aflac’s entry-level SOC analysts. They’re not manually clicking through raw alerts and either ignoring or escalating them like traditional Tier 1 analysts. “This has automated Tier 1 to an extent,” Aflac’s Goldsworthy says.

The junior analysts at Aflac examine the insurer’s pre-vetted alerts: “They analyze them, do a smell test, and if they think it’s worthy of investigation, they send it to a Tier 2 analyst,” he says.

Even so, the SOC still operates in a traditional tiered manner, but each level has more advanced duties than in the old days. Tier 2 analysts handle preliminary incident investigation and then hand off confirmed events to the incident response team. Tier 3 analysts provide forensics investigations and typically have deep endpoint and network analysis skills, Goldsworthy says.  

Another tool that’s changing Aflac’s SOC operation is deception technology – a sort of next-generation honeypot – to further minimize its false-positive alerts. Goldsworthy calls its Attivo Networks deception tool “an insurance policy for the unknown.”

Goldsworthy says deception decoys give SOC analysts a “unique perspective” about attackers and their methods, which they then can share with other members of the team and, in turn, respond accordingly with proper defenses. “Deception also allows our security team to collaborate with and enable the business by allowing for more rapid adoption of new technologies because deception can be deployed wherever the business needs IT to go,” he says.

(Next Page:  SOC Analyst Mashup)

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/risk/the-soc-gets-a-makeover/d/d-id/1332744?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ungagged Google warns users about FBI accessing their accounts

Dozens of people say they’ve received an email from Google informing them that the FBI has been sniffing around for information on their accounts. Now that a gag order has been lifted, the company is able to “disclose the receipt of the legal process” to any affected users, Google said.

That’s not entirely surprising: the gag orders that often accompany such requests keep organizations such as Google, Microsoft, Facebook and Apple from disclosing the order for a given period of time. Any email provider worth its salt nowadays issues transparency reports, and the biggest companies have called for increased transparency in government surveillance requests.

But these nondisclosure orders can be lifted, cybercrime lawyer Marcia Hoffman told Motherboard:

It looks to me like the court initially ordered Google not to disclose the existence of the info demand, so Google was legally prohibited from notifying the user. Then the nondisclosure order was lifted, so Google notified the user. There’s nothing unusual about that per se. It’s common when law enforcement is seeking info during an ongoing investigation and doesn’t want to tip off the target(s).

Who are the targets in the FBI’s inquiry – targets who can now be safely tipped off?

The emails lack specific details about whatever the FBI was investigating, though they did contain a case number that corresponded to a sealed case when Motherboard looked it up on PACER.

Some who received the letters posted screenshots in online forums. From one such:

Google received and responded to legal process issued by Federal Bureau of Investigation (Eastern District of Kentucky) compelling the release of information related to your Google account. A court order previously prevented Google from notifying you of the legal process. We are now permitted to disclose the receipt of the legal process to you.

Though the letters had scanty detail, some of the recipients have a hunch regarding what it’s all about.

In threads on Reddit, Twitter, and Hack Forums, conjecture is that the FBI was looking for information on people associated with LuminosityLink: an easy-to-use remote access Trojan (RAT) that was selling for as little as $39.99.

…until, that is, it wasn’t. Europol snuffed out LuminosityLink in February, following a UK-led dragnet in September 2017 in which over a dozen law enforcement agencies in Europe, Australia and North America went after hackers linked to the tool.

In July, 21-year-old Kentuckian Colton Grubbs pleaded guilty to federal charges of creating, selling and providing technical support for the RAT to his customers, some of whom used it to gain unauthorized access to thousands of computers across 78 countries worldwide.

Some of those who received the notice from the newly ungagged Google said that they consider the mystery solved: they had purchased LuminosityLink, which may well have caught the attention of the FBI.

Buying LuminosityLink doesn’t necessarily brand somebody a cybercrook. Its marketing had a split personality: it was sold as a legitimate tool for Windows admins to “manage a large amount of computers concurrently”. On the flip side, it was also a cheap, easy-to-use, multi-purpose pocket knife with a slew of malware tools you could flip out. As a RAT, it could be installed surreptitiously, without the user being aware, and it disabled anti-virus and anti-malware protection on targets’ computers before going to work: switching on webcams to spy on video feeds; accessing and viewing documents, photographs, and other files; stealing passwords; and/or installing a keylogger to automatically record victims’ keystrokes.

Some bought it to do legitimate systems administration. Others say they bought it for research purposes. Their activities would only be illegal if they used the tool’s more nefarious capabilities.

While it’s not unusual for a gag order to be subsequently lifted, it is perhaps unusual for the FBI to try to track down every person who purchased software that may not be considered illegal, as one lawyer pointed out to Motherboard. Gabriel Ramsey, a lawyer with a specialty in cybersecurity and internet law, said that just buying a tool like LuminosityLink doesn’t determine guilt:

If one is just buying a tool that enables this kind of capability to remotely access a computer, you might be a good guy or you might be a bad guy. I can imagine a scenario where that kind of request reaches – for good or bad – accounts of both type of purchasers.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gm_bgCwlIiI/

Ungagged Google warns users about FBI accessing their accounts

Dozens of people say they’ve received an email from Google informing them that the FBI has been sniffing around for information on their accounts. Now that a gag order has been lifted, the company is able to “disclose the receipt of the legal process” to any affected users, Google said.

That’s not entirely surprising: the gag orders that often accompany such requests keep organizations such as Google, Microsoft, Facebook and Apple from disclosing the order for a given period of time. Any email provider worth its salt nowadays issues transparency reports, and the biggest companies have called for increased transparency in government surveillance requests.

But these nondisclosure orders can be lifted, cybercrime lawyer Marcia Hoffman told Motherboard:

It looks to me like the court initially ordered Google not to disclose the existence of the info demand, so Google was legally prohibited from notifying the user. Then the nondisclosure order was lifted, so Google notified the user. There’s nothing unusual about that per se. It’s common when law enforcement is seeking info during an ongoing investigation and doesn’t want to tip off the target(s).

Who are the targets in the FBI’s inquiry – targets who can now be safely tipped off?

The emails lack specific details about whatever the FBI was investigating, though they did contain a case number that corresponded to a sealed case when Motherboard looked it up on PACER.

Some who received the letters posted screenshots in online forums. From one such:

Google received and responded to legal process issued by Federal Bureau of Investigation (Eastern District of Kentucky) compelling the release of information related to your Google account. A court order previously prevented Google from notifying you of the legal process. We are now permitted to disclose the receipt of the legal process to you.

Though the letters had scanty detail, some of the recipients have a hunch regarding what it’s all about.

In threads on Reddit, Twitter, and Hack Forums, conjecture is that the FBI was looking for information on people associated with LuminosityLink: an easy-to-use remote access Trojan (RAT) that was selling for as little as $39.99.

…until, that is, it wasn’t. Europol snuffed out LuminosityLink in February, following a UK-led dragnet in September 2017 that involved over a dozen law enforcement agencies in Europe, Australia and North America that went after hackers linked to the tool.

In July, 21-year-old Kentuckian Colton Grubbs pleaded guilty to federal charges of creating, selling and providing technical support for the RAT to his customers, some of whom used it to gain unauthorized access to thousands of computers across 78 countries worldwide.

Some of those who received the notice from the newly ungagged Google said that they consider the mystery solved: they had purchased LuminosityLink, which may well have caught the attention of the FBI.

Buying LuminosityLink doesn’t necessarily brand somebody a cybercrook. Its marketing had a split personality: on one side, it was sold as a legitimate tool for Windows admins to “manage a large amount of computers concurrently”. On the flip side, it was a cheap, easy-to-use, multi-purpose pocket knife with a slew of malware tools to flip out: a RAT that could be installed surreptitiously, without the user being aware, and that disabled anti-virus and anti-malware protection on targets’ computers before going to work – switching on webcams to spy on video feeds, accessing and viewing documents, photographs and other files, stealing passwords, and/or installing a keylogger to automatically record victims’ keystrokes.

Some bought it to do legitimate systems administration. Others say they bought it for research purposes. Their activities would only be illegal if they used the tool’s more nefarious capabilities.

While it’s not unusual for a gag order to be subsequently lifted, it is perhaps unusual for the FBI to try to track down every person who purchased software that may not be considered illegal, as one lawyer pointed out to Motherboard. Gabriel Ramsey, who specializes in cybersecurity and internet law, said that merely buying a tool like LuminosityLink doesn’t determine guilt:

If one is just buying a tool that enables this kind of capability to remotely access a computer, you might be a good guy or you might be a bad guy. I can imagine a scenario where that kind of request reaches – for good or bad – accounts of both type of purchasers.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gm_bgCwlIiI/

Thousands of unsecured 3D printers discovered online

You’ve installed an exciting new 3D printer in the office and decide you want to access it remotely because – heck – that sounds convenient… now what do you do?

According to an alert put out by the SANS Internet Storm Center (ISC), for 3,759 owners using an open-source monitoring utility called OctoPrint, the answer was to hook up their expensive 3D printer to the internet without bothering with the nuisance of authentication.

This is a bad idea because it’s trivially easy for someone with malicious intentions to spot the unsecured printer using Shodan (a search engine for internet-connected devices). In fact, the ISC was tipped off about the issue by someone who’d done just that.

The great thing about OctoPrint is how easy it makes controlling a complex 3D printer – but with access control turned off, that same ease extends to any other internet user who connects to it.

In this state, a hacker could steal valuable IP by downloading previous print job files in the unencrypted G-code format or, worse, try to damage the printer by uploading specially-crafted print files. And because most 3D printers have a built-in webcam for print monitoring, an attacker could even watch their malicious print handiwork from afar.
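To see why that’s an IP leak, it helps to remember that G-code is plain, unencrypted text – anyone who can download a job file can read the toolpath straight out of it. A minimal sketch (the sample file and helper function below are made up for illustration, not taken from OctoPrint):

```python
# G-code is plain text: each line is a command such as "G1 X10 Y20 E0.4".
# This hypothetical sample stands in for a job file grabbed from an open
# OctoPrint instance.
sample = """\
; sliced for widget_v2 - proprietary part
G28                        ; home all axes
G1 X10.0 Y20.0 E0.4 F1500  ; extruding move
G1 X30.5 Y20.0 E0.8 F1500  ; extruding move
"""

def extract_moves(gcode: str):
    """Return the (x, y) coordinates of every extruding G1 move."""
    moves = []
    for raw in gcode.splitlines():
        line = raw.split(";", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split()
        if parts[0] != "G1":
            continue
        # Each remaining field is a letter followed by a value, e.g. "X10.0"
        fields = {f[0]: f[1:] for f in parts[1:]}
        if "E" in fields and "X" in fields and "Y" in fields:
            moves.append((float(fields["X"]), float(fields["Y"])))
    return moves

print(extract_moves(sample))  # → [(10.0, 20.0), (30.5, 20.0)]
```

A few lines of parsing are enough to recover a part’s geometry – exactly the kind of intellectual property an open instance gives away.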

A blog response by OctoPrint’s developers to the ISC warning was incredulous:

OctoPrint is connected to a printer, complete with motors and heaters. If some hacker somewhere wanted to do some damage, they could.

Open access could even be used to compromise the firmware, it said, but “catastrophic failure” was the main risk.

The Shodan trawl showed that the worst offenders were in the US, which accounted for 1,585 printers, ahead of Germany on 357, France on 303, the UK on 211, and Canada on 162.

This only covers OctoPrint, of course, which raises the possibility that owners using other 3D printer monitoring software might be making the same mistake.

What to do?

This is a problem caused by bad configuration and not the OctoPrint software, which clearly warns against enabling access without access control. Any owner exposing their printer to the internet without this must have chosen to do so.

However, even with access control turned on, anyone can still view read-only data – not something an owner is likely to want to allow. To avoid this, OctoPrint’s developers recommend that users consider an alternative means of remote access, such as a plug-in like OctoPrint Anywhere or Polar Cloud, a VPN, or an Apache or Nginx reverse proxy.
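To give a flavour of the reverse-proxy option, here is a sketch of an Nginx configuration that puts a password in front of the whole web interface. The hostname, certificate paths and htpasswd file are hypothetical; the one assumption taken from OctoPrint itself is its default listening port, 5000:

```nginx
# Hypothetical /etc/nginx/sites-available/octoprint - illustrative only.
server {
    listen 443 ssl;
    server_name printer.example.com;

    ssl_certificate     /etc/ssl/certs/printer.pem;
    ssl_certificate_key /etc/ssl/private/printer.key;

    # Require a username/password before anything reaches OctoPrint,
    # closing off the anonymous read-only view as well.
    auth_basic           "OctoPrint";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:5000;   # OctoPrint's default port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Websocket upgrade for the live printer-state stream
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The point is that the printer itself never faces the internet – only the authenticated, TLS-terminating proxy does.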


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hZVtpwN0j2g/


NASA ‘sextortionist’ allegedly tricked women into revealing their password reset answers, stole their nude selfies

A former NASA contractor was arrested on Wednesday and charged with allegedly sextorting women.

Richard Gregory Bauer, 28, was detained at his Los Angeles home by special agents from the space agency’s internal watchdog. Bauer is accused of stalking, unauthorized access to protected computers, and aggravated identity theft, according to a 14-count indictment returned by a federal grand jury on August 28 – and obtained by The Register this week.

US prosecutors claim that between February 7, 2015, and June 11, 2018, Bauer – who worked at NASA’s Armstrong Flight Research Center in Edwards, California – harassed seven women over the internet by claiming he had compromising pictures of them naked. He allegedly threatened to spread the images publicly online unless they gave him additional X-rated snapshots.

For six of the women, according to the US government, Bauer did have nude pictures, which he obtained by hacking the victims’ accounts with Facebook, Google, and other online services. The indictment stated Bauer, without attempting to conceal his identity, contacted some of the women through Facebook messages to ask them a series of questions, under the pretense that he needed survey data for his “human societies class.”

Some of the questions were those used by online services to reset passwords, such as the city where the victim’s parents met, the name of her first pet, or the make of her first car.

With the answers provided, the indictment stated, Bauer was able to log into the victims’ online profiles and private photo albums, where in most cases he found explicit images he could use against them. He is then alleged to have contacted the victims under a different identity with messages like this:

So a mutual friend gave me some picture of you, and said you would give me more. I liked what I saw. I assume this is you? i have mannnnnny more. So what do you say about giving me some more? I dont want to put these somewhere…

The charges against Bauer also claimed he convinced some victims to install malware on their computers, under the pretense that he needed help testing some image enhancement software he’d written.

Some of the victims responded by changing their email address or deleting their Facebook account; however, the indictment stated, Bauer continued to harass them.

A NASA Armstrong Flight Research Center spokesperson declined to provide details about Bauer or his contract work and would not identify the contracting firm that employed him. A call to the NASA Office of Inspector General special agent handling the case was not immediately answered.

A spokesperson for the US Attorney’s Office said Bauer was working as a NASA contractor during the period that most if not all of the alleged offenses occurred. The spokesperson declined to comment on whether Bauer used NASA equipment in furtherance of his alleged scheme, but said a probe was launched after a coworker provided information to NASA’s Office of Inspector General.

The indictment noted that the last victim worked with an undercover law enforcement officer, who monitored Bauer’s alleged threats to expose nude pictures. If convicted on all 14 counts, Bauer could be sent down for as long as 64 years.

Don’t forget to switch on two-factor or multi-factor authentication on your accounts, folks. ®
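On that note: the one-time codes produced by most authenticator apps follow the TOTP standard (RFC 6238) – an HMAC over a time-step counter with a shared secret, so there is no guessable question-and-answer for an attacker to phish. A minimal stdlib sketch; the secret below is the RFC’s published test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // step)     # 8-byte big-endian counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59s, 8 digits
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, 59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds and derives from a secret the attacker never sees, tricks like the “survey questions” above get an attacker nowhere near the account.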

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/06/nasa_contractor_charged/