
For pity’s sake, groans Mimecast, teach your workforce not to open obviously dodgy emails

A JavaScript-based phishing campaign mainly targeting British finance and accounting workers has been uncovered by Mimecast.

The attack, details of which the security company published on its blog, “was unique in that it utilized SHTML file attachments, which are typically used on web servers”.

When the mark opened a phishing email, the embedded JavaScript would immediately punt them off to a malicious site where they were invited to enter sensitive login credentials.

Tomasz Kojm, senior engineering manager at Mimecast, said: “This seemingly innocent attachment redirecting unsuspecting users to a malicious site might not be a particularly sophisticated technique, but it does present businesses with a big lesson. Simple still works. That’s a huge challenge for organisations trying their best to keep their systems secure.”

[Image: targeted attack email caught by Mimecast – sample phishing email provided by the company]

British financial and accounting firms have taken the brunt of the attack, receiving 55 per cent of the emails so far detected, with the same sector being targeted in South Africa with 11 per cent of phishing attempts. Around a third of emails were sent to Australian targets, typically in the higher education sector.

Mimecast, which among other things makes email security software, said it had blocked the attack from reaching 100,000 subscribers, and, among the usual thinly disguised sales pitches, offered some rather good advice:

“Train every employee so they can spot a malicious email the second it arrives in their inbox. This can’t be an annual box-ticking quiz, it needs to be regular and engaging. Phishing is not going away any time soon, so you need to ensure your employees can act as a final line of defence against these threats… If in doubt, follow the basic rule to ignore, delete and report.”

As threat intelligence, security and antivirus companies all go down the route of building ever more sophisticated (and expensive) “endpoint solutions,” firewalls and the like, it is important to remember that the simplest attack vectors are the ones enjoying ever more success.

UK spy agency GCHQ’s public-facing offshoot, the National Cyber Security Centre, warned in its annual report that it had halted 140,000 phishing attacks and nixed 190,000 fraudulent websites – though of the top 10 phishing targets, a significant number were identified as government or quasi-governmental agencies (including HMRC, the BBC and the Student Loan Company), which were at risk of reputational harm.

UK.gov reckons its brave cyber defenders are having an impact: the Department for Digital, Culture, Media and Sport reported a roughly 10 per cent decline in UK businesses saying they had suffered cyber attacks and breaches over the past year. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/17/finance_phishing_javascript/

For Real Security, Don’t Let Failure Be Your Measure of Success

For too long, we’ve focused almost exclusively on keeping out the bad guys rather than what to do when they get in (and they will).

Maintaining security is a critical mission for every business today, so you’d expect there to be a commonly accepted way to measure its success. But while other areas of IT are evaluated by increasingly precise and detailed metrics — availability, mean time to repair, and so on — for security, it typically comes down to a crude binary. If we got breached, we failed. If we haven’t been breached, we’re succeeding. But note the missing word in that last sentence: yet. In reality, every organization will be breached at some point. The only question is whether they’ll know about it. To make a breach less likely to happen and less damaging when it does, we need a better way to know how well we’re doing along the way.

When Pass-Fail Security Fails
The most obvious problem with the binary pass-fail approach to measuring cybersecurity success is that there’s no way to use it to drive incremental improvement. More fundamentally, it’s based on the false idea of perfect prevention as a goal. For years, both vendors and compliance-driven security programs have acted as if the right combination of firewalls and antivirus protection can keep companies entirely safe. As a result, they haven’t focused enough on what happens when an attacker makes it through these perimeter-focused defenses. At that point, with all the emphasis having been on prevention and not detection, it’s often game over for the company as the attacker moves undetected through the environment.

Assume Penetration — and Act Accordingly
Cybersecurity professionals would do well to consider the lessons learned and innovation in the residential security market. In the old days, people counted on locking their windows and doors to protect their homes, and yet burglaries remained common. It only took one forgotten lock and a burglar would soon be carrying away his loot. Perimeter alarms were similarly vulnerable to human error and brute force — until motion detectors enabled alerts no matter how intruders got in. Now, smart home technologies make it possible for people to get instant visibility of a possible home breach, see what’s happening inside and around the house, lock doors and windows remotely, and call the police while there’s still time.

For too long, cybersecurity has been all about preventative controls like the lock while completely missing visibility and detection controls like the camera. But even the best lock is a tactic, not a strategy. What happens when the lock fails to stop a burglar, or when an attacker bypasses the installed antivirus? At that point, the focus must shift to quick detection and reaction in order to limit the damage — and the idea of more and more locks is no longer relevant.

Make Hackers Hate You
If no product can completely prevent a breach, we must consider other metrics of success for security. A three-step approach offers a starting point for this assessment:

1. What are the most common successful attack vectors for my type of company and environment?

2. How likely would we be to detect such an attack should it occur?

3. Can we make this type of attack harder and more expensive for the hacker?

For example: For years, we all put antivirus protection on our laptops, but how would we know if it has been bypassed or if an attack were succeeding against it? If off-the-shelf commands are being run in a malicious manner on the laptop, even if there’s no “malware” in a classic sense, you can be fairly sure there’s an attacker involved — but first, you must be set up to detect this. What other types of attacks pose the greatest risk for your business? What signs could tip you off that they’re underway, and are you able to detect them? These sorts of questions led to a fundamental transformation of the antivirus industry, from legacy players to today’s thriving modern endpoint protection market.
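As a minimal sketch of that kind of detection (the event format and rules below are invented for illustration and are far cruder than real endpoint tooling), one could flag ordinary built-in tools whenever they are invoked with tell-tale options that legitimate users rarely need:

```python
# Hypothetical "living off the land" detector: flag off-the-shelf tools
# being invoked in suspicious ways. Rules and event schema are made up.
SUSPICIOUS = [
    ("certutil", "-urlcache"),         # built-in tool abused to download files
    ("powershell", "-enc"),            # encoded PowerShell payloads
    ("wmic", "process call create"),   # process creation via WMI
]

def flag_events(events):
    """Yield process-creation events whose command line matches a rule."""
    for event in events:
        image = event["image"].lower()
        cmdline = event["cmdline"].lower()
        for tool, marker in SUSPICIOUS:
            if tool in image and marker in cmdline:
                yield event

sample = [{
    "image": r"C:\Windows\System32\certutil.exe",
    "cmdline": "certutil -urlcache -split -f http://example.test/payload p.exe",
}]
print(list(flag_events(sample)))   # the certutil download attempt is flagged
```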

Assume Nothing
Once you have this visibility, the next step is to create a feedback loop to continuously evaluate and improve your countermeasures. Bug bounties and attack simulations by modern penetration test firms will tell you what’s working and what isn’t. Returning to our home security analogy, homeowners often concentrate resources such as cameras and motion detectors around the TV, but a simulation could show that a burglar’s most likely target is the jewelry in the bedroom. To make the most of finite cybersecurity resources, the most effective strategy is to deploy detection and visibility throughout your environment, run simulations to see what a successful attack would look like, and then use this knowledge to deploy defenses where they’ll do the most good.

Visibility, detection, and the continuous improvement they enable can make it harder to breach your environment. To make it more expensive for attackers as well, you also want to pepper your environment with detection tools that instantly phone home if accessed by an attacker. Historically this was done via hard-to-maintain tools like honeypots, but today it’s easily accomplished via modern products (for instance, the Thinkst Canary).
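Purely to illustrate the phone-home-on-access idea (and not how the Thinkst product works), a crude local version might watch the access time of a decoy file that nobody should ever legitimately open; note that access-time tracking is relaxed or disabled on many modern filesystems, which is one reason real canaries use network tokens and fake services instead.

```python
import os
import time

# Illustrative decoy file: tempting name, never used by any real process.
CANARY = "finance-passwords-backup.xlsx"   # hypothetical filename

open(CANARY, "a").close()                  # create the decoy if missing
last_access = os.stat(CANARY).st_atime

while True:
    time.sleep(5)
    access = os.stat(CANARY).st_atime
    if access != last_access:
        print("ALERT: canary file was accessed - investigate immediately")
        last_access = access
```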

By shifting from a model of “100% focus on preventative controls with a compliance mindset” to “obtaining visibility and using feedback loops to give us data on where to better allocate our security resources,” we can define and measure cybersecurity success in a way that’s both more realistic and more useful for driving improvement. Are you detecting the attackers that matter most? How quickly and accurately? Strengthen and shift your resources based on what you’ve learned. Repeat.


Zane Lackey is the co-founder and CSO at Signal Sciences and the author of Building a Modern Security Program (O’Reilly Media). He serves on multiple advisory boards, including the National Technology Security Coalition, the Internet Bug Bounty Program, and the US State …

Article source: https://www.darkreading.com/risk/for-real-security-dont-let-failure-be-your-measure-of-success/a/d-id/1335237?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Data Loss, Leakage Top Cloud Security Concerns

Compliance, accidental exposure of credentials, and data control are also primary concerns for senior IT and security managers.

Most (93%) cybersecurity professionals are “moderately to extremely concerned” about cloud security, with data loss and leakage (64%) and data privacy (62%) at the top of the collective list.

To compile the “2019 Cloud Security Report,” commissioned by Synopsys, researchers with Cybersecurity Insiders conducted a survey of its 400,000-person community to see what’s top of mind for senior-level managers in IT and security operations. About one-third said they are “very” or “extremely” confident in their organizations’ cloud security posture, and 47% are “moderately confident.” Still, even those who feel good about security have concerns.

Data loss and confidentiality aside, respondents are mostly worried about legal and regulatory compliance (39%), accidental credential exposure (39%), data sovereignty (35%), and incident response (29%). When asked about the biggest daily operational headaches, respondents pointed to compliance (34%), visibility into infrastructure security (33%), lack of qualified staff (31%), setting consistent security policies (31%), lack of integration with on-prem technology (29%), and security not keeping up with the pace of new and existing applications (29%).

The compliance process is complex, and the greatest challenge for 43% of IT and security professionals surveyed is monitoring for new vulnerabilities in cloud services that must be secured. Other compliance pain points include audit assessments in the risk environment (40%) and monitoring for compliance with policies and procedures (39%).

Respondents use several tactics to protect cloud-based data. More than half (52%) use access controls, 48% use encryption or tokenization, and 45% use security services offered natively or by cloud providers. Less common methods include cloud security monitoring tools (36%), connecting to cloud via protected networks (36%), and third-party security services (25%).

Read more details here.



Article source: https://www.darkreading.com/cloud/data-loss-leakage-top-cloud-security-concerns/d/d-id/1335277?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

A Password Management Report Card

New research on password management tools identifies the relative strengths and weaknesses of 12 competing offerings.

The phrase “password management” engenders similar reactions from both those responsible for cybersecurity and the individuals who must use passwords. It’s a “trying necessity.”

To address the issues associated with password management, there is a good selection of tools available to teams, businesses, and enterprises. However, these products need to adapt and evolve to win new business, protect against new cybersecurity threats, and support the move toward a “password-less” enterprise. Recent research from Ovum, a UK-based analysis firm, evaluated a dozen of the most prominent players in the account credential market, assessing their relative strengths and weaknesses. Here is a summary of our findings:

  • All products selected for the report offer good deployment and administration capabilities.
  • No single vendor stands out head and shoulders above the rest. However, based on a range of categories, the leading products are: 1Password Business, Dashlane Business, Keeper for Business, LastPass Enterprise, ManageEngine Password Manager Pro, Pleasant Password Server, and RoboForm for Business.
  • The open source products from Bitwarden and Passbolt both show strong potential and demonstrate what can be accomplished by small teams.
  • Bluink deserves a mention for its mobile-first approach to password management, especially the geofencing capabilities of Bluink Enterprise.
  • And finally, kudos to Passwork and TeamPassword for developing easy-to-use password management solutions that address the specific needs of startups and digital marketing agencies.

Advice to Enterprises: Use Password Managers + MFA
Among a range of Ovum recommendations for enterprises, adopting any trusted password manager is almost always going to be better than not adopting one at all. Our research reveals that over 80% of major data breaches can be traced back to a single compromised identity, so password management needs to be on the top of the cybersecurity agenda. Ovum also recommends that enterprises evaluate products originating in the consumer market and consider the benefits of offering password management tools that employees can extend for personal use. It could make practical sense to deploy more than one product in larger organizations.

If an enterprise is moving business and productivity workloads to the cloud, give consideration to adding strong authentication to enhance the security of employee user IDs and passwords. Password managers present an obvious target for hackers and cybercriminals, so consider which multifactor authentication mechanisms are likely to work best for staff and employees.
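One widely supported mechanism is the time-based one-time password of RFC 6238, which most password managers can already handle. The snippet below shows how a TOTP code is derived, using a made-up base32 secret; in practice you would rely on the vendor’s own MFA integration rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second window)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret, not a real credential
```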

Security should be at the heart of any modern digital workplace strategy; therefore, password management tools must be considered alongside device, operating system, browser, and application management strategies. Microsoft and Google are introducing customers to their password-less strategies, so IT and security teams should consider the relevance of these initiatives as part of any password management-related project.

SaaS Cloud
Organizations adopting password management products need to do their due diligence, especially if they are operating in regulated industries or where strict security protocols are in place. Be aware that it is the customer organization, not the security vendor, that has responsibility for ensuring compliance with applicable laws and regulations. When considering software-as-a-service and cloud-based solutions, businesses and institutions should look for relevant vendor certifications, accreditations, and reporting standards, such as SOC 2 for trust, ISO 27001 for information security management, ISO 22301 for business continuity, PCI DSS for payment security, and ISO 27018 for protection of personally identifiable information.

The FIDO Alliance is an influential industry association from the perspective of the world’s over-reliance on passwords, and it is worth noting that Dashlane, Keeper Security, and LastPass (LogMeIn) are associate-level members. The FIDO Alliance is working to change the nature of authentication with open standards that are more secure than passwords, simpler for consumers to use, and easier for service providers to deploy and manage. That said, among its recommendations, Ovum suggests that businesses give consideration to vendors that support the FIDO Alliance in promoting a password-less future while also addressing the immediate needs of the market.


Maxine leads Ovum’s security research, developing a comprehensive research program to support vendor, service provider, and enterprise clients. Having worked with enterprises across multiple industries in the world of information security, Maxine has a strong understanding of …

Article source: https://www.darkreading.com/cloud/a-password-management-report-card-/a/d-id/1335248?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Researchers hide data in music – and human ears can’t detect it

Researchers have developed a way for data to be secretly transferred inside a music track at a usable rate without turning it into unlistenable mush.

While using sound waves as a data carrier is not new, applying the principle to music has always been a challenge because even small distortions made when adding data will be noticed by the human ear.

If one could overcome this, music would make a good medium for data transfer because it can easily be picked up by the microphones used by smartphones and computers without annoying people by blasting unstructured sound at them.

How does it work?

The technique outlined by Manuel Eichelberger and Simon Tanner of ETH Zurich uses orthogonal frequency-division multiplexing (OFDM) to add data to the musical frequencies humans are less likely to notice whilst avoiding the ones they are sensitive to.

It sounds easy enough in principle but applying it to music tracks with individual harmonic compositions across different genres quickly becomes a highly technical challenge.
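To give a feel for the principle only, the sketch below maps a few dozen bits onto OFDM subcarriers in a narrow band and recovers them with an FFT. The band, symbol length and BPSK mapping are arbitrary choices for the example, and it ignores the psychoacoustic masking, synchronisation and error correction that make the real system work when mixed into music.

```python
import numpy as np

fs = 44_100                 # sample rate (Hz)
n_fft = 4_410               # one 100 ms OFDM symbol
band = range(950, 1014)     # 64 subcarriers; fs/n_fft = 10 Hz spacing -> ~9.5-10.1 kHz

def modulate(bits):
    """Map bits onto subcarriers (BPSK) and return a real time-domain symbol."""
    spectrum = np.zeros(n_fft, dtype=complex)
    for k, bit in zip(band, bits):
        spectrum[k] = 1.0 if bit else -1.0
        spectrum[-k] = np.conj(spectrum[k])   # mirror to keep the signal real
    return np.fft.ifft(spectrum).real

def demodulate(samples):
    """Recover the bits from the sign of each subcarrier."""
    spectrum = np.fft.fft(samples)
    return [int(spectrum[k].real > 0) for k in band]

bits = list(np.random.randint(0, 2, len(band)))
symbol = modulate(bits)
# In the real system this symbol would be mixed at low level into frequency
# regions the music already masks; here it is decoded over a clean channel.
assert demodulate(symbol) == bits
```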

Then there’s the problem of being able to transfer enough data at a given distance to make the whole idea worthwhile.

After conducting experiments, the researchers found it was possible to achieve data rates of 300 to 400 bits per second (bps) over distances of up to 24 metres, with a 10% error rate, without affecting the original music when played to a test group of 40 people.

When a modified tune is played back through a speaker, a listener notices no degradation in sound quality, yet a smartphone can still read out the information carried by the song.

What could you do with it?

Although a low data rate by modern radio frequency standards, the pair reckon this is still sufficient for basic applications, which brings us to the critical question of what such a data-in-music technology might be used for.

Their answer seems to be making the everyday movement of small chunks of data, such as security keys, less of a manual chore. For example, a hotel could embed its Wi-Fi credentials in the music playing in its public areas: guests would get access to the hotel Wi-Fi without having to enter a password on their device.

Granted, encoding useful data in music at bit rates and ranges not yet matched by other researchers is impressive, but to some this will sound like a solution looking for an application.

If something as compact as a Wi-Fi key could be transferred using sound waves, why not do that using a short blast of sound or a simple sequence of musical notes?

Inevitably, using music creates the inherent problem of distortion while using sound at all depends on making assumptions about background noise.

Readers can judge for themselves by comparing the original, unchanged track with the one to which data has been added.

On a related note (pardon the pun), in 2015 another set of researchers from ETH Zurich suggested that comparing ambient sounds picked up by a smartphone and a PC to confirm they are in the same vicinity could be used for two-factor authentication.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AnMj1dA0zuM/

Facebook rolls out anti-scam reporting tool in UK

UK TV celebrity Martin Lewis was all smiles this week after a five-month alliance with Facebook to crack down on scam ads finally bore fruit.

The company has coughed up £3m (around US $3.7m) to help support anti-scam services as well as introducing a tool to report scam ads on the UK version of the site.

Lewis, a TV presenter and journalist who advises people on financial issues, sued Facebook in April 2018 after scammers used his name in fake ads on the platform to con people out of their money. He settled with the company in January 2019, recouping his legal fees and persuading it to donate £3m to Citizens Advice and create a new scam ad reporting tool.

Facebook made good on that promise this week.

One man lost £19,000 ($23,500), Lewis recalled. A woman who was looking after her orphaned grandchildren put the money that had been set aside for them into one of these fake schemes and lost everything. They all blamed him, even though he had nothing to do with the ads.

This week Facebook launched a button on its UK site to report scam ads. This lets users click the three dots ‘. . .‘ on the top right of an ad and then select the ‘Report Ad‘ function, followed by ‘Misleading or scam ad‘. Then, they have to confirm that they want to send a detailed scam report. Hopefully, if successful, this will roll out to other markets.

The £3m went to a new Citizens Advice service called Citizens Advice Scam Action (CASA), of which Lewis says:

Its job is to fight scams, prevent them, and give one-on-one help to victims. 

This is an unquestionable win for Lewis, who had his name hijacked and reputation damaged and took on a multinational corporation to try and make things right. He spent $100,000 facing the social media giant down in court and turned the whole thing into something positive that will help consumers. More power to him.

Move slowly and fix things

Before applauding Facebook, though, we should take its contribution in context. Adding a scam ad reporting button in 2019 is a step, but a reactive one, and it took a lawsuit by a celebrity to make it do even that. 

In the meantime, a lot of unsavory people have been making money by committing fraud on its platform.

The FBI Internet Crime Complaint Center’s (IC3) 2018 Internet Crime report said that losses from crimes committed via social media totalled $101,045,973 in 2018, hitting 40,198 victims. That’s just in the US, mind. That amounts to a $2,513 loss per victim on average. Not all those scams involve fake ads, and not all have been committed via Facebook, of course, but Statcounter says it has a 63.3% share of the North American social media market as of June 2019, and a 59.6% share in the UK.

If you suspect a scam

Before buying anything via a Facebook ad, ask yourself some questions:

  • Can I pay through a traceable method, eg PayPal? Some payment providers offer recourse if you’ve been scammed. 
  • Does a quick internet search of the company, contact details or product highlight anything concerning? Are there lots of bad reviews? Are you able to find their main website and contact details?
  • Is this too good to be true?

And if you’re in the UK, you can now report the scam with a click of a button, but even in other countries, there are organizations you can contact for advice. If you’re in the US, you can report a rip-off or Facebook scam ad via the FTC’s Complaints Assistant.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KQfHyzshkKk/

Microsoft, Google and Apple clouds banned in Germany’s schools

The German state of Hesse has banned its schools from using cloud-based productivity suites from Microsoft, Google, and Apple. The tech giants aren’t satisfying its privacy requirements with their cloud offerings, the state’s data protection watchdog warned.

The Hessische Beauftragte für Datenschutz und Informationsfreiheit (Hesse Commissioner for Data Protection and Freedom of Information, or HBDI) made the statement following a review of Microsoft Office 365’s suitability for schools.

Microsoft launched its Azure Deutschland presence in 2016, with a focus on the ‘data trustee’ model. A third party partner, Deutsche Telekom, provided the Azure services and used a private cloud to ensure that none of the resident data went through the public internet. Even Microsoft needed to jump through plenty of hoops to get at its customers’ data. That was a bid to placate German customers who were sensitive about data sovereignty and wanted to keep their data on German soil.

That made HBDI confident enough to allow schools there to use Office 365 in August 2017, just so long as they only used the German cloud.

An issue with data Microsoft is storing, and where

Then, in August 2018, things changed. Microsoft pulled out of the data trustee arrangement in Germany and started using its regular data centre model instead, removing the barrier between the rest of the global Azure cloud and its own German data centres.

School boards in Germany carried on promoting Office 365 in spite of the privacy issues this raised, explained HBDI, prompting it to review the situation. Its conclusions (translated in part below) were dire. It doesn’t have a problem with cloud access for schools in general, it said, just with the data that Microsoft is storing, and where.

The problem is twofold, it explained. Firstly, it isn’t happy with Microsoft storing personal data (especially children’s data) in a European cloud that could be accessed by US authorities, adding:

The digital sovereignty of state data processing must be guaranteed.

Its other issue is with Microsoft’s data slurping. It warned:

With the use of the Windows 10 operating system, a wealth of telemetry data is transmitted to Microsoft, whose content has not been finally clarified despite repeated inquiries to Microsoft. Such data is also transmitted when using Office 365.

HBDI is taking its lead from the Federal Office for Information Security, which posted a technical analysis of Windows 10 telemetry in November 2018 (chapters 1.2 onwards are in English).

Consent won’t cut it

You can’t solve this problem by asking users for consent, the HBDI added. If you can’t be certain what data Microsoft collects or how the company will use it, then you can’t give informed consent.

The problem is that lots of schools in Germany want software like this, HBDI acknowledges. So what can they do? That’s up to Microsoft, it says. The company must resolve the issues of third-party data access and Windows 10 telemetry, and then they can talk. The Redmond-based tech giant probably shouldn’t leave things too long, it concludes:

By that time, however, schools may benefit from other instruments such as serving on-premises licenses on local systems.

Google and Apple in the same boat

Although the majority of the report focused on Microsoft Office 365, HBDI explicitly called out other cloud service providers, so schools can’t use Google Docs or Apple’s iWork either:

What is true for Microsoft is also true for the Google and Apple cloud solutions. The cloud solutions of these providers have so far not been set out in a transparent and comprehensible way. Therefore, it is also true that privacy-compliant use in schools is currently not possible.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2jNV9DDMP9A/

RDP exposed: the wolves already at your door

For the last two months the infosec world has been waiting to see if and when criminals will successfully exploit CVE-2019-0708, the remote, wormable vulnerability in Microsoft’s RDP (Remote Desktop Protocol), better known as BlueKeep.

The expectation is that sooner or later a BlueKeep exploit will be used to power some self-replicating malware that spreads around the world (and through the networks it penetrates) in a flash, using vulnerable RDP servers.

In other words, everyone is expecting something spectacular, in the worst possible way.

But while companies race to ensure they’re patched, criminals around the world are already abusing RDP successfully every day, in a different, no less devastating but much less spectacular way.

Many of the millions of RDP servers connected to the internet are protected by no more than a username and password, and many of those passwords are bad enough to be guessed, with a little (sometimes very little) persistence.

Correctly guess a password on one of those millions of computers and you’re in to somebody’s network.

It isn’t a new technique, and it sounds almost too simple to work, yet it’s popular enough to support criminal markets selling both stolen RDP credentials and compromised computers. The technique is so successful that the criminals crippling city administrations, hospitals, utilities and enterprises with targeted ransomware attacks, and demanding five- or six-figure ransoms, seem to like nothing more.

All of which might make you think – there must be a lot of RDP password guessing going on.

Well, there is, and thanks to new research published by Sophos today, we can take a stab at saying just how much.

Noting the popularity of RDP password guessing in targeted ransomware attacks, Sophos’s Matt Boddy and Ben Jones (who you may have heard on the Naked Security podcast) set out to measure how quickly an RDP-enabled computer would be discovered, and just how many password guessing attacks it would have to deal with every day.

They set up ten geographically dispersed RDP honeypots and sat back to observe. One month and over four million password guesses later they switched off the honeypots, just as CVE-2019-0708 was announced.

The low interaction honeypots were Windows machines in a default configuration, hosted on Amazon’s AWS cloud infrastructure. They were set up to log login attempts while ensuring attackers could never get in, giving the researchers an unhindered view of how many attackers came knocking, and for how long, and how their tactics evolved over the 30-day research period.
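The Sophos honeypots were real, purpose-built Windows machines; as a very rough illustration of the "low interaction" idea only, a toy listener could simply accept TCP connections on the RDP port, record who knocked and when, and drop them without ever speaking the protocol:

```python
import datetime
import socket

HOST, PORT = "0.0.0.0", 3389   # the standard RDP port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, (ip, port) = server.accept()
        stamp = datetime.datetime.utcnow().isoformat()
        print(f"{stamp} connection attempt from {ip}:{port}")
        conn.close()               # never negotiate RDP, just log the knock
```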

The first honeypot to be discovered was found just one minute and twenty-four seconds after it was switched on; the last took a little over 15 hours.

[Chart: time to first login attempt for each RDP honeypot]

Between them, the honeypots received 4.3 million login attempts at a rate that steadily increased through the 30-day research period as new attackers joined the melee.

[Chart: login attempts per day over the research period]

While the majority of attacks were quick and simple attempts to dig out an administrator password with a very short password list, some attackers employed more sophisticated tactics.

The researchers classified three different password guessing techniques used by some of the more persistent attackers and you can read more about them – the Ram, the Swarm and the Hedgehog – in the whitepaper.

What to do?

RDP password guessing shouldn’t be a problem – it isn’t new, and it isn’t particularly sophisticated – and yet it underpins an entire criminal ecosystem.

In theory, all it takes to solve the RDP problem is for all users to avoid really bad passwords. But the evidence is they won’t, and it isn’t reasonable to expect they will. The number of RDP servers vulnerable to brute force attacks isn’t going to be reduced by a sudden and dramatic improvement in users’ password choices, so it’s up to sysadmins to fix the problem.

While there are a number of things that administrators can do to harden RDP servers, most notably two-factor authentication, the best protection against the dual threat of password guessing and vulnerabilities like BlueKeep is simply to take RDP off the internet. Switch off RDP where it isn’t absolutely necessary, or make it accessible only via a VPN (Virtual Private Network) if it is.
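A quick sanity check administrators can run against their own estate is simply to test whether TCP port 3389 answers from outside the network; the targets below are placeholders, not real hosts.

```python
import socket

def rdp_reachable(host: str, timeout: float = 3.0) -> bool:
    """Return True if TCP port 3389 on `host` accepts a connection from here."""
    try:
        with socket.create_connection((host, 3389), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hostnames/addresses; substitute your own externally visible hosts.
for host in ["gateway.example.com", "203.0.113.10"]:
    print(host, "RDP exposed" if rdp_reachable(host) else "no RDP answer")
```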

Want to learn more?

To learn more about what the researchers discovered, the tactics the attackers used and the way that different regions were affected, read the full report: RDP Exposed – The Threat That’s Already at Your Door.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CkrHEe86eU0/

Turning it off and on again IN SPAAACE! ISS animal-tracker kit needs oldest trick in the book

Icarus – the ambitious project to track hundreds of thousands of animals from space – has hit an unexpected delay after a specialised computer installed on board the International Space Station (ISS) refused to work as intended.

In order to fix the system, which was only switched on last week, astronauts will have to apply the oldest workaround in the book, practised by IT staff since time immemorial: they will have to turn it off and on again.

The actual process is slightly more complicated in space than it is in a corporate server room: before they can have a go at it, detailed procedures will have to be drawn up, and busy schedules will have to be cleared.


Meanwhile, teams on the ground will have to engage in a bit of tech support, with negotiations taking place between Germany’s SpaceTech, which made the Icarus computer, and RSC Energia, which leads the development of the Russian corner of the space station – in this context, essentially a space-borne data centre.

“The system was switched on, and then it turned out that the fans were running intermittently. That means that they have to be switched off and turned on one by one,” professor Martin Wikelski, director of the Max Planck Institute for Ornithology and leader of the project, told El Reg.

“Apparently it’s something that the Russians have experienced before,” he added.

“The engineers told us that space projects always take longer than expected because each manufactured unit is a unique thing, so you always have some initial problems with systems, especially the ones we are dealing with, because this is a new CDMA communications system, basically IoT via space, and I think everybody is keen to see how it works.”

The Icarus (International Cooperation for Animal Research Using Space) project has been in the works since 2002. A collaboration between German and Russian scientists, it will attempt to establish migration patterns, response to environmental changes, and whether (and how) animals can predict natural disasters like earthquakes and volcanic eruptions, among other things.

In the project, thousands of animals will be tagged with tiny transmitters that weigh less than 5g. The tiny gizmos are powered by the Sun and equipped with GPS, accelerometers, temperature, pressure and humidity sensors.

To save energy, they will stay in sleep mode most of the time, only activating when the ISS is passing overhead; this information would be transmitted by the station in advance and stored on the device.
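In outline the duty cycle is simple: the tag keeps a list of upcoming overhead passes pushed down from the station and sleeps until one opens. The sketch below is purely illustrative – the window times and structure are invented, not Icarus’s actual firmware logic.

```python
import time
from datetime import datetime, timezone

# Hypothetical upcoming ISS pass windows (UTC) stored on the tag in advance.
PASS_WINDOWS = [
    (datetime(2019, 7, 18, 3, 12, tzinfo=timezone.utc),
     datetime(2019, 7, 18, 3, 22, tzinfo=timezone.utc)),
    (datetime(2019, 7, 18, 4, 48, tzinfo=timezone.utc),
     datetime(2019, 7, 18, 4, 57, tzinfo=timezone.utc)),
]

def next_transmit_window(now=None):
    """Stay in low-power mode (sleep) until the next stored pass begins."""
    now = now or datetime.now(timezone.utc)
    for start, end in PASS_WINDOWS:
        if now < end:
            time.sleep(max((start - now).total_seconds(), 0))
            return start, end      # transmit buffered sensor data until `end`
    return None                    # no stored passes left; remain asleep
```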

The massive Icarus antenna block was shipped to the ISS aboard the Progress cargo spacecraft in February 2018, with the hardware weighing in at 200kg – including three receiving antennas and one transmitting antenna. The onboard computer arrived at the station during one of the previous missions in 2017.

The antenna was then installed on the exterior of the Zvezda (star) module, in the Russian sector of the station, during a spacewalk in August 2018. The computer was scheduled to be switched on last Tuesday, but the cooling fans the machine was equipped with consumed too much power when spinning up at the same time – the ISS has a very strict power budget. To fix the issue, astronauts will have to disconnect and reconnect individual fans one by one – something that could happen in “days or weeks”, according to Wikelski.

Icarus is in a hurry because it is not the only project of its kind, and is competing against Centre National d’Etudes Spatiales and NASA’s Argos system, which has used commercial satellites for animal tracking and environmental monitoring for decades.

Argos currently has six operational satellites and tracks about 8,000 animals at any given time. It is about to launch its first nanosat, codenamed Argos Neo, with another 24 expected in orbit by 2022. A spokesperson told El Reg that Argos has developed its own miniature transmitters too, with the smallest weighing in at 2g.

“There are two different directions: people think they can do nanosats in space, and do the same thing, but communications engineers and space engineers say, well, you can’t beat physics. If you want to have small tags on the ground, it’s hard to also have small satellites in space,” Wikelski said.

“Argos is certainly a great system: we have used it a lot from the beginning, and we are still using it. Yes, it is old, and they have to be backward-compatible, in many cases, and they have some restrictions, and we have a different set of requirements. We’re the new kid on the block and we have to show that we are good.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/17/icarus_outage_turning_it_off_and_on_again_aboard_the_iss/
