STE WILLIAMS

Windows 7 users get fix for latest updating woe

Microsoft has vexed its users with another misbehaving update.

The latest problem occurred on 8 January when enterprise users running Windows 7 or Windows Server 2008 R2 with a Key Management Service (KMS) started complaining on Microsoft’s TechNet forums and Reddit that they were seeing two errors, the first relating to licensing, the second to networking.

In the first, users saw a “Windows is not genuine” error dialogue after logging in; Windows would still run, but with the message embedded as a desktop watermark.

The second error had different symptoms: users were unable to access SMB2 shares or start remote desktop connections, from both admin and non-admin accounts.

At first it was assumed that the problems were connected to separate security and feature updates for Windows 7 – KB4480960 and KB4480970 – which were issued as part of Patch Tuesday.

It later transpired that the problem wasn’t with either of those updates and was instead connected to a change made to the Microsoft Activation and Validation servers affecting anyone who had installed an old update, KB971033, which originally appeared last April.

On its update page, Microsoft states that anyone running KMS or Multiple Activation Key (MAK) volume activation should not install this, pointing out:

We generally recommend to NOT install this update in their reference image or already deployed computers. This update is targeted at consumer installs of Windows using RETAIL activation.

Moreover:

We strongly recommend that you uninstall KB 971033 from all volume-licensed Windows 7-based devices. This includes devices that are not currently affected by the issue that is mentioned in the ‘Symptoms’ section.

Users encountering the license error should uninstall this update. Anyone having problems accessing network shares should start by downloading the new update, KB4487345.
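
For admins who want to script the first step, here is a minimal sketch, assuming Python 3 is available on the affected Windows 7 box and the script is run from an elevated prompt. It checks for KB971033 with wmic and removes it with wusa.exe’s standard switches; it illustrates the documented remediation step rather than being an official Microsoft tool.

    # Illustrative sketch: remove KB971033 if it is installed.
    # Requires an elevated (administrator) prompt; a reboot is still needed afterwards.
    import subprocess

    KB = "KB971033"

    # List installed hotfixes and look for the offending update.
    hotfixes = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout

    if KB in hotfixes:
        # wusa.exe's /uninstall, /kb:, /quiet and /norestart switches are standard.
        subprocess.run(
            ["wusa.exe", "/uninstall", f"/kb:{KB[2:]}", "/quiet", "/norestart"],
            check=True,
        )
        print(f"{KB} removal requested; reboot to complete the change.")
    else:
        print(f"{KB} is not installed on this machine.")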

Update-itis

One could dismiss this incident as something for yesterday’s Windows base to worry about, if it weren’t for the fact that Windows 7 is still estimated to be installed on more than a third of all Windows machines.

And that’s despite the OS being only a year from its official end-of-life date in January 2020.

There are now six versions of Windows in meaningful use stretching back to Vista, three of which are still being updated (and that’s excluding seven Windows 10 releases, which also have individual updates).

All this complexity might explain why Microsoft had so many update glitches during 2018, including the awkward coincidence of a similar activation issue affecting Windows 10 users in November.

All that came after the company had to delay the Windows 10 1809 update when users reported problems that took until December to iron out.

Such was last year’s woe that in August the company was sent an open letter by patchmanagement.org moderator and Microsoft Most Valuable Professional (MVP) Susan Bradley expressing unhappiness with the state of its security patching.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/srQyTQJEATY/

Want to get rich from bug bounties? You’re better off exterminating roaches for a living

Security researchers looking to earn a living as bug bounty hunters would do better to pursue actual insects.

Using data from bug bounty biz HackerOne, security shop Trail of Bits observes that the top one per cent of bug hunters found on average 0.87 bugs per month, resulting in bounty earnings equivalent to an average yearly salary of $34,255 (£26,500).
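
As a back-of-the-envelope check on those figures (the per-report payout below is derived here for illustration; it is not a number quoted by Trail of Bits):

    # Roughly what the quoted averages imply per valid report for the top 1%.
    bugs_per_month = 0.87
    annual_earnings_usd = 34_255

    bugs_per_year = bugs_per_month * 12                    # about 10.4 reports a year
    implied_payout = annual_earnings_usd / bugs_per_year   # about $3,281 per report
    print(f"Implied average payout: ${implied_payout:,.0f} per bug")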

That’s a bit less than the median wage for a pest control worker in, say, Mississippi, according to the US Bureau of Labor Statistics. It’s also lower than the average UK salary of £27,000.

And those are the top cyber exterminators, who bring in the big bucks. Newbies make considerably less.

Citing MIT Press’ New Solutions for Cybersecurity, Trail of Bits argues that bug bounty programs appeal mainly to developers in labor markets where wages are significantly lower than in the US, or to students learning cybersecurity. Unsurprisingly, the biz suggests that other options, like hiring security consultants and penetration testers (which, surprise surprise, is Trail of Bits’ own business), may make more sense for companies than a bug bounty program.

“It’s nice to think that you have 300,000 sets of eyes scrutinizing your code, but this number includes zombie accounts and people who never find a single bug,” the company said in a blog post Monday. “In reality, only an elite few are getting the work done and cashing in.”

Marten Mickos, CEO of HackerOne, took issue with Trail of Bits’ figures. “This study is not representative,” he said in an email to The Register.

“If it is based on HackerOne data, it is based only on a fragment of it. The hacker community is indeed power-law distributed. The top performers are orders of magnitude more productive than newcomers. The beauty is that many newcomers rise very quickly in the ranks. Within this merit-based system, there is unlimited opportunity for one with skill and will.”

Bug bounty botox

However, in a phone interview with The Register, Katie Moussouris – founder and CEO of Luta Security, creator of Microsoft’s first bug bounty program, and contributor to the MIT book – concurred with Trail of Bits’ conclusions, noting that internal security talent tends to be a better investment.

“There’s a natural cap on the amount of money you can put in defensive bounties,” she said, noting that the market for offensive bounties is a different kettle of fish. “A bounty price can’t really exceed what an in-house security person will make.”

Bug bounty programs, said Moussouris, aren’t necessarily helpful or right for every organization.

“A lot of organizations have heard the term ‘bug bounty’ and they see glossy marketing materials highlighting the best possible outcomes but not covering the worst and most disastrous ones,” she said.

Companies often think bug bounty programs are as safe as hiring a penetration tester but they’re absolutely not, said Moussouris.

“I call it ‘bug bounty botox’ when people are more interested in seeming like they’re better on security than actually improving,” she said.

The risks, she explained, include not attracting top bug hunters, attracting too many reports of trivial bugs, and getting more bug reports than the sponsoring organization can actually fix.

“If you don’t have internal capabilities, bug bounties won’t do you any good and neither will a penetration test,” she said.

The UK government, she said, is not going to start a bug bounty program. Instead, she said, it’s working with her company to improve its internal processes to the point where various government agencies can all sustainably fix incoming bug reports.

In short, a bug bounty program may be useful, but don’t outsource 100 per cent of security of your product to strangers on the internet. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/15/bugs_bounty_salary/

New year, new career? How some Sophos experts got into cybersecurity

We asked a number of people working in different roles at Sophos how they made their way into cybersecurity.

1. Music making to malware fighting

Sales Engineer, Benedict Jones

I graduated from university with a first class BSc honours degree in Sound Technology and Digital Music. I have always had a profound interest in music and technology, but during my degree it dawned on me that I wasn’t quite “musically creative” enough to be a Music Producer. That’s when logic kicked in, and I decided to tailor my degree more towards my skillset and passion… technology.

I chose to create my own Android application for my dissertation so bought an O’Reilly book on Java and read all 726 pages by the poolside when on holiday in Spain. While creating my application, I managed to cause the device to overheat and/or crash – sometimes intentionally, and others not! This inspired my interest in mobile application malware, which was very much an emerging threat at the time. A year on from this, my dissertation was complete and my very own mobile application was published.

After graduating, I pursued a career in IT and my first job was as a Graduate Network Support Engineer. After 3 years in technical support, I had worked my way up to the top tier as a Senior Technical Support Consultant. I’d received plenty of first-hand insight into the devastating effect that malware can have on an organisation, having spent many an evening dealing with the aftermath of an outbreak. My research into various threats led me to understand how best to remediate security, which was to fix the flaws that had been exploited by the attackers in the first place.

Protecting organisations against cyberthreats left me with a sense of heroic satisfaction that ultimately inspired me to change my career path into cybersecurity. Which takes me to today – I am a Channel Sales Engineer at Sophos and continue to help organisations to protect themselves against cyberthreats.

2. Romancing malware analysis

Threat Researcher, Luca Nagy

The first time I came across programming languages was during high school and it was love at first sight. Our romance continued at university and I decided to dedicate myself to programming. While there, I also developed an interest in IT security and it was a huge dilemma to choose which direction to take my studies in.

For this reason, I started a CEH (Certified Ethical Hacker) course where we studied several techniques and tools to find and exploit weaknesses in various systems. This required prior experience but also creativity to solve new problems. The combination was really appealing, and it helped me to reach the decision to take the IT security route in my career.

During an internship at Telekom, I was working with intrusion detection/prevention systems (IDS/IPS) and became acquainted with malware. I decided to dig deeper, and in my thesis I introduced a malware analysis procedure through a ransomware analysis. I became passionate about malware analysis and have been ever since.

Now at Sophos, I spend my time reverse engineering emerging threats and creating detections against them.

3. Keeping on top of moving targets

Senior Threat Researcher, Rowland Yu

After majoring in Computer and Telecommunication Engineering at Tongji University, I graduated and got my first ‘proper’ job at Siemens Shanghai Mobile Communications Co. I was a Shift Leader and Technical Specialist and was responsible for a production team working on a rotating schedule, who diagnosed, troubleshot, and repaired mobile equipment.

Two years later I started my postgraduate degree at the University of Wollongong, Australia, and was really attracted to the magic of computer and network security. I dedicated myself to research projects, including the design and analysis of secure systems with an emphasis on network and communication security, under the direction of Professor Reihaneh Safavi-Naini – the first-ever Australian winner of a security funding research grant.

I started as a spam analyst for SophosLabs in 2006, before moving into the role of Virus Threat Researcher for advanced threat research, reverse engineering and remediation. I then led anti-spam and DLP (data loss prevention) research in the Australian SophosLabs. When the first Android malware was discovered in 2012, I believed Android would become ‘the new Windows’ for malware and dedicated most of my time to Android security. Today I am a Senior Threat Researcher L2 at Sophos and the primary researcher leading the Android team for malware analysis and emerging threats.

Cybersecurity is a constantly moving target. Over the past decade, there have been so many different major cyberattacks targeting many different platforms. At the same time, we’ve seen advanced threat prevention techniques introduced too, such as generic detection, behavior monitoring (HIPS), memory scanning, sandbox, EDR (Endpoint Detection and Response), and deep learning.

I’ve had many interesting and unforgettable moments throughout my career. I read a great book called ‘Network Security: Private Communication in a Public World’ and attended an excellent lecture, ‘Advanced Network Security’, and have used both as stepping stones in my career. Ultimately, I’d like to give credit to Sean McDonald (one of the SophosLabs Directors), who contributed to my career success. It’s great that I have worked with so many truly talented people at Sophos.

4. Discovering a love of cybersecurity

Software Engineer, Bogdana Avadanei

I recently graduated with a degree in Computer Science and, like many students, I didn’t initially know what particular area I wanted to specialise in. Unfortunately, my undergraduate degree didn’t offer a cybersecurity module, and the only reason I discovered how fascinated I was by the topic was because I had to write an essay in my first year that I left until the night before the deadline.

The essay topic was anything computer science related, so I decided the early methods of encryption would be a good idea, as I had just heard about a security breach. I spent all night in the library reading all the interesting books I could find on the subject. I knew then what path I wanted to follow, and my placement year at Sophos was an excellent opportunity to find out more about the industry. I enjoyed it and learned so much that I came back after I graduated as a software engineer.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/c8L4DMLBsAE/

Royal Bank of Scotland, Natwest fling new bank cards at folks after Ticketmaster hack

The Royal Bank of Scotland and NatWest have issued customers with replacement cards as a result of last year’s Ticketmaster breach that hit around 40,000 Brits.

The banks said on social media they were swapping out the plastic used by punters on Ticketmaster’s website as part of efforts to ensure “significant levels of security”.

The letter sent to customers states that replacement cards are being issued to anyone who used their card at Ticketmaster, noting it is a “precaution” and that in some cases there is no indication that a person’s information has been accessed.

In a statement, RBS said: “Our priority is to make sure our customers’ data is secure. Following the data breach disclosed by Ticketmaster, we are proactively reissuing cards to all impacted cardholders.”

Ticketmaster admitted in June that its website’s payment pages had become infected by Magecart malware. At the time, it blamed third-party supplier Inbenta Technologies – but that firm said the custom JavaScript it had written for Ticketmaster should not have been used on payment pages.

“Ticketmaster directly applied the script to its payments page, without notifying our team. Had we known that script would have been used in that way, we would have advised against it, as it poses a security threat,” said Inbenta CEO Jordi Torras.

Ticketmaster said all cards used between February and June 2018, for UK customers, and between September 2017 and June 2018, for international customers, were at risk.

Some 40,000 Brits were estimated to have been hit in the incident, which exposed people’s names, addresses, email addresses and payment details.

In December, one Reg reader told us that two of his cards – both of which were linked to his Ticketmaster account – were being used for unauthorised transactions.

The decision by RBS – which is the parent company of NatWest – to replace affected customers’ cards, while welcomed by customers, comes about nine months after online banking upstart Monzo took similar action.

When Ticketmaster went public with the breach, Monzo piped up to say its internal fraud detection systems had “spotted signs” as early as April, blocking a number of cards that had also been used at Ticketmaster. The firm said it told Ticketmaster and then “proactively replaced the cards of all Monzo customers who could have been affected”. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/14/banks_issue_cards_ticketmaster/

Intel’s Software Guard caught asleep at its post: Patch out now for SGX give-me-admin hole

While admins were busy wrangling with the mass of security patches from Microsoft, Adobe, and SAP last week, Intel slipped out a fix for a potentially serious flaw in its Software Guard Extensions (SGX) technology.

Chipzilla’s January 8 update addresses CVE-2018-18098, an issue Intel describes as an “improper file verification” that can be exploited on Windows machines to escalate privileges. In effect, the security blunder can be leveraged by malware running on a system, or rogue logged-in users, to gain administrator rights and take over a vulnerable box.

Intended to protect sensitive information from snooping, SGX allows applications to lock off areas of memory, dubbed enclaves, that cannot be accessed by the operating system nor other processes. The idea is that you run cryptographic or anti-piracy digital rights management code within an enclave so that it cannot be spied upon by even the administrators of the machine.

In the case of CVE-2018-18098, the technology potentially allows an attacker to game SGX to gain admin clearance. The problem lies not within the processor’s SGX hardware, though, but in the software layer above it. When enclave code is installed by a normal user on a Windows system, it is possible to hijack the installer, via a process injection attack, to gain admin rights on the box.

It’s another case of fancy hardware protections sunk by vulnerable management code running on top.

The vulnerability was discovered by SaifAllah benMassaoud, a 24-year-old security researcher from Tunisia who told The Register that the exploit could be written in something like a .bat file that a victim could be tricked into opening from an email. When run, the script file could gain admin access on the mark’s machine.

“Once the file is opened by the victim who uses the affected software, it will automatically download and execute a malicious code from attacker’s server to the vulnerable setup version of Intel SGX SDK and Platform Software on the victim’s machine,” the bug-hunter told El Reg.

Below is a video demonstrating the attack. No proof-of-concept exploit has been released to our knowledge.

[Video: a proof-of-concept demonstration of the vulnerability]

Also addressed in the update was the less serious CVE-2018-12155, a data-leakage bug that would potentially let an attacker with local access retrieve information used by the Intel IPP (Integrated Performance Primitives) libraries.

Users running SGX Platform or SGX SDK on Linux and Windows are being advised to update to the latest version (2.2.100 on Windows and 2.4.100 on Linux) in order to get the fixes. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/14/intel_patches_sgx_flaw/

Cops told: No, you can’t have a warrant to force a big bunch of people to unlock their phones by fingerprint, face scans

A US judge last week denied police a warrant to unlock a number of devices using biometric identifiers like fingerprints and faces, extending more privacy protection to device owners than recent rulings have.

The order comes from Northern California Federal District Judge Kandis Westmore in response to a request by the government to search and seize the devices found at a premises in Oakland, California, connected to two suspects.

The suspects are believed to have been involved in an attempt to extort payment from a victim through Facebook Messenger by threatening to release an embarrassing video.

In her order, Judge Westmore found that the authorities did have probable cause to apply for a search warrant, but she denied the request because it was overly broad, not being restricted to the two individuals under investigation.

Instead, the warrant would have allowed authorities to seize and search any device found at the location in question, and to force the unlocking of said devices – no matter who they belonged to – by compelled use of biometric controls: in this case, applying the owner’s finger to a fingerprint sensor if present, or holding the device up to the owner’s face if it relies on a system like Apple’s Face ID or Samsung’s iris scanner.

No more dragnets

The government, the judge said, is free to submit a narrower search warrant application, but she made clear that she believes device owners should not have to testify against themselves, in accordance with US Fifth Amendment protection.

“Even if probable cause exists to seize devices located during a lawful search based on a reasonable belief that they belong to a suspect, probable cause does not permit the Government to compel a suspect to waive rights otherwise afforded by the Constitution, including the Fifth Amendment right against self-incrimination,” she wrote in her order.

Last June, the US Supreme Court ruled in Carpenter v. US that the government violated the Fourth Amendment – which protects against unreasonable searches and seizures – by collecting cell phone location records without a warrant. It was a major privacy victory and a sign that the Supreme Court is willing to address issues raised by changing technology.

Privacy advocates have also argued that authorities should not be able to force people to unlock their devices because compelled production of passwords is testimonial.

Courts have accepted that argument where the evidence being sought was not already known to authorities and other exceptions don’t apply. But they’ve not extended the same protection to biometric identifiers; authorities have been allowed, for example, to apply people’s fingerprints to device fingerprint sensors to open seized devices.

Where in the past judges have drawn a distinction between forcing a person to reveal a known password and the act of applying a person’s finger to a sensor, Judge Westmore sees no difference in this instance. “In this context, biometric features serve the same purpose of a passcode, which is to secure the owner’s content, pragmatically rendering them functionally equivalent,” she wrote.

It’s going to take the Supreme Court to sort this

In a phone interview with The Register, Greg Nojeim, director of the Freedom, Security and Technology Project at the Center for Democracy & Technology, said the courts disagree about whether faceprints and fingerprints can be considered testimonial.

“It is a significant ruling because the other cases in which judges held that fingerprints or faceprints were not testimonial came before the Supreme Court’s decision in the Carpenter case,” he said. “Now you’ve got a magistrate judge relying on the reasoning of the Supreme Court that digital is different.”

Legal scholars have suggested the ruling could be challenged. Last year, when an Ohio man was forced to provide access to his Face ID-protected iPhone (US v. Grant Michalski), USC Gould law professor Orin Kerr said via Twitter that he saw no Fifth Amendment issues. He showed similar skepticism in a tweet on Monday, questioning the judge’s Fifth Amendment reasoning.

Nojeim suggests the Supreme Court may have to clarify the situation.

“Nowadays most everybody is protecting the contents of their cell phones and other devices with passwords,” he said. “It’s therefore likely this issue will be of growing importance to law enforcement and that makes the chance of Supreme Court review higher than it has ever been.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/14/biometric_device_access/

This must be some kind of mistake. IT managers axed, CEO and others’ wallets lightened in patient hack aftermath

The Singaporean government-owned biz responsible for that country’s patient database has fined senior executives, including the CEO, and dismissed two managers, after blunders allowed hackers to siphon off private records.

The punishments were meted out by Integrated Health Information Systems (IHiS), which runs a patient record database for Singapore healthcare organization SingHealth – a system that was hacked in 2018. Miscreants gained access to the network and stole 1.5 million citizens’ health records, including those of prime minister Lee Hsien Loong, who is presumed to have been the ultimate target of the attack.

The debacle was probed by a committee of inquiry, which, among other blunders, revealed last week that IHiS had left Citrix systems needlessly exposed to the internet, and that admin accounts lacked two-factor authentication protection.

IHiS yesterday announced it had dismissed two managers over the incident: a Citrix team lead, and a security incident response manager. The company’s announcement said the two “were found to be negligent and in non-compliance of orders.”

The announcement said the Citrix team lead’s “attitude towards security … introduced unnecessary and significant risks” which should have been mitigated.

IHiS is even harsher towards the security incident response manager, saying he “persistently held a mistaken understanding of what constituted a ‘security incident,’ and when a security incident should be reported.”

The statement continued: “His passiveness even after repeated alerts by his staff resulted in missed opportunities which could have mitigated or averted the effect of the cyber-attack.”

A third individual holding the title “cluster information security officer,” was found to be “unsuitable for the role” and has been demoted and re-assigned.

Five executives, including CEO Bruce Liang, who is also the CIO of Singapore’s Ministry of Health, were sanctioned over the hack with what IHiS called “significant” financial penalties, while the middle managers who supervised the two sacked staffers will pay “a moderate financial penalty.”

The statement reiterated IHiS’s November commitment to 18 security measures which should better protect its systems, and said staff training will be increased to improve vigilance, and make defences more robust.

Three staff – one from database management, one from the software configuration management team, and one security management staffer – not only escaped criticism, but were given letters of commendation for “diligence in handling the incident beyond their job scope and responsibilities.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/14/ihis_singhealth_breach/

Oh, SSH, IT please see this: Malicious servers can fsck with your PC’s files during scp slurps

A decades-old oversight in the design of Secure Copy Protocol (SCP) tools can be exploited by malicious servers to unexpectedly alter victims’ files on their client machines, it has emerged.

F-Secure’s Harry Sintonen discovered a set of five CVE-listed vulnerabilities, which can be abused by evil servers to overwrite arbitrary files on a computer connected via SCP. If you use a vulnerable version of OpenSSH’s scp, PuTTY’s PSCP, or WinSCP, to securely transfer files from a remote server, that server may be able to secretly tamper with files on your local box that you do not expect the server to change.

It’s a subtle threat because, obviously, a malicious SCP server can vandalize any files you fetch: if you download ~/example.txt, the server controls the data it sends back. The key thing here, though, is that a malicious SCP server can also alter files on your local machine other than the ones you fetched, change their access permissions, or plant extra files on your box.

Sintonen explained that because rcp, on which scp is based, allows a server to control which files are sent, and without the scp client thoroughly checking it’s getting its expected objects, an attacker can do things like overwrite the user’s .bash_aliases file. This, in turn, would allow the attacker to run arbitrary commands on the victim’s box when the user does routine stuff, like list a directory.
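
To make the missing check concrete, here is a minimal sketch, assuming the simplified scp control-record format in which a server announces each incoming file with a record such as C0644 <size> <name>. A defensive client compares the announced name against the file it actually requested, which is the validation the vulnerable clients skipped. It is illustrative only, not the OpenSSH patch.

    # A minimal sketch of the client-side check vulnerable scp clients skipped.
    # It assumes the simplified "C<mode> <size> <name>" control-record format.

    def parse_c_record(record: bytes):
        # Split an scp 'C' record such as: C0644 1234 example.txt (newline-terminated).
        mode, size, name = record[1:].rstrip(b"\n").split(b" ", 2)
        return int(mode, 8), int(size), name.decode()

    def is_expected(record: bytes, requested_name: str) -> bool:
        # Only plain file records are acceptable here.
        if not record.startswith(b"C"):
            return False
        _, _, name = parse_c_record(record)
        # Reject empty names, dot tricks and path separators outright.
        if name in ("", ".", "..") or "/" in name:
            return False
        # The crucial check: the announced name must match the file we asked for.
        return name == requested_name

    # An honest server echoes the requested name; a malicious one could send
    # a record naming .bash_aliases, and a lax client would write it out verbatim.
    assert is_expected(b"C0644 1234 example.txt\n", "example.txt")
    assert not is_expected(b"C0644 99 .bash_aliases\n", "example.txt")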

“Many scp clients fail to verify if the objects returned by the scp server match those it asked for. This issue dates back to 1983 and rcp, on which scp is based,” Sintonen explained in his disclosure this month.

“A separate flaw in the client allows the target directory attributes to be changed arbitrarily. Finally, two vulnerabilities in clients may allow server to spoof the client output.”

Here’s a summary of the cockups:

  1. CVE-2018-20685 (scp): “The scp client allows server to modify permissions of the target directory by using empty (‘D0777 0 \n’) or dot (‘D0777 0 .\n’) directory name.”
  2. CVE-2019-6111 (scp) and CVE-2018-20684 (WinSCP): “The server chooses which files/directories are sent to the client. A malicious scp server can overwrite arbitrary files in the scp client target directory. If recursive operation (-r) is performed, the server can manipulate subdirectories as well (for example overwrite .ssh/authorized_keys).”
  3. CVE-2019-6109 (scp and PSCP): “The object name can be used to manipulate the client output, for example to employ ANSI codes to hide additional files being transferred.”
  4. CVE-2019-6110 (scp and PSCP): “A malicious server can manipulate the client output, for example to employ ANSI codes to hide additional files being transferred.”

The researcher said he would post proof-of-concept examples for the bugs at a later date. He alerted the developers of scp, WinSCP, and PSCP in August last year.

CVE-2018-20685 (vulnerability 1) was patched in OpenSSH’s scp in mid-November, though that fix has not yet appeared in a formal release. That bug, plus CVE-2019-6111 (2), CVE-2019-6109 (3), and CVE-2019-6110 (4), apparently remain unpatched in the latest release, version 7.9, which shipped in October. If you’re worried about an evil server pwning you, he has a source-code fix here you can apply by hand, although there are caveats. Alternatively, systems can be configured not to use SCP.

WinSCP was fixed in version 5.14, released last October, and the current version is 5.14.4. It seems unlikely that PuTTY has been fixed yet, since its last release was version 0.70 in July 2017.

Sintonen has a history of January disclosures: last year, he broke news of a bug in Intel’s Active Management Technology, and the previous year it was QNAP NAS boxen. ®

Additional reporting by Richard Chirgwin.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/15/scp_vulnerability/

Cryptomining Continues to Be Top Malware Threat

Tools for illicitly mining Monero and other cryptocurrencies, led by the Coinhive browser miner, dominate the list of most prevalent malware in December 2018.

Enterprise organizations appear unlikely to get respite from cryptomining attacks anytime soon if new threat data from Check Point Software is any indication.

For the thirteenth month in a row, attacks involving the use of cryptomining malware topped the security vendor’s list of most active threats worldwide in December. Coinhive, a browser-based Monero miner, once again emerged as the most prevalent malware sample in Check Point’s report, impacting 12% of organizations worldwide.

Out of the top 10 most prevalent malware samples in Check Point’s latest monthly threat summary, the four most active tools—and five in total—were cryptominers.

The persistent attacker interest in crypto malware—despite the overall decline in the value of major cryptocurrencies—is not entirely surprising.

“The main advantage of cryptomining malware for the attacker is its ability to create direct profit without any user interaction and without elaborate mechanisms such as in the cases of ransomware and banking Trojans,” says Omer Dembinsky, data research team leader at Check Point.

In many cases, users with systems infected with cryptocurrency malware don’t even realize they have a problem until hardware performance gets severely degraded. Crypto tools running on higher-end enterprise servers and endpoint systems can be hard to spot for the same reason.
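
By way of illustration (and only that), a crude check for the symptom described above might sample per-process CPU usage over a short window and flag anything pegged near full utilisation. The sketch below assumes the third-party psutil package is installed; the threshold and window are arbitrary, and sustained high CPU is a prompt for investigation, not proof of mining.

    # Illustrative only: sustained, near-total CPU use by a single process is
    # the performance symptom described above, not proof of cryptomining.
    import time
    import psutil  # third-party; pip install psutil

    THRESHOLD = 90.0   # per-process CPU percent treated as suspicious (arbitrary)
    WINDOW = 30        # seconds to measure over (arbitrary)

    procs = list(psutil.process_iter(["name"]))
    for p in procs:
        try:
            p.cpu_percent(interval=None)   # first call only primes the counter
        except psutil.Error:
            pass

    time.sleep(WINDOW)

    for p in procs:
        try:
            usage = p.cpu_percent(interval=None)  # average since the priming call
        except psutil.Error:
            continue
        if usage >= THRESHOLD:
            print(f"{p.info['name']} (pid {p.pid}) averaged {usage:.0f}% CPU over {WINDOW}s")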

“It works in the background on personal computers, mobile phones, servers, and basically any machine with computing power—so anyone and everyone is a potential target,” Dembinsky says.

Not surprisingly, many of the most exploited vulnerabilities in December 2018 were also related to illegal cryptomining activity. Topping the list was CVE-2017-7269, a buffer-overflow vulnerability in a Microsoft IIS component that was first disclosed nearly two years ago and long ago patched as well.

The reason the vulnerability remains a popular exploit target is that it gives attackers a way to infiltrate high-end servers with lots of processing power for cryptomining, Dembinsky says. “Organizations should make sure they apply the most recent updates and patches on their systems in order to not be susceptible to attacks by known vulnerabilities.”

Crypto tools are the most prolific, but not the only, threat that Check Point observed last month. Also noteworthy was the sudden reemergence of SmokeLoader, a malware downloader that attackers have previously used to distribute especially pernicious malware such as Trickbot, the Panda banking Trojan, and the AZORult information stealer. Security researchers have been tracking the threat since at least 2011, but it had never before broken into Check Point’s list of the 10 most active threats.

A surge of activity involving SmokeLoader in Ukraine and Japan propelled the malware from 20th place just last month to ninth place in Check Point’s list. But Dembinsky says Check Point researchers have not been able to figure out the specific reason for the renewed interest in the malware.

For businesses, the sudden prominence of a malware tool first seen some eight years ago highlights the need for constant vigilance. “This means that organizations should have the most up to date and advanced security measures applied as the next surge could come from any of the numerous threats out there—or from something brand new,” Dembinsky notes.

The remaining malware samples on Check Point’s top 10 list are all multi-purpose code being distributed in multiple ways. They include Emotet, a Trojan that is being used for malware distribution, and Ramnit, a banking Trojan that has been around for some time.

While malware samples on Check Point’s list do fall out of the top 10 over time, there is surprisingly little churn over short periods. The same threats tend to remain on the list month after month, though occasionally there are sudden surges of specific threats, Dembinsky says.

“We see there is a very wide range of threats, coming from multiple attack vectors—Web, emails, vulnerabilities,” he notes. “Organizations must use a multi-layered and advanced cybersecurity strategy, both on the technical side and on the educational side.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/cryptomining-continues-to-be-top-malware-threat/d/d-id/1333651?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Goddamn the Pusher man: Nominet kicks out domain name hijack bid

Nominet has thrown out an attempt at reverse domain name hijacking after some, er, pushy Brits tried seizing their old web address from a fast-fingered fellow in Romania.

Pusher Ltd, having forgotten to renew its registration, failed in its attempt to take back control of the domain pusher.co.uk from Lee Owen, a resident of Romania.

The UK firm’s later attempt to use Nominet’s quasi-legal Dispute Resolution Service (DRS) to get it back was ruled to be an attempt at reverse domain name hijacking – a complicated way of saying Pusher had tried to trick the DRS into giving it something it wasn’t entitled to have.

Normally DRS cases are brought against domain-squatters: the sort of people who register domains such as hmrcsubmitareturn.co.uk with the obvious intent of doing something dodgy. However, the dispute resolution service also deals with domain-based trademark infringement cases, which is what Pusher claimed Owen was doing when he registered pusher.co.uk on 15 October 2018.

Unfortunately for the Pusher people, Nominet didn’t have tombstones in its eyes when it ruled that the company had simply forgotten to renew its domain and missed out on buying back control of it.

“While the Complainant has trade mark rights, they are very narrow and are restricted to computer software and related goods and services,” ruled Nominet expert Tony Willoughby, who delivered a written judgment of the case. “Nothing that the Respondent has done infringes the Complainant’s trade mark rights.”

While Pusher had complained that Owen’s ownership of pusher.co.uk was an abusive registration “clearly made with the intention of seeking payment from us as the registered trademark owners” (which would be a breach of Nominet’s terms and conditions for dot-UK domain names), even Willoughby commented: “These contentions are long on bare assertion and very short on supporting evidence.”

Owen shot back by complaining that Pusher was trying to use the quasi-legal process “in bad faith in an attempt to deprive him of the domain name”. Willoughby agreed, ruling that Pusher had brought a “speculative complaint” because it was “frustrated at the consequences of its failure to renew its registration of the Domain Name”.

The simple fact of the matter is that the Complainant ignored all available advice and numerous warnings and has suffered the consequences. All of it is down to the Complainant’s failures. None of it can sensibly or fairly be laid at the door of the Respondent.

Pusher’s domain name hijacking attempt was dismissed. The ruling for case number D00020783 can be read on the DRS website.

Domain name hijacking is a term traditionally associated with hacking, as in the case of French domain registrar Gandi, which in the summer of 2017 lost control of 751 customer domains after an unidentified person got hold of login creds allowing them to wreak havoc. In a way, misuse of the DRS could be seen as trying to do the same thing through legal means, also known as “lawfare”.

Back in 2003, Britain’s Driver and Vehicle Licensing Agency tried using a multinational forerunner of the Nominet DRS to deprive an American company, DVL Automation Inc, of the domain dvla-dot-com (warning: do not visit the URL, it has long passed out of DVL’s control and is now serving auto-install malware). That also failed. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/14/pusher_ltd_domain_name_hijacking_fail/