STE WILLIAMS

US Banks Targeted with Trickbot Trojan

Necurs botnet spreads Trickbot malware to US financial institutions, while new Emotet banking Trojan attacks discovered – signalling increasingly complex attacks on the industry.

The Necurs botnet has begun delivering the Trickbot banking Trojan to financial institutions in the United States, a sign of increasingly larger and more complex attacks on the industry.

Trickbot, which specifically threatens businesses in the financial sector, has been behind man-in-the-browser (MitB) attacks since 2016. Until now, its webinject configuration was only used to hit organizations outside the US.

Researchers at Flashpoint discovered a new Trickbot spam campaign, called “mac1,” on July 17. The latest iteration is fueled by the Necurs botnet and was developed to hit 50 additional banks, including 13 based in the US. Necurs, one of the largest spamming botnets in the world, emerged in 2012 and has since become known for propagating spam campaigns.

“It sends a massive amount of spam email, one of which is recently starting to spread Trickbot,” says Flashpoint malware researcher Paul Burbage, who has been monitoring Necurs for the past couple of years.

Mac1 has an expanded webinject configuration, which it uses to hit customers of financial institutions both in the US and abroad. Other victim countries include the UK, New Zealand, France, Australia, Norway, Sweden, Iceland, Canada, Finland, Spain, Italy, Luxembourg, Switzerland, Singapore, Belgium, and Denmark.

So far, Mac1 has driven at least three different spam waves, Flashpoint reports. The first contained an HTML email disguised as a bill from an Australian telecommunications company. These contained a Zip-archived Windows Script File attachment with obfuscated JavaScript code. When clicked, the files download and execute the Trickbot loader.

One of the main concerns with Trickbot is account takeover and fraud, which may increase among US financial institutions as the malware spreads. Burbage says the main significance of Mac1 is how far and wide it’s being spread.

While its primary focus is financial institutions, experts anticipate other companies will eventually be at risk.

“We think it’s capable of developing new features in the future,” says Flashpoint senior intelligence analyst Vitali Kremez. “For now, it’s a banking Trojan with potential to move beyond that.”

The latest iteration of Trickbot, and its spread to the United States, indicate its authors’ sophistication, Kremez continues. Flashpoint believes a Russian-speaking gang is behind the malware.

“I’ve been constantly amazed by the sophistication and resourcefulness of the Trickbot gang,” he says, noting Necurs is only used among advanced actors. “They constantly develop means to proliferate the malware and bypass spam filters … they also have the infrastructure to proliferate the malware at scale.”

Trickbot is considered the successor to the Dyre banking Trojan, says Kremez, citing similarities between their infrastructure and setup of their configuration files. It’s possible the Trickbot author was either deeply familiar with Dyre or reused old source code. The threat actors behind Dyre have historically targeted Western financial institutions in the US, UK, and Canada.

Burbage and Kremez anticipate Trickbot will continue to evolve and target financial customers both in the US and around the world.

Trickbot’s expansion isn’t the only sign pointing to more dangerous banking Trojans. Fidelis Cybersecurity today released findings on its analysis of the Emotet loader, which was initially used for credential theft but is now used to deliver banking Trojans.

Emotet is an active threat using mass email spam campaigns to deliver malware. Fidelis found Emotet uses spam for propagation using basic social engineering techniques. Some samples have internal network propagation components, or spreaders, built in – a strategy banking malware authors have rarely used in recent years.

Early banking Trojans were crude and built to work against as many targets as possible, says John Bambenek, threat systems manager at Fidelis Cybersecurity. Emotet and others now use injects to customize the threat to specific banks’ look and feel, he explains.

“An entire ecosystem has developed around this type of malware involving exploit writers, malware delivery systems, inject writers, money mules, and underground criminal marketplaces” like Alphabay, Bambenek says.

Fidelis reports it’s not surprising to see cybercriminals include spreaders in their campaigns after the widespread WannaCry and Petya attacks demonstrated their effectiveness in driving infections across enterprises. More malware authors are adding functionality based on attacks in the news, a trend we may well see more of in the future.

Black Hat USA returns to the fabulous Mandalay Bay in Las Vegas, Nevada, July 22-27, 2017. Click for information on the conference schedule and to register.



Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/attacks-breaches/us-banks-targeted-with-trickbot-trojan/d/d-id/1329417?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Healthcare Industry Lacks Awareness of IoT Threat, Survey Says

Three-quarters of IT decision makers report they are “confident” or “very confident” that portable and connected medical devices are secure on their networks.

Healthcare networks are teeming with IoT devices from glucometers to infusion pumps, but a study found that the majority of IT decision makers may be operating with a false sense of security regarding their ability to protect these devices from cyber attacks.

More than 90% of healthcare IT networks have IoT devices connected to their systems, according to a survey of more than 200 healthcare IT decision makers released Wednesday by ZingBox.

“Typically you will see 10 to 15 IoT devices per bed in a hospital,” says Xu Zou, CEO of ZingBox, defining a healthcare IoT device as anything that is portable and connected to the Internet.

But despite the large presence of these devices, the healthcare industry is largely operating under the assumption that traditional security measures for laptops and servers are fine for securing the medical Internet of Things, the report finds.

For example, 70% of survey participants give their seal of approval for using traditional security technology for medical IoT devices, and 76% say they are “confident” or “very confident” that these portable and connected medical devices are secure on their networks.

The difference between traditional IT security and IoT-specific security is that attackers aim to gain access to the portable devices themselves, even though the payout of information could be greater if they attacked a server storing a database, Zou says.

“A lot of IoT medical devices are not protected and secure, so they are easier to gain access and control,” Zou explains. “The attackers can control them and use them as a botnet.”

This type of attitude may explain the recent results in a Ponemon Institute study, which found that while 67% of medical device makers expect an attack on their devices within the next 12 months, only 17% are taking significant steps to prevent it.

Mobile Security: IoT’s Answer?

Although there are a number of similarities between mobile devices and medical “things,” the security needs of the two are different, Zou says.

One of the key differences is that security patches and updates can be pushed to a mobile device, but the same is usually not possible for IoT medical devices, says Zou.

“You can push security software onto a laptop or server, but you can’t do that with an infusion pump,” he notes.

Additionally, medical devices that are used for direct patient care are heavily regulated by the Food and Drug Administration (FDA). As a result, a medical device maker may be hesitant to receive third-party software pushed to the device for fear it would nullify or affect the FDA certification it receives for its device, Zou says, noting the certification process can sometimes take five years or so to achieve.

As a result of these regulatory hoops, not one medical device maker uses third-party security software for their IoT devices, he says, adding that, according to Gartner, by 2020, 25% of attacks will be directly on the device.

“CISOs need to find a way to figure out how many IoT devices they have on their network,” suggests Zou. “The No. 1 challenge is gaining visibility into this. You cannot protect what you can’t see.” 




Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/vulnerabilities---threats/healthcare-industry-lacks-awareness-of-iot-threat-survey-says/d/d-id/1329412?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

BSidesLV: What’s on the agenda in Las Vegas

Those attending Black Hat and DEF CON in Las Vegas next week should also check out Security B-Sides (BSidesLV), where talks will range from threats against industrial control systems and mobile apps to how big data and deep learning can be used to mount a stronger defense. The event will also be heavy on talks about how to develop one’s career in the industry.

Two of Sophos’ data scientists will give talks at the event, held July 25 and 26 at Tuscany Suites. Other talks include “Something Wicked: Defensible Social Architecture in the context of Big Data, Behavioral Econ, Bot Hives, and Bad Actors,” by San Francisco-based security professional Allison Miller, and “Your Facts Are Not Safe with Us: Russian Information Operations as Social Engineering,” by Meagan Keim, a graduate student from the University of Maryland University College.

Sophos talks

Sophos chief data scientist Joshua Saxe will present “The New Cat and Mouse Game: Attacking and Defending Machine Learning Based Software,” about ways the bad guys can manipulate machine learning to go on the attack. Saxe describes it this way in his talk description:

Machine learning is increasingly woven into software that determines what objects our cars recognize as obstacles, whether or not we have cancer, what news articles we should read, and whether or not we should have access to a building or device. Thus far, the technology community has focused on the benefits of machine learning rather than the security risks. And while the security community has raised concerns about machine learning, most security professionals aren’t also machine learning experts, and thus can miss ways in which machine learning systems can be manipulated.

My talk will help to close this gap, providing an overview of the kinds of attacks that are possible against machine learning systems, an overview of state-of-the-art methods for making machine learning systems more robust, and a live demonstration of the ways one can attack (and defend) a state-of-the-art machine learning based intrusion detection system.

Principal Sophos data scientist Richard Harang will present “Getting insight out of and back into deep neural networks.” He describes the talk this way:

Deep learning has emerged as a powerful tool for classifying malicious software artifacts, however the generic black-box nature of these classifiers makes it difficult to evaluate their results, diagnose model failures, or effectively incorporate existing knowledge into them.  In particular, a single numerical output – either a binary label or a ‘maliciousness’ score – for some artifact doesn’t offer any insight as to what might be malicious about that artifact, or offer any starting point for further analysis.  This is particularly important when examining such artifacts as malicious HTML pages, which often have small portions of malicious content distributed among much larger amounts of completely benign content.

In this applied talk, we present the LIME method developed by Ribeiro, Singh, and Guestrin, and show – with numerous demonstrations – how it can be adapted from the relatively straightforward domain of “explaining” text or image classifications to the much harder problem of supporting analysts in performing forensic analysis of malicious HTML documents.  In particular, we can not only identify features of the document that are critical to performance of the model (as in the original work), but also use this approach to identify key components of the document that the model “thinks” are likely to contain malicious elements.

Other features

BSidesLV will also include a lockpick village, resume reviews and The New Hacker Pyramid, a contest that used to be presented at DEF CON but moved to BSidesLV a couple of years ago.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3CjqcUanOcw/

Twitter users targeted by an army of 86,262 sex-starved bots

Last week, an army of sex-crazed robots invaded Twitter, looking to be #fondled, for somebody to take their #virgin, and propositioning young men in terms too vulgar to repeat. And perhaps they can be forgiven for the broken English, as the more than 8.6m tweets, spurted from 86,262 accounts, were apparently coming from eastern Europe.

That’s according to ZeroFOX, the security firm that ratted out the chirpy bots to security teams at Twitter and Google. The teams promptly took down the accounts and the links they were sending, which led to a network of spam porn websites.

In fact, it’s the same porn/dating websites network – linked to Deniro Marketing, which owns the domains the tweets were pimping – that a large porn spam campaign was linking to, as uncovered by security journalist Brian Krebs a month ago.

Deniro Marketing is based in California. It hasn’t responded to requests for comment from news outlets, including from Krebs and Gizmodo.

In 2010, the company was part of a class action lawsuit in which plaintiffs claimed to have been sold a raw deal, drawn to an online dating site via “spam, internet pop-up ads, or social networking scams”, induced to sign up for free, and then encouraged to upgrade to paid memberships.

Online dating services – or adultery, for that matter – are legal. Dragging people in via all those alleged scams is not.

At any rate, Deniro still hosts websites and runs affiliate marketing programs.

If it’s based in California, why the choppy English and Cyrillic letters? ZeroFOX says that “a large chunk” of the Twitter accounts’ self-declared user languages were Russian.

Zack Allen, manager of threat operations at ZeroFOX, told Krebs that the humans behind the bots probably aren’t part of Deniro Marketing. Rather, they’re likely affiliates.

Krebs gave an example of a dating affiliate program, the NSFW site datinggold[dot]com. It invites marketers to make big bucks by bringing in signups for its array of online “dating” sites that promote cheating, hookups and affairs: “AdsforSex”, “Affair Hookups” and “LocalCheaters”, to name a few.

Datinggold is, in fact, behind two of the five domains that the sex bots’ Twitter links eventually resolve to, after a series of redirects meant to obscure the links’ destinations from Twitter detection.
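Unwinding such chains is conceptually simple: follow each hop until you reach a URL that no longer redirects. The toy resolver below works over an in-memory lookup table rather than live HTTP requests, purely to illustrate the idea – the domains and the `resolve_chain` helper are made up for this sketch, not taken from ZeroFOX’s research:

```python
# Toy redirect-chain resolver: follows hops in a lookup table until a URL
# stops redirecting, with a hop limit so cycles can't loop forever.
def resolve_chain(url: str, redirects: dict, max_hops: int = 10) -> list:
    """Return the full chain of URLs visited, starting at `url`."""
    chain = [url]
    while url in redirects and len(chain) <= max_hops:
        url = redirects[url]
        chain.append(url)
    return chain

# Hypothetical chain mimicking the obfuscation described above:
# shortener -> redirector -> redirector -> final destination.
hops = {
    "https://goo.gl/short": "http://redirector-one.example/a",
    "http://redirector-one.example/a": "http://redirector-two.example/b",
    "http://redirector-two.example/b": "http://dating-site.example/signup",
}
print(resolve_chain("https://goo.gl/short", hops)[-1])
```

A real implementation would issue HTTP requests with redirects disabled and read each `Location` header instead of consulting a table, but the chain-walking logic is the same.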

ZeroFOX has dubbed this entourage of sexbot sadsackery Siren, as in, the half-chicken, half-buxom-babes from Greek mythology whose dulcet tones lured horny sailors into jagged rocks.

The 86,262 accounts all had profile pictures of young women whose tweets included sexually suggestive invitations to join them for a sex chat. The vast majority – 98.2% – followed this pattern:

  1. Sexy opening phrase.
  2. Exclamation point!
  3. Social engineering phrase luring recipients to click a link.
  4. A shortened goo.gl URL.
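A template that rigid is easy to fingerprint. The sketch below is purely illustrative – the structural check and function name are assumptions for this article, not ZeroFOX’s actual detection logic – but it shows how a defender might flag candidate tweets matching the four-part pattern:

```python
import re

# Illustrative check for the four-part Siren template:
# <phrase>! <lure phrase> <goo.gl short link at the end of the tweet>
SHORT_URL = re.compile(r"https?://goo\.gl/[A-Za-z0-9]+$")

def looks_like_siren_tweet(text: str) -> bool:
    """Return True if a tweet matches the rigid bot template."""
    text = text.strip()
    match = SHORT_URL.search(text)
    if not match:
        return False                      # must end with a goo.gl link
    body = text[:match.start()].strip()
    return "!" in body                    # phrase + exclamation before the lure

print(looks_like_siren_tweet("I'm so lonely tonight! chat with me here https://goo.gl/Ab3dE"))
print(looks_like_siren_tweet("Quarterly report is out https://example.com/q2"))
```

Real bot detection weighs many more signals (account age, posting cadence, follower graph), but when 98.2% of tweets share one skeleton, even a crude structural filter goes a long way.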

About 30m Twitter users fell for it. That number can be gleaned from the trackable, Google-shortened URLs that the bots were using. Some portion of victims undoubtedly forked over their payment card information, as well. If they did, they fell for what the FBI dubs a romance scam, though these Twitter honeys aren’t trying to lure you into a romance.

We’ve given out plenty of tips for how to avoid forking over money to internet cutie pies.

The same tips apply here, though what the bots are peddling isn’t meant to tug on your heartstrings, per se. They’re going after another part of your anatomy.

Note that the links in the Siren tweets didn’t contain malware. Nor did they appear to be phishing attempts. That’s the good news: with more than 30m clickthroughs, that could have been one nasty malware tsunami.

All that’s wrong with those sexy babes is that they were liars.

As in, not babes, and most decidedly not even human.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/C2ARSx8aq9I/

Facebook has got your number – even if it’s not your number

Do you value your Facebook account? Have you linked your phone number to your Facebook account? You could lose access to it if you aren’t careful, according to James Martindale, who discovered a worrisome Facebook authentication vulnerability.

Facebook encourages you to give it your phone number “to help secure your account”, and you can link multiple numbers to your account. That means that you – or anyone with access to your number – can take control of your account.

However, phone numbers, especially cellphone numbers, are re-assigned to other people and businesses on a constant basis – which means if you change your number, your old number may well be re-assigned to someone else.

And that means that whoever has your old number could potentially take over your Facebook account if that old number is still linked to it.

Martindale discovered the bug while trying to port his phone number to Google Voice, explaining:

I got a really photogenic phone number from a VoIP phone carrier called FreedomPop. I wanted to move this number to Google Voice. Unfortunately Google Voice can’t port in from landline numbers, and VoIP numbers are pretty much landline numbers. In order to pull this off, I signed up for a prepaid plan from T-Mobile. The plan was to port my number from FreedomPop to T-Mobile, and then from T-Mobile to Google Voice.

My T-Mobile SIM card arrived and I stuck it into my phone. While I looked over the activation instructions that came with the SIM card, I got two texts. The first is from somebody I don’t know, and the second is one of those texts Facebook sends out when you haven’t logged in for a while… except I hadn’t added this phone number to Facebook yet.

Knowing that you can search Facebook with a phone number, Martindale looked for the number, which was associated with an account, and then tried to sign in to that account.  He went on:

Of course it didn’t work. So I clicked on Forgot your password. The recovery option with the completely visible phone number was the one I entered. Facebook texts me a code, I enter it, and I’m logged in.

Facebook didn’t consider the vulnerability worthy of its bug bounty program. Here’s the message they sent Martindale:

There are situations where phone numbers expire and are made available to someone other than the original owner. For example, if a number has a new owner and they use it to log into Facebook, it could trigger a Facebook password reset. If that number is still associated with a user’s Facebook account, the person who now has that number could then take over the account.

While this is a concern, this isn’t considered a bug for the bug bounty program. Facebook doesn’t have control over telecom providers who reissue phone numbers or with users having a phone number linked to their Facebook account that is no longer registered to them.

Facebook accounts are often sold on the Dark Web, so Martindale concluded that this vulnerability can be exploited to take over millions of Facebook accounts and make a lot of money.

If you want to avoid having your Facebook account hijacked, make sure that any phone numbers you’ve linked to your account are currently being used by you, and make sure that you have set Facebook to alert you about unrecognized logins. Martindale hopes that by publicizing this vulnerability, Facebook might take action.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/II4HcANFxxU/

Yeah, WannaCry hit Windows, but what about the WannaCry of apps?

WannaCrypt crippled 230,000 Windows PCs internationally, hitting unpatched Windows 7 and Windows Server 2008 and computers still running Microsoft’s seriously old Windows XP, though the latter wasn’t responsible for its spread.

The initial reaction was a predictable rush to patch Microsoft’s legacy desktop operating systems.

Laudable, if late, but slow down: while the impact of WannaCrypt was huge, it was also relatively exceptional. Windows 7 ranks at number 14 and XP at number 17 in CVE Details’ top 20 of software with the most “distinct” vulnerabilities, compiled from the US government-funded CVE database.

Putting aside the Linux kernel, which tops the CVE list this year (in previous years it struggled to make the top 10), what is instructive is the healthy presence of applications on that list versus operating systems like Windows. There are eight in the top 20. Indeed, over the years, it’s been Internet Explorer, Word and Adobe’s Flash Player that have played an active part in throwing open systems and leaving IT pros rushing to patch.

If applications are nearly half the problem, what are our safeguards? Operating systems tend to be supported for a surprisingly long time – ten years or more for some Linux distributions. Can the same be said for applications? Let’s look at a couple of staples.

Microsoft’s SQL Server 2005, for instance – hugely popular and well known. It was released in … well, 2005. The end of mainstream support – so long as you’d kept up with your Service Pack installs – was in 2011, with extended support finishing in 2016.

An 11-year lifecycle for an application is really not that bad, and it means that if you’re so inclined you only have to think about the upheaval of a full version upgrade every decade or so. Assuming, that is, that you don’t feel that the effort of an upgrade is worth the performance benefits it might give you, as per the comment that SQL Server 2016 “just runs faster” than what came before.

And Microsoft isn’t alone in supporting its codebases for a fair while. The other obvious place to look is Oracle’s product lifecycle doc. Now, Oracle has jumped about a bit. Looking at its core database: version 8.1.7 had a lifetime for extended support of a tad over six years; 9.2 was eight years; 11.2 leapt to more than 11 years; and with 12.2 we’re back to eight (right now both 11.2 and 12.2 are within extended support).

But you need to patch

What we’re seeing, then, is that at least for a small sample of well-known apps, there’s not a huge worry about having to risk moving to a new version every few years. But it’s very easy to get complacent (yes, that word again) and use this as an excuse not to perform updates. Particularly with applications that sit inside the corporate network and aren’t accessible (directly, at least) from the internet, it’s understandable that a company would do a risk assessment and decide that the inconvenience of an outage outweighed the benefit of installing a particular security patch. And that may be true, but patches and updates wouldn’t exist if there was no need for them.
Let’s pick a couple from Microsoft’s SQL Server 2014 Service Pack 2 page: KB3172998 is labelled: “FIX: A severe error occurs when you use the sys.dm_db_uncontained_entities DMV in SQL Server 2014”, for example; or there’s KB3170043 which fixes “Poor performance when query contains anti-join on a complex predicate in SQL Server 2014 SP1”. Patches fix stuff and/or make stuff better, so why wouldn’t you want to partake?

Patching equals supported

If you don’t patch your systems, there’s also a decent chance that the vendor will stop supporting them. Moving away for a moment from applications, I vividly remember working for a company that had a big pile of kit from a particular manufacturer. If something broke, the support call went like this:

Us: “Hi, we have a fault on a server.”

Vendor: “OK, please send us a [diagnostics] log.”

Us: “Here you go.”

Vendor: “Ah, some of your firmware is out of date; please upgrade it all and then call us back if it’s still broken.”

The vendor of the phone system we used was similarly firm, though happily less restrictive: you were supported if and only if you were running the current version of the software or the immediately previous one. Anything older and you were on your own.

Now, if you’ve looked at the Microsoft SQL Server support lifecycle link from above, you’ll have noticed that product’s supportedness depends on the service packs you’ve installed.

SQL Server 2014’s support end date has already passed, you’ll see. But Service Pack 1 is supported until later this year, and Service Pack 2 until 2019 (or 2024 if you go for extended support). So at the very least you need to be keeping up with the service packs on your apps, or you’ll find yourself unsupported and the patches – functional and security – will dry up before you know it. You need to be similarly careful with non-Microsoft apps, too: check out the minor version numbers on Oracle 12, for instance, and you’ll see that 12.1 is about to hop into its death bed (for basic support, anyway) next year while 12.2 lives on until 2022.

Designing in upgradeability

Back in my network admin days, I became used to being able to upgrade Cisco ASA firewalls with zero downtime. This was because: (a) we ran them in resilient pairs; and (b) the devices would continue to run as a cluster so long as the versions weren’t too dissimilar.

The same applies to many applications: the manufacturers know that you hate downtime and so they make their products such that you can do live or near-live upgrades, and all you have to do is design your instances of those products to exploit all that funky upgrade-ability they included. Back at our SQL Server example, for instance, there’s a hefty tome on how you go about upgrading, which discusses minimising downtime for different types of installation.

When I talked last time about ongoing OS patching, I pointed out that in these days of virtual systems the hypervisor writers have handed you a big stack of get-out-of-jail-free cards when it comes to patching and updating. The same logic applies to applications: you can test updates in a non-live setting by cloning your systems into a test network, and you can protect against duff patches by snapshotting the live virtual servers before you run the update. There are few things easier in this world than rolling back a server to the previous snapshot. So even when you can’t entirely eliminate downtime, you can at least minimise it.

Accepting the downtime

Patches for big, core applications aren’t a light-touch affair (if you have legacy stuff there’s a good chance that it wasn’t designed and installed with live updates in mind). I’ve seen people do core application patching (particularly stuff like Microsoft Exchange servers) over a number of days, with multiple downtimes, and so you have to plan and communicate properly.

What you shouldn’t do, though, is be beaten into postponing such controlled outages indefinitely just because the business users moan. Of course, you need to avoid unnecessary outages but the point is that some outages aren’t just advisable, they’re essential. We’ve all had “important” users (this generally means “self-important”, actually) who demand special treatment because (say) they don’t want their machines to reboot overnight. The correct response to such individuals is a two-word one, the second being “off”.

Yes, you need to keep outages to a sensible level, but you absolutely shouldn’t be put off having the right number of them. Not least because in many cases you can demonstrate how few people were using a particular app at the time of an update, and hence hopefully persuade the users that the impact really isn’t as bad as they think.

So… the complacency of application patching

Merriam-Webster defines “complacency” as: “Self-satisfaction especially when accompanied by unawareness of actual dangers or deficiencies” or “an instance of usually unaware or uninformed self-satisfaction.”

In a security context that’s a very scary thing… in fact in an IT manager or sysadmin context it’s equally scary. On one hand it might be that you have systems that you have left unpatched because you decided it was OK to do so. On the other hand you may be completely patched up-to-date and sit there smugly not realising that you’ve (say) made something insecure or unstable through a configuration error or some such.

So just as it was inexcusable for operating system maintenance, complacency is also a sin for the apps. Most of the updates you’ll do are relatively straightforward, and the rollback is often equally simple. So you’ll usually point at the business people as your excuse – either they’ve not given you the resource you need in order to do all the updates, or they’ve told you you can’t have the downtime.

And the day your unpatched system gets ransomwared or DDoSed and you’re down for a week will be the day you wish you’d been a little bit more insistent on a few hours of reboots here and there. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/20/application_level_patching/

HMS Frigatey McFrigateface given her official name

The first of the Royal Navy’s new Type 26 frigates has been named HMS Glasgow, recycling the name for the fourth time in the last 100 years.

“The name Glasgow brings with it a string of battle honours. As one of the world’s most capable anti-submarine frigates, the Type 26 will carry the Royal Navy’s tradition of victory far into the future,” said the First Sea Lord, Admiral Lord Philip Jones, naming the as-yet-unbuilt warship this morning.

All future Type 26s will be named after cities, making them the City class – a step up from when the names were previously used as part of the Town class of yore. Numerous wags on Twitter suggested that the ship would be named HMS Frigatey McFrigateface, in a nod to the Natural Environment Research Council’s epic public naming contest blunder.

“This is great news for the workers on the Clyde: first-in-class builds are always special, but I know from visiting BAE Systems earlier this year that they are raring to go on a world-class project that will showcase their skills and the ‘Clyde built’ brand for a new generation,” Martin Docherty-Hughes, the Scottish National Party MP for West Dunbartonshire, told The Register.

The Type 26s are the future of British sea power, being intended to replace the venerable old Type 23 frigates that make up the backbone of the Navy’s warfighting fleet. In British service, frigates are broadly equipped to fight other surface warships and as anti-submarine vessels, a particular British speciality.

Although today marks a fresh milestone in the Type 26 project – at the beginning of this month the Ministry of Defence finally got over itself and placed the £3.7bn order for the first three ships of the planned class of eight – the whole ordering-new-ships thing is, predictably, mired in delays and cost overruns, mainly caused by MoD dithering and government wanting to avoid spending large sums of money at politically inopportune moments.

In addition, the number of ships has been slashed: Blighty was originally set to receive 13 Type 26s, split between specialised anti-submarine variants and general purpose ships, until some bright spark in the MoD decided that the GP version should be dropped and replaced with a cheap ‘n’ cheerful equivalent named the Type 31e (“e” for “export”). Despite that, BAE Systems, the Type 26’s builders, still hope they can sell their ships to other nations such as Canada.

The last HMS Glasgow was a Type 42 air defence destroyer which was decommissioned in 2005. She had an eventful career, including serving in the Falklands War, where she took a direct hit from an Argentine bomb that mercifully failed to explode.

Further back in time, the Second World War saw an HMS Glasgow, a Town-class light cruiser and sister ship to the preserved HMS Belfast, which is permanently moored in central London. In the First World War an HMS Glasgow fought in the little-known Battle of the Falkland Islands in 1914.

Naming warships is an inherently political process. The Royal Navy has, particularly in the latter part of the 20th century, tried to pick names that guarantee it support from the important parts of society – see the Hunt-class mine countermeasure vessels, named after the packs of well-off Hooray Henrys who spend their free time galloping around Blighty’s fields in search of foxes. More recently, a Cold War-era frigate was named HMS London, which worked well until she was flogged off to Romania in 2002, complete with a few crates of unwanted L85A1 rifles. Type 23 frigate HMS Westminster continues flying the flag for the RN near the corridors of power, courtesy of a feature wall in Westminster Tube station.

The name Glasgow was officially bestowed to recognise the shipbuilding heritage of the Clyde area. In reality, it’s more of a sop to try and damp down the fires of Scottish nationalism; apparently, patriotic names are all that now stands between the United Kingdom and its breakup. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/20/hms_glasgow_first_type_26_frigate_named/

No one still thinks iOS is invulnerable to malware, right? Well, knock it off

The comforting notion that iOS devices are immune to malicious code attacks has taken a knock following the release of a new study by mobile security firm Skycure.

Malicious mobile apps in Apple’s App Store are mercifully rare (XcodeGhost aside) compared to the comparative “Wild West” of the Google Play store, which has come to exist despite the Chocolate Factory’s best efforts to clamp down on the problem. However, hackers have found other ways to get malware installed, Skycure points out.

For one thing, Apple provides for sideloading apps as part of its support for enterprises and their proprietary business apps. But hackers have adopted social-engineering strategies to trick users into installing malicious apps through this route. Jailbroken iOS devices are more at risk of attack. Adware apps using approved certificates, malicious cables and the leveraging of vulnerabilities have also been exploited at times.

The number of iOS vulnerabilities patched in the first quarter of 2017 is already greater than the total number of iOS vulnerabilities discovered in all of 2016, according to Skycure. “Fortunately, Apple is still very fast at patching the OS and distributing updates,” their report added.

Ways to infiltrate an iOS device [source: Skycure blog post]

The study found that since iOS has become more popular as a platform, especially for enterprise executives and government agency officials, the rate of attacks and incidents of malware have increased. According to the report, the percentage of enterprise iOS devices that have malicious apps installed today has more than tripled since Q3 2016. In comparison, the rate of Android malware infections has stayed relatively flat.

Skycure declined The Register‘s requests to offer absolute figures for either the number of iOS vulnerabilities or frequency of iOS malware detections. Independent experts are sceptical about the angle taken by Skycure in its report.

“Android malware is still far more common,” said Martijn Grooten, editor of industry journal Virus Bulletin. “The whole report looks like the authors are desperate to make iOS security sound as bad as possible.”

The number of iOS nasties ever discovered runs into the scores. By contrast, Android malware runs into the millions and Windows nasties extend into the tens of millions. Skycure argues that despite the numeric insignificance of iOS nasties, they still need to be considered, not least because of the Bring Your Own Device (BYOD) trend.

Yair Amit, co-founder and CTO of Skycure, commented: “The iPhone ushered in the trend of BYOD, and the concept of apps and the app store, changing how IT manages corporate networks and equipment. The impact of iPhones and iPads on work productivity means more employees are choosing iOS devices for BYOD, and that makes iOS a valuable target for hackers.

“The number of vulnerabilities and malware does not indicate how secure a platform is, but it does indicate how often hackers are attempting to break into it.”

Skycure’s critical take on the security record of iOS, 10 years on from the creation of the mobile OS, can be found here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/20/ios_security_skycure/

Profile of a Hacker: The Real Sabu

There are multiple stories about how the capture of the infamous Anonymous leader Sabu went down. Here’s one, and another about what he is doing today.

The capture of Sabu was perhaps the most spectacular fall from grace this century — at least in the security world. He went from being the most beloved figure in the hacktivist group, Anonymous, to being its most hated.

From 2011 to 2012, Sabu was the unofficial leader of the online activist group. He organized effective distributed denial-of-service (DDoS) campaigns and enforced meaningful discipline within Anonymous where there hadn’t been any before — and hasn’t been since.

During Sabu’s reign, Anonymous became adept at handling the media, making effective use of Twitter to claim victory (even if they were hollow victories at best). Screenshots of “site down” pages were taken, tweeted, and trumpeted to the media, which eagerly wrote about the fearsome prowess of Anonymous. These were the salad days of Anonymous, when they seemed untouchable and everywhere.

To maximize the glory, Sabu collected a smaller cadre of hacktivists from Anonymous and named it LulzSec, which became famous very quickly for a series of high-profile hacks. While many people passively supported the egalitarian goals of Anonymous, they were turned off by the actions of LulzSec, which were seen as causing considerable collateral damage to innocent bystanders.

The LulzSec attack on Sony Pictures is an illustrative example. Sony Pictures was running several prize giveaways as part of a marketing campaign. LulzSec used a basic SQL injection to breach the SonyPictures.com database and grabbed the usernames, passwords, and personal profiles of over one million registered users. They then dumped the data to Pastebin. LulzSec’s justification at the time was that Sony Pictures’ security was “… disgraceful and insecure: they were asking for it.” But the justification seemed little more than braggadocio to the community. When someone asked LulzSec why they would compromise the credentials of so many innocent television watchers, they replied “we do it for lulz” (the laughs).
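The technique itself is depressingly simple. A basic SQL injection works by smuggling SQL syntax through an input field so the database executes the attacker's text as part of the query. The sketch below is purely illustrative – a toy sqlite3 table with made-up credentials, not anything from the Sony Pictures incident – showing both the vulnerable string-building pattern and the parameterized fix:

```python
import sqlite3

# Toy user table, standing in for any web app's backing database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text.
    query = ("SELECT * FROM users WHERE username = '%s' AND password = '%s'"
             % (username, password))
    return conn.execute(query).fetchall()

def login_safe(username, password):
    # Safe: parameterized query; the driver treats input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE username = ? AND password = ?",
        (username, password)).fetchall()

# Classic payload: the injected OR '1'='1' makes the WHERE clause always
# true, so the vulnerable query returns rows without a valid password.
payload = "' OR '1'='1"
print(len(login_vulnerable(payload, payload)))  # 1 row leaked
print(len(login_safe(payload, payload)))        # 0 rows
```

The parameterized version defeats the payload because the input is bound as a value rather than spliced into the query text – which is why it is the standard defense against this entire class of attack.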

Well, LulzSec wasn’t going to keep laughing for long.

By that time, Sabu had achieved an almost messianic following among Anonymous, and his Twitter account, @anonymouSabu, had hundreds of thousands of followers. He was number one on the FBI’s most wanted cybercriminal list.


If that weren’t enough heat, Sabu had also attracted the attention of his polar opposite: the famous pro-U.S., ex-Special Ops service member and hacker known as The Jester. The Jester, too, was known for distributed denial-of-service attacks and had been spending months attacking Jihadist websites in order to drive their users into more centralized, resilient networks where they could be monitored by the various agencies that track terrorist activity.

As an ex-military operative, The Jester loathed Sabu. The two stood on opposite sides of nearly any given topic: WikiLeaks, Anonymous, the Occupy movement, the forum 4chan, the CIA, and the Palestinian/Israeli conflict, to name just a few. One notable exception was the Westboro Baptist Church (WBC), which is known for conducting anti-gay protests at military funerals. Both Sabu and The Jester agreed about this group, and both attacked the WBC repeatedly.

During the first half of 2011, Sabu and The Jester tried repeatedly to uncover each other’s identity. The conflict between Sabu and the Jester reached a fever pitch at DEF CON 19, the nineteenth annual security convention in Las Vegas. Both hackers claimed to be in attendance along with the 20,000 other hackers, researchers, and undercover FBI agents. The Jester taunted Sabu to come out and meet him face-to-face. Sabu replied that of course he would not. The Jester was suspected to be in collusion with, or at least sanctioned by, the U.S. government. Sabu protested that if he were to expose his own identity, even privately, to The Jester, he would be immediately pounced upon by the authorities.

Sabu did not come out to meet The Jester, and a few months later we found out why. Sabu had already been nabbed and turned by the FBI. There are multiple stories about how the capture of Sabu went down. The simplest one goes like this: Of course, Sabu used anonymization networks to hide his identity and make source tracing impossible. Network anonymization would have been a basic precaution for the most-wanted cybercriminal at the time.

According to one story, Sabu forgot to activate his Tor link a single time, and logged into a server using his real IP address. The authorities traced his real IP address, and Sabu was quickly and quietly detained.

Sabu’s real name, as it turns out, was Hector Xavier Monsegur, from the Puerto Rican island of Vieques. Monsegur had been implicated in, or bragged about, dozens of illegal, high-profile hacks, not to mention multiple DDoS attacks. Facing a sentence of 25 to 100 years in prison, he struck a deal in which he agreed to turn over his friends from LulzSec to the authorities.

As part of Monsegur’s plea deal, the authorities were given access to his Twitter account and used it to collect information about Anonymous and LulzSec sympathizers. The judge in Monsegur’s case praised him for his “extraordinary cooperation” with the FBI. Armed with their informant’s information, the authorities apprehended the members of LulzSec. Many are now serving long jail sentences and owe hundreds of thousands of dollars in restitution to the organizations they once brazenly penetrated. Many in Anonymous felt betrayed by Monsegur’s cooperation with the authorities and publicly called him out. He has had little comment about it since.

Monsegur himself was freed on May 27, 2014, after time served. He now lives in New York City, where he occasionally gives interviews. He no longer tweets as Sabu, but as Hector X. Monsegur.

With LulzSec members behind bars, and Monsegur neutralized, The Jester went back to attacking Jihadist websites and gathering intel on ISIS. He blogs vociferously against the Trump Administration and maintains a store of “JesterGear” when he’s not running his own Minecraft server.

The Jester remains undoxxed to this day.

David Holmes is the world-wide security evangelist for F5 Networks. He writes and speaks about hackers, cryptography, fraud, malware and many other InfoSec topics. He has spoken at over 30 conferences on all six developed continents, including RSA … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/f5/profile-of-a-hacker-the-real-sabu-/a/d-id/1329359?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DevOps & Security: Butting Heads for Years but Integration Is Happening

A combination of culture change, automation, tools and processes can bring security into the modern world where it can be as agile as other parts of IT.

DevOps has been a hot topic now for the better part of a half-decade – and IT security has been on fire for longer than that. However, the two disciplines have been going down parallel paths for years, never to meet, because infrastructure teams and application development groups tend to work in their own little silos and claim ignorance as to what the other group does.

Why? There appears to be no good reason other than, “that’s the way it’s always been.” I believe there is some element of CYA in here, where if something doesn’t work, it’s easy to point a finger at a group you have nothing to do with. But by and large it’s just a legacy IT mindset.

This sentiment seems to be changing, and IT leaders are actually attempting to bring infrastructure and applications together. I attended a recent conference where Mike Giresi, the CIO of Royal Caribbean Cruises, discussed this issue and noted that “the fact [that] the infrastructure and application teams don’t work together is completely insane.” He said there was a mandate inside Royal Caribbean that the DevOps and infrastructure teams would be goaled on the same thing – the success of applications. The common goal was in place to ensure collaboration and avoid finger-pointing.

I get why DevOps teams may be hesitant to work with infrastructure groups. The primary focus of DevOps is speed and continuous innovation. Infrastructure, particularly security, thrives on keeping the lights on so “if it ain’t broke don’t fix it,” which is in stark contrast to the agile mindset of DevOps. 

Recently, DigiCert, a vendor of encryption solutions for enterprise and Internet of Things (IoT) security, ran their “2017 Inviting Security into DevOps Survey” to find the status of enterprises integrating security with DevOps. 

Before I get into the survey results, let me get on my soapbox regarding the need to bring infrastructure, particularly security, and DevOps together. Digital transformation requires businesses to move with speed. Speed requires IT agility. IT agility requires app development and infrastructure to be agile and, in most organizations, security is anything but agile. In fact, security is often left to the very end, so companies build a new app and then have to wait months until the security teams have made their changes and are ready. A better approach is to build security into the application development process.

The survey shows that a surprising 49% of organizations have completed the process of integrating security and DevOps, and another 49% are in the process of doing so. I suspect the group that says they were “finished” simply does not know what they do not know, so there’s likely more work to be done that they aren’t aware of. From personal interviews, I think that the number of companies that have completed the process is about 25% or less.

Whatever the real number is, the survey shows that the results have been positive as respondents are 22% more likely to report they are doing well with information security, 21% more likely to report they are meeting application delivery deadlines, and 21% more likely to lower application risk.

The study also looks at the ramifications of not changing, and these results really hit home. Respondents to the survey were concerned that failure to integrate security and DevOps would add to the following already existing problems:

  • 78% cite increased costs
  • 73% cite slower application delivery
  • 71% cite increased security risk.

I find it interesting that more respondents are concerned with cost and speed than increased security risks but that’s likely a function of how critical agile development has become to organizations. Another way to think about this result is that security doesn’t matter if costs skyrocket or application development is too slow, as the organization will fall behind its competitors. 

To DigiCert’s credit, the company didn’t just run the survey and show the results.  The company also provided some recommendations and best practices on how to bring these formerly independent worlds together:

  • Appoint a social leader. Putting a leader in place to drive cultural change across the company is extremely important to success. This needs to be a top-down initiative where all parties understand the importance and the consequences of failure. Personally, I like the approach Royal Caribbean took of shifting to an outcome-based approach as it gives everyone a common lighthouse to row to.
  • Bring security to the table.  A security lead must be present on all DevOps initiatives – and be involved from the outset of projects. DigiCert suggests limiting access, signing and encrypting everything in the network using automated PKI. This makes sense given DigiCert’s solution but baking security into the development process ensures success at every step.
  • Invest in automation. This is music to my ears as I’ve long been a proponent of automating everything possible.  People work too slowly to keep up with digital trends, and automating things like patching, vulnerability scanning and certificate management is the only way to keep up with the speed of business today.
  • Integrate and standardize. Standardization and repeatability is the key to on-going success. Doing things ad hoc is a sure way to lead to failure.
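On the automation point, even a small script beats manual tracking. As a minimal sketch – with an invented renewal threshold and made-up dates, not anything from the DigiCert survey – here is what automating one slice of certificate management might look like: flagging certificates that fall inside a renewal window so renewal can be scripted rather than remembered.

```python
import ssl
import datetime

# Illustrative threshold: renew anything expiring within 30 days.
RENEWAL_WINDOW_DAYS = 30

def days_until_expiry(not_after, now=None):
    """not_after is a certificate's notAfter field, e.g. 'Jun 1 12:00:00 2030 GMT'."""
    expires = datetime.datetime.utcfromtimestamp(
        ssl.cert_time_to_seconds(not_after))
    now = now or datetime.datetime.utcnow()
    return (expires - now).days

def needs_renewal(not_after, now=None):
    return days_until_expiry(not_after, now) <= RENEWAL_WINDOW_DAYS

# In a real pipeline the notAfter value would come from each live endpoint
# (e.g. via SSLSocket.getpeercert()) and the check would run on a schedule.
now = datetime.datetime(2017, 7, 20)
print(needs_renewal("Aug 1 00:00:00 2017 GMT", now))   # True: ~12 days left
print(needs_renewal("Jul 20 00:00:00 2018 GMT", now))  # False: a year away
```

The same pattern – inventory, check, alert or remediate – extends naturally to patch levels and vulnerability scan results, which is exactly the kind of repeatable, standardized process the recommendations above describe.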

DevOps and security have been butting heads for years but they don’t have to.  A combination of culture change, automation, tools and processes can bring security into the modern world where it can be as agile as other parts of IT.  The DigiCert survey shows the importance of going down this path and the repercussions of not doing so. There’s never been a better time to bring security and DevOps together, so let’s start now. 


Zeus Kerravala provides a mix of tactical advice and long term strategic advice to help his clients in the current business climate. Kerravala provides research and advice to the following constituents: end user IT and network managers, vendors of IT hardware, software and … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/devops-and-security-butting-heads-for-years-but-integration-is-happening/a/d-id/1329407?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple