
Yahoo! knew! 2014! data! breach! was! severe!, but! failed! to! respond! properly!

Yahoo!‘s board has decided CEO Marissa Mayer should not be paid her bonus, after investigating the 2014 hack that has so besmirched the company’s reputation and finding that the company knew the gravity of the situation but failed to act properly to address it. Mayer has also decided to forego an award of equity due to her this year.

News of the decisions and Yahoo!‘s investigation into the hacks emerged today with the publication of the company’s Form 10-K, the warts-and-all documents US public companies are required to file each year to disclose just about any risk they face.

The 10-K summarises the results of an Independent Committee’s investigation of the 2014 hack, and the news isn’t good for Yahoo! because the investigators “… concluded that the Company’s information security team had contemporaneous knowledge of the 2014 compromise of user accounts, as well as incidents by the same attacker involving cookie forging in 2015 and 2016.”

“In late 2014, senior executives and relevant legal staff were aware that a state-sponsored actor had accessed certain user accounts by exploiting the Company’s account management tool,” the 10-K says, explaining that while the company “took certain remedial actions, notifying 26 specifically targeted users and consulting with law enforcement” those efforts weren’t sufficient.

The filing offers this observation about Yahoo!‘s conduct:

…it appears certain senior executives did not properly comprehend or investigate, and therefore failed to act sufficiently upon, the full extent of knowledge known internally by the Company’s information security team.

It gets worse, as the 10-K also offers the following analysis:

Specifically, as of December 2014, the information security team understood that the attacker had exfiltrated copies of user database backup files containing the personal data of Yahoo users but it is unclear whether and to what extent such evidence of exfiltration was effectively communicated and understood outside the information security team.

There’s a tiny ray of sunshine in that the Independent Committee “did not conclude that there was an intentional suppression of relevant information.”

But the investigators did find “… that the relevant legal team had sufficient information to warrant substantial further inquiry in 2014, and they did not sufficiently pursue it. As a result, the 2014 Security Incident was not properly investigated and analyzed at the time, and the Company was not adequately advised with respect to the legal and business risks associated with the 2014 Security Incident.”

And those risks were substantial, because the 10-K reveals that the forensic experts it hired to look into the creation of forged cookies – which could allow an intruder to access users’ accounts without a password – have found that “an unauthorized third party accessed the Company’s proprietary code to learn how to forge certain cookies.”

“The outside forensic experts have identified approximately 32 million user accounts for which they believe forged cookies were used or taken in 2015 and 2016.”

The good news is that Yahoo! has “invalidated” those cookies “so they cannot be used to access user accounts.”

The bad news is that the investigation found “… failures in communication, management, inquiry and internal reporting contributed to the lack of proper comprehension and handling of the 2014 Security Incident. The Independent Committee also found that the Audit and Finance Committee and the full Board were not adequately informed of the full severity, risks, and potential impacts of the 2014 Security Incident and related matters.”

Marketers for information security companies and governance educators probably want to have those remarks framed.

The rest of us won’t: Mayer’s bonus is US$2m and her equity grant is usually about $12m of stock. That’s peanuts compared to the US$350m Verizon has trimmed from its offer to buy Yahoo!. Mayer’s lost haul is probably also well below the company’s bill for lawyers to fight the “approximately 43 putative consumer class action lawsuits” the form 10-K says have been filed to date regarding the 2014 security breach.

Yahoo! doesn’t think they will amount to much: the filing says “… the Company does not believe that a loss from these matters is probable and therefore has not recorded an accrual for litigation or other contingencies relating to the Security Incidents.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/02/yahoo_internal_hack_investigation_is_daming_marissa_mayer_loses/

New Version Of Dridex Banking Trojan Uses ‘AtomBombing’ To Infect Systems

It’s the first malware to use a newly disclosed code-injection method to break into Windows systems

Security researchers at IBM have discovered a new version of the Dridex banking Trojan that takes advantage of a recently disclosed code injection technique called AtomBombing to infect systems.

The modified version of the malware is already being used in online banking attacks across Europe and poses a fresh threat to organizations because it is harder to detect than previous versions.

“The new code injection method shuffles things up on detection mechanisms,” says Limor Kessem, executive security advisor at IBM Security. “It means that unless adapted protection layers are added to endpoints, it’s going to be much harder to detect what Dridex does once its deployment flow starts rolling,” Kessem says.

AtomBombing is a technique that security vendor enSilo demonstrated last October for injecting malicious code into the “atom tables” that almost all versions of Windows use to store certain application data. It is a variation of typical code injection attacks that take advantage of input validation errors to insert and execute malicious code in a legitimate process or application. Attackers have long used such code injection tactics to try to bypass security controls and carry out malicious activity without being detected.

What enSilo demonstrated was a method to sneak malicious code into Windows atom tables without being detected by the usual security mechanisms and then to get applications to retrieve and execute the code.
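
To make the mechanism concrete, here is a minimal sketch (in Python, via ctypes) of the legitimate atom-table API that the technique abuses. It only shows how a string is stashed in, and read back from, the global atom table – not the injection step itself – and it assumes a Windows host:

import ctypes

# Windows-only sketch: the global atom table is a system-wide string store
# exposed through documented kernel32 APIs.
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

payload = "data stashed in the atom table"

# GlobalAddAtomW copies the string into the global atom table and returns a
# 16-bit atom handle that any process on the machine can use to look it up.
atom = kernel32.GlobalAddAtomW(payload)
if atom == 0:
    raise ctypes.WinError(ctypes.get_last_error())

# GlobalGetAtomNameW reads the string back out again, given only the atom value.
buf = ctypes.create_unicode_buffer(256)
copied = kernel32.GlobalGetAtomNameW(atom, buf, len(buf))
print("read back:", buf.value[:copied])

# Remove the entry when done.
kernel32.GlobalDeleteAtom(atom)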

enSilo has stressed that its approach does not exploit any vulnerability in Windows and instead simply takes advantage of how the operating system functions. Since the technique does not rely on flawed or broken code, there is little that Microsoft can do to patch against it, the company has previously noted.

The new version of Dridex (Dridex v4) is the first malware that uses the AtomBombing process to try and infect systems. It uses atom tables to copy its payload and some other related data into the memory space of a target process. But then, in a departure from the rest of enSilo’s approach, the new version of Dridex uses a different method to ensure it gets executed.

“From [previous] experience with Dridex, its authors favor writing their own code, using their own ideas,” Kessem says. In this case, since a lot of details about the AtomBombing technique are already out there, the authors of Dridex probably felt it was safer to put a twist on it, she says. “Also, many times developers who know the code most intimately choose the features that work best with it, or that will suit future development plans.”

The code injection feature is one of several tweaks, including new encryption and persistence mechanisms, that the authors of Dridex have made available with the latest version of the malware. But it is the most important one because it gives Dridex a way to propagate on an infected system in a minimally observable manner, an alert on the new malware noted.

In a statement, a Microsoft spokeswoman said for malware like Dridex to be able to use code-injection techniques, the user’s system needs to have already been compromised. “To help avoid malware infection, we encourage our customers to practice good computing habits online, including exercising caution when clicking on links to web pages, opening unknown files, or accepting file transfers.”

Tal Liberman, a security researcher at enSilo, says it is no surprise at all that malware authors are attempting to use the AtomBombing method. “I’m actually surprised that it took so long for something like this to surface,” he says.

Typically, when 0-day vulnerabilities are disclosed, attackers try to use them as soon as possible, before software vendors roll out patches. “This pattern holds true for new injection techniques such as AtomBombing,” he says. In fact, others have likely used the technique already and the latest version of Dridex is only the first to be detected using it, he says.

AtomBombing takes advantage of Microsoft Windows’ built-in atom tables that allow specific API calls to inject code into the read-write memory space of a targeted process, he says. This is a legitimate part of the operating system performing as designed and cannot be patched against, Liberman says.

However, average security products can block most known code injection techniques. When new techniques like AtomBombing are used, the products need to be updated to neutralize that specific technique, he says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: http://www.darkreading.com/attacks-breaches/new-version-of-dridex-banking-trojan-uses-atombombing-to-infect-systems/d/d-id/1328299?_mc=RSS_DR_EDT

Is E2EMail a new beginning or the end for Google’s End-to-End?

Google’s end-to-end email encryption project that it started back in 2014 has left home. But has the Chrome extension really “flown the nest” as Google claimed last week? Or has it simply been abandoned and left to fend for itself?

Turn back the clocks to 2013. Google promises end-to-end encryption in an effort to regain users’ trust following Edward Snowden’s revelations about global surveillance conducted by government law-enforcement agencies.

And Google did make good on that promise in March 2014, switching Gmail to HTTPS only and encrypting emails internally too, shouting from the rooftops that these changes were

something we [Google] made a top priority after last summer’s revelations.

A few months later, in June 2014, Google announced an extension for its Chrome browser called “End-to-End”. Still only in the very early stages of development, the new extension would allow users to send and receive emails securely.

“End-to-end” encryption means data leaving your browser will be encrypted until the message’s intended recipient decrypts it, and that similarly encrypted messages sent to you will remain that way until you decrypt them in your browser.
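
As a rough illustration of that property, here is a generic sketch using the Python PyNaCl library (not E2EMail’s actual OpenPGP-based code): only the holder of the recipient’s private key can read the message, no matter who relays or stores the ciphertext.

from nacl.public import PrivateKey, SealedBox

# The recipient generates a keypair; only the public half is ever shared.
recipient_key = PrivateKey.generate()

# The sender encrypts in their own client using the recipient's public key.
ciphertext = SealedBox(recipient_key.public_key).encrypt(b"meet at noon")

# In transit, and on the mail provider's servers, only opaque ciphertext exists.
# Decryption requires the recipient's private key.
plaintext = SealedBox(recipient_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"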

Google promised to make the extension available in the Chrome Web Store once it was “ready for primetime”.

Yahoo quickly jumped on board, announcing its support for the project at the Black Hat security conference in Las Vegas in August 2014.

Later that year, Google revealed that it was making the source code for the Chrome extension open source via GitHub, while proclaiming it still intended one day to release the extension on the Chrome Web Store:

We’re excited to continue working on these challenging and rewarding problems, and we look forward to delivering a more fully fledged End-to-End next year.

After that, however, everything went quiet. Google’s commitment to the project became questionable.

An article in Tom’s Hardware points out that “Yahoo even demoed a preview version of the extension ahead of Google” – in spring 2015. A full year later, the project still remained a work in progress. Motherboard sums the progress up well:

Google and Yahoo’s projects on secure end-to-end encrypted email have yet to see the light of day. That’s why some are starting to question how much Google and Yahoo really care about making this happen.

Neither Google nor Yahoo’s project managers for the email encryption project responded to Motherboard’s request for comment. But Yan Zhu, a former lead developer on the end-to-end project at Yahoo, told Motherboard that engineers at both Google and Yahoo were “all really committed to it”.

Last Friday, Google quietly announced that End-to-End was no longer a Google effort.

E2EMail is not a Google product, it’s now a fully community-driven open source project, to which passionate security engineers from across the industry have already contributed.

Careful to make it clear that it’s not completely giving up on the project, which is now called E2EMail, Google added that it was looking forward to “working alongside the community to integrate E2EMail with the Key Transparency server, and beyond”. If you’re interested, you can check out the e2email-org/e2email repository on GitHub.

Talking to Wired, Google’s Somogyi explained the reasons for the move:

We want to put this into the open-source community precisely because everyone cares about this so much. We don’t want everyone waiting for Google to get something done.

Not everyone’s convinced. Matthew Green, a cryptographer and computer scientist at Johns Hopkins University who has closely studied tech firms’ messaging encryption products, told Wired:

The real message is that they’re not actively developing this as a Google project any more. It’s definitely a bit of a disappointment, given how much hype Google generated around this project.

The University Herald fears that without Google’s backing, the project might now simply go nowhere. Highlighting the uphill battle that the project now faces, it notes:

The open source environment is known for being littered with abandoned software and coding projects due to lack of developers’ interest, strong backings, and last, project goals.

Three years on and with no easy-to-use extension available, only time will tell whether the open-source community has enough interest to keep the project alive. It is full of technical challenges that still need to be overcome.

Yet it offers so much opportunity too. As ZDNet points out, the open source community has been given the project “with no strings attached”. The primary goal for now, according to the description on GitHub, is to “improve data confidentiality for occasional small, sensitive messages”, where “even the mail provider, Google in the case of Gmail, is unable to extract the message content”.

The question has to be: is there the will and the leadership needed to make it happen?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kLOggYTVK-Q/

Unholy trinity of AKBuilder, Dyzap and Betabot used in new malware campaigns

In recent weeks, SophosLabs has published papers outlining threats from AKBuilder and Betabot. Now, it appears the bad guys are combining the two in new attack campaigns.

SophosLabs principal researcher Gábor Szappanos said the lab has received and analyzed a handful of AKBuilder-generated documents in the past week. The documents initially drop a file that contains two payloads: one, the popular Dyzap credential stealer; the other, something that appears to be a version of Betabot. “We thought that Betabot was pretty much dead, with no activity in the past months – up until last week,” Szappanos said.

The malware and tools in question

Before looking at the new developments, a review of the malware is in order.

  • AKBuilder is an exploit kit that generates malicious Word documents, all in Rich Text. Once purchased, malicious actors use it to package malware samples into booby-trapped documents they can then spam out. SophosLabs has analyzed and defended customers against two versions: AK-1, which uses two exploits in the same document (CVE-2012-0158 and CVE-2014-1761), and AK-2, which uses a single exploit (CVE-2015-1641).
  • Dyzap is a banking Trojan commonly found along with malware called Upatre. The Upatre component is typically delivered in bulk, via spam, and then used to install the banking Trojan on infected computers.
  • Betabot is a malware family used to hijack computers and force them to join botnets. It has been used to steal passwords and banking information, and has most recently been used in ransomware campaigns. Betabot has been around for a long time. Its code is easy to duplicate and attackers have turned to a cracked builder to produce it on the cheap.

The three converge

Szappanos shared his findings with Naked Security Wednesday morning. He said the malware is delivered in email messages like this:

[Screenshot: sample malicious spam email]

And this:

[Screenshot: a second sample spam email]

The attachments of the messages are Rich Text Format documents generated by AKBuilder. These documents drop the additional malware components. Here’s some basic information about the documents:

[Table of sample document details not reproduced here]

The exploited documents drop two files, which are two different Trojans:

%USERHOME%\AppData\Roaming\win32.exe : Dyzap credential stealer

%USERHOME%\AppData\Local\Temp\nbot.exe : Betabot/Neurevt Trojan

Dyzap is executed automatically on startup by %STARTUP%\win32.vbs

The C&C addresses Dyzap used to send stolen credentials to:

monetizechart.me

dfoxinternashipoop.top

conticontrations.com

mdelatropsopc.info

scopeclothingsltd.pro

The malware submits stolen info to a PHP script on the server, named fre.php by default.

The login page of the C&C panel looks like this:

[Screenshot: C&C panel login page]

The builder of Dyzap (at least a cracked version) looks like this:

[Screenshot: cracked Dyzap builder]

It calls itself Loki stealer but, to avoid confusion with the Locky ransomware, it is referred to here simply as Dyzap.

Defensive measures

Since Betabot has most recently been used to serve up ransomware, a reminder of our tips on that front should be useful.

To protect against AKBuilder activities, simply applying recent patches for Microsoft Office should be enough to disarm the attack.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oJMBsyZbHIw/

News in brief: AWS fail hits services; YouTube launches streaming service; Windows gets ‘walled garden’ option

Your daily round-up of some of the other stories in the news

AWS outage hits apps, web, platforms

A three-hour outage at Amazon Web Services on Tuesday not only took down millions of websites but also many huge services, apps, back ends and platforms, highlighting the silent but vital role the web giant plays in keeping the lights on for countless services, vast and tiny.

Amazon hasn’t yet said what took down its AWS S3 service, but its failure left services from Mailchimp, Slack and Trello to IFTTT returning errors – and left people unable to control smarthome devices such as connected lightbulbs.

Even Amazon’s own dashboard was taken down by the outage, meaning that users couldn’t get updated information, as was the isitdownrightnow site that monitors the status of big sites and platforms.

As with the Mirai attack last year on the DNS service provided by Dyn, this outage is a sobering reminder that much of the web relies on a small group of huge infrastructure providers.

YouTube: all your eyeballs r belong to us

Hard on the heels of its announcement that viewers watch a billion hours of video every day, YouTube also said that it would be moving into the space now dominated by Netflix and Amazon by launching a live streaming service in the US.

Costing $35 a month, the service will offer live streaming from the biggest US networks, including ABC, CBS, Fox and NBC, as well as cable channels such as FX and MSNBC.

What’s interesting here – apart from the way YouTube is parking its tanks on the lawns of the established streaming providers – is that it will be offering what it calls a “cloud DVR”, an unlimited way to record and store live TV, that will be available on a range of devices.

That’s one heck of an aggregation of datapoints for YouTube’s parent, Google – and doubtless means that that figure of 1bn hours of video consumed every day will go up – and up.

Windows to allow enforcing of ‘walled garden’

Windows 10 admins will soon be able to block users from installing apps from outside the Windows Store.

The new feature, spotted by a developer and rolling out to Windows Insiders with build 15042, will offer three options for installing software via the Apps & features section of the Settings app: “Allow apps from anywhere”; “Prefer apps from the Store, but allow apps from anywhere”; and “Allow apps from the Store only”.

The “walled garden” approach, which Apple enforces for its iOS devices (unless you jailbreak, of course) and which is on by default on Android devices (though it’s easy to toggle off), is often preferred from a security point of view, though others dislike the thought of apps being “vetted”.

The option will apparently be available in all flavours of Windows 10.

Catch up with all of today’s stories on Naked Security


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KiP51YVM8oQ/

Cloudbleed: Big web brands ‘leaked crypto keys, personal secrets’ thanks to Cloudflare bug

Updated Big-name websites potentially leaked people’s private session tokens and personal information into strangers’ browsers, due to a Cloudflare bug uncovered by Google researchers.

As we’ll see, a single character – ‘==‘ rather than ‘>=‘ – in Cloudflare’s software source code sparked the security blunder.

Cloudflare helps companies spread their websites and online services across the internet. Due to a programming blunder, for several months Cloudflare’s systems slipped random chunks of server memory into some webpages, under certain circumstances. That means if you visited a website powered by Cloudflare, there’s a small chance you may have ended up getting chunks of someone else’s web traffic bunged at the bottom of your browser page.

For example, Cloudflare hosts Uber, OK Cupid, and Fitbit, among thousands of others. It was discovered that visiting any site hosted by Cloudflare would sometimes cough up sensitive information from strangers’ Uber, OK Cupid, and Fitbit sessions. Think of it as sitting down at a restaurant, supposedly at a clean table, and in addition to being handed a menu, you’re also handed the contents of the previous diner’s wallet or purse.

This leak was triggered when webpages had a particular combination of unbalanced HTML tags, which confused Cloudflare’s proxy servers and caused them to spit out data belonging to other people – even if that data was protected by HTTPS. The webpages would also need to use a particular set of Cloudflare services as a catalyst for the spillage.

Leaked … Some unlucky punter’s Fitbit session slips into a random visitor’s web browser (Source: Google Project Zero)

Normally, this injected information would have gone largely unnoticed, hidden away in the webpage source or at the bottom of a page, but some leaks were spotted by security researchers – and the escaped data made its way into Google and Bing caches and the hands of other bots trawling the web.

Timeline

The blunder was primarily discovered by Tavis Ormandy, the British bug hunter at Google’s Project Zero security team, when he was working on a side project last week. He found large chunks of data including session tokens and API keys, cookies and passwords in cached pages crawled by the Google search engine. These secrets can be used to log into services as someone else.

“The examples we’re finding are so bad, I cancelled some weekend plans to go into the office on Sunday to help build some tools to clean up,” he said today in an advisory explaining the issue.

“I’ve informed Cloudflare what I’m working on. I’m finding private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We’re talking full https requests, client IP addresses, full responses, cookies, passwords, keys, data, everything.”

Ormandy said that the Google team worked quickly to clear any private information and that Cloudflare assembled a team to deal with it. He provisionally identified the source of the leaks as Cloudflare’s ScrapeShield application, which is designed to stop bots copying information from websites wholesale, but it turns out the problems ran deeper than that.

On Thursday afternoon, Cloudflare published a highly detailed incident report into the issue: it happens that Cloudflare’s Email Obfuscation, Server-Side Excludes and Automatic HTTPS Rewrites functions were the culprits.

The problem occurred when the company decided to develop a new HTML parser for its edge servers. The parser was written using Ragel, and turned into machine-generated C code. This code suffered from a buffer overflow vulnerability triggered by unbalanced HTML tags on pages. This is the broken pointer-checking source that is supposed to stop the program from overwriting its memory:

/* generated code. p = pointer, pe = end of buffer */
if ( ++p == pe )
    goto _test_eof;

What happens is that elsewhere p can become greater than pe, so the end-of-buffer check never triggers and the parser reads past the end of the buffer, spilling extra information. This eventually leads to the above web session leaks. We’re told this bug is not Ragel’s fault; instead, it stems from the way Cloudflare used the state machine tool, apparently.

“The root cause of the bug was that reaching the end of a buffer was checked using the equality operator and a pointer was able to step past the end of the buffer,” said Cloudflare’s head of engineering John Graham-Cumming in the biz’s incident report.

“Had the check been done using >= instead of ==, jumping over the buffer end would have been caught.”
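
A tiny Python analogue (not Cloudflare’s actual Ragel-generated C) shows why that single character matters: if the cursor can ever jump past the end of the buffer, an equality test never fires, while a greater-or-equal test still catches the overrun.

def parse_buggy(steps, end):
    p = 0
    for step in steps:        # each parsed token advances the cursor by `step`
        p += step
        if p == end:          # only an exact landing on `end` stops the loop
            return "stopped at end"
    return "ran past end, p=%d end=%d" % (p, end)

def parse_fixed(steps, end):
    p = 0
    for step in steps:
        p += step
        if p >= end:          # also catches a cursor that stepped over the end
            return "stopped at end"
    return "ran past end, p=%d end=%d" % (p, end)

print(parse_buggy([3, 3], end=5))  # ran past end, p=6 end=5
print(parse_fixed([3, 3], end=5))  # stopped at end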

According to Graham-Cumming, for data to leak, the final buffer had to finish with a malformed script or img tag, be less than 4KB in length (otherwise Nginx would crash), and be running the three functions.

The new cf-html parser was added to Cloudflare’s Automatic HTTPS Rewrites function on September 22, 2016, to its Server-Side Excludes app on January 30 this year, and partially added to the biz’s Email Obfuscation feature on February 13. It was only in the Email Obfuscation case that significant memory leakage appears to have happened, which tipped off Ormandy.

Cloudflare does have a kill switch for the more recent of its functions and shut down Email Obfuscation within 47 minutes of hearing from Ormandy. It did the same for Automatic HTTPS Rewrites a little over three hours later. Server-Side Excludes couldn’t be killed, but the company says it developed a patch within three hours.

Logs on Cloudflare systems show that the period of greatest leakage occurred between February 13 and 18, and even then only 1 in every 3,300,000 HTTP requests through Cloudflare leaked data. We’re told the proxy server bug affected 3,438 domains, and 150 Cloudflare customers. The biz said it held off disclosing the issue until it was sure that search engines had cleared their caches. Ormandy reckons those caches are still holding onto sensitive leaked info.

Ormandy also noted that the top award for Cloudflare’s bug bounty program is a t-shirt. Maybe the web giant will reconsider that in the future. ®

Full disclosure: The Register is a Cloudflare customer. You can find other sites hosted by Cloudflare listed here. If you use one of the affected websites, now would be a good time to log out or otherwise invalidate your session tokens, get new API keys if necessary, and log back in.

Updated to add on March 1

Cloudflare CEO Matthew Prince has blogged some more about the impact of Cloudbleed, attempting to downplay the seriousness of the blunder in terms of the amount of sensitive information leaked. Prince reckons only a sliver of data was emitted by the parser bug. Code auditing biz Veracode is now scrutinizing Cloudflare’s source.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/24/cloudbleed_buffer_overflow_bug_spaffs_personal_data/

Prisoners’ ‘innovative’ anti-IMSI catcher defence was … er, tinfoil

Exclusive Prisoners at a Scottish jail evaded an IMSI catcher deployed to collar them making illegal phone calls – by putting up tinfoil after bungling guards left the spy gear visible to inmates.

“As you are also aware the invisible grabber at HMP Shott [sic] was visible,” Maurice Dickie of the Scottish Prison Service wrote in an internal email of May 2014.

This referred to the trial of an IMSI catcher in North Lanarkshire prison HMP Shotts that year.

The idea was to use the IMSI catcher to find out which prisoners were making illegal calls using smuggled mobile phones from within the jail. Officially, the trial was declared a failure, having evidently not caught any lags making unlawful mobile calls, because prisoners were said to have developed “innovative countermeasures”.

The Register understands these “countermeasures” were just tinfoil used to block line of sight to the IMSI catcher after prisoners spotted the device, which appeared to have been placed on the “inside of the prison perimeter”.

Improperly redacted copies of emails seen by The Reg revealed the cockup. UK communications watchdog Ofcom, which regulates the use of IMSI catchers in Blighty, declined to comment. The Scottish Prison Service had not responded by the time of publication.

IMSI catchers are known as Stingrays in the US. They are fake mobile network base stations used to fool nearby mobile phones into connecting to them, thus revealing the handset’s unique International Mobile Subscriber Identity number. This allows investigators to track people by their device fingerprint. They are used extensively in America, where law enforcement agencies must apply for a court warrant to use them. In the Shotts case, the IMSI catcher would simply alert guards to the fact that a phone was being used nearby.

In the UK, new proposals in the Prisons and Courts Bill before Parliament will allow British mobile network operators to deploy them under authorisation from the Justice Secretary.

Similar authorisations for mobile network snooping, though required by law, are normally given on a blanket basis and for practical purposes do not provide any meaningful safeguard against misuse. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/01/imsi_catcher/

WordPress photo plugin opens ‘a million sites’ to SQLi database feasting

A critical flaw has been found in the third-party WordPress NextGEN Gallery plugin that is, according to wordpress.org, actively used by more than a million websites.

If you’re using this plugin, patch now to version 2.1.79 or greater. If you’re a cyber-scamp, well, here’s a surefire way to compromise a lot of tardy sites. The changelog for the update does not mention the security fix.

Researchers at Sucuri spotted that the plugin was flawed in such a way that a carefully crafted SQL injection could extract sensitive information, such as scrambled passwords, secret keys, and other website database records. The biz rates the flaw as critical and says it is relatively easy to exploit.

“This issue existed because NextGEN Gallery allowed improperly sanitized user input in a WordPress prepared SQL query; which is basically the same as adding user input inside a raw SQL query,” Sucuri said. “Using this attack vector, an attacker could leak hashed passwords and WordPress secret keys in certain configurations.”
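
The general idea is easy to demonstrate. Below is a generic sketch in Python with SQLite – not NextGEN Gallery’s actual PHP code – showing how input interpolated into a query can rewrite it, while a properly parameterized query treats the same input purely as data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'scrambled-password-hash')")

user_input = "x' OR '1'='1"

# Vulnerable: the input closes the string literal and changes the WHERE clause,
# so the query matches every row and leaks the hashes.
unsafe = "SELECT * FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())

# Safe: the driver binds the value, so the quotes stay inert and nothing matches.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())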

Thankfully a fixed version of the plugin is available now and site admins are strongly advised to use it. However, WordPress users aren’t known for being the savviest, and it’s highly likely that there are a lot of unpatched sites out there.

That said, WordPress admins are used to patching. There was a major zero-day flaw found in the software last month, and in January it patched 11 holes in its code. While paying users of WordPress should already be patched, there are likely to be a lot of free users who aren’t up to speed on security. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/01/wordpress_nextgen_gallery_sqli/

‘Insider Sabotage’ among Top 3 Threats CISOs Can’t yet Handle


These five steps can help your organizations limit the risks from disgruntled employees and user errors.

Although insider sabotage is among the top three security threats companies face, 35% of chief information security officers in the US still lack the best practices to handle it properly, according to a Bitdefender study.

Insider sabotage – whether by a former employee who still has network access and is bent on sabotage or a careless staff member who clicks on phishing links when using company devices, or even a contractor or associate – can be particularly devastating because it’s usually not detected until the damage is done.

As the bring-your-own-device (BYOD) to work trend becomes even more widespread, CISOs should conduct regular security training sessions to make current employees vigilant toward cyber hacks and schemes. Did they receive a suspicious email? Then they shouldn’t click on any URL or download attachments. Because hackers can expertly impersonate company email addresses and templates, employees need to be trained about address typos that could signal a scam.

Increasing cloud adoption raises further security concerns: a growing number of companies have lost proprietary data over extended periods to disgruntled former or current employees – people who should have to think twice about acting out against their employers.

If caught, those who deliberately harm a business may be in for some tedious prison time. A sysadmin from Baton Rouge, for example, was sentenced to 34 months in federal prison for causing substantial damage to his former employer, a Georgia-Pacific paper mill, by remotely accessing its computer systems and messing with commands. Obviously, access to all systems and networks associated with the company should have been revoked when the man was fired.

“To limit the risks of insider sabotage and user error, companies must establish strong policies and protocols, and restrict the ways employees use equipment and infrastructure or privileges inside the company network,” recommends Bogdan Botezatu, senior e-threat specialist at Bitdefender. “The IT department must create policies for proper use of the equipment, and ensure they are implemented.”

Here are five steps CISOs can take to avoid insider sabotage:

  1. Enforce a strict information security policy, and run regular training sessions with employees to prevent malware infection of company networks.
  2. Immediately revoke all access and suspend certificates for former employees to prevent them from leaving the company with backups and confidential data, or from making administrative changes before leaving the company.
  3. Keep a close eye on internal systems and processes, and set up notifications for any changes that should occur.
  4. Implement role-based access control so that employees can reach only the systems and data their roles require (a minimal sketch of the idea follows this list).
  5. Never rely solely on usernames and passwords to safeguard confidential company data. Instead, implement multiple authentication methods such as two-factor, two-person or even biometric authentication.
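
Here is a minimal, hypothetical sketch of the role-based access idea in application code – the roles and permissions are invented for illustration, and in practice this is enforced in the directory or identity provider rather than hand-rolled:

from functools import wraps

# Hypothetical role-to-permission mapping, for illustration only.
ROLE_PERMISSIONS = {
    "finance":  {"read_payroll"},
    "it_admin": {"read_payroll", "revoke_access"},
    "intern":   set(),
}

def requires_permission(permission):
    def decorator(func):
        @wraps(func)
        def wrapper(user_role, *args, **kwargs):
            # Deny by default: unknown roles get an empty permission set.
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionError("role %r lacks %r" % (user_role, permission))
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@requires_permission("read_payroll")
def export_payroll(user_role):
    return "payroll.csv"

print(export_payroll("finance"))   # allowed
try:
    export_payroll("intern")       # denied
except PermissionError as err:
    print(err)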

Luana Pascu is a security specialist with Romanian antivirus vendor Bitdefender. After writing about NFC, startups, and tech innovation, she has now shifted focus to internet security, with a keen interest in smart homes and IoT threats. Luana is a supporter of women in tech …

Article source: http://www.darkreading.com/partner-perspectives/bitdefender/insider-sabotage-among-top-3-threats-cisos-cant-yet-handle/a/d-id/1328286?_mc=RSS_DR_EDT

DNSSEC: Why Do We Need It?


The number of signed domain names has grown considerably over the past two and a half years but some sectors are heavily lagging behind.

DNSSEC is short for Domain Name System Security Extensions. It is a set of extensions that add extra security to the DNS protocol. This is done by enabling the validation of DNS responses, which is particularly effective against DNS spoofing attacks. DNSSEC gives DNS records a digital signature, so the resolver can check that the content is authentic.
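
To give a feel for what that signature check involves, here is a rough sketch using the Python dnspython library (the zone name and resolver address are placeholders, and a real validator would also walk the DS chain up to the root):

import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdataclass
import dns.rdatatype

zone = dns.name.from_text("example.nl.")   # placeholder zone
resolver_ip = "9.9.9.9"                    # any DNSSEC-aware resolver

# Ask for the zone's DNSKEY records with the DO bit set so RRSIGs come back too.
request = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.udp(request, resolver_ip, timeout=5)

dnskeys = response.get_rrset(response.answer, zone, dns.rdataclass.IN,
                             dns.rdatatype.DNSKEY)
rrsigs = response.get_rrset(response.answer, zone, dns.rdataclass.IN,
                            dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

# Raises dns.dnssec.ValidationFailure if the signature does not verify.
dns.dnssec.validate(dnskeys, rrsigs, {zone: dnskeys})
print("DNSKEY RRset is correctly signed by the zone's keys")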

My reason for writing this post was a recent SIDN report that concluded that the DNSSEC security status in the Netherlands leaves a lot to be desired. The banking sector, ISPs, and others are lagging behind, according to the report. That’s especially true in comparison with the government sector, which has to be fully compliant by the end of 2017 and has now signed 59% of the domain names that are required to be cryptographically secured.

The investigation only looked at .nl domains, so companies of a more international nature, which might be using other Top Level Domains (TLDs), were not included in the research. Let’s hope that companies of this nature are more advanced in this regard. Of a grand total of approximately 5.7 million .nl domains, 46% were signed.

Additional security
Not only is DNSSEC a security feature by itself, it also provides a platform for additional features like:

  • DKIM (DomainKeys Identified Mail)
  • SPF (Sender Policy Framework)
  • DMARC (Domain-based Message Authentication, Reporting and Conformance)
  • DANE (DNS-based Authentication of Named Entities)

DANE is a protocol that allows Transport Layer Security (TLS) certificates to be bound to Domain Name System (DNS) names. It is considered a major step forward in security, notably after breaches at some Certificate Authority (CA) providers, since any CA can issue a certificate for any domain name. This is why we say that the green padlock is required, but not enough. Going forward it’s important to know that all the popular browsers support DNSSEC, and most of them support DANE (for some browsers you may need a plug-in), so implementation of this extra security should put a major dent in the possibilities for DNS spoofing.
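
For instance, a DANE-aware client can fetch the TLS certificate association published in a TLSA record at _port._protocol.domain and, provided the answer validates under DNSSEC, pin the server’s certificate to it. A small sketch, again with dnspython (version 2.0 or later; the domain is a placeholder):

import dns.resolver

# TLSA record for an HTTPS service (port 443 over TCP) on a placeholder domain.
answers = dns.resolver.resolve("_443._tcp.example.nl.", "TLSA")
for rdata in answers:
    # Fields: certificate usage, selector, matching type, association data.
    print(rdata.to_text())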


Conclusions
Personally I was surprised, almost shocked, to find out that only 6% of the banking sites had their domains signed, the worst of all the investigated groups of domains. This is especially worrying because of the recent shift from physical to online banking. The percentage for all financial corporations was 16%. Other sectors where we would expect better figures:

  • Internet Service Providers, 22%
  • Stock exchange listed companies, 12%
  • Internet shops, 30%
  • Telecom providers, 33% – and, worst of all, none of the four biggest providers with an .nl domain contributed to that score.

The only group scoring somewhat satisfactory results was government sites, at 59%, pushed by regulations that require compliance by the end of this year (2017).

Even though the number of signed domain names has grown considerably over the past two and a half years (since the previous report on this subject), some sectors are heavily lagging behind – in particular some sectors where we would hope and expect otherwise.

If you have any similar figures for your country, comment in the original post on Malwarebytes Labs. I would like to make some comparisons.

Check out additional posts from Pieter Arntz here!

 

Was a Microsoft MVP in consumer security for 12 years running. Can speak four languages. Smells of rich mahogany and leather-bound books.

Article source: http://www.darkreading.com/partner-perspectives/malwarebytes/dnssec-why-do-we-need-it/a/d-id/1328291?_mc=RSS_DR_EDT