
Monster patch day for Juniper customers

Clear the diaries, Juniper sysadmins, a van-load of patches landed today.

I suggest you join me in getting a coffee and settling in while we go through the list. The advisories cover six fixes to Junos, one for the company’s EX Series switches, BIND fixes for SRX, vSRX and J-Series units, and multiple fixes for the NorthStar controller.

Ready? Let’s go.

BIND: Junos OS on SRX, vSRX and J-Series has been upgraded to tick the boxes on five vulnerabilities.

All five CVEs (CVE-2016-2776, CVE-2016-8864, CVE-2016-9131, CVE-2016-9147 and CVE-2016-9444) offer attackers a shot at hosing the vulnerable boxes if they’re running the DNS proxy service.

IPv6 ND advertisement handling: Any Juniper M or MX router running Junos OS with DHCPv6 can have its packet forwarding engine (PFE) crashed.

Keyboard driver overflow: Yes, you read that right. To quote from the advisory: “Incorrect signedness comparison in the ioctl(2) handler allows a malicious local user to overwrite a portion of the kernel memory.”

That ends in privilege escalation, and affects any product or platform running Junos OS.

NorthStar Controllers: Controllers running versions older than 2.1.0 Service Pack 1 need to upgrade to protect against nine third-party bugs.

These include fixes to BIND, Qemu’s floppy disc controller and PCNET controller, Node.js’s HTTP server, Linux and Xen’s KVM subsystems, and the 2015-era “Bar Mitzvah” bug in the RC4 algorithm (which reasonable people probably assumed was dead and gone).

There’s also a long list of Juniper-specific bugs fixed in the NorthStar Controller application.

Atomic fragments: Junos OS running IPv6 inherited a bug from the protocol’s specification, allowing fragmentation attacks leading to a denial of service.
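For background, an “atomic fragment” is an IPv6 packet that carries a Fragment extension header even though it isn’t actually fragmented (offset zero, no more-fragments flag) – a quirk of the original specification that attackers can abuse. Here’s a minimal sketch of what one looks like, using the third-party Scapy library and an RFC 3849 documentation-prefix address as a placeholder:

import math  # not needed here; kept minimal below
from scapy.all import IPv6, IPv6ExtHdrFragment, ICMPv6EchoRequest

# An "atomic fragment": a Fragment extension header with offset 0 and
# m (more-fragments) = 0, meaning the packet is complete despite the
# header. Poor handling of these is what enables fragmentation-based
# denial-of-service attacks against affected stacks.
target = "2001:db8::1"  # documentation prefix -- placeholder only

pkt = (IPv6(dst=target)
       / IPv6ExtHdrFragment(offset=0, m=0, id=0x1234)
       / ICMPv6EchoRequest())

pkt.show()  # inspect the layer stack; scapy's send(pkt) would transmit it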

Even more denial of service: A crafted BGP update can crash Junos OS 15.1 or later on any platform.

Also, anything running unpatched Junos OS with LDP enabled can be hosed by a crafted packet.

NTP: Junos has also been hardened against a bunch of 2016-era Network Time Protocol bugs.

NDP: Finally, EX Series switches running IPv6 are vulnerable to crafted Neighbour Discovery Protocol packets. A memory leak means attackers can packet-flood the units, leading to “resource exhaustion and a denial of service.”

Phew. It’s probably time for another coffee now. Or perhaps some gin. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/13/monster_patch_day_for_juniper/

SAP’s TREX exposed HANA, NetWeaver

SAP has rushed out a patch for its TREX search engine, after security researchers found bugs in a 2015 patch.

TREX is a search engine used in several SAP products, including its HANA database and its venerable NetWeaver application and integration platform.

According to ERPScan, SAP thought it had patched the code injection vulnerability in December 2015.

Not so: ERPScan’s Mathieu Geli looked into the TREXNet communication protocol and found it ran without authentication.

He’s quoted in the ERPScan advisory as saying “I reversed a protocol for HANA and then for the TREX search engine. As they share a common protocol, the exploit has been easily adapted. SAP fixed some features, but not everything affecting the core protocol. It was still possible to get full control on the server even with a patched TREX.”

The post says CVE-2017-7691 lets an attacker send a crafted request to TREXNet ports to read or create operating system files.

The bug was one of fifteen patched on Tuesday in SAP’s April security release. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/13/sap_trex_patch/

DTMF replay phreaked out the Dallas tornado alarm, say researchers

Strap yourself into the DeLorean: researchers from Duo reckon the Dallas tornado alarm incident was a case of old-style DTMF phreaking.

On Friday night, someone figured out how to activate all 156 of the city’s sirens in a stunt hack.

It turns out the sirens, from Federal Signal, use one of the oldest signalling techniques around: dual-tone multi-frequency, or DTMF, which dates back to the analogue telephony era. The earliest phreaking attacks exploited the tones used to route phone calls to make free long-distance and international calls.

For those who’ve never noticed the beeps that happen when you press buttons on a fixed-line phone, DTMF represents its symbols with pairs of beeps in this layout:

[Image: the standard DTMF keypad tone grid – via Wikipedia]
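
If you want to see just how simple the scheme is, here’s a minimal Python sketch (standard library only, no real-world siren protocol implied) that builds the standard DTMF grid and renders an illustrative digit string as audio – two summed sine waves per symbol, with nothing secret about them, which is exactly why record-and-replay works:

import math
import struct
import wave

# Standard DTMF grid: each symbol is the sum of one low-group and one
# high-group sine wave (frequencies in Hz).
LOW = [697, 770, 852, 941]
HIGH = [1209, 1336, 1477, 1633]
KEYS = ["123A", "456B", "789C", "*0#D"]

DTMF = {key: (LOW[r], HIGH[c])
        for r, row in enumerate(KEYS)
        for c, key in enumerate(row)}

def tone(symbol, seconds=0.2, rate=8000):
    """Render one DTMF symbol as 16-bit PCM samples."""
    f_lo, f_hi = DTMF[symbol]
    return [int(16000 * (math.sin(2 * math.pi * f_lo * t / rate) +
                         math.sin(2 * math.pi * f_hi * t / rate)) / 2)
            for t in range(int(seconds * rate))]

# Write an arbitrary example sequence to a WAV file, with short gaps
# between symbols; a recording of the real thing replays just as easily.
samples = []
for digit in "2468":
    samples += tone(digit) + [0] * 800
with wave.open("dtmf.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))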

Telephone networks have long been secured against phreaking, but apparently not the Federal Signal sirens in Dallas. It looks like the system was set off by a simple replay attack: record the signal sent during a system test, and play it back.

Duo’s blog post notes that the DTMF signals, carried over 450 MHz radio carriers, aren’t encrypted, so an attacker wouldn’t even need to try and interpret the symbols.

The other big compromise, according to Duo, was that someone got access to the computers that control how long the sirens would sound when they were activated. That compromise also made it harder for city officials to shut the system down.

Bootnote

Duo is surprised that the attacker was able to work out the radio frequency in use, which sits oddly with the author’s theory that a disgruntled insider is the most likely attacker.

The Register notes that an insider would probably know what frequency the system used, and 450 MHz sits in a band familiar to UHF hobbyists. If the sirens’ radios used licensed bands, the FCC has the database online.

Even for the 700 MHz band, reserved for public safety in the USA, it’s easy enough to buy suitable transmitters. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/13/dtmf_replay_phreaked_out_the_dallas_tornado_alarm_say_researchers/

News in brief: NATO cyberthreat centre launches; Yahoo ‘hacker’ denied bail; Samsung delays AI assistant

Your daily round-up of some of the other stories in the news

NATO launches centre to tackle cyberthreats

NATO formally launched its centre to tackle non-military cyberthreats, fake news and hacking in Helsinki earlier this week, with the US and the UK, as well as France, Germany, Sweden, Poland, Latvia and Lithuania, signing up as members.

Other nations are expected to sign up to the snappily named Centre of Excellence for Countering Hybrid Threats. The centre’s launch comes as concern grows about Russian involvement in last year’s US presidential election, which Russia denies, and amid further fears that Russian-led efforts could disrupt elections due in France and Germany this year.

The centre will be based in Helsinki, the capital of Finland, which shares a 1,300km border with Russia: Finland last year said it was concerned about what Reuters described as “an intensifying propaganda attack” against the country from Russia.

Countering what NATO calls “hybrid attacks” – those that “combine political, diplomatic, economic and cyber and disinformation measures” – is “a priority”, said the organisation.

Alleged Yahoo hacker denied bail

Lawyers for a Canadian man the US wants to extradite to face charges of being involved in the Yahoo breaches that were revealed last year said they were “very disappointed” that his bail had been denied by a Canadian judge, Reuters reported earlier this week.

Karim Baratov, 22, who denies the US charges that include conspiracy to commit computer fraud and wire fraud, and identity theft, was remanded in custody until May 26. Justice Allen Whitten said that Baratov, who’s alleged to have worked with Russian agents who paid him to break into at least 80 email accounts, posed a flight risk.

Whitten said: “Why would he stick around?” as he told Baratov that he’d be remanded in custody until his extradition hearing in May. Whitten added that if Baratov were at liberty, he could “ply his trade from anywhere in the world”.

Baratov’s parents had said they would watch their son 24 hours a day if he were freed, but the judge rejected that suggestion, and added that he doubted they would be able to keep their son off the internet and from re-offending.

Delay to Samsung’s AI assistant

If you’re waiting impatiently to add Samsung‘s Bixby AI assistant to your collection of digital helpmeets, you’re going to have to wait a little longer: Samsung said on Wednesday that it won’t work out of the box when the first device to feature it, the Galaxy S8, goes on sale later this month.

Or rather, bits of it won’t work: Samsung said that some parts of Bixby, “including Vision, Home and Reminder, will be available with the global launch” of the S8.

The announcement is a blow for Samsung, which is aiming to rebuild its reputation after last year’s exploding-batteries debacle with the flagship Galaxy Note 7, which eventually was withdrawn from sale.

Bixby is an important feature for Samsung as it, like all mobile phone makers, struggles to differentiate its devices in an increasingly commodified market. Bixby, a late entrant to the crowded AI assistant space, is up against Siri, Google Assistant, Amazon’s Alexa devices and Microsoft’s Cortana.

Catch up with all of today’s stories on Naked Security

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cQkCeN0cBQg/

India to world+dog: Go ahead, please hack our elections … if you can

Following demands for an investigation into the security of India’s electronic voting machines, the country’s election watchdog has invited all comers to hack its e-ballot boxes.

A kerfuffle over the machines kicked off after a round of recent elections: some in the Indian parliament claimed tallies were maliciously altered by miscreants meddling with the devices. A big bunch of politicians called for the electronic voting machines to be junked and replaced with paper ballots in light of these suspicions.

“We have substantial evidence” of tampering, said Congress Party spokesperson Ghulam Nabi Azad, Times of India reports.

“We gave substantial proof to the Electoral Commission. It did not say that our objections were wrong. It has said that it will probe it. We have asked (the commission) to see how to rectify the flaws in electronic voting machines (so) that the people should have faith that their vote goes to those they vote for.”

While the Indian government rejected these calls for a return to paper ballots, the commission said it will organize a hackathon to probe the boxes. The ten-day competition will be held in May and everyone is invited to subvert the voting machines and the backend systems that support them.

The hacking contest will be complicated. The world’s largest democracy uses a variety of electronic voting machines, as does the US, and so a selection of different systems will be entered into the hackathon.

The commission held a similar hacking competition in 2009 after concerns of election fraud were raised. More than 100 machines were tested and none of them proved susceptible to hacking.

In the meantime, the fate of cyber-election boxes on the subcontinent remains in the balance. Senior Congress Party politician Veerappa Moily said in parliament that he would oppose any move to reintroduce paper ballots in the country.

“It is not a progressive step and we have to move forward on technology. There is no question of going back to manual methods,” he said, but added: “I will go by the party’s views on the issue.”

He may have to. Some other senior Congress Party officials immediately tried to roll back on his statements, saying paper ballots are not ruled out. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/12/india_electronic_election_hacking/

Cybersecurity & Fitness: Weekend Warriors Need Not Apply

It takes consistency and a repeatable but flexible approach to achieve sustainable, measurable gains in both disciplines.

There are several parallels between the health and fitness industry and the cybersecurity industry. In both cases, people are looking for a “quick fix” such as a simple pill that bypasses the need for self-control, dedication, and rigor. The “weekend warrior” approach doesn’t work, and often results in frustration and little to no improvement.

In both cases there is also a steady stream of products and features flooding the market, each with a slightly nuanced set of promises, gimmicks, and buzzwords. Yet despite all of these promises and good intentions, our overall levels of physical fitness and cybersecurity resilience are in rapid decline.

A Tactical Approach
Many of us grew up with our parents asking us the question, “Did you take your vitamins?” There is nothing wrong with taking vitamins, but they are a tactical part of an overall health and wellness program, not a strategy. Unfortunately, this mindset is too often the approach of a cybersecurity program. Disparate tools (“vitamins”) are purchased and deployed to address a specific vertical portion of the application development and delivery life cycle. Meanwhile, the number of breaches continues to increase at an alarming rate. In both fitness and cybersecurity, there are no shortcuts. One has to put together a comprehensive strategy and be diligent about deploying and adhering to it. A consistent, repeatable, but flexible approach is what will result in sustainable, measurable gains.

A Strategic Plan 
Let’s say you decide that you want to run a marathon. You don’t start by going out and running 26.2 miles, but instead work on an overall plan that builds up your strength and resiliency over time. The same approach is needed for cybersecurity. First, you should assess the overall current state of your security posture. Second, collaborate with the various stakeholders and partners on the strategic plan to start embedding security best practices into all areas of the business, including the software development life cycle. Finally, make sure that you continue to communicate any updates to the plan, and more importantly, the ongoing results and subsequent improvements in security posture.

Sticking with It
The key to any long-term fitness program is sticking with it and being consistent. You don’t run a marathon and then stop working out in the hope that the benefits stay with you indefinitely. The same applies to cybersecurity, and unfortunately current compliance regimes don’t encourage consistent rigor. PCI compliance, for example, requires penetration/application security testing only twice a year, while attackers are continuously scanning your application infrastructure for vulnerabilities. The status quo simply won’t help, and new, disruptive approaches need to be adopted.


As with fitness, gains are made by periodic, disruptive changes, and security professionals need to start thinking in those terms in hopes of actually driving significant change. The base portion of your cybersecurity program should start with code security and static code analysis, and be a seamless step in the early part of the software development life cycle.

The next phase is at the continuous integration build step. This is where the automated scanning of third-party and open-source components and libraries should occur, as well as any additional static code analysis.

Finally, application security scanning and penetration testing should be performed before updates are delivered to production to discover any vulnerabilities and immediately remediate them. Visibility and assurance are now provided across all of your code repositories and application environments.
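
To make that concrete, here’s a deliberately simplified Python sketch of those three gates wired into a release script. The scanner commands are hypothetical placeholders, not real tools – substitute whatever static-analysis, dependency-audit and application-scanning products you actually use:

import subprocess
import sys

# Three security gates, run in the order described above. Each command
# is a placeholder for a real scanner; a non-zero exit code blocks the
# release, so every update to production has passed all three checks.
GATES = [
    ("static code analysis", ["run-static-analysis", "src/"]),
    ("third-party/open-source scan", ["audit-dependencies", "requirements.txt"]),
    ("app scan and pen test", ["run-dast-scan", "https://staging.example.com"]),
]

for name, cmd in GATES:
    print(f"security gate: {name}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"release blocked: {name} reported findings")

print("all gates passed; promoting build to production")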

Continuous Improvement
Now that you’ve established a strong level of baseline resiliency, it’s time to think about different ways to continually improve your cyber fitness and not become lazy or complacent. It’s highly unlikely that your enterprise environment is remaining static, and, in fact, with today’s high-velocity approach to business outcomes, you need to remain agile and adaptive in your approaches to ensure that cybersecurity keeps pace. Collaboration and holding each other accountable works very well in sticking with a fitness program, and the same approach will make sure your cybersecurity posture stays in shape too.

[Check out the two-day Dark Reading Cybersecurity Crash Course at Interop ITX, May 15-16, where Dark Reading editors and some of the industry’s top cybersecurity experts will share the latest data security trends and best practices.]


Mike D. Kail is Chief Innovation Officer at Cybric. Prior to Cybric, Mike was Yahoo’s chief information officer and senior vice president of infrastructure, where he led the IT and global data center functions for the company. Prior to joining Yahoo, Mike served as vice …

Article source: http://www.darkreading.com/operations/cybersecurity-and-fitness-weekend-warriors-need-not-apply/a/d-id/1328615?_mc=RSS_DR_EDT

Nation-State Hackers Go Open Source

Researchers who track nation-state groups say open-source hacking tools increasingly are becoming part of the APT attack arsenal.

Nation-state hacking teams increasingly are employing open-source software tools in their cyber espionage and other attack campaigns.

For some of these threat groups, it’s a cost-saving move and a more efficient early-stage attack method. Using the same hacking tools used by security researchers and penetration testers to root out security weaknesses and exploit holes in enterprise networks saves on development costs. For others, it’s purely for camouflaging purposes, providing cover as a legitimate penetration test, for instance.

Kurt Baumgartner, principal security researcher for Kaspersky Lab’s Global Research and Analysis Team, says he noticed a spike over the past year or so in the use of open-source hacking tools by some infamous APT groups. Tools such as Metasploit Meterpreter, Cobalt Strike, BeEF (Browser Exploitation Framework), Mimikatz, Pupy, and Unicorn have all been spotted in use by nation-state hackers.

“APT actors are incorporating more open source into their tooling, and in some cases, abandoning custom and private toolsets in favor” of open source, Baumgartner said in an interview at Kaspersky Lab’s Security Analyst Summit in St. Martin last week, where he gave a presentation on a yearlong snapshot he took of this trend.

Newscaster – aka NewsBeef and Charming Kitten – the nation-state hacking group believed to operate out of Iran, in the past year has relied heavily on several open-source hacking tools: BeEF for exploiting holes in browsers, Unicorn for PowerShell-based attacks, and Pupy for planting a remote administration tool, or RAT. This represents a major shift in MO for the attack group, which traditionally had relied on social engineering to target its victims. “They gave up on their old techniques,” Baumgartner says. “Now they’re using spear phishing with email lures using these toolsets … NewsBeef is not well-resourced, so this enables them to up their game.”

Interestingly, one of the most sophisticated and well-oiled attack groups, the Russian-speaking Sofacy, aka Fancy Bear/APT28/Pawn Storm, has also taken a liking to BeEF. Baumgartner says Sofacy/Fancy Bear has employed BeEF to rig a watering-hole attack on a website likely frequented by its targets. In one big attack campaign in July 2016, the group went after geopolitical targets in a former Soviet republic with a malicious but realistic-looking Adobe Updater on the site, he says. “They were serving their own backdoor from this domain to visitors,” he says. The group, which is reportedly linked to Russian military intelligence, the GRU, even included a progress bar with the phony Adobe updater.

Researchers at CrowdStrike and FireEye say they’ve seen a similar spike in open-source hacking tool usage by nation-states. “It’s becoming fairly commonplace,” says Adam Meyers, vice president of intelligence at CrowdStrike.

“We’ve seen a fair amount from Cozy Bear in the past couple of weeks” as well as Charming Kitten using Pupy heavily, he notes. Iranian nation-state group Rocket Kitten also has been spotted running CORE Impact, a commercial pen-testing tool.

Meyers says not only are these groups using open-source hacking tools for obfuscation, but they’re also using them to fill gaps in their own toolsheds or as a phase-one attack tool. “Some actors are using this as Phase One” for recon, and then executing their own custom tools for the next phases of the attack, he says. “Their [custom] implants are for collecting and pulling data and long-term continuous monitoring,” of the target, he says.

In some cases, it may be more that the attackers are merely leveraging their own training on open-source hacking tools, he says. “They may be receiving commercial-style training.”

John Hultquist, manager of the cybersecurity analysis team at FireEye, says the Iranian nation-state Newscaster group also runs Metasploit, while other groups run Cobalt Strike, an adversary-emulation tool that mimics red-team operations and attacks. FireEye has seen Iranian, Chinese (APT19), and Palestinian nation-state groups relying on open-source hacking tools for their attacks. “Some actors never created their own tools, so they’ve relied on outside tools,” Hultquist says.

Some notorious Chinese nation-state groups historically have employed open-source tools like Poison Ivy, Gh0st RAT, and others, later customizing them as they became too conspicuous. The recent wave of relative newcomers among nation-state groups has likely contributed to the popularity of open-source hacking software, he says.

“The biggest advantage of using those [open source] tools is they forgo tremendous amounts of development time, energy, and money. And from an intel perspective, they provide another level of obfuscation. There’s no permanent attribution because these tools are passed around,” Hultquist says.

The typical next step for a young nation-state attack group is customization of an off-the-shelf tool. “We’ve seen Mimikatz customized on many occasions,” he says.

Mimikatz, which has been used frequently by the Chinese APT10 cyber espionage group, is used for pilfering credential information from Windows.

Seasoned nation-state attackers have been known to use open-source software as the basis for their hacking tools, building out custom versions. Take the recent finding by researchers from Kaspersky Lab and King’s College London of a link between the 1990s-era cyber espionage attacks against NASA, the US military, the Department of Energy, and other government agencies, and the stealthy Russian-speaking attack group Turla. The thread that ties the two attack groups together: the use of the open-source data extraction tool LOKI2, albeit each with its own custom version of the tool.

Open Source Unmasked

There’s some risk to nation-states using open-source tools, however. Take BeEF: its logging feature is very chatty and relatively detailed, so it can inadvertently expose intelligence on the attacker. “Logging is fairly verbose. It keeps track of geolocation” and IP addresses, for instance, Kaspersky Lab’s Baumgartner notes.

“The dilemma for [attackers] is they may not realize how much logging is going on,” he says. “It’s a double-edged sword.”

Targeted organizations also can more easily spot and block those tools, which aren’t as stealthy as a custom tool. “One of the biggest disadvantages is that we [defenders] know your tool very well… We can build a signature for it,” FireEye’s Hultquist says.

On the flip side, it’s difficult to discern whether a Meterpreter exploit is a legitimate penetration test, a cyber espionage attack, or a financially motivated cyberattack. “It’s really going to be a problem for attribution and understanding who’s targeting your network,” he says. “The same actor interested in one employee’s credit card may be using the same tool as an actor there to take millions of dollars in intellectual property. Every incident is not equal; sometimes it’s clean the machine and move on with your life, and the other can change your life and they can look pretty similar.”


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: http://www.darkreading.com/threat-intelligence/nation-state-hackers-go-open-source/d/d-id/1328619?_mc=RSS_DR_EDT

Google boosts verification after wave of Maps fake listings fraud

When someone needs a locksmith, plumber or electrician, they usually need one in a hurry. These days, Google Maps is the smartphone-happy solution: type in the service needed and the map of any locality quickly fills with company names, street images and even starred user ratings.

It’s a far cry from the paper directories of old – but how do punters know these digitally advertised businesses are legitimate?

The short answer is they must take on trust that search providers have checked out companies, which in the case of Google is done through the free-to-list Google My Business programme.

Launched in 2014, this combined search, Maps and Google+ identities, banishing the previous tendency to see them as separate. Other providers such as Microsoft’s Bing Maps did likewise.

Google now admits it has been fighting a rearguard action against a small but determined group of fraudsters gaming this system and has even gone to the bother of publishing a study it commissioned into the problem from the University of California, San Diego. As the study describes this type of fraud:

These listings attempt to siphon organic search traffic away from legitimate businesses and instead funnel it to profit-generating scams.

Which is a Google-centric way of describing con merchants who turn up in person to fix something (a broken pipe, say) but then charge inflated rates. Fraudsters also hijack legitimate companies to siphon business from them or gain referral fees.

After analysing 100,000 listings deemed bogus, the researchers calculated a fraud rate of about 0.5% of searches in the year to June 2015, which has since declined by 70% as Google tightened its vetting.

That sounds small – until you realise that even a fraction of a percent is still a huge number in the context of millions of local searches via Maps.

Three-quarters of the fraudulent businesses were in the US and India, followed distantly by France and the UK, usually popping into existence for days before being suspended. However, fraudsters could cycle through thousands of fake businesses in weeks, often tied to PO box addresses and disposable VoIP numbers.

Why is Google telling us about a problem most people have probably never heard of? Because it thinks it’s fixed the problem with better vetting.

The company is now piloting more sophisticated vetting for easily abused trades such as locksmiths and plumbers. Addresses are verified more carefully while bulk registrations from single addresses have been banished.

Not before time, you might say, even if Google still admits:

Fake listings may slip through our defenses from time to time.

Maps fraud is perhaps the latest example of how Google’s technology (and mis-steps) can warp the real world, both for the victims of fraud and for the legitimate businesses that must live with heightened suspicion.

But as Google is finding out with news, fakery and fraud are not as hard to pull off as its brightest minds once thought. The algorithms are weak, and the next big wave looks as if it could be human verification. It’s the unprofitable, manual process Google once hoped to banish from the world forever.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fNg3XW2Yhp8/

Malware, Sir? Jenkins ‘software butler’ tool gets many security fixes

These days, programmers often work in large, collaborative teams that produce dozens of different deliverables at the same time.

If you’re working on an image editor, for example, some of the components in it might also be released as standalone programming libraries that other people can use.

You might have a GUI version of your product, as well as command line tools that do the same sort of work in a different way.

And you might publish for several different platforms, including mobile devices, each of which needs its own build of each version.

For example, imagine that you’re part of a team that makes image editing software, and that you’re working on a low-level function called multiply_brightness_matrix(), trying to make it run faster.

The code you’re looking at might be no more than 100 lines of code in one file, out of 10,000,000 lines in 10,000 files in the whole project.

Such localised changes are probably fairly easy to test on their own: you should be able to confirm that your new code produces the same output as the old code, at least when used in isolation.

But now also imagine that your code is used by an image filter called add_sparkle_highlight() that will end up in a library called ITEFFECT.DLL that will, in turn, be built into your own company’s flagship ImageTouchup software on Windows, Mac and Android.

How will your modest and apparently unproblematic changes affect everyone else in the ecosystem? Will all the products that use your new code still work correctly? If they work differently, will the changes be an improvement or a step backwards? Will your changes affect any other recent changes that your colleagues are working on?

As a popular proverb warns us:

For want of a nail the shoe was lost.
   For want of a shoe the horse was lost.
For want of a horse the rider was lost.
   For want of a rider the message was lost.
For want of a message the battle was lost.
   For want of a battle the kingdom was lost.
And all for the want of a horseshoe nail.

To address this sort of problem, modern software development processes often rely on what’s called continuous integration (CI), or frequent integration, whereby you automatically test the impact even of small changes in your code on your entire ecosystem, instead of waiting for a daily or weekly build that tries everyone’s recent updates at the same time.

The theory is obvious and community-centric: the earlier you find a problem, the sooner you can fix it and the fewer other people you’ll affect in the troubleshooting process.

You can’t do that by hand, so a number of toolkits are available these days that can automatically rebuild all your software, and at least partially re-test it, almost any time any programmer tries out any change.
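
At heart, such a toolkit automates a loop like the one below – a toy Python sketch of the idea, not how Jenkins itself is implemented, with a placeholder repository path and build commands:

import subprocess
import time

# A toy continuous-integration loop: notice new commits, rebuild,
# re-test, report. Real CI servers add job queues, parallel builds,
# webhooks, plugins, signing and deployment on top of this idea.
REPO = "/srv/build/imagetouchup"  # placeholder path

def run(*cmd):
    return subprocess.run(cmd, cwd=REPO, capture_output=True, text=True)

last_seen = None
while True:
    run("git", "fetch", "origin")
    head = run("git", "rev-parse", "origin/master").stdout.strip()
    if head != last_seen:                 # a change landed: build and test it
        run("git", "checkout", head)
        build = run("make", "all")        # placeholder build command
        tests = run("make", "test")       # placeholder test command
        ok = build.returncode == 0 and tests.returncode == 0
        print(head[:10], "PASS" if ok else "FAIL")
        last_seen = head
    time.sleep(30)                        # poll; real systems use push hooks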

CI is an impressive luxury that engineering and manufacturing companies don’t have. For instance, it would be an absurd waste of time and raw materials for an automotive company to build a new car from scratch every time a designer changed the look of the fuel filler cap. Rebuilding software isn’t free, because it requires servers, hard disks, networking, air conditioning and the electricity to run them all, but you don’t consume real materials every time you compile a new version of your software, so that continuous integration can, more or less, be just that.

One popular and widely used toolkit for the CI process is called Jenkins:

Jenkins can not only rebuild projects after each change, but even automatically approve, sign and deploy newly-built versions into one or more test environments…

…from where, if they pass all the necessary tests, they might even flow on automatically into your distribution system – in the physical world, what would be called your “supply chain”.

Try bribing the butler

You can see where this is going.

A bug or security flaw in a toolkit like Jenkins could have a massive impact, because Jenkins is part of the infrastructure that helps you decide whether to trust your new software or not.

And Jenkins not only presents a large attack surface on its own, it can be bolstered, boosted and extended by dozens of popular plugins, not part of the core software, that can bring their own security risks to the party.

The most recent Jenkins security update, for example (2017-04-10), addressed at least 32 arbitrary remote code execution bugs, both in the software itself and in many of its plugins.

We’ve warned about the risk of server-side plugins many times before, for example when the popular WordPress plugin TimThumb opened up holes even on servers where WordPress itself was up-to-date.

Application plugins are easy to forget about: once you’ve dealt with the operating system itself and your apps, it often feels as though your patching work is done.

As an aside, we’ve also warned for years against browser plugins, notably Flash and Java, because they have been fertile bug-hunting grounds for cybercrooks who couldn’t get past the browser: if you can’t bribe the Laird, try bribing the butler instead.

What to do?

If you’re part of a programming team that uses Jenkins, make sure you’ve applied all needed patches, or stopped using any plugins that are now known to be vulnerable but haven’t yet been patched.

Bugs of this sort may sound harmless on the surface – after all, rogue programmers with the right to submit files to your build system could just write rogue instructions right into the code they upload.

But you can think of these holes as “metacoding” bugs, where a rogue programmer could submit perfectly legitimate code changes that would be passed as improvements, yet could at the same time sneakily subvert the build process itself.

That could leave you with official software, built officially from the official source code…

…but with some unofficial “secret sauce” mixed in.

🌶 MORE SECRET SAUCE: When genuine Mac software goes rogue ►

🌶 MORE SECRET SAUCE: World’s biggest Linux distro infected with malware ►

🌶 MORE SECRET SAUCE: 36 Android devices ship with malware ►

🌶 MORE SECRET SAUCE: XCodeGhost turns your Mac into an iPhone virus generator ►


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/olsHxz988tQ/

Low fines for charities misusing donors’ data was ‘a masterstroke’

Guest post: Jon Baines is the Chair of the National Association of Data Protection and Freedom Of Information Officers

When the Information Commissioner’s Office (ICO) recently fined 13 charities, including the RSPCA, the British Heart Foundation, the British Legion and the Battersea Dogs and Cats Home, a total of £181,000 for breaches of the Data Protection Act 1998 (DPA), outrage erupted in several forms and from several places.

A lot of the outrage was directed at the ICO themselves, but I’m inclined to think that it was actually a bit of a masterstroke by them – laying down a data protection marker, while reducing the likelihood of a legal challenge. Whether this masterstroke was witting or unwitting is a matter for debate.

The breaches in question were primarily of the DPA principle that processing of personal data be “fair” (broadly, that it should be transparent and not outside people’s range of reasonable expectations).

Some of the outrage caused, then, was from those who thought the fines were much too low (the highest individual one was £25,000, against a maximum limit of £500,000); some, especially from those in the fundraising community, was from those who thought the fines were wrongly imposed altogether; and some was from those (I’m thinking in particular of some regular charitable donors I’ve spoken to) astounded that charities had been screening and profiling them, without their knowledge, to assess their wealth and donor-potential.

It’s important to note that the ICO actually considered the breaches by the various charities to be highly serious ones, of a kind likely to cause substantial distress, and in every case the fine could actually have been many times higher. However, the Commissioner herself (Elizabeth Denham, relatively newly in position) decided to exercise her discretion to reduce the fines, because of “the risk of adding to any distress caused to donors by the charities’ actions” .

Ms Denham clearly has such discretion, and it has been exercised before (for instance in 2011, when her predecessor reduced a potential fine of £200,000 to £1,000 because the recipient was a private individual with limited means), but I do not recall an instance where the potential for distress to those effectively funding the data controller reduced the fine: for instance, when local authorities, or NHS bodies, have received large fines, there has not been a suggestion that they should be reduced because of potential further distress to taxpayers, or patients.

So what else lies behind this reduction in the fines?

Fines for breach of the “fairness” principle are relatively novel – normally a fine will result because of failings in security. The DPA says that the processing of personal data should be “fair and lawful”, and although it contains further provisions which provide a gloss on this, ultimately “fairness” is difficult to assess on an objective basis. And if processing is said to have been “unfair”, how does one go on to quantify the likelihood of “substantial distress” occurring as a result? Not easily, is the answer.

I think the ICO made a good effort in these cases, and I think the practices uncovered deserved to result in fines.

Let us not overlook that the charities engaged in behaviour such as

  • Using third parties to investigate and assess donors’ and potential donors’ incomes, property values and lifestyle, without informing them
  • Using third parties to find missing information about donors (which they might have chosen not to share with the charities) without informing them
  • Sharing donors’ data with other charities without explaining which ones, and for what purposes.

And we are talking about millions of affected donors. I think the ICO will now be feeling that they have made an important and prominent statement on the importance of being fair and transparent about how people’s personal data is handled (by charities, but also by other data controllers).

But any recipient of a fine has an automatic right of appeal to an independent tribunal, and on occasions in the past, this tribunal has overturned an ICO fine. Any such appeals are potentially costly and time-consuming, for both sides, and – it stands to reason – the more novel the issue, the more costly and time-consuming an appeal is likely to be.

This is where the ICO masterstroke comes in: the reduction in the amount of the fines (to what are, in reality, relatively small sums) makes the option of an appeal for a charity distinctly unattractive. Put yourself in the position of a trustee – would you be likely to approve potentially expensive litigation, when you could “settle” a case cheaply and quickly by paying the fine? (Even where you might think a point of principle for your future fundraising is at stake.)

As I say, maybe the ICO didn’t intend to play this tactical move, but intended or not, it will certainly have lessened the chance of appeals being lodged. This is not to say one or more appeals won’t transpire, but so far, the affected charities and associated fundraisers have had to resort to airing their dissatisfaction through their PR departments.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4wxLwOWs2C0/