
Why The Java Deserialization Bug Is A Big Deal

Millions of app servers are potentially open to compromise due to how they handle serialized Java objects, researchers say.

A recent blog post by FoxGlove Security that described remotely executable exploits against several major middleware products including WebSphere, WebLogic, and JBoss has focused attention on what some say is an extremely dangerous but wholly underrated class of vulnerabilities.

The so-called Java deserialization vulnerability affects virtually all apps that accept serialized Java objects and gives attackers a way to gain complete remote control of an app server. Security researchers believe that potentially millions of applications — both commercial and internally developed — are susceptible to the issue, which is not easily mitigated.

Though researchers have been aware of the vulnerability for some time, few have paid much attention to it because there have been no working public exploits against applications until now. But FoxGlove’s demonstration of how the flaw can be exploited using a tool released about nine months ago has heightened concerns around the issue.

“It’s a big deal because many enterprise applications are vulnerable,” says Jeff Williams, chief technology officer at Contrast Security, which released a free tool for addressing the issue.

The vulnerability allows attackers to completely take over the server on which the application is hosted and create all sorts of havoc.

“They could steal or corrupt any data accessible from that server, steal the application’s code, change the application, or even use that server as a launching point for further attacks now that they are inside the data center,” Williams says.

Here’s what you need to know:

What is the vulnerability called?

That depends on whom you ask. Some have called it the Java Deserialize vulnerability, others refer to it as the Java Unserialize flaw, while some simply call it the Java Object Serialize flaw.

What exactly is the vulnerability all about?

The vulnerability exists in the manner in which many Java apps handle a process known as object deserialization. As Williams describes, serialization is a technique that many programming languages use to transfer complex data structures over the network and between computers.

It’s a process in which a Java object is essentially broken down into a series of bytes to make it easier to transport and then reassembled back into an object at the other end. The disassembly of an object into a sequence of bytes is called serialization, while the reassembly of the bytes back into an object is called deserialization or unserialization.

The problem lies in the fact that many apps that accept serialized objects do not validate or check untrusted input before deserializing it. This gives attackers an opening to insert a malicious object into a data stream and have it execute on the app server.

“In this attack, special objects are serialized that cause the standard Java deserialization engine to run code of the attacker’s choosing,” Williams says. “It’s not exactly a problem in Java, or in any particular libraries. It’s just a powerful functionality that organizations shouldn’t expose to untrusted users.”
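To make the mechanics concrete, here is a minimal Java sketch — ours, not from the article, and not Contrast's tool — of the "look-ahead" defence often recommended for this class of bug: subclass `ObjectInputStream` and reject any class descriptor that isn't on an explicit whitelist, so an attacker's gadget classes are refused before they are ever instantiated. The `ALLOWED` set here is a hypothetical whitelist for an app that only ever expects a few simple types.

```java
import java.io.*;
import java.util.Set;

// "Look-ahead" deserialization sketch: resolveClass() is called for every
// class descriptor in the stream, before that class is instantiated, so a
// non-whitelisted class (e.g. a gadget from commons-collections) is rejected
// before its code can ever run.
public class SafeDeserializer extends ObjectInputStream {

    // Hypothetical whitelist for an app that only expects simple types.
    private static final Set<String> ALLOWED = Set.of(
            "java.lang.Integer", "java.lang.Number", "java.lang.String");

    public SafeDeserializer(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        if (!ALLOWED.contains(desc.getName())) {
            throw new InvalidClassException("Rejected class: " + desc.getName());
        }
        return super.resolveClass(desc);
    }

    // Deserialize a byte array through the whitelist-enforcing stream.
    public static Object safeRead(byte[] data)
            throws IOException, ClassNotFoundException {
        try (SafeDeserializer in =
                     new SafeDeserializer(new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }

    // Helper used only for the demonstration below.
    public static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // An Integer is on the whitelist and deserializes normally...
        System.out.println("allowed: " + safeRead(serialize(42)));
        // ...while a java.util.Date (standing in for a gadget class) is refused.
        try {
            safeRead(serialize(new java.util.Date()));
        } catch (InvalidClassException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```

This only illustrates the general idea of refusing untrusted classes up front; a real deployment would need a whitelist tailored to the application's actual object graph.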

What exactly did FoxGlove do?

FoxGlove showed how the vulnerability could be exploited in WebSphere, WebLogic, JBoss, Jenkins, and OpenNMS.

Each of these applications includes a Java library called “commons-collections” that provides a method that leads to remote code execution when data is being deserialized, says Stephen Breen, principal consultant and developer of the attacks against the five middleware apps. Ideally, when data is being deserialized, no code should execute during the process.

Breen generated the payloads for his exploits using a tool called “ysoserial” released about 10 months ago by security researchers Chris Frohoff and Gabriel Lawrence at AppSec California 2015. In a presentation titled Marshalling Pickles, the two researchers demonstrated proof of concept code for exploiting Java object unserialization vulnerabilities and showed four different ways they could do it using the ysoserial tool.

“The bug is on both sides in my opinion; but others may disagree,” Breen says. “The commons-collections [library] should not provide a method that leads to remote command execution simply by deserializing untrusted data,” he said. “This is unsafe due to the history of the way serialized objects have been used in Java.”

At the same time, app vendors and app developers should not be unserializing untrusted data. “There could be other libraries besides the commons collection that allow for exploitable scenarios, and it’s generally not a good idea.”

What are the implications for enterprises?

Williams says the first thing enterprises need to do is find all the places where they are using deserialization on untrusted data, and harden it against the threat. “Searching [the] code is only a partial solution, because frameworks and libraries that they are including in their applications might also create this exposure.”

He pointed to a tool released by Contrast Security called Runtime Application Self-Protection (RASP) that adds code to the deserialization engine that prevents it from being exploited.

Removing commons collections from app servers running the library will not help entirely because other libraries could have a similar problem, Breen said. “It may be a good mitigation, but does not address the core problem.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: http://www.darkreading.com/informationweek-home/why-the-java-deserialization-bug-is-a-big-deal/d/d-id/1323237?_mc=RSS_DR_EDT

Drone makers put their devices on a geofenced leash


How do you keep drones from flying where they shouldn’t?

You can try beseeching the public, like California firefighting services did during the summer’s drought-fueled, brutal fire season.

You could go the legal route, like California tried (and failed) to do with legislation that would have exonerated emergency workers who take out drones.

You can shoot them down. But not safely, or, at least most of the time, not legally.

Then again, you can keep drones from flying where they shouldn’t by building technological limits into the units themselves.

To that end, both 3D Robotics and DJI, two of the most popular consumer drone manufacturers, have announced new geofencing systems with real-time access to no-fly zones provided by AirMap.

DJI, a Chinese company that the New York Times says makes more small-scale drones than any other company, announced on Tuesday that its new “geofencing” system will help to keep drones from flying in forbidden airspace – including, for example, over forest fires.

The company said that its Geospatial Environment Online (GEO) will feature continually updated airspace information. GEO will be available on the current versions of the Phantom, Inspire and Matrice drones.

From the release:

For the first time, drone operators will have, at the time of flight, access to live information on temporary flight restrictions due to forest fires, major stadium events, VIP travel, and other changing circumstances.

The system is built on flying restrictions that DJI came out with in 2013.

DJI says that GEO will provide “a measure of accountability in the event that the flight is later investigated by authorities” – which could be read as CYA for the company, which is still leaving it up to operators to decide whether to fly in certain cases.

Brendan Schulman, DJI’s Vice President of Policy and Legal Affairs, who led the development of the new system:

We believe this major upgrade to our geofencing system will do even more to help operators understand their local flight environment, and to make smart, educated decisions about when and where to fly their drones.

The buck still stops with the operator in certain scenarios. The drones aren’t going to autonomously shut down and refuse to fly over a protected wildlife area, for example.

What’s more, authorized operators will still be able to fly drones around airports, but not without first taking steps to “unlock” such a zone using a verified account – i.e. one that’s backed up with a credit card, debit card, or mobile phone number.

Finally, unlocking won’t be available at all in locations that raise security or safety concerns.

The drones will neither be able to fly into nor take off in zones that are restricted due to national security concerns, including Washington DC.

The system’s built-in restrictions include locations such as prisons and power plants, where potential payloads, not the flying itself, have increasingly become a concern, as contraband including drugs, knives, cameras and phones keep getting dropped into prisons.

In fact, the US Federal Bureau of Prisons recently called for information on drone-killing systems.

DJI’s GEO, which is proprietary, is scheduled to go live in December in the US and Europe via a firmware update to the drones and via an update on the DJI Go app.

For its part, 3D Robotics on Tuesday announced that it was also partnering with AirMap, adding the safety information software to its Solo smart drone app in anticipation of the holiday shopping season.

The Solo app will present basic information about federal guidelines: for example, stay 5 miles away from an airport. It will also include national parks and airbases.

The AirMap software will also continually pull in airspace information and display the restricted, warning and informational areas on a map.

If Solo users open their Solo app in a restricted area, they’ll see a warning. Tapping on the warning will bring up a map that displays any airspace information in the area, including real-time temporary flight restrictions around wildfires, major sporting events and other sensitive places.

These companies’ moves to include geofencing will get them both out in front of various calls to mandate such technology.

One such: US Senator Chuck Schumer (D-NY) in September said he was moving on a requirement for drone manufacturers to include geofencing technology that would prevent newly built devices from flying over restricted areas.

Image of drone flying in the sunset courtesy of Shutterstock.com

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/j4hKUQRJrJI/

Forget BadBIOS, here comes BadBarcode…

What a difference a cool name makes!

A security research paper entitled Putting control characters into one-dimensional barcodes to trip up sloppily coded apps probably wouldn’t grab your attention.

But BadBarcode would, so that’s what a Chinese security researcher who goes by the name Hyperchem Ma called his paper at the recent PacSec 2015 conference.

The paper has received a lot of publicity, including some dramatic headlines like “Poisoned barcodes can be used to take over systems” and “Customized barcodes can hack computers”.

So we thought we’d take a look at what BadBarcode really is, so you can decide how dangerous the problem is likely to be.

A one-dimensional study

Ma looked only at so-called one-dimensional, or 1D, barcodes, which are the ones you typically find on products on a supermarket shelf.

The barcode runs in a single line, printed from left-to-right, though thanks to the arrangement of the stripes, it can be read upside down.

There are two main sorts of 1D barcode, known as Code 39 and Code 128.

The names are a curious mix of history and peculiarity.

Code 39 can represent 43 different characters these days, but it was originally limited to 40 symbols, with one reserved as a start/stop marker, and so the number 39 (40 minus 1) stuck in the name.

Code 128, curiously, can represent 108 symbols, only 103 of which are actual data characters, but it has 3 control symbols that choose which data bytes are represented by each of the 103 encodings.

You can mix control symbols and data symbols inside a barcode, with the control symbols acting a bit like the Caps Lock key on a keyboard to toggle between different parts of the ASCII character set.

In short: Code 128 can represent all 128 characters in the 7-bit ASCII set, including characters like Ctrl-C, Ctrl-M (Carriage Return) and Ctrl-[ (Escape).

The barcode “keyboard”

The reason this matters is that most barcode readers are implemented as plug-and-play keyboards, just like old-school credit card magstripe readers.

That way, you can read barcodes into your app simply by reading from the keyboard, as you would if the operator typed in the characters printed underneath the barcode.

Now, imagine that your app expects Code 39 barcodes: you might well assume that the input from the pseudo-keyboard barcode reader will only ever include A-Z, 0-9, space and one of -$%./+.

So, even if your app is written using a programming library that processes, say, Ctrl-O as a shortcut to open a file dialog, or Ctrl-R to run a new program, and so on, you might assume that you don’t have to worry about those special characters turning up in a maliciously-generated barcode.

Code 39 doesn’t support those characters, so they can’t show up.

So you might be inclined to trust the input from the barcode implicitly, for example when a user wants to scan an item at one of the price check stations that many supermarkets provide.

But if a crooked customer shows up with a Code 128 barcode that reads something like…

[Ctrl-R]CMD.EXE[Enter]DEL /Y /S C:\*.*[Enter]

…then many barcode readers will nevertheless recognise it as a valid barcode, choose the right decoding algorithm, and return the characters anyway.

As a result, your app might wander into trouble.

Validate your input

To work around that, you’re probably thinking that validating your input is a good idea, and you’d be right.

In other words, you accept a line of input from the barcode scanner but check through it first for anything out of place.

If you’re expecting digits only, for example, then when letters, punctuations or control characters appear, you can trigger an error and refuse the input, instead of going ahead with something definitely unexpected and potentially dangerous.
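As a rough illustration — ours, not from Ma's paper — a digits-only whitelist check might look like this in Java. Anything outside '0'–'9', including the control characters a Code 128 barcode can smuggle into the pseudo-keyboard stream, is refused:

```java
// Treat the barcode scanner's pseudo-keyboard input as untrusted, and accept
// a scanned line only if every character is on the whitelist the app actually
// expects -- here, ASCII digits only.
public class BarcodeInput {

    // Reject empty input and anything outside '0'-'9', which also catches
    // control characters such as Ctrl-R (0x12) or Escape (0x1B).
    public static boolean isValidDigitsOnly(String scanned) {
        if (scanned == null || scanned.isEmpty()) {
            return false;
        }
        for (char c : scanned.toCharArray()) {
            if (c < '0' || c > '9') {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A plausible all-digit product code passes...
        System.out.println(isValidDigitsOnly("5012345678900"));
        // ...while a string carrying Ctrl-R and a carriage return is refused.
        System.out.println(isValidDigitsOnly("\u0012CMD.EXE\r"));
    }
}
```

The same pattern generalises to any expected alphabet (for example the Code 39 character set): define the whitelist up front and reject the whole input on the first character that falls outside it.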

However, that might not be enough on its own, because the operating system itself – or at least what’s called the window manager – might detect and act on some special characters immediately, before your input validation algorithm is even called.

Window managers are needed when several apps share the keyboard and screen, to make sure that the right apps send and receive the right content, and to deal with special keystrokes that should be consumed directly by the window manager itself, such as Alt-TAB on Windows.

So if you want to protect your barcode-reading app from unusual, unexpected or even malicious “keystroke” data inside a barcode, you also need to familiarise yourself with the low-level programming functions that allow you to get the first look at every keystroke, even before the window manager gets its chance.

On Windows, for example, the function SetWindowsHookEx() is your friend.

With this function, you can instruct Windows to call a special procedure inside your app, known as a LowLevelKeyboardProc, giving you first look at the keystroke that’s coming next, and allowing you to process it (or ignore it, or change it) before anyone else gets a chance.

That way, you can improve the safety and security of programs that need to accept input from untrusted outsiders, yet are forced by the available hardware to consume that input as if the potential attacker were typing away at a keyboard.

By the way, there’s a whole slew of 2D barcodes as well, such as Data Matrix, PDF417 and – perhaps the best known sort – QR codes.

The 2D barcodes typically let you store much more data in the same space, so are increasingly widely used – and increasingly widely supported by barcode readers.

For all you know, your Code 39-based app, programmed to assume digits only, might some day be confronted by hundreds of bytes of data from a QR code, simply because you can’t control what an untrusted outsider might hold up to the reader.

What to do?

Briefly put:

  • Always validate input before using it.
  • Always understand how untrusted input might affect the underlying operating system before you see it.
  • Assume that specialised input devices (e.g. barcode scanners) can be made to behave like general-purpose ones (e.g. keyboards).
  • Expect the unexpected.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JfV1BIoyb_E/

Chipotle’s human resources emails made job applicants phishing bait

Applying for jobs can be painful, but at least your interactions with “human resources” shouldn’t put you at risk of anything worse than dashed hopes.

Actually, that’s not the case for those applying for a role at Chipotle. Until recently, the giant Mexican fast food restaurant chain was putting its job applicants at risk of identity theft and phishing attacks.

That’s because Chipotle was sending emails to new job applicants from an email address using a domain – chipotlehr.com – it didn’t own.

The domain wasn’t owned by anyone, in fact, until an unemployed IT worker applied for a job at Chipotle and found out the chipotlehr.com domain wasn’t registered, and bought it for $30.

The IT worker, Michael Kohlman, tipped off security blogger Brian Krebs, who went to work and did what he does so well – exposing just how badly a major company has bungled security.

Once Kohlman owned the domain, he started receiving all emails people sent to [email protected], which could have been disastrous if a cybercrook had got to the domain first.

If he’d wanted to, Kohlman could have stolen personal information from those job applicants such as their names, email addresses, phone numbers, and so on.

Or he could have used the [email protected] email to go phishing for more information from the applicants, perhaps by asking them for Social Security numbers or bank information for supposed “background checks.”

There are many ways the domain could have been abused in the wrong hands, as Kohlman told Krebs:

In a nutshell, everything that goes in email to this HR system could be grabbed, so the potential for someone to abuse this is huge.

This wasn’t a goof where Chipotle forgot to renew the domain registration – a screw-up that even big companies like Google and Microsoft aren’t immune to.

Chipotle had never owned the domain – it was just using the email address for emails that it told job applicants not to reply to.

But many people did reply to those emails, or emailed an address on the chipotlehr.com domain in hopes of finding someone at Chipotle HR.

Kohlman said he discovered the unregistered domain when he replied to the email he received after submitting an application and got an error message.

Kohlman went to Chipotle and offered to give them the chipotlehr.com domain for free, but they expressed no interest.

Now the website shows only a black screen with the message:

This is NOT the Chipotle Human Resources Page

chipotlehr.com

Perhaps most concerning is the fact that Chipotle still doesn’t see how sending emails from an unregistered domain was a security no-no.

In an emailed statement, Chipotle told Krebs that the chipotlehr.com domain was “never functional,” and therefore there has “never been a security risk of any kind associated with this,” and it is “really a non-issue.”

Charitably, Kohlman said he wanted to help Chipotle and others “learn from their mistakes,” rather than causing Chipotle any “real damage.”

They didn’t get the message.

Maybe Chipotle – a $3.5 billion company that says it’s “on a mission to change the way people think about and eat fast food” – should start by hiring someone to think about security for a change.

Image of mexican food courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xerTq84H5v8/

Facebook finally lets a woman named Isis back into her account


Little girls named after the Egyptian nature goddess Isis may well be struggling at the moment, tormented by people over their name.

That’s bad enough. But shutting down Facebook accounts of women named Isis?

That’s just straight up facepalm.

As she tweeted on Monday, Isis Anchalee, a San Francisco-based engineer, recently became yet another Isis to suffer from the media’s reliance on the term ISIS – an acronym for the terrorist organization Islamic State of Iraq and Syria.

Of course, Facebook’s real-name policy has meant that this has happened to plenty of other people: Native Americans, drag queens, Mr. Something Long and Complicated, and Ms. Jemmaroid von Laalaa, to name a few.

Anchalee said that she sent Facebook a copy of her passport to prove that her “real” name is really Isis, but that apparently wasn’t good enough:

Facebook thinks I’m a terrorist. Apparently sending them a screenshot of my passport is not good enough for them to reopen my account.

— Isis Anchalee (@isisAnchalee) November 17, 2015

The third try did the trick.

Facebook then did the classy thing: researcher Omid Farivar publicly apologized, tweeting that he doesn’t know why it happened, but that they’re on it.

Privacy watchdogs have been after Facebook for years on the real-name policy.

In July, a German privacy watchdog ordered Facebook to allow users to take out accounts under pseudonyms.

Then, in October, the Nameless Coalition, consisting of 75 human rights, digital rights, LGBTQ, and women’s rights advocates – including the Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) – went on to pen an open letter to Facebook in which it explained why its “authentic name” policy was broken and how Facebook could mitigate the damages it causes.

Earlier this month, Facebook acknowledged the letter and said that change was in the works.

Alex Schultz, Facebook’s VP for growth and internationalization, said that a Facebook team is now working on reducing the number of people asked to verify their name on Facebook, when they’re already using the name people know them by, as well as making it easier for people to confirm their name if necessary.

Schultz says Facebook expects to test those changes in December.

In the meantime, those who are legitimately, “authentically” named Isis are still, obviously, liable to have their accounts reported and frozen.

Image of Facebook sign up screen courtesy of PeoGeo / Shutterstock.com

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SlIu-LghKv4/

Tech firms fight anti-encryption demands after Paris murders

Comment Anti-encryption sentiment among politicians is rising following the Paris terror attacks, but Silicon Valley firms are so far resisting attempts to weaken crypto systems to allow easier access to private communications for law enforcement and intel agencies.

WhatsApp on Android and Apple’s iMessage (as well as other applications) provide end-to-end encryption, which means that encryption keys are held on devices and not by the firms providing the services.

Hence, they are unable to hand over private crypto keys even if presented with a court order, a set-up roundly criticised by politicians including Prime Minister David Cameron and others, even before last Friday’s attacks.

Politicians have upped the rhetoric in the days since as pressure for a fundamental change grows.

American senator Dianne Feinstein, who chairs the US Senate Intelligence Committee, told MSNBC: “If you create a product that allows evil monsters to communicate in this way, to behead children, to strike innocents – whether it’s at a game in a stadium, in a small restaurant in Paris, take down an airline – that is a big problem.”

An opinion column in the New York Times, authored by Manhattan’s district attorney and the City of London Police’s commissioner, argued that “encryption blocks justice”. The piece, which logs law enforcement’s frustration with encryption technologies, was first published in August.

Previously, tech firms might have been won over by such pleas. However, the Snowden leaks about the lengths intel agencies are prepared to go to in developing mass surveillance capabilities (AKA bulk interception) have forced technology firms to push back, partly as a way of restoring user confidence that products and services are worthy of their trust.

Technology developers also argue strongly against granting backdoor access to data or communications, claiming it creates a security weakness that third parties (foreign government and criminals) might be able to exploit, as well as creating a logistical nightmare.

Faced with a user backlash over apparent cooperation with the US government, technology companies were desperate to show they could be trusted. Apple, in particular, has been bullish about encryption.

“If the government laid a subpoena to get iMessages, we can’t provide it,” Apple boss Tim Cook told US public broadcaster PBS last year. “It’s encrypted and we don’t have a key.”

Enterprise encryption providers are sticking to this stance even in the wake of recent terrorist attacks in Beirut, Lebanon and Paris.

“The events in Beirut and Paris have revived calls for increased surveillance and weakened encryption as the means to prevent another atrocity,” said Pravin Kothari, founder and chief exec of CipherCloud. “Renewed calls for government shared encryption keys are emerging in the US. In the UK, some advocate expediting the passage of the Investigatory Powers Bill [IPB], which mandates cloud providers to remove encryption to make data accessible.”

Kothari argued interfering with commercial encryption products and services would be both counterproductive and ineffective.

“Diluting commercial encryption won’t prevent the bad guys from using their own proprietary encryption and won’t make us safer,” Kothari argued. “Weakening the technology that companies use to protect average users misses the mark. Nor will enacting the IPB better protect the homeland as many of its monitoring provisions already exist in France following Charlie Hebdo.”

It’s not yet clear if terrorists used encrypted comms in commissioning the Paris attack, with some early reports suggesting the group used SMS messaging and others suggesting encrypted apps played a part.

Encryption policy in any case shouldn’t turn on whether the Paris attackers were using crypto, some independent security experts argue.

“Paris attackers probably used encryption [and the] Snowden revelations probably helped those attackers,” said Rob Graham of Errata Security in a Twitter update. “None of this says we should install crypto backdoors, or employ mass surveillance of the population,” he added. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/19/anti_encryption_backlash_keys_imessage/

Telegram messaging app blocks some ‘public’ ISIS-related channels

Telegram is – following bad publicity about its messaging app – on a whack-a-mole mission against channels understood to have been set up on the service by ISIS.

The tool has been used to spread propaganda and urge others to join the terrorist group.

However, anyone wishing to read Telegram’s full statement – in which it confirmed that it had so far “blocked 78 ISIS-related channels across 12 languages” – will have to install the app first.

Arguably, that’s a clever marketing exercise for an app seizing on the appetite for news about ISIS following the Paris attacks last Friday.

The Berlin-based outfit said: “We were disturbed to learn that Telegram’s public channels were being used by ISIS to spread their propaganda.”

Reuters had earlier reported that the app had been used by ISIS as a recruitment tool.

Questions immediately hit Telegram’s Twitter feed, after it announced that it had taken down 78 channels connected to ISIS.

One user asked: “Oh, so do you intercept conversations?”, to which Telegram’s chief Pavel Durov replied: “Of course not; only publicly available channels could be reported and blocked.”

Telegram claims to be “way more secure” than WhatsApp. The service uses the MTProto protocol, developed by Durov.

It notes in its FAQ section that channels “are not part of [the] core Telegram UI”. It adds: “While we do block terrorist (e.g. ISIS-related) bots and channels, we will not block anybody who peacefully expresses alternative opinions.”

So-called secret chats on the app use end-to-end encryption and are not backed up in the cloud, according to Telegram. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/19/telegram_messaging_app_blocks_some_public_isisrelated_channels/

George Osborne fires starting gun on £20m coding comp wheeze

Security vendors and training organisations have welcomed plans by the UK government to open a £20m competition along with a new “Institute of Coding”.

The proposals were floated during a speech by UK Chancellor George Osborne on cyber-security and the fight against terrorism at GCHQ on Tuesday, during which he also announced £1.9bn funding for cyber security and a new National Cyber Centre.

Paul Farrington, ‎senior solutions architect at application security firm Veracode, commented: “Our world is run on software – medical devices, finance, IoT, access to knowledge via Internet, etc – so any foundational security training must include the ability to code securely.”

He continued: “The opportunity for young people to gain affirmative training in coding goes beyond just providing them with the ability to design and build, but will give them a greater understanding about the issues and responsibilities that developers face in ensuring that their code remains secure.”

Shortcomings in application security – often caused by coders making well understood mistakes – are putting consumer and enterprise data at risk. Improving application security training for developers is crucial in closing these gaps.

“Coding vulnerabilities in web applications remain one of the most frequent patterns in confirmed breaches and account for up to 35 per cent of breaches in some industries,” Farrington explained. “Understanding these threats and the security measures that developers must take to ensure they aren’t using exploitable or malicious code is essential to our global cyber hygiene. This was demonstrated earlier this autumn when the XcodeGhost malware infiltrated the Chinese Apple App Store after developers used a local, bootlegged version of Xcode that contained the malicious code, rather than the original Apple version.”

It’s not immediately clear whether or not the £20m competition is being financed through new funding or a consolidation of existing programmes. The UK government already has a variety of programmes promoting skills and careers in cyber security, not least the Cyber Security Challenge scheme. El Reg put in a query to HM Treasury on this point but we’re yet to hear back.

Dr Robert L Nowill, chairman of the Cyber Security Challenge UK, welcomed the initiative as complementary to its own goals and helpful in schooling the next generation of developers in secure coding best practices. The details on how the new Institute of Coding will work and how it will dovetail with academies of excellence in UK Universities are yet to be worked out. Nonetheless, Nowill is upbeat.

“The Institute will help in educating the next generation of coders by having security principles built into their training,” Nowill told El Reg, adding that an understanding of secure coding practices will tend to make the products they develop much more secure, benefiting consumers and enterprises alike.

The previous Lib-Con coalition government came up with the idea of coding academies but this competition is being set up with fresh funding, according to Nowill.

Dr Adrian Davis, managing director for EMEA of (ISC)2, the security training and certification body, also welcomed the Chancellor’s speech as a strong sign that cyber security was being treated as a UK economic as well as national security priority. Davis praised the emphasis on skills.

“The UK government has been a leader and it’s good to see that the UK government continues to recognise that expert capabilities are needed to match the developing threat through the National Cyber Security Strategy and that they are prioritising embedding knowledge at every level of education,” Davis said. “There is a lot of work to do here and we remain committed to being a strong partner in this area of development. I would like to emphasise that this is a need that goes far beyond our own profession and that we need to work to embed cybersecurity across many disciplines, not just develop the experts.”

(ISC)2’s Global Information Security Workforce Study estimated there would be a 1.5 million shortfall in information security workers worldwide by 2020.

Davis raised concerns that the government’s agenda was too reliant on advice from GCHQ. Broader input from business and professional communities is needed, he argued.

“This is a plan that is all about catalysing action from stakeholders, partners and the broader business community,” Davis said. “While more details are to come, it appears they [the government] continue to defer direct development toward specialist expert capability and technical innovation driven by the very focussed perspective that comes out of GCHQ.

“I am wary of having the management of the entire plan – from law enforcement to business support – centralised within a centre of excellence that reports to GCHQ. GCHQ is a valued and an incredible resource and there is no doubt that the new initiatives will have their value but to really catalyse action and investment, these plans must ensure broader input from the private, business and professional communities.” ®

Sponsored:
Data Loss Prevention Data Theft Prevention

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/19/uk_gov_institute_coding_comp/

‘Hacked by China? Hack them back!’ rages US Congress report

A report laid before the US Congress yesterday encouraged lawmakers to allow American companies whose data has been pilfered by Chinese miscreants to hack back against the intruders in order to recover their info.

The US-China Economic and Security Review Commission was established by Congress “to report on the national security implications of the bilateral trade and economic relationship between the United States and the People’s Republic of China.” It delivered its annual report yesterday.

This year, the commission offered “a detailed look into China’s space and counterspace” programmes, as well as offering an overview on commercial cybersnooping coming from the People’s Republic. This is despite a cyber peace-deal announced between the two nations earlier this year.

Advocating a tit-for-tat approach to China-related foreign policy, the commission encouraged Congress to consider making laws “conditioning the provision of market access to Chinese investors in the United States on a reciprocal, sector-by-sector basis to provide a level playing field for US investors in China.”

These difficulties were particularly marked in the information and communications technology sector, suggested the commission, which noted that Beijing is currently considering “a requirement that US technology companies and their customers turn over source code, encryption software, and create backdoor entry points into otherwise secure networks.”

The report additionally alleged that the People’s Republic discriminates against foreign investors, and has “abusive legal or administrative processes” that particularly favour “indigenous companies over US firms” while “refusing to protect the intellectual property of US companies from piracy and counterfeiting”. It encouraged Congress to have a nosey into whether such practices chime with the nation’s World Trade Organisation commitments.

The report stated: “For these reasons we believe it is important for Congress to assess whether US-based companies that have been hacked should be allowed to engage in counterintrusions for the purpose of recovering, erasing, or altering stolen data in offending computer networks.”

Congress was also encouraged to “study the feasibility of a foreign intelligence cyber court” which The Register understands would hear the complaints of cyber victims before cyber deciding whether the cyber-government would undertake any counter-cyberintrusions on a cybervictim’s cyberbehalf. Cyber. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/19/hacked_by_china_hack_them_back_encourages_congressional_report/

Dark Reading Radio: A Cybersecurity Generation Gap

Millennials, especially young women, are not pursuing careers in cybersecurity due to a lack of both awareness and interest.

You’ve heard it over and over: an embarrassment of riches in cybersecurity job openings sits unfilled due to a lack of skilled talent for those gigs. Meanwhile, the number of women in the cybersecurity field has remained static at an anemic 10% worldwide over the past two years. And don’t count on millennials to infuse fresh talent or diversity into the cybersecurity industry: a recent survey by Raytheon and the National Cyber Security Alliance (NCSA) found that 18- to 26-year-olds worldwide just aren’t pursuing careers in the field.

Young millennial women are less interested and informed about the field than millennial men, according to the report: 52% of millennial women say cybersecurity programs and activities aren’t available to them in school, while 39% of millennial men said the same. Only about half of millennial men are aware of what cybersecurity jobs entail, while just 33% of women are, the survey found.

Why aren’t young people drawn to this hot industry? The Raytheon-NCSA survey indicates they just aren’t getting the proper information in school. But another big hurdle is a lack of entry-level cybersecurity jobs, which limits young graduates’ opportunities in the industry.

Join me on the next episode of Dark Reading Radio, “Millennials & The Cybersecurity Skills Shortage,” this Wednesday, November 18 at 1pm ET/10am PT, as we explore this conundrum with the experts: Valecia Maclin, Raytheon’s program director for the Department of Homeland Security’s network security deployment division, and millennials Jennifer Imhoff-Dousharm, co-founder of the dc408 and Vegas 2.0 hacker groups, and Ryan Sepe, information security analyst at Radian Group Inc.

Register for the radio broadcast (it’s free) and live chat here

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: http://www.darkreading.com/operations/dark-reading-radio-a-cybersecurity-generation-gap/a/d-id/1323158?_mc=RSS_DR_EDT