
ROCA ’round the lock: Gemalto says IDPrime .NET access cards bitten by TPM RSA key gremlin

Some Gemalto smartcards can potentially be cloned and used by highly skilled crooks due to a cryptography blunder dubbed ROCA.

Security researchers went public last week with research revealing that RSA keys produced for smartcards, security tokens, and other devices by crypto-chips made by Infineon Technologies are weak and crackable.

In other words, the private half of the RSA public-private key pairs in the gadgets, which are supposed to be secret, can be calculated from the public half, allowing the access cards and tokens to be cloned by smart attackers. That means keycards and tokens used to gain entry to buildings and internal servers can be potentially copied and used to break into sensitive areas and computers.

Infineon TPMs – AKA trusted platform modules – are used by various computers and gadgets to generate RSA key pairs for numerous applications. A bug in the chipset’s key-generation code makes it possible to compute private keys from public keys in TPM-generated RSA key pairs. The research was put together by a team from Masaryk University in Brno, Czech Republic; UK security firm Enigma Bridge; and Ca’ Foscari University of Venice, Italy.

Infineon TPMs manufactured from 2012 onwards, including the latest versions, are all vulnerable. Fixing the problem involves upgrading the module’s firmware, via updates from your device’s manufacturer or operating system maker.

Major vendors including HP, Lenovo and Fujitsu have released software updates and mitigation guides for their laptops and other computers. ROCA – short for Return of Coppersmith’s Attack AKA CVE-2017-15361 – hit the Estonian ID card system, too.

Although not included in the initial casualty list, it turns out some Gemalto smartcards are also affected by the so-called ROCA vulnerability. Gemalto confirmed to El Reg today that some of its tech – specifically the IDPrime .NET access cards – is affected, while downplaying the significance of the problem and saying remediation work was already in hand:

There has been a recent disclosure of a potential security vulnerability affecting the Infineon software cryptographic library also known as ROCA (CVE-2017-15361). The alleged issue is linked to the RSA on-board key generation function being part of a library optionally bundled with the chip by this silicon manufacturer. Infineon have stated that the chip hardware itself is not affected. As Gemalto sources certain products from Infineon, we have assessed our entire product portfolio to identify those which are based on the affected software. Our thorough product analysis has concluded that:

It is standard practice that Gemalto’s products use our in-house cryptographic libraries, developed by our internal R&D teams and experts in cryptography. In the vast majority of cases, the crypto libraries developed by the chip manufacturer are not included in our products. We can confirm that products containing Gemalto’s crypto libraries are immune to the attack. A very limited set of customized products (including IDPrime.NET) are affected. We have already contacted the customers using these products and are currently working with them on remedial solutions.

As of today, this theoretical vulnerability has only been demonstrated as a mathematical possibility but no real cases have been seen to date.

Gemalto takes this issue very seriously and has set up a dedicated team of security experts to work on it and we will continue to monitor any evolution to the situation.

Dan Cvrcek, of Enigma Bridge and one of the ROCA researchers, said: “Gemalto stopped selling these cards [IDPrime .NET smartcards] in September 2017, but there are large numbers of cards still in use in corporate environments. Their primary use is in enterprise PKI systems for secure email, VPN access, and so on.

“ROCA does not seem to affect Gemalto IDPrime MD cards. We have also no reason to suspect the ROCA vulnerability affects Protiva PIV smart cards, although we couldn’t test any of these.”

Cvrcek has blogged about the issue here.

A paper detailing the research – titled The Return of Coppersmith’s Attack: Practical Factorization of Widely Used RSA Moduli – is due to be published at the ACM’s Computer and Communications Security conference in Dallas, Texas, on November 2. There is no public exploit code for the TPM flaws that we know of. While we all wait for more technical details of the vulnerability to be released, this online checker can be used to test RSA keys for ROCA-caused weaknesses. ®
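The detection idea behind such checkers is public: ROCA-generated primes have the structure p = k·M + (65537^a mod M) for a primorial M, so an affected modulus leaves only residues that are powers of 65537 modulo each small prime dividing M. The sketch below illustrates that fingerprint in Python; the prime list is illustrative, not the researchers’ exact parameter set, so treat it as a toy, not a production checker.

```python
# Simplified sketch of the ROCA fingerprint: an affected modulus N satisfies
# N mod p ∈ <65537> (the subgroup generated by 65537 mod p) for every small
# prime p dividing the generator's primorial M. Prime list is illustrative.
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107, 109, 127]

def subgroup(g, p):
    """Return the multiplicative subgroup generated by g modulo p."""
    members, x = {1}, g % p
    while x not in members:
        members.add(x)
        x = x * g % p
    return members

SUBGROUPS = {p: subgroup(65537, p) for p in SMALL_PRIMES}

def looks_like_roca(n):
    """True if n's residues are consistent with ROCA-style key generation."""
    return all(n % p in SUBGROUPS[p] for p in SMALL_PRIMES)
```

A randomly generated modulus almost certainly fails at least one of these residue tests, while a ROCA-generated one passes them all; the published tool uses a larger prime set for a negligible false-positive rate.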

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/roca_crypto_flaw_gemalto/

Security Training & Awareness: 3 Big Myths

The once-overwhelming consensus that security awareness programs are invaluable is increasingly up for debate.

Organizations of all sizes continue to invest heavily in security awareness training, hoping to transform employees into a primary defense against email phishing and other cybersecurity threats. But such an endeavor, historically positioned as an inexpensive solution, is today proving costly. A recent report commissioned by Bromium found that large enterprises spend $290,033 per year on phishing awareness training.

Even more telling, according to security experts quoted in a recent article in The Wall Street Journal, security awareness initiatives often fall short of their intended purpose because the training is a “big turnoff for employees.” Unfortunately, such sentiment is frequently ignored by security awareness training vendors with three claims that can easily be dispelled as myths.

Myth #1: Employees must participate in numerous hours of security awareness training for it to be effective.

The Facts: While many reporters and analysts explore how to create security awareness training programs that employees “won’t hate,” few experts would argue for allocating more time than absolutely necessary. That’s because training adults on cybersecurity is a lot like training children in math or science — more time spent does not typically equate to better results.

Experiential learning techniques, such as gamified quizzes and interactive sessions in which attacks are simulated, can provide the mental stimulation required to hold the attention of all generations and lead to measurable improvement in employee cybersecurity aptitude. For example, the state of Missouri in 2015 implemented a cybersecurity training program that required employees to participate in short, 10-minute learning sessions each month, leading to “end users [who] have become one of the best ‘intrusion detection systems’ as a result and have alerted us to many sophisticated attacks,” according to Missouri Chief Information Security Officer Michael Roling in GCN.com.

Myth #2: Content leads to behavior change

The Facts: Changing behavior is one of the most difficult human undertakings, despite conventional wisdom to the contrary. In fact, psychologists have estimated that the average person requires 66 to almost 300 days to form a new habit. Can you imagine the backlash of mandating 66 or more days of cybersecurity training?

Instead of forcing employees to consume a plethora of content, organizations should remain focused on communicating their main security messages and repeating those messages over and over and over again. This concept of “less is more” is sometimes referred to in the corporate world as micro-learning, an educational philosophy that “allows companies to make their training relevant to the needs of their workers, easily accessible, and interesting enough to grab their attention and keep it.” While not all organizations subscribe to this way of thinking, micro-learning has been shown to increase knowledge retention, which is exactly what cybersecurity awareness training is supposed to be all about. 

Myth #3: Extensive training modules are necessary to reduce risk

The Facts: Modules, which can help employees learn how to classify and analyze data, do very little to prepare workers to identify and act on cyberattacks. Instead, the oversaturation of modules frequently confuses and frustrates employees, who can’t see how such education benefits them. Organizations serious about reducing risk must cut through the background noise, prioritizing direct employee feedback and experiential learning techniques in order to train a truly cyber-aware workforce.

As evidenced by the continued escalation of successful phishing attacks, it is a myth that security awareness training requires a significant time investment, an abundance of content, and extensive modules to successfully educate workers and, in turn, significantly reduce risk. What is true is that, done correctly, security awareness and training is a necessary part of the increasingly complex cybersecurity puzzle.

Related Content:

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Eyal Benishti has spent more than a decade in the information security industry, with a focus on software R&D for startups and enterprises. Before establishing IRONSCALES, he served as a security researcher and malware analyst at Radware, where he filed two patents in the … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/security-training-and-awareness-3-big-myths/a/d-id/1330165?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Play Bug Bounty Program Debuts

Google teams up with HackerOne to create the Google Play Security Reward Program.

Google has teamed up with HackerOne to launch the Google Play Security Reward Program.

Top Google Play application developers that have opted into the program will be listed on the Google Play Security Reward program page, which currently includes such apps as Dropbox, Tinder, Snapchat, and others. Google is also including some of its own apps in the program.

Independent security researchers are required to report the vulnerability to the app developer, who then works with the researcher to resolve the flaw. After the app maker pays the researcher his or her bounty and fixes the vulnerability, Google will provide the researcher an additional $1,000 bonus award.

Google already has public bug bounty programs in place: the Google Vulnerability Reward Program (VRP), Android Rewards, and Chrome Rewards. Under the VRP, independent security researchers are paid anywhere from $100 to $31,337 for finding vulnerabilities in Google-developed apps, extensions, some of its hardware devices such as OnHub and Nest, and Google-owned web properties.

Read more about the Google Play Security Reward Program here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/google-play-bug-bounty-program-debuts/d/d-id/1330193?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft tears into Chrome security as patching feud continues

The ding-dong between Microsoft and Google vulnerability researchers is not yet an inter-generational conflict but it’s showing signs of turning into one.

After being embarrassed by Google’s Project Zero over a string of software flaws, Microsoft has fired back by publicising a critical Remote Code Execution (RCE) flaw its Offensive Security Research (OSR) team spotted after crashing Chrome’s open-source JavaScript engine, V8.

Identified as CVE-2017-5121, the flaw in the just-in-time compiler was patched by Google in September (Chrome 61.0.3163.100). We now know Microsoft reported it because, as the company’s blog reveals, its team was paid a $7,500 (£5,700) bug bounty by Google.

Normally, that would be that, except that Microsoft’s dissection swiftly turns into a launchpad for a broader critique of weaknesses in Chrome’s design. For example:

Chrome’s relative lack of RCE mitigations means the path from memory corruption bug to exploit can be a short one.

And, significantly:

Several security checks being done within the sandbox result in RCE exploits being able to, among other things, bypass Same Origin Policy (SOP), giving RCE-capable attackers access to victims’ online services (such as email, documents, and banking sessions) and saved credentials.

Bluntly, Microsoft seems to be saying, Chrome’s much-vaunted sandboxing (a feature that limits one web page or browser tab’s access to another) doesn’t always stop criminals from pwning the user.

The vulnerability was fixed weeks ago, so why would Microsoft want to tear it apart in such detail?

Perhaps to make a point about throwing stones in glasshouses after a period in which the company has received a string of similar criticisms from Google’s Project Zero team.

Only days ago, Google’s Mateusz Jurczyk laid into Microsoft over its alleged prioritisation of Windows 10 patches over those for older versions of the OS.

In May his colleague Tavis Ormandy took to Twitter to talk up a “crazy bad” RCE vulnerability affecting Windows Defender which, as it happens, Microsoft fixed only days later.

Worst of all was February’s disclosure by Jurczyk of a vulnerability in Windows that he felt the company was taking too long to patch but which, he said, Google had a responsibility to tell the world about under its 90-day disclosure policy.

The difference of opinion over what constitutes responsible disclosure has turned into a particular bone of contention. As Microsoft makes a point of saying:

We responsibly disclosed the vulnerability that we discovered along with a reliable RCE exploit to Google on September 14, 2017.

Rubbing salt in the wound, Microsoft used its new MSRD Azure “fuzzing” platform to find it, perhaps subtly mocking Google’s enthusiasm for spotting flaws using the same technique.

It seems unlikely that a truce will be called in this head-to-head any time soon. Google will continue hammering Microsoft for taking too long to fix flaws while Microsoft will shoot back that Google isn’t immune to security woes of its own.

For Microsoft and Google users, this is all good. Not that long ago, it seemed that the software industry lacked urgency when it came to acknowledging and fixing vulnerabilities. If that complacency is melting away, it does no harm for big companies to help the thaw by taking each other to task.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/G878w0_cWwg/

Wowee. Look at this server. Definitely keep critical data in there. Yup

Israel-based Illusive Networks claims that its approach of planting poison-pill servers in a network can detect incoming attacks faster than any other method.

At the startup’s Tel-Aviv office, CEO and founder Ofer Israeli told a visiting press group that his technology is a post-breach mechanism. It automatically learns the topology of a customer’s network and plants details of phoney servers and shared resources.

In a typical attack, a hacker might penetrate the network and gain privileges needed to move from node to node. Then they will move across the network to identify the target’s location.

Illusive Networks plants extra network destinations and shares deep inside a server’s data stores. An attacker who lands on a compromised machine and looks for where to go next finds a mix of real and phoney destinations, all of which look genuine.

By having enough fake destinations, attackers will eventually land on one or more of them. As soon as they do, the software knows it’s a real penetration attempt and alerts network managers so that a response team can then deal with the attack.

Real users do not see the fake network addresses as they are planted deep in a server’s system data stores and will only be accessed by attackers looking for network topology data so as to progress their attack.
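Illusive hasn’t published code, but the core deception mechanic can be caricatured in a few lines: seed lookalike entries among the real ones, and treat any touch of a seeded entry as a high-confidence alert, since legitimate software holds no reference to them. A hypothetical sketch (class and naming scheme invented for illustration, not Illusive’s implementation):

```python
import random

class DeceptionStore:
    """Toy model of a deception layer: real hosts plus planted decoys.

    Any connection to a decoy is, by construction, attacker behaviour,
    because legitimate software never references the planted names.
    """

    def __init__(self, real_hosts, decoy_count=5):
        self.real = set(real_hosts)
        # Planted names mimic the real naming scheme so they look genuine.
        self.decoys = {f"fileserver-{random.randint(100, 999)}"
                       for _ in range(decoy_count)}
        self.alerts = []

    def browse(self):
        """What an intruder enumerating the network would see."""
        return sorted(self.real | self.decoys)

    def connect(self, host):
        if host in self.decoys:
            self.alerts.append(host)   # high-confidence intrusion signal
            return "decoy"
        return "real" if host in self.real else "unknown"
```

The appeal of the approach is the near-zero false-positive rate: the alert fires only on access to something no real user or process should ever name.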

It works on-premises or in a public cloud. Israeli said: “We are deployed across a bank which is completely a cloud bank.”

The software can work on Windows servers and workstations, Macs, and Linux servers but not Unix ones, although Unix support is coming. There is a recent mainframe protection product which involves planting deception sites around mainframes rather than working in the mainframes directly.

The software does not work on network switches but will do so in future. Cisco is a strategic investor.

Illusive can provide risk analysis services. “We can provide risk analysis of attacks to organisations so they can respond appropriately,” Israeli said. “We can show which attacks are closest to your critical data and prioritise them.”

Business is picking up. The firm had a run rate in the high single-digit millions in 2016 and has grown fast since then. Its business model is based on annual subscriptions.

There are around 65 employees in Israel and the US, and the firm has taken in just over $30m in funding since it was founded in 2014. Citibank and Microsoft are also strategic investors.

Business is particularly good in the banking and finance sector, with the Bangladesh SWIFT attack acting as a wake-up call.

“It’s an ongoing thing,” Israeli said. “Companies will never be safe. Attackers are always developing new methods.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/illusive_networks_decoy_server_software/

Phone crypto shut FBI out of 7,000 devices, complains chief g-man

The FBI has been locked out of almost 7,000 seized mobile phones thanks to encryption, director Christopher Wray has said.

Speaking at the International Association of Chiefs of Police conference in Philadelphia in the US, Wray lamented that device encryption kept the g-men out of “more than 6,900, that’s six thousand nine hundred, mobile devices … even though we had the legal authority to do so.”

“It impacts investigations across the board: narcotics, human trafficking, counterterrorism, counterintelligence, gangs, organized crime, child exploitation,” he added.

Device encryption, where a mobile phone encrypts information stored on it, is a useful security measure in cases where phones are stolen. However, if a device owner arrested by police refuses to decrypt it, cops would have to crack the device to read messages stored on it.

The problem does not arise in the UK, where it is a criminal offence to refuse to give your password to State investigators.

Wray later added in his speech: “I get it, there’s a balance that needs to be struck between encryption and the importance of giving us the tools we need to keep the public safe,” according to the BBC.

Device encryption is a separate thing from end-to-end encryption, which protects messages and phonecalls from being intercepted and read over the air as they travel between the device and the server. Amber Rudd, the Home Secretary, has repeatedly attacked technology companies for their use of end-to-end encryption, which makes it far harder for small State agencies to use their Investigatory Powers Act powers to spy on alleged miscreants.

In 2014 British police complained that another phone security measure, the ability to remotely wipe the device, was being used to cleanse phones seized in criminal investigations before police could read them. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/fbi_director_6900_phones_encryption/

Google slides text message 2FA a little closer to the door

Text messages aren’t a great way to implement two-factor authentication, but it’s a technique that’s stubbornly persistent. Now Google has decided to push things along by pushing its alternative into production.

The Chocolate Factory’s alternative is called “Google Prompt”. Instead of sending users a one-time code in a text message, it asks users if they are trying to sign in. If they are, in they go. If they’re not expecting the login prompt, down come the shutters.

Prompt first landed as a trial back in July, replacing text-message 2FA with an app-based approval. As the company explained here, text-based 2FA is susceptible to phishing, so a prompt improves security.

Infosec bods have long warned that 2FA-by-text was insecure. Last year, NIST said it should be deprecated, and the problems were made manifest in May when attackers started exploiting Signalling System 7 (SS7) vulnerabilities to steal 2FA-protected logins.

Last month, Positive Technologies named Gmail as one service still vulnerable to compromise via SS7.

Mountain View is following one of NIST’s preferred paths: an app for 2FA.

For now, text-based 2FA will remain as one of the fallback options, alongside Authenticator, backup codes, and Google’s Security Keys.
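Authenticator-style apps generally implement TOTP (RFC 6238), an open standard rather than anything Google-proprietary: an HMAC-SHA1 over a 30-second counter, truncated to a short decimal code. A minimal stdlib sketch:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter,
    dynamically truncated (RFC 4226) to a short decimal code."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The RFC’s own test vector checks out: the ASCII secret `12345678901234567890`, base32-encoded, at t=59 yields `94287082` with eight digits. Because the shared secret never leaves the device, there is no code in transit for an SS7 attacker to intercept, which is exactly the weakness of SMS delivery.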

As the blog post noted, “This will only impact users who have not yet set up 2SV. Current 2SV users’ settings will be unaffected. In addition, if a user attempts to set up 2SV but doesn’t have a compatible mobile device, he or she will be prompted to use SMS as their authentication method instead.”

One reason for retaining text 2FA is that the Prompts app needs a data connection to work.

The 2FA app supports both Android and iOS phones (Apple users need the Google app to use Prompts). ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/google_slides_text_message_2fa_a_little_closer_to_the_door/

Sarahah anonymous feedback app told: ‘You’re riddled with web app flaws’

The web-based version of anonymous feedback app Sarahah is riddled with security flaws, according to a researcher.

Sarahah is a well-established mobile app that allows people to receive anonymous feedback messages from friends and co-workers. Flaws in the technology make it vulnerable to web-based attacks including cross-site scripting and CSRF, according to security researcher Scott Helme.

Helme found that it was “trivially easy” to bypass Cross-Site Request Forgery (CSRF) protection in the app. CSRF is a class of attack that forces an end user to execute unwanted actions on a web application.
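Robust CSRF defence is well understood, which is part of why Helme found the bypass surprising: bind an unguessable token to the user’s session and reject any state-changing request that does not echo it back. A hedged sketch of that check (not Sarahah’s code; real applications should lean on their framework’s built-in protection):

```python
import hashlib, hmac, secrets

SECRET_KEY = secrets.token_bytes(32)   # per-deployment signing key

def csrf_token(session_id):
    """Derive a per-session token the server can recompute and verify."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id, submitted_token):
    """Constant-time comparison resists both forgery and timing leaks."""
    return hmac.compare_digest(csrf_token(session_id), submitted_token or "")
```

A forged cross-site request cannot supply the right token, because the attacker’s page cannot read it out of the victim’s session; a missing or tampered token is simply rejected.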

Ask.fm, another technology popular with teenagers, became a platform for insults and flaming, partly because the ability to send anonymous messages brought out the worst in people.

The Sarahah app does seem to have some rudimentary filtering in place to prevent abuse of other members, but it doesn’t include rate limiting. This omission meant Helme was able to anonymously send hundreds of messages to a test account.
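A sliding-window rate limiter of the kind Sarahah lacked takes only a few lines to prototype: track each sender’s recent timestamps and reject traffic beyond a threshold. A sketch with invented limits, purely for illustration:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` messages per sender within `window` seconds."""

    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.history = defaultdict(deque)   # sender -> recent timestamps

    def allow(self, sender, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[sender]
        while q and now - q[0] >= self.window:   # evict expired entries
            q.popleft()
        if len(q) >= self.limit:
            return False                         # over budget: reject
        q.append(now)
        return True
```

Even for anonymous senders, the same budget can be keyed on IP address or device fingerprint, which would have stopped the hundreds-of-messages flood Helme demonstrated.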

Helme told El Reg that Sarahah exhibited numerous flaws he was surprised to find in a mature web app.

“My biggest worry is that this is a brand new application and the issues were not difficult to find at all,” Helme explained. “They are basic issues I wouldn’t expect to find in a new app and as a result I’m concerned the app hasn’t undergone any security testing prior to release. If it has then I’d be raising some very serious questions with the firm that did the testing as to why such fundamental flaws were missed.”

In response to queries from El Reg, Sarahah acknowledged Helme’s research had uncovered flaws in its technology. “We have passed the items to our developer and doing our best to solve the issues,” it said.

Sarahah is the number one app on Apple’s App Store and is number one in more than 10 countries on Google Play too.

Helme first reported the issues weeks ago, in early August. He expressed frustration about the slow response.

“An app of this nature should be very security and privacy focused,” he explained. “I was disappointed at how difficult it was to contact the firm to responsibly disclose these issues that affect their users and how poor the response and handling was once I made contact.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/sarahah_insecurity/

US energy, nuke and aviation sectors under sustained attack

The United States’ Department of Homeland Security has issued an alert that warns of “advanced persistent threat (APT) actions targeting government entities and organizations in the energy, nuclear, water, aviation, and critical manufacturing sectors.”

The alert says an unknown actor has been at it since May 2017 and has compromised some networks.

Compiled with the help of the FBI, the alert also acknowledges Symantec’s September 2017 report on attacks labelled ‘Dragonfly’, and says “The threat actors appear to have deliberately chosen the organizations they targeted, rather than pursuing them as targets of opportunity. Staging targets held preexisting relationships with many of the intended targets.”

The attackers “are seeking to identify information pertaining to network and organizational design, as well as control system capabilities, within organizations.” The alert adds “the threat actors focused on identifying and browsing file servers within the intended victim’s network [and] viewed files pertaining to ICS or Supervisory Control and Data Acquisition (SCADA) systems. Based on DHS analysis of existing compromises, these files were originally named containing ICS vendor names and ICS reference documents pertaining to the organization (e.g., “SCADA WIRING DIAGRAM.pdf” or “SCADA PANEL LAYOUTS.xlsx”).”

The attacks were conducted with depressingly familiar tactics: the perps would first identify high-value targets in the organisations they sought to crack, then spear-phish them with emails bearing subject lines such as “AGREEMENT Confidential” containing benign attachments that “prompted the user to click on a link should a download not automatically begin.” In a colossal non-surprise, some of those links led to malware.

Other phishing campaigns led to fake login pages that harvested credentials.

Once the attackers had credentials, they loaded malware that started to sniff for and exfiltrate data, sometimes by creating new users on targeted domains.

The alert notes that the phishing payloads were legitimate attachments that did not themselves contain malware, but instead exploited either user gullibility or known-to-be-risky tool features, such as initiating document downloads over Server Message Block.

The attackers’ tactics worked just as well on standalone computers as they did on virtual desktops, a worrying outcome given government agencies’ frequent use of virtual PCs as a way to improve security.

The Department’s recommended actions therefore reference long-standing security advice: deploying email and web filters, checking for obvious signs of intrusion such as frequent deletion of log files, and checking whether new users have unexpectedly been created.
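That last check, spotting unexpectedly created accounts, amounts to a diff against a known-good baseline. A hypothetical sketch (function and field names invented; a real deployment would read account lists from domain controller logs):

```python
def audit_accounts(baseline, current, approved=()):
    """Flag accounts that appeared since the baseline snapshot was taken,
    separating out any that went through an approval process."""
    new = set(current) - set(baseline)
    return {"unexpected": sorted(new - set(approved)),
            "approved": sorted(new & set(approved))}
```

Run on a schedule, anything landing in the `unexpected` bucket is worth an analyst’s attention, since the attackers described in the alert created new domain users precisely to persist and exfiltrate.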

The alert doesn’t say what damage, if any, the attacks have wrought. Nor does it attempt to reveal the origins of the attacks, although the Department has previously suggested [PDF] that Dragonfly was a Kremlin-sponsored operation. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/22/us_department_of_homeland_security_warns_of_sustained_attacks_on_industry/

NetBSD, OpenBSD improve kernel security, randomly

The folks at NetBSD have released their first cut of code to implement kernel ASLR – Address Space Layout Randomisation – for 64-bit AMD processors.

The KASLR release randomises where the NetBSD kernel loads in memory, giving the kernel the same security protections that ASLR gives applications.

Randomising code’s memory location makes it harder to exploit bug classes like buffer overruns, since an attacker can’t easily predict (and access) the memory location exposed by the bug.

As developer Maxime Villard explains, the current implementation puts a specialised kernel, “prekern”, between the bootloader and the kernel.

“The kernel is compiled as a raw library with the GENERIC_KASLR configuration file, while the prekern is compiled as a static binary. When the machine boots, the bootloader jumps into the prekern. The prekern relocates the kernel at a random virtual address (VA), and jumps into it. Finally, the kernel performs some cleanup, and executes normally.”

Villard adds that the implementation is incomplete: for example, wherever the kernel is put by prekern, it lands in a contiguous block of memory.

That makes the direction of future development pretty obvious, with the main items being:

  • Randomise the kernel sections independently, and intertwine them;
  • Modify several kernel entry points not to leak kernel addresses to userland;
  • Randomise the kernel heap too (which is still static for now).
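Prekern’s core decision reduces to choosing a random, suitably aligned virtual base inside a candidate window before relocating the kernel image there. A toy illustration of that choice, with window and alignment values invented for the example:

```python
import random

ALIGN = 0x200000   # 2 MiB alignment, a common large-page size (illustrative)

def pick_kaslr_base(window_lo, window_hi, image_size, align=ALIGN):
    """Choose a random, aligned load address that fits the image inside
    [window_lo, window_hi). Assumes window_lo is itself aligned."""
    usable = window_hi - window_lo - image_size
    if usable < 0:
        raise ValueError("image does not fit in the randomisation window")
    slots = usable // align + 1          # number of candidate load slots
    return window_lo + random.randrange(slots) * align
```

The security argument is simply the slot count: the more candidate bases, the more guesses an attacker needs before a hard-coded kernel address in an exploit happens to be right.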

The OpenBSD project offered its first look at a similar approach back in June, referred to as KARL (kernel address randomised links).

That effort became mainstream early this month in OpenBSD 6.2. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/22/netbsd_openbsd_improving_kernel_security/