Hiring Outside the Box in Cybersecurity

Candidates without years of experience can still be great hires, as long as they are ready, willing, and able.

We all know what the ideal security team candidate looks like. She has years of hands-on operational experience, is skilled in a variety of cybersecurity technologies (particularly the ones in the organization’s security stack), and comes highly recommended. But given the workforce shortage, such candidates are like diamonds: very rare and extremely expensive. Organizations can have unfilled positions open for months on end while they look for a candidate with the perfect resume.

The reality is that most organizations would be well-served to expand their searches beyond the typical rock star resumes and hire outside the box. There are plenty of talented individuals who could become strong contributors if they are given the opportunity in an organization that is willing to cultivate its own talent.

I feel particularly strongly about this subject because I started my first computer forensics job without any applicable experience in the field. I applied for the position because it sounded exciting and I knew I could quickly acquire the skills I needed by working hard on the job and on my own time. Ultimately it was a win-win situation: I had a job I thoroughly enjoyed, where I was constantly learning and developing a new skill set, and my employer had the talent it needed at a rate that was initially under market. (My salary doubled during my time at the company.)

As a result, one of the key tenets of my hiring strategy is to always be on the lookout for capable individuals who have the potential to excel in their roles regardless of their backgrounds. I have found that there are several must-have intangible qualities that are strong indicators that a candidate will be a quick study and successful team member. Here are three ways to identify them:

Ready
One of the best ways to determine whether a candidate is prepared to do the work necessary for the job is to give him or her a short exam as part of the interview process. I am not referring to a closed-book, multiple-choice test that relies on memorization of obscure cybersecurity facts. I am talking about an onsite, open-book, practical exam based on a real-world security analysis scenario where the candidate talks through his or her thought process each step of the way. The candidate may not be able to provide all the right answers or complete the analysis, but someone with solid potential will be able to demonstrate an intelligent methodology and a clear understanding of the fundamental concepts. If you give him a hint, he will be able to run with it and make additional progress. This is the type of person who will become effective on the team once he receives some relevant on-the-job training.

Willing
You can often glean how motivated a candidate is to be in cybersecurity directly from what the person’s resume lists for education, extra-curricular activities, certifications, and/or technology. This filter is especially important when evaluating candidates who are looking to transition into cybersecurity from other industries.

If the person is working in a field unrelated to cybersecurity and is completing a cybersecurity educational program or regularly attending cybersecurity meetups or activities at night or on weekends, she is probably quite motivated to move into cybersecurity. Likewise, if the candidate has earned a cybersecurity certification, she is demonstrating notable determination as well. While there is debate as to whether certifications are indicative of skill, it is clear that obtaining a certification of any type requires commitment to the field and the expenditure of a significant amount of time and energy.

Along the same lines, if a candidate who is new to security lists numerous security products in her technology section, it indicates she has researched which products are used for specific functions and put effort into familiarizing herself with the technologies, which is an additional sign of interest and motivation. You can confirm during the interview process whether the candidate’s knowledge of the technology is substantive.

Able
Our industry evolves rapidly. Network defenders are constantly improving their capabilities to keep pace with new attacks, new advisories, and new technologies. No matter what an individual’s skill set includes when starting a job, he will need to develop new competencies while on the job. When interviewing candidates, I try to understand their propensity for developing their capabilities by solving problems on their own. I often ask questions such as “what do you do when you don’t know something?” If the answer is “read through the standard operating procedures (SOPs),” I delve into what the candidate would do if there was no SOP because I want to determine whether the person would go beyond what was already known and readily available to him.

If the answer is “ask someone on the security team,” I inquire further to determine whether the candidate is more likely to be collaborative or burdensome to team members. The type of answer that is usually the best sign is more along the lines of “I would research the topic on my own.” If the person says that he would conduct Google searches, that is sufficient, but it is better to hear a candidate name several reputable resources specifically.

Most security leaders will find that hiring outside the box can be challenging. It requires a rigorous interview process, internal training, and patience. But in the end, it can be well worth the effort when the security team is full of ready, willing, and able team members who are prepared, motivated, and growing as professionals.

Roselle Safran has over a decade of experience in cybersecurity and is a frequent speaker on cybersecurity topics for conferences, corporate events, webinars, and podcasts. She is President of Rosint Labs, a cybersecurity consultancy that provides operational and strategic …

Article source: https://www.darkreading.com/careers-and-people/hiring-outside-the-box-in-cybersecurity/a/d-id/1330342?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Less Than One-Third of People Use Two-Factor Authentication

The number of 2FA users is still lower than expected, but most adopters started voluntarily, researchers found.

The adoption rate for two-factor authentication (2FA) is even lower than expected: 28%. But on the bright side, most of those who use it report that they adopted it voluntarily.

Researchers at Duo Labs hypothesized that the majority of the US doesn’t use 2FA. Their hypothesis was confirmed by a survey designed to measure both adoption of 2FA and users’ perceptions of it. More than half of the 443 participants had not heard of 2FA prior to taking the survey.

More than half (54%) of the 126 people who use 2FA were voluntary adopters, which took researchers by surprise. They had expected the share of involuntary adopters to be that high, on the assumption that most people began using 2FA because their employers required it. The results proved them wrong: only 20.8% of respondents learned about 2FA in the workplace.

“Initially, [20.8%] was surprising, but we realized that it seems reasonable,” says Kyle Lady, senior R&D software engineer at Duo. “Consumer financial services have offered SMS-based authentication for many years, and services such as Gmail and Facebook occasionally prompt users to enable security features.”

Less than half (45%) of people who employ 2FA use it across all the services that offer it. Those who only use 2FA for some apps and services explain their use with three popular answers: they are required to use 2FA, it’s easier to enable 2FA on certain websites and apps, or certain websites and apps hold data they want to protect.

While it’s easy to understand why people want to safeguard sensitive data, Duo data scientist Olabode Anise says the ease-of-use factor is critical.

“The finding that ease of setup was a major factor in 2FA use underscores the importance of making it as easy as possible for users to get into a desired state of security,” he explains.

Most people (85.8%) prefer email or SMS for 2FA. This wasn’t a surprise: many websites and applications offering 2FA have SMS as a default option. Researchers hope future shifts move away from SMS, as it has become easier to social-engineer wireless carriers into forwarding text messages to another SIM card or to intercept messages via fake cell towers.

At 9% adoption, 2FA via security key was the least common method, which was understandable as security keys are the youngest 2FA technology. Adoption of authenticator apps increased from 46% in 2010 to 52% in 2017, while hard tokens decreased from 38% to 19%.

Interestingly, 84% of people reported hard tokens were the most trustworthy form of 2FA. However, they did not agree that hard tokens were more secure than a username and password.

“We view the distinction between trustworthy and secure as a focus on reliability,” says Lady. “Users have great confidence that hard tokens will work, but they recognize that there are risks, such as losing them.”

Email-based authentication is perceived as secure because it arrives in users’ inboxes, but it’s less trustworthy because emails commonly disappear or take a long time to arrive.

2FA via push notification ranked highest for user perception. Users reported it was the least frustrating and required the least concentration, plus instructions were not needed to use it.

Researchers also dug into where people most frequently use different methods of 2FA. Email/SMS is the most popular method in work environments at 29%, followed by authenticator apps (21%), and hard tokens (15%). More specifically, it’s also the most common form of 2FA in the financial industry (45%) and in healthcare (31%).

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/endpoint/less-than-one-third-of-people-use-two-factor-authentication/d/d-id/1330347?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

South America the Target of ‘Sowbug’ Cyber Espionage Group

Diplomatic and government organizations in Brazil, Peru, Ecuador, and Argentina are being targeted by what Symantec says looks like a nation-state actor.

A shadowy threat group with nation-state capabilities has been conducting a sophisticated cyber espionage campaign against government targets in South America, a region where such attacks have been relatively rare.

The group, called Sowbug, has been active since at least early 2015 and appears primarily interested in gathering foreign policy information from diplomatic and government entities in the region, Symantec warned in a report published Tuesday.

The Sowbug group’s victims include organizations in Brazil, Peru, Argentina, and Ecuador. In addition to South America, the hacker group has also targeted organizations in Southeast Asia and broken into government organizations in Brunei and Malaysia.

Symantec says it first spotted signs of Sowbug’s activity in March 2017 when it discovered a brand-new backdoor dubbed Felismus being used against a target in Southeast Asia. The group appears to be well-resourced and capable of infiltrating multiple targets simultaneously and maintaining a presence on their networks for extended periods. In some cases, it has remained undetected on a victim’s network for up to six months.

What makes the Sowbug campaign significant is its focus on South America, says Dick O’Brien, security researcher at Symantec. Typically, most attacks of this nature have been directed at organizations in the United States and Europe. “The most significant thing for us was seeing a group like this targeting South America, which, to date, has been quite rare,” O’Brien says. 

“The big takeaway from our perspective is that cyberespionage is now a global issue and no region is unaffected,” he says. Organizations shouldn’t assume they won’t be targeted because of where they are located, and should build their defenses against such threats.

In terms of capabilities, the Sowbug group has developed its own sophisticated malware and seems to have enough personnel to take on multiple targets at the same time. The attackers tend to only operate outside of the normal working hours in their targeted countries to minimize their chances of getting caught. “Therefore, we’d view them as capable and well-resourced attackers, near the elite end of the spectrum,” O’Brien says.

The actors behind the Sowbug campaign appear to be looking for very specific information on victim networks. For instance, in a May 2015 attack on the foreign affairs ministry of a South American nation, the attacks seemed focused on extracting documents from a division of the ministry that was responsible for foreign relations with a nation in the Asia-Pacific region.

The threat actors first looked for and extracted all Word documents stored on a file server belonging to the division that had been modified on or after May 11, 2015. One hour later, they came back to the same server and extracted an additional four days’ worth of data. “Presumably they either didn’t find what they were looking for in the initial incursion, or else noticed something in the documents they stole earlier that prompted them to hunt for more information,” Symantec noted in its report.

They then attempted to extract Word documents and other content from remote file shares and shared drives belonging to the targeted division, once again using a very specific date range. In this particular instance, the threat actors from Sowbug managed to remain undetected on the victim network for a period of four months between May and September 2015.

One tactic the group has used to evade detection is to disguise its malware as well-known software packages such as Windows and Adobe Reader. The group has been able to hide in plain sight by naming its tools after well-known software products and placing them in directory trees where they can easily be mistaken for the real thing, according to Symantec.

O’Brien says Symantec so far has not been able to figure out how the group initially infiltrates a target system or drops the Felismus backdoor on them. In some cases, the researchers have seen the attackers employ a dropper dubbed Starloader to install Felismus. But in those cases, the company has not been able to figure out how Sowbug got the dropper on the system.

“We were unable to identify any technical or operational aspects of the attack that would indicate possible origin of this activity,” O’Brien says. “However, we can say the targets are likely of interest to a nation-state and the malware used in these attacks is at the level of sophistication we would expect to see with state-sponsored attackers.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/south-america-the-target-of-sowbug-cyber-espionage-group/d/d-id/1330349?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fake WhatsApp pulled from Google Play after 1m downloads

What’s in a name?

A space, as it turns out. Not just any space, mind you, but a special kind known as the non-breaking space. Under normal circumstances the humble non-breaking space character glues two words together so that they can’t be split up at the end of a line. Not so on Google Play.

On Google Play, the marketplace for Android apps, the non-breaking space takes on chameleon-like powers, allowing scammers, chancers and other such ne’er-do-wells to create fake apps and pretend they were made by the authors of the apps they’re mimicking.

At least that’s what happened about four days ago to an app that you and a few billion others might have heard of: WhatsApp.

On Friday 3 November an app called Update WhatsApp Messenger was spotted on Google Play. The app was decked out in all the greenery and speech-bubble-logoed finery you’d expect of a legitimate WhatsApp and, most crucially, it appeared under the developer name WhatsApp Inc. .

Got that? Let me illustrate the point with some quotation marks.

The developer wasn’t the “WhatsApp Inc.” you might be thinking of, the company behind WhatsApp, but “WhatsApp Inc. ” (with an extra trailing space), transient purveyors of knock-off fakery.

The difference is a little more obvious (at least it was to a bunch of diligent Redditors) if you look at those same developer IDs as they appeared in Google Play URLs. URL encoded links for products made by the real WhatsApp Inc. developer contained the name WhatsApp+Inc. whereas links for the sham app contained the name WhatsApp+Inc.%C2%A0.

Yes, that’s correct, the elite hacking technique that allows a guy in his basement to pull the wool over Google’s all-seeing eyes is a space character.
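
For the curious, the trick is easy to reproduce. Here is a quick Python sketch (illustrative only; the two strings below are the developer names as they appeared in the Google Play URLs):

```python
from urllib.parse import quote_plus

real = "WhatsApp Inc."           # the genuine developer name
fake = "WhatsApp Inc.\u00a0"     # the impostor: same name plus a trailing
                                 # non-breaking space (U+00A0)

print(real == fake)      # False -- to software, these are different strings
print(quote_plus(real))  # WhatsApp+Inc.
print(quote_plus(fake))  # WhatsApp+Inc.%C2%A0  (U+00A0 encoded as UTF-8)
```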

The Redditors who noticed the problem eventually chased the adware disguised as the world’s favourite messaging app off of Google Play, but not before a huge number of people had downloaded it, as Reddit user Sunny_Cakes noted:

It already has 1 million installs lul. For shame google, for shame

This isn’t the first time that Google has been forced to pull apps from Google Play – in August it removed 500 apps that had been downloaded a total of 100 million times between them.

Searching for popular apps on Google Play often shows the app you’re looking for surrounded by a host of imposters. With tricks as simple as copying a logo and adding a space to a developer’s name available to the fakers, it’s no wonder.

If you discover a fake app on Google Play, report it to Google. For more insight into the problems of Android malware, download the Sophos 2018 Malware Forecast.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4SIS7sIl_xA/

2018 Malware Forecast: the onward march of Android malware

In the 2018 Malware Forecast released this week, we see Android malware on the march, with ransomware an ever-increasing threat. According to SophosLabs analysis, the number of attacks on Sophos customers using Android devices increased almost every month in 2017.

The number of malicious Android apps has risen steadily in the last four years. In 2013, just over half a million were malicious. By 2015 the figure had risen to just under 2.5 million. For 2017, the number is up to nearly 3.5 million.

Rowland Yu, a SophosLabs security researcher focusing on mobile malware, said that in September alone, more than 30% of the Android malware processed by SophosLabs was ransomware.

Looking at the top Android malware families since the beginning of 2017, Rootnik was most active – 42% of all such malware stopped by SophosLabs. PornClk was second most active at 14%, while Axent, SLocker and Dloadr followed behind at 9%, 8% and 6%, respectively.

Many apps on Google Play were found to be laced with Rootnik, and that family was also seen exploiting the DirtyCow Linux vulnerability in late September.

Meanwhile, SophosLabs witnessed a drop in the number of PUAs (Potentially Unwanted Applications – applications that some regard as a problem and others are prepared to tolerate, such as adware).

The numbers had risen every year between 2013 and 2016, but 2017 saw a drop from 1.4 million down to below 1 million.

Of the PUAs seen by SophosLabs this year, Android Skymobi Pay accounted for 38% of all activity, followed by Android Dowgin (16%) and Android Riskware SmsReg (12%).

For more insight into the threat of Android malware, download the Sophos 2018 Malware Forecast. Businesses can protect themselves from the threat of Android malware and PUAs with Sophos Mobile, while home users can protect themselves for free with Sophos Mobile Security for Android.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zP9BN_dfcVY/

Google’s Halloween lock-out caused by false positive

Who is in charge of files created and stored on Google Docs and Drive?

Most people assume it’s the user or team sharing them but an incident affecting these services on Halloween has reminded everyone that there is always a superuser with absolute power sitting above this – Google itself.

On that particular day, a portion of Docs users started finding themselves blocked from opening or editing specific documents. Many reported seeing the following message:

This item has been flagged as inappropriate and can no longer be shared.

Except the files were wholly innocent of the charge, something that was quickly pointed out to Google using the preferred medium of modern complaint, Twitter. A few hours later, access to the files was restored.

All back to normal? Not exactly.

On Friday, Google offered an official explanation for what went wrong:

A short-lived bug that incorrectly flagged some files as violating our terms of service (TOS). [This] caused the Google Docs and Drive services to misinterpret the response from these protection systems and erroneously mark some files as TOS violations, thus causing access denials for users of those files.

What Google is saying is that its “unparalleled automatic, preventive security precautions … using both static and dynamic antivirus techniques” suffered what is known in the trade as a false positive.

This happens when a security system incorrectly flags something as suspect that isn’t, a phenomenon affecting all systems from time to time.

While not fun, they’re still less worrisome than a false negative, which happens when a genuinely malicious file slips through unnoticed.

Nonetheless, the incident makes it clear that every time a user creates a file on Drive (which is where Docs files are stored), there is a possibility that it might at some point be scanned by Google’s security software to decide whether it’s “inappropriate” or not.

Drive has been widely abused to host malicious (boobytrapped) files, command-and-control infrastructure, and even crude phishing attacks, so you can understand why Google might want to do such a thing.

The deeper issue is how this is done and whether it in any way compromises privacy over and above the implicit fact (as stated in the terms and conditions) that Google can be legally compelled to hand files over to law enforcement if presented with a court order.

On the basis of Google’s policies it seems unlikely to me that the system reads the contents of files or scans each individually as it is created and used. Rather, periodic scans are run on groups of files as a way of spotting patterns that indicate something suspicious is afoot.

We have no way of knowing how well this system spots malice, but we can say from the rarity of events like this, where large numbers of users are locked out, that disruptive false positives are rare.

It’s possible individual users can protect themselves against this kind of glitch by mirroring Drive files to a local machine and working on those offline. This definitely won’t work for G Suite (formerly Apps for Work) files shared across multiple users and hosted online, however.

The lesson from the Halloween lock-out remains that while content sitting on Drive or created through Docs might belong to the user, the service itself is always Google’s domain. If only more people read the T&Cs.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zkUn-lEZLKs/

It’s 2017 and you can still pwn Android gear with Wi-Fi packets – so get patching now

A security researcher has turned up new ways to silently hijack and infect Android devices via malicious Wi-Fi packets over the air.

Scotty Bauer, a Linux kernel developer, described in detail on Monday how he found a bunch of exploitable programming blunders in the qcacld Wi-Fi driver that supports Qualcomm Atheros chipsets. These chips and their associated driver are used in a number of Android phones, tablets, routers, and other gizmos, including some Pixel and Nexus 5 handhelds, for wireless networking.

In an effort similar to Gal Beniamini’s work scrutinizing Broadcom’s insecure wireless technology, Bauer went looking for low-level remote-code-execution vulnerabilities in Google-powered gadgets, found them, and reported them so they can be addressed.

The result of that effort is some juicy security fixes that were released on Monday by Google. These need to be installed on vulnerable Android devices to protect them from attacks leveraging the now-documented bugs.

Essentially, it is possible vulnerable Android gizmos can be secretly commandeered by nearby hackers via Wi-Fi due to flaws in the aforementioned wireless driver code, originally developed by Qualcomm Atheros. So check for updates from Google, via the Settings app, and install this month’s Android security updates if or when they are available for your devices.

Bauer explained that since Qualcomm uses a partial SoftMAC – that is, at least some of the MAC layer is implemented in software – “the source code for handling any sort of 802.11 management frames must be in the driver and is thus available to look at.” In other words, it is possible to study the code and figure out the right management frames to send to a nearby victim’s device to trigger the execution of malicious code, leading to crashes or the installation of spyware.

Bauer’s “first and best” bug in the mammoth driver – which is 691,000 lines of code – is in the dot11f.c file “tasked with parsing over-the-wire packets to a C-style structure.” This flaw, labeled CVE-2017-11013, is a classic buffer overrun, and a potential remote-code-execution hole. It was fixed on Monday by Google.

Next on Bauer’s list of discovered bugs is a pair of programming cockups that can cause the code to get stuck in an infinite loop, one of which hasn’t been publicly identified yet because there was an error in the patch for it, and a “new fix is in the works” to fully correct it. The other denial-of-service flaw that has been publicly disclosed, CVE-2017-9714, does have a patch available to correct it: that was released in October.

Another bug found by Bauer and fixed on Monday this week, CVE-2017-11014, is a cockup in how an access point’s neighbor identification broadcast packets are processed. It’s another buffer overrun: an attacker sending malicious APChannel data to a target can push 100 bytes into a buffer provisioned for eight bytes, triggering a crash or a potential execution of malicious code.

The last of Bauer’s disclosures this week, CVE-2017-11015, also potentially allows an attacker to gain remote code execution on a handset by exploiting a mistake in a vulnerable Android phone’s portable access point capability. A specially crafted challenge packet, sent by an attacker, can potentially push 253 bytes into a challenge text memory space that’s just 128 bytes long. Again, a patch for this was released on Monday by Google.
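
These memory-corruption bugs follow the same classic pattern: the driver trusts a length taken from the over-the-air frame instead of the capacity of the destination buffer. Here is a minimal sketch of the bug class and its fix, written in Python with ctypes (the function and its names are invented for illustration, not taken from the qcacld source):

```python
import ctypes

def copy_frame_field(dst_capacity: int, wire_payload: bytes) -> bytes:
    """Sketch of the overrun pattern. The flawed code effectively did:
        ctypes.memmove(dst, wire_payload, len(wire_payload))
    trusting the attacker-controlled length and writing, say, 253 bytes
    into a 128-byte buffer. The fix clamps the copy to the destination."""
    dst = ctypes.create_string_buffer(dst_capacity)
    n = min(len(wire_payload), dst_capacity)  # the missing bounds check
    ctypes.memmove(dst, wire_payload, n)
    return dst.raw[:n]

print(copy_frame_field(8, b"A" * 100))  # only the first 8 bytes are copied
```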

Any code injected into the driver and successfully executed, by bypassing any built-in protections, will run at the kernel level, giving it total control over the device.

Bauer promises another batch of bug discoveries in December on his website. He has also asked that the flaws he finds not be named or branded with a logo. ®

PS: There are many other security issues fixed in November’s Android patch batch, so even if you likely don’t have the aforementioned vulnerable wireless chipset, you should grab the update anyway as soon as it arrives for your device. We’ll cover them all this week.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/07/android_wifi_pwnage_emerges/

Boffins tear into IEEE’s tissue-thin anti-hacker chip blueprint crypto

Several large gaps have been found in the IEEE’s P1735 cryptography standard that can be exploited to unlock or tamper with encrypted system-on-chip blueprints.

The P1735 scheme was designed so that chip designers could, ideally, shield their intellectual property from prying eyes.

When you’re creating a system-on-chip processor, you typically won’t want to craft it completely from scratch. You’ll most likely license various complex pieces – such as video encoders and decoders, wireless communications electronics, and USB controllers – and slot them onto your final die design alongside your own logic and CPU cores.

These licensed components are quite valuable to their designers, though, so they’ll want to protect them from being reverse engineered and cloned for free use by pirates. To that end, the IEEE developed P1735, a standard for encrypting hardware designs to keep them confidential throughout the manufacturing process. This requires you to use P1735-compliant engineering software to import the ciphered blocks and integrate them with your own logic before taping out your chip.

That should keep everyone happy: the video decoder or USB controller developer gets to license their technology for money and keep the blueprints secret, and you get to market faster with your custom system-on-chip.

However, according to a team at the University of Florida in the US this month, the standard is broken and potentially dangerous. It is possible to decrypt blueprints protected by P1735, and alter them to inject hidden malware.

“We find a surprising number of cryptographic mistakes in the standard,” the research crew said.

“In the most egregious cases, these mistakes enable attack vectors that allow us to recover the entire underlying plaintext IP [intellectual property]. Some of these attack vectors are well-known, e.g. padding-oracle attacks. Others are new, and are made possible by the need to support the typical uses of the underlying IP.”

The main flaw lies in the standard’s use of AES-CBC mode, the bedrock of its encryption system. Because the standard makes no recommendation for any specific padding scheme, developers of P1735-compliant engineering tools often pick the wrong scheme. This makes it possible to decrypt the blueprints without the necessary key using a classic padding-oracle technique.
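
To see why the padding choice matters, here is a minimal sketch of the classic CBC padding-oracle step, assuming a hypothetical padding_oracle() callback that returns True whenever the decrypting tool accepts the padding (the names are ours, not the standard’s or the paper’s):

```python
def recover_last_byte(c1: bytes, c2: bytes, padding_oracle) -> int:
    """Recover the last plaintext byte of CBC ciphertext block c2.

    CBC decryption computes P2 = D(C2) XOR C1, so XORing a guess into
    the last byte of the preceding block C1 flips the last byte of P2.
    When the oracle accepts, the forged byte is (usually) valid 0x01
    padding, which reveals the original plaintext byte."""
    for guess in range(256):
        forged = c1[:-1] + bytes([c1[-1] ^ guess ^ 0x01])
        if padding_oracle(forged + c2):
            return guess  # the block's last plaintext byte
    raise ValueError("oracle never accepted; padding scheme may differ")
```

Repeating the trick against earlier bytes, block by block, recovers the whole plaintext without ever touching the key.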

There is a fix for this weakness, we’re told: toolchain programmers should switch to an AByte or OZ padding scheme, which is secure, or encrypt designs using AES-CTR mode. That will keep the data confidential, which should please technology licensors, but it doesn’t protect the blueprints’ integrity, which is not good news for you, the system-on-chip designer.

Malicious

It’s one thing to prevent miscreants from decrypting the licensed schematics or hardware design code; it’s another to prevent them from silently modifying bits and bytes to maliciously change the operation of the actual resulting chip. P1735 ought to prevent that, but doesn’t.

The encryption scheme is vulnerable to so-called syntax-oracle attacks. It is possible to modify a byte within the encrypted data so that when it is decrypted and processed by the engineering software tools, it triggers an error message that gives away the kind of data altered. For example, if you tweak a character in the ciphered design, and see something like “expecting identifier immediately following back-quote,” or “unknown macro” in the toolchain’s log output, you can start to get a sense of what the plaintext was.

Over enough iterations, an attacker, with access to an encrypted design and the necessary engineering toolchain, can eventually work out what’s what inside the ciphered blueprint, and alter it to add an invisible backdoor hidden in the physical circuits of the chip. The IEEE’s P1735 approach does not do enough to prevent this.
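
Conceptually the attack loop is simple; a hedged sketch follows, with run_tool() standing in for an invocation of a P1735-compliant toolchain (both names are invented for illustration):

```python
def probe_syntax_oracle(encrypted_ip: bytes, run_tool) -> dict:
    """Flip one ciphertext byte at a time and record which parser errors
    the toolchain emits; each distinct message narrows down what the
    plaintext at that position must have been."""
    telltales = ("expecting identifier", "unknown macro")
    leaks = {}
    for pos in range(len(encrypted_ip)):
        mutated = bytearray(encrypted_ip)
        mutated[pos] ^= 0x01                # minimal single-bit tweak
        stderr = run_tool(bytes(mutated))   # the toolchain's error output
        hits = [t for t in telltales if t in stderr]
        if hits:
            leaks[pos] = hits
    return leaks
```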

Suffice to say, if someone can tamper with a licensed block to embed secret security vulnerabilities, and then somehow smuggle it into your manufacturing process, you have way bigger problems on your hands. However, it would be nice if the institute’s standard could alert you straight away to any mystery alterations in the supplied blueprints.

It is also possible, it appears, to alter the metadata within an encrypted block to, for example, use the design without a valid technology license.

To demonstrate the flaws in the standard, the Florida team crafted a hardware trojan and slipped it into an encrypted system-on-chip block, where it remained virtually undetectable. We can imagine the NSA being interested in this kind of thing to compromise chip designs.

“While the confidentiality attacks can reveal the entire plaintext IP, the integrity attack enables an attacker to insert hardware trojans into the encrypted IP,” the boffins concluded. “This not only destroys any protection that the standard was supposed to provide, but also increases the risk premium of the IP.”

US CERT has alerted vendors using the P1735 scheme in an insecure manner, and has published a list of manufacturers and developers contacted. These include AMD, Cisco, IBM, Intel, Qualcomm, Samsung, Synopsys, Mentor Graphics, Cadence Design Systems, Marvell, NXP, and Xilinx, all potentially at risk but so far unconfirmed. ®

PS: P1735 uses one-time session keys and public-private key pairs to encrypt designs in transit and at rest. However, how the standard keeps a blueprint out of the hands of a miscreant once the design is decrypted in memory by the engineering toolchain for processing is unclear….

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/07/ieee_p1735_chip_design_insecurity/

Apache OpenOffice: We’re OK with not being super cool… PS: Watch out for that Mac bug

Interview Apache OpenOffice 4.1.4 finally shipped on October 19, five months later than intended, but the software is still a bit buggy.

The resource-starved open-source project had been looking to release the update around Apache Con in mid-May, but missed the target, not altogether surprising given persistent concerns about a lack of community enthusiasm and resources for the productivity suite.

Among those working on the project, there’s awareness things could be better. “I believe the 4.1.4 shows us, that we have to do a better job in QA,” observed AOO contributor Raphael Bircher in a developer mailing list post.

A followup comment by Patricia Shanahan touches on the scarcity of development talent available to the project. “I don’t like the idea of changes going out to millions of users having only been seriously examined by one programmer – even if I’m that programmer,” Shanahan wrote, adding that more active programmers are needed on the security team.

Version 4.1.4 did fix four security vulnerabilities, and that’s one less than the five that appear to be outstanding for the software, based on two reported in the November 2016 minutes of Apache Foundation Board of Directors’ meeting and three reported in the April 2017 minutes.

However, the math adds up once you remove one reported issue that turned out not to be a problem.

“Those numbers represent the total number of reports (valid and invalid) received for each project,” said Mark Thomas, a member of the Apache Software Foundation security team, in an email to The Register. “Not all reports are valid so it is expected that the number of issues announced is lower.”

The four fixes, published a week after the release announcement, were:

  • CVE-2017-3157: Arbitrary file disclosure in Calc and Writer
  • CVE-2017-9806: Out-of-Bounds Write in Writer’s WW8Fonts Constructor
  • CVE-2017-12607: Out-of-Bounds Write in Impress’ PPT Filter
  • CVE-2017-12608: Out-of-Bounds Write in Writer’s ImportOldFormatStyles

Asked whether AOO has enough people looking at its code to keep it secure, Thomas said there’s nothing about the project that causes him grave concern.

“Open source projects always want more resources,” said Thomas during a phone interview. “They never have enough. From a board point of view, the criteria we look at are whether there are three or more active PMC [Project Management Committee] members, because that’s the minimum number to vote a release out the door.”

Thomas said that while AOO is not the most active Apache Software Foundation project, neither is it the least active. And he observed that the project has been recruiting more contributors. He considers the 4.1.4 release to be a sign that AOO can still deliver.

Despite being the subject of a deathwatch – perhaps mainly by fans of rival LibreOffice – AOO appears to be rather popular, with the 4.1.4 update racking up at least 1.6 million downloads.

But that also means a significant number of people – 77,000-plus, according to SourceForge stats – have downloaded the macOS version, which contains a significant bug: if Apache OpenOffice is used to create a diagram in a Calc spreadsheet, the file becomes corrupted when saved.

The project developers have been discussing how to handle the issue for the past two weeks.

Concerns about the state of AOO appear to be what in August prompted Brett Porter, Apache Software Foundation chairman at the time, to ask whether a planned statement about the state of AOO might go so far as to “discourage downloads”.

That’s not generally a goal among software developers unless things are very bad indeed.

Naysayers

Yet, according to Jim Jagielski, a member of the Apache OpenOffice Project Management Committee, things are better than naysayers suggest.

“There is renewed interest and involvement in the project,” he said in an email to The Register. “To be honest, part of the issue has been that many involved with the project have had to spend a lot of time and resources ‘fighting’ the ongoing FUD related to AOO, which meant limited time in doing development. As you can see, we are pushing 4.1.4 and are working on test builds of 4.2.0 for Linux, Windows and macOS.”

Jagielski said those working on the project hope to maintain support for older platform versions that have been abandoned by other office suites. “Of course, this also means maintaining older build systems and platforms,” he said. “But we think it is worth it.”

Beyond releasing 4.1.4, Jagielski said the project team is documenting its build environment and streamlining its release cycle.

As for the macOS bug, it’s proving to be a challenge to fix.

“Unfortunately, the build-fix that addresses this regression caused another,” Jagielski explained. “Again, this is due to AOO trying to maintain backwards compatibility with very old versions of OS X (10.7!) and sometimes small variations in libraries can cause some weird interactions.”

While AOO and the ASF formulate a formal statement of direction for the project, Jagielski said more or less that all’s well.

“AOO is not, and isn’t designed to be, the ‘super coolest open source office suite with all the latest bells and whistles,'” Jagielski continued. “Our research shows that a ‘basic,’ functional office suite, which is streamlined with a ‘simple’ and uncluttered, uncomplicated UI, serves an incredible under-represented community.

“Other office suites are focusing on the ‘power user’ which is a valuable market, for sure, but the real power and range for an open-source office suite alternative is the vast majority which is the ‘rest of us.’ Sometimes we all forget how empowering open source is to the entire world.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/07/apache_openoffice/

Let’s get ready to grumble! UFC secretly choke slams browsers with Monero miners

Yet another website has been caught secretly running Coin Hive’s JavaScript that silently pressgangs visitors’ computers into mining the Monero digital currency.

On Monday, it was the turn of Ultimate Fighting Championship’s pay-per-view ufc.tv site, which streams mixed martial arts battles in which men and women in tight outfits beat the crap out of each other in a cage.

What’s super rude is that this is the website people pay good money to watch fights on, and yet it was quietly using viewers’ PCs to generate alt-coins, making whoever put the code there a fast buck on the side.

The CPU-hogging JavaScript was spotted by a netizen when their Avast anti-malware package flagged up the presence of the code on UFC’s Fight Pass. An examination of the webpage’s source revealed Coin Hive was trying to operate. It appears there is no warning or notification of the covert mining operation when fans log in.

“I noticed this because my antivirus kept pinging off every time I went on Fight Pass,” Redditor gambledub reported.

“It’s not harmful AFAIK, but doing this on a service we’re paying for is fucked up IMO. I researched Coin Hive, mentioned by my antivirus, and found the JavaScript on their website, and sure enough it’s running on Fight Pass.”
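
Replicating that check takes only a few lines. Here is a rough Python sketch of the kind of page-source inspection the Redditor describes (the marker strings are ones commonly reported in such embeds, not taken from UFC’s pages):

```python
import urllib.request

# Strings commonly reported in in-browser mining embeds (illustrative list)
MARKERS = ("coinhive.min.js", "CoinHive.Anonymous")

def find_miner_markers(url: str) -> list:
    """Fetch a page and report any known mining-script markers in its source."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return [m for m in MARKERS if m in html]

# Usage: print(find_miner_markers("https://example.com/"))
```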

Over the past few months there have been more than 200 cases of websites either covertly installing Coin Hive’s freely available stealthy software, as in the case of Pirate Bay, or having such poor website security that hackers were able to drop it in surreptitiously and reap the rewards – as we saw with CBS Showtime, Politifact, and on the website of soccer ace Cristiano Ronaldo.

UFC hasn’t responded to questions about whether or not it officially put the Coin Hive software on its website, but it’s unlikely – the biz is very cash rich, and shouldn’t need the relative pittance such a mining operation would bring in compared to its subscription fees. On the other hand, the site’s traffic would make it a top spot for hackers seeking profit.

Antivirus software and many ad blockers kill the Coin Hive software on sight as a matter of course, and in response the development team behind the software is no longer working on the code. Instead they have a version that asks for visitors’ permission before harvesting their computers’ CPU time.

It now appears UFC has removed the Coin Hive software from its website, making the hacking explanation the most likely, but we’re still seeking confirmation as to what went on. It appears the UFC team is a lot more inept at its defense than most of its fighters. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/07/ufc_coin_hive/