STE WILLIAMS

Hacking MiSafes’ smartwatches for kids is child’s play

MiSafes, the maker of surveillance devices meant to track kids, is back in the news. This time it’s due to the company’s smartwatches that researchers say are drop-dead simple to hack.

Pen Test Partners has found that attackers can easily eavesdrop on children’s conversations; track them; screw with the geofencing so that parents don’t receive notices when their children wander off; see kids’ names, genders, birthdays, heights and weights; see parents’ phone numbers; and see what phone number is assigned to the watch’s SIM card.

Pen Test Partners researchers Ken Munro and Alan Monie told the BBC that they got curious about the watches after a friend bought one for his son earlier this year.

The watches, in kid-friendly cartoon colors, use a GPS sensor to locate the wearer and a 2G mobile data connection to let parents see where their child is via a smartphone app. They allow one-press phone calls and include an SOS feature that records a 10-second clip of the child’s surroundings and sends it to parents via text. The watch also sends the child’s exact location, with automatic updates every 60 seconds until the emergency is canceled.

The watches also let parents create “safe zones” and, if everything is working as intended, be alerted if their child leaves the area. Parents can also eavesdrop on kids at any time and initiate two-way calls.
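Under the hood, a geofence like this usually boils down to a distance check against a circle around the safe zone. A minimal sketch (a generic haversine check, not MiSafes’ actual implementation):

```python
import math

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    # Haversine great-circle distance; True if the point lies within the fence.
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat), math.radians(center_lat)
    dphi = math.radians(center_lat - lat)
    dlmb = math.radians(center_lon - lon)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    d = 2 * R * math.asin(math.sqrt(a))
    return d <= radius_m

# A point roughly 111 m north of the centre is inside a 200 m fence:
print(inside_geofence(51.5010, -0.1200, 51.5000, -0.1200, 200))  # True
```

The parent app would simply alert whenever a location update flips this check from inside to outside, which is also why broken geofencing (as found here) silently defeats the whole feature.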

Problem: Pen Test Partners found that none of this data is encrypted by the watches. Nor are children’s accounts secured. The researchers bought a bunch of MiSafes watches so they wouldn’t be illegally attacking anybody, and then they used Insecure Direct Object Reference (IDOR) attacks to pull the watch’s flimsy security apart. IDOR vulnerabilities are common – in fact, they’re a staple of the OWASP Top 10 (you’ll find them merged into the broader category of Broken Access Control from 2017). They are also easy to discover, and let an attacker get at data without authorization.

The BBC quoted Pen Test Partners’ Ken Munro:

It’s probably the simplest hack we have ever seen.

This is what they could do because of the IDOR vulnerabilities:

  • Retrieve real-time GPS coordinates of the kids’ watches.
  • Call the child on their watch.
  • Create a covert one-way audio call, spying on the child.
  • Send audio messages to the child on the watch, bypassing the approved caller list.
  • Retrieve a photo of the child, plus their name, date of birth, gender, weight and height.
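The class of bug is easy to illustrate. In a deliberately simplified sketch (hypothetical records and endpoint, not MiSafes’ real API), the server trusts a client-supplied identifier and never verifies ownership – which is all an IDOR is:

```python
# Hypothetical watch records keyed by numeric ID.
RECORDS = {
    1001: {"name": "Child A", "lat": 51.50, "lon": -0.12},
    1002: {"name": "Child B", "lat": 48.85, "lon": 2.35},
}

def get_watch_location(requesting_user, watch_id):
    # Vulnerable: no check that requesting_user actually owns watch_id.
    return RECORDS.get(watch_id)

def get_watch_location_fixed(requesting_user, watch_id, ownership):
    # Fixed: verify the requester is authorised for this specific watch.
    if ownership.get(requesting_user) != watch_id:
        raise PermissionError("not your watch")
    return RECORDS.get(watch_id)

# An attacker who knows their own watch is 1001 simply increments the ID:
leak = get_watch_location("attacker", 1002)
print(leak["name"])  # "Child B" - someone else's data
```

That is why IDORs are both so common and so easy to find: the attack is often nothing more than changing a number in a request.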

This sure ain’t the first time

Nothing surprising here: it’s just yet another Internet-of-Things (IoT) security SNAFU. You’d think that products designed and sold to be used by kids, and by parents to safeguard those kids, would have sterling security profiles. You’d be wrong.

In October 2017, the Norwegian Consumer Council (NCC) put out a report after looking at four smartwatch models for kids and finding that they were giving parents a false sense of security. Some features, such as the SOS panic button and the geofencing alerts to keep track of kids’ whereabouts, didn’t work reliably.

Most worrying of all, the NCC found that through simple steps, strangers could take control of the smartwatches. Given the lack of security in the devices, eavesdroppers could listen in on a child, talk to them behind their parent’s back, use the watch’s camera to take pictures, track the child’s movements, or give the impression that the child is somewhere other than where they really are.

The NCC’s acting director of digital services, Gro Mette Moen, told the BBC that the MiSafes watches appeared to be “even more problematic” than the other products it had flagged. They never should have hit the market at all, and people should stay away from them:

This is another example of unsecure products that should never have reached the market. Our advice is to refrain from buying these smartwatches until the sellers can prove that their features and security standards are satisfactory.

Unfortunately, nobody has been able to get a response from the China-based company listed as the product’s supplier. That’s nothing new: the company didn’t respond to reports of its baby monitor security failings, either.

Privacy- and surveillance-sensitive Germany turned on a dime following the NCC’s report: within a few weeks, the country banned the sale, distribution and possession of kids’ smartwatches, calling them illegal spying devices.

Destroy them, said the country’s telecom regulator, and make sure to hang on to the receipt showing that you did.

Then, in February 2018, news came that MiSafes’ Mi-Cam baby monitors were one of the seemingly endless list of baby-cams that left children or babies exposed to the danger of being eyeballed by prying eyes or chatted up by strangers roaming the internet.

SEC Consult, an Austrian cybersecurity company, had found multiple critical vulnerabilities that allowed for the hijacking of arbitrary video baby monitors. Simply modifying a single HTTP request would let an attacker eavesdrop on nurseries and talk to whoever’s near the baby monitor.

The baby monitors also had outdated firmware riddled with numerous publicly known vulnerabilities; root access protected by only four digits worth of credentials (and default credentials, at that); and a password-forget function that sent a six-digit validation key good for 30 minutes: plenty of time for a brute-force attack.
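The arithmetic behind that brute-force worry is straightforward. A quick back-of-envelope check (assuming the attacker can guess at a steady rate with no lockout, as the reported design implies):

```python
# Can a six-digit validation key be exhausted inside its 30-minute window?
keyspace = 10 ** 6      # codes 000000..999999
window_s = 30 * 60      # validity window in seconds

# Guess rate needed to try every single code before the key expires:
rate = keyspace / window_s
print(rate)  # about 556 guesses per second
```

A few hundred requests per second is trivial for a scripted attacker against an unthrottled endpoint, which is why short numeric codes need rate limiting or lockouts to be meaningful.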

SEC Consult didn’t give away much detail about the vulnerabilities at the time, because it couldn’t figure out how to get through to the vendor to responsibly disclose them. When we first wrote up the news in February, the security firm had been trying to get in touch with MiSafes since December, without any luck.

The BBC says that Amazon used to sell the watches in the UK but hasn’t had any in stock for some time. I couldn’t find any on Amazon US, either. The BBC says it also found three listings for the watches on eBay earlier this week, but they’ve since been removed. eBay said it took down the watch listings due to its ban on selling equipment that could be used to spy on people’s activities without their knowledge.

If you come across any in the dusty corners of the internet, don’t strap them onto your kids. These things are scary, as is the distributor’s utter lack of response.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Tnr8ofX0uEw/

Judge asks if Alexa is witness to a double murder

Christine Sullivan was stabbed to death on 27 January 2017, in the kitchen of the New Hampshire home where she lived with her boyfriend. Her friend, Jenna Pellegrini, was also murdered that day, in an upstairs bedroom.

There might have been a witness who heard Sullivan’s murder as it happened, given that an Echo smart speaker equipped with Amazon’s Alexa voice assistant was sitting on the kitchen counter the whole time.

What did it hear?

A New Hampshire judge says that Amazon must let us know. Last week, the judge ordered Amazon to turn over any recordings the Echo device may have made between the day of the murder and two days later, when police found the women’s bodies beneath a tarp under the porch. The murder weapons – three large knives – were found wrapped in a flannel shirt buried one foot below the bodies.

From court documents seen by the Washington Post:

The court finds there is probable cause to believe the server(s) and/or records maintained for or by Amazon.com contain recordings made by the Echo smart speaker from the period of Jan. 27 to Jan. 29, 2017… and that such information contains evidence of crimes committed against Ms. Sullivan, including the attack and possible removal of the body from the kitchen.

A 36-year-old New Hampshire man, Timothy Verrill, has been charged with two counts of first-degree murder in the fatal stabbings and is expected to stand trial in May. Prosecutors allege that Verrill killed the two women when he grew suspicious that one of them was tipping off the police about a suspected drug operation. Verrill has pleaded not guilty.

This is at least the second time that a court has demanded Alexa recordings so that a digital assistant can testify in a murder case. The first case was that of James Andrew Bates, who pleaded not guilty to the November 2015 murder of Victor Parris Collins, whose body was found in a hot tub.

Amazon had resisted handing over audio recordings from Bates’s Echo, arguing that prosecutors failed to establish it was necessary for the company to hand over the data and that it had to weigh its customers’ privacy against such a request. The company argued that both its users’ requests to Alexa and the device’s responses are covered by First Amendment rights, and that law enforcement should thus meet a high burden of proof to require release of the data.

Why the First Amendment? To Amazon’s way of thinking, a user’s voice requests are protected because the right to free speech covers the “right to receive, the right to read, and freedom of inquiry” without government scrutiny. Amazon has argued that Alexa’s responses are also protected because ranked search results constitute a “constitutionally protected opinion”, a position that qualified as free speech in an earlier, separate case involving Google.

The tug-of-war was made moot when Bates agreed to hand over the recordings. Whatever Alexa recorded, it wasn’t conclusive evidence of a murder. Benton County Prosecutor Nathan Smith said that the evidence could have supported more than one reasonable explanation for the death, and the charges against Bates were dropped in November 2017.

As far as this more recent murder trial is concerned, Amazon told The Post that the company hasn’t changed its stance on the matter from what it argued in the Bates case: it’s still prioritizing consumer privacy and won’t necessarily simply hand over customer information “without a valid and binding legal demand properly served on us.” From its statement:

Amazon objects to overbroad or otherwise inappropriate demands as a matter of course.

Even if Amazon does eventually turn over recordings in this case, there’s no guarantee that Alexa heard anything that would serve to prove innocence or guilt. Typically, Echo has to be prompted by the wake words “Alexa,” “Computer” or “Echo” to begin recording.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_IuvT6wzsKg/

How to rob an ATM? Let me count the ways…

How many computer users still regularly use Windows XP?

It’s a trick question, of course, because the answer is that millions of people do every time they take money out of an ATM – a significant proportion of cash machines still run some variant of the geriatric OS.

It’s a finding that jumps out of a new probe of ATM security by Positive Technologies, which found that 15 out of the 26 common designs it tested were running embedded versions of XP.

The report doesn’t differentiate between Windows XP and the various Windows Embedded products based on it, but in technology terms they’re all ancient. XP gasped its last breath in April 2014, as did Windows XP Professional for Embedded Systems. The end of extended support has come and gone for most other embedded products based on XP too, and those that are still hanging on by their fingernails only have a few months left.

A further eight ATMs used Windows 7, while only three used Windows 10. While ATM security shouldn’t be reduced to which OS version is in use, the fact that over half were using an OS that even Microsoft thinks is on life support underscores the challenge of keeping them safe.

A quick check on Naked Security shows a string of stories of ATM compromises going back into the mists of time, including August’s multinational cashout warning by the FBI, and a wave of “jackpotting” attacks.

Then there is the recent trend for black box attacks in which a hole is drilled into the machine to hook up a mini-computer (Raspberry Pis being a popular option) to instruct the ATM to chuck out money.

A bit of a mess

Reading deeper into Positive’s report, it’s not hard to see why attacks keep happening: its researchers uncovered weaknesses at every level of the machines’ security design.

At the most basic layer of security – encrypting internal hard drives to prevent attackers copying over malware – only two of 26 had this protection.

In a quarter of ATMs, it was possible to bypass security by connecting an external drive, changing the boot order in the old-style BIOS (no UEFI or boot authentication present) and booting the ATM from that drive to run malware.

A further 11 could be started in Safe Mode, Directory Service Restore Mode or Kernel Debug – a simple way to bypass security checks. Ditto forcing an ATM out of kiosk mode, which was possible for 20 machines.

The team even discovered previously unknown flaws in the security software that was supposed to be protecting ATMs.

What about common attacks?

Spoofing attacks are one example: attackers insert themselves between the ATM and the processing centre and use false commands to coax the machine into spitting out cash. Just over a quarter of the ATMs tested had vulnerabilities that might allow this.

Meanwhile, skimming card data from the magnetic stripe – either directly during use, or subsequently as it is transferred from the ATM to a processor – proved possible for every single ATM tested.

As for black box attacks, 18 were susceptible to this compromise.

About the only defence an ATM maker could offer against these tests is that all of them require some time – usually minutes – as well as undisturbed access to the front of the ATM cabinet.

Said Positive’s cyber resilience head, Leigh-Anne Galloway:

To reduce the risk of attack and expedite threat response, the first step is to physically secure ATMs, as well as implement logging and monitoring of security events on the ATM and related infrastructure.

The report goes on to recommend some familiar precautions – that data exchanged with the card reader should be encrypted, and that manufacturers take steps to prevent arbitrary code execution and man-in-the-middle attacks between the ATM and processing centre.
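One standard way to implement that last recommendation is to authenticate every message between the ATM and the processing centre with a shared-secret MAC, so a man in the middle can neither forge a dispense command nor tamper with a legitimate one. A minimal sketch (the message format and key handling here are hypothetical, not any vendor’s protocol):

```python
import hmac
import hashlib

SECRET = b"shared-key-provisioned-at-install"  # hypothetical shared secret

def sign(command: bytes) -> bytes:
    # HMAC-SHA256 tag over the command, keyed with the shared secret.
    return hmac.new(SECRET, command, hashlib.sha256).digest()

def verify(command: bytes, tag: bytes) -> bool:
    # compare_digest is constant-time, resisting timing side channels.
    return hmac.compare_digest(sign(command), tag)

msg = b"DISPENSE:20"
tag = sign(msg)
print(verify(msg, tag))                # True: genuine host command accepted
print(verify(b"DISPENSE:2000", tag))   # False: spoofed command rejected
```

A real deployment would also need replay protection (nonces or counters), but even this bare scheme defeats the naive command-injection the report describes.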

In other words: ATMs are just computers at the end of the day (but with an older OS than yours).

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wMl0UinM5bs/

‘Unjustifiably excessive’: Not even London cops can follow law with their rubbish gang database

London cops have broken data protection rules by using a controversial database that ranks people’s likelihood of gang-related violence but fails to distinguish between victims and perps, and low and high-risk people.

The UK’s data protection watchdog today reported there had been “multiple and serious” breaches in the use of the Gangs Violence Matrix, and issued an enforcement notice on the Metropolitan Police.

In a 27-page report (PDF), the Information Commissioner’s Office reeled off a list of failures it said were likely to have caused at least some data subjects harm.

These included an absence of central governance, oversight and auditing; a lack of coherent guidance and policies on data retention, sharing and deletion; and a failure to implement basic data protection practices like encryption or information-sharing agreements.

The Gangs Violence Matrix was set up by the Metropolitan Police in 2012 with the aim of reducing gang-related crime in the capital. People are given a risk score with a traffic light “harm” rating (red, amber or green) based on police info on arrests, convictions and other intelligence, including social media use.

If they can’t prosecute someone on the database for a specific gang-related crime, it allows cops to target them more generally, for instance through increased stop and search, or housing or immigration enforcement, which means sharing the information stored on the database – including names, dates of birth, addresses, ethnicity and police or partner intelligence information – with various private and public bodies.

This can have a significant impact on someone’s life, and has drawn the ire of campaigners. The ICO’s investigation was prompted by one such complaint, from Amnesty International.

It concluded that the use of the Gangs Violence Matrix breached a variety of data protection principles: processing of personal data was excessive; it wasn’t fair or lawful; forces were retaining and processing personal data for longer than necessary; and they failed to take measures to prevent unlawful processing or accidental loss.

Moreover, the Met Police had apparently failed to carry out a data protection or privacy impact assessment, or to establish coherent and consistent guidance.

This resulted in the 32 London boroughs applying very different rules for each of their versions of the database. Some have “diametrically opposed” views on the accuracy and relevance of social media as intelligence.

‘Unjustifiably excessive processing’

There are two major areas of concern identified in the report: one is that 88 per cent of the people in the database are from black and ethnic minorities, but there was no evidence the Met was heeding requirements set out in the Equality Act, with the commissioner noting there are “obvious potential issues of discrimination”.

Another is the lack of differentiation between the groups of people listed on the database – in particular, between those who are included because they have been victims of two or more gang-related crimes (which sees them classed as gang-associated) and those who have a low risk ranking; 64 per cent of the people on the database are rated green.

Overall, this is “unjustifiably excessive and lacking in differentiation”, the ICO said, while enforcement against all gang nominals regardless of risk rating “is excessive processing in the face of the very purpose of having a system of graduated risk”.

This is compounded by the fact those who have a risk score of zero should, according to informal policy (no formal ones on retention exist), be removed from the database entirely.

But the ICO found that people with this score remained on both the database and “informal lists” that officers create on their personal drives, which lack policies or governance on data retention, access or accuracy.

Let that sink in: “Informal lists” that officers create on their personal drives.

“As a result, data subjects are never truly removed from the Gangs Matrix: their personal data continues to be processed as though they remain connected with gangs,” the report said. This extends to it being shared with third parties, and to the policies of enforcement meted out.

Unencrypted, unredacted, unsupervised

The ICO also slammed the data-sharing practices, which saw personal data handed over to both public and private organisations “in full, in unredacted form”.

Such “blanket and undifferentiated sharing” of both personal data and sensitive personal data (data related to criminal convictions or allegations is given this elevated status) “goes beyond what is reasonably necessary to achieve the MPS’s legitimate purposes in preventing and detecting crime and prosecuting offences”.

Moreover, this data has been shared without information sharing agreements or with incomplete ones – which are a “basic necessity”, the ICO pointed out – with these “manifest and manifold failures” not addressed by either borough or central management.

And of course there are fundamental issues with security: the ICO found that data was “routinely” transferred by officers “in a variety of unsecured ways” and that it wasn’t encrypted. The watchdog is investigating a separate “significant” data breach at Newham Council.

Although the Gangs Matrix is on protected drives, officers at local levels circumvented this by saving the information to local drives – there were no measures to prevent this from happening – and officers who moved to a different beat didn’t routinely have access rights to the main database revoked.

This failure to take technical and organisational measures against unauthorised or unlawful processing, and against accidental loss, is another breach of data protection principles.

Met Police: ‘We welcome the scrutiny’

The ICO said it had considered whether to require the Met to scrap the database entirely, but decided not to on the grounds that it was needed for law enforcement.

Instead, the enforcement notice sets out more than a dozen actions the Met has to take in order to ensure the Gangs Matrix is fit for purpose, and it has to report to the ICO on a monthly basis.

These include conducting a data protection impact assessment, carrying out a full review of data-sharing agreements, ensuring data subjects are clearly labelled, erasing any informal lists of people whose data shouldn’t be retained, creating an access log, and developing overarching guidance for boroughs.

The Met said that it accepted the enforcement notice, welcomed the ICO’s scrutiny, and was “working hard” to address its failings.

“We have already started work to ensure that we improve our data handling and information sharing with partners, who are also involved in community safety work,” said deputy assistant commissioner of Met operations Duncan Ball.

“As well as addressing the concerns within the ICO report, we are also taking forward additional work including the introduction of a public facing website to explain the legal framework for the Gangs Matrix and further information to improve public confidence and transparency.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/16/ico_gangs_matrix_decision/

BlackBerry absorbs Operation Cleaver beaver Cylance into threat detection unit

BlackBerry has made its biggest acquisition ever, spending over half of its cash pile to bolster its threat detection unit.

Cylance, a relative newcomer, is being bought for $1.4bn in cash.

John Chen, who took over as CEO in 2013, made security one of BlackBerry’s priorities as the company began to withdraw from mobile hardware into enterprise software.

The company already had a foot in enterprises and data centres, a decent reputation for secure communications, and a veteran embedded division, QNX, that put its software in nuclear power stations and cars, for example. There’s also a deep portfolio of crypto patents.

BlackBerry has made some shrewd acquisitions, including German outfit SecuSmart (secure calls and messaging, adopted by German Chancellor Angela Merkel in the belief that President Barack Obama’s administration had bugged her phone), crisis-squawker AtHoc, and secure document company WatchDox.

The company formally established a cybersecurity Services division in 2016 (PDF), which conducts threat assessment and penetration testing. Last year BlackBerry brought these disparate efforts together under the Spark brand. Cylance means there’s more to Spark than just marketecture and slides.

Co-founded by security expert and author Stuart McClure in 2012, Cylance specialises in machine learning threat detection. Although it’s largely a B2B proposition, it also offers a subscription malware service for consumers.

Cylance’s biggest claim to fame was identifying a global Iranian cyberwarfare operation codenamed “Operation Cleaver”, which targeted transportation and logistics facilities. Which is uncanny. BlackBerry identified IoT logistics early on as a potentially vulnerable and under-served market, and its first effort was the container tracker, Radar.

The purchase dwarfs the $400m acquisition of one-time arch-rival Good Technology. A call will be held later today. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/16/blackberry_acquires_cylance/

Black Hat Europe Speaker Q&A: SoarTech’s Fernando Maymi on ‘Synthetic Humans’

Ahead of his Black Hat Europe appearance, SoarTech’s Fernando Maymi explains how and why synthetic humans are critical to the future of cybersecurity.

Soar Technology lead scientist Fernando Maymi is one of many cybersecurity luminaries who will be in attendance at Black Hat Europe in London next month. While he’s there he’ll be co-presenting (alongside Soar’s Alex Nickels) a 50-minute Briefing on “How to Build Synthetic Persons in Cyberspace” which promises to be packed with intriguing ideas. Notably, Soar has developed Cyberspace Cognitive (CyCog) agents that can behave like attackers, defenders or users in a network. While many organizations have developed technologies and techniques for replicating enterprise-scale networks, realistically populating those networks with synthetic agents that behave like real people is a thorny challenge — one Maymi thinks Soar has solved.

We caught up with Maymi via email to get a better sense of what Black Hat Europe attendees can expect from this Briefing and to learn more about his own exciting experiences in cybersecurity.

Hey Fernando! Tell us a bit about yourself and your cybersecurity work.

Fernando Maymi: I work at a company in Michigan called Soar Technology, or SoarTech for short. We specialize in researching and developing artificial intelligence (AI) solutions to hard problems in training, unmanned platforms and cyberspace operations. I joined the company two years ago after retiring from the U.S. Army, where I taught cybersecurity at West Point, ran research projects at the Cyber Research Center and led the stand-up of the Army Cyber Institute, which is the Army’s think tank for cyberspace issues.

Through all of this, I’ve learned that if we only surround ourselves with like-minded people we assume huge risks, but if we connect with diverse folks and share information we stand a much better chance. I just got back from Tokyo, where I was running a multi-sector cyber exercise helping prepare for the 2020 Olympics. It was awesome to watch folks from the power and manufacturing and other sectors come together to solve a really challenging scenario. Helping each other out really works!

Without spoiling too much, what are you going to be speaking about at Black Hat Europe this year?

Fernando: My colleague Alex Nickels and I have been involved in three projects aimed at researching and developing different kinds of synthetic autonomous actors for cyberspace. The first one was an autonomous penetration tester for the U.S. Navy. Then we were asked to build a defender against whom human penetration testers could be trained. Finally, DARPA asked us to build high-fidelity models of human users in order to test for vulnerabilities in user behaviors.

We had a head start, because our expertise is in modeling the cognition of expert humans as opposed to building autonomy from the ground up. Along the way, we found a lot of common issues and some really hard challenges. We also realized that autonomous agents will soon become common in cyberspace and that we need to come together as a community to address the security implications of this change—both positive and negative.

Why is this important, and what do you hope Black Hat attendees will learn from it?

Fernando: We are, at best, barely holding the line when it comes to defending our information systems against human adversaries. Once autonomous agents become effective attackers, we will absolutely need some cyber robots on the defensive side as well just to keep up. Even if you don’t buy into the idea that synthetic hackers are coming (and they are), we could really use some breakthroughs in developing autonomous cyber defenders to improve our security posture.

Despite all the hype, artificial intelligence (AI) is still not there yet when it comes to providing this capability. In our talk, we will provide a gentle introduction to AI, describe the state of the art and then show how we have developed some innovative approaches to defending and testing our networks. We also point out where we’ve fallen flat on our faces, talk about why, and provide some thoughts on how we can work together as a community to address some of these shortfalls.

What have you learned about human behavior in the course of trying to emulate it in your family of CyCog agents?

Fernando: One of the coolest things we did was to gradually change the nature of email messages until we duped a synthetic user into clicking a link that they would not have clicked right off the bat. These agents learn and have biases much like us, so they can fall into the same traps as we do. Another lesson was how slow we humans are compared to computers: in order to maintain the appearance of being human, we need to slow our agents down by a few orders of magnitude. Most importantly, it is not all that difficult to simulate about 80% of typical human behavior in cyberspace. The other 20%, however, is really, really hard, and boils down to the fact that AI systems lack plain common sense.
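That “gradual drift” result can be illustrated with a toy bias model (purely hypothetical, not SoarTech’s actual CyCog internals): the synthetic user weighs how familiar a message feels against how suspicious it looks, so mail nudged ever closer to a trusted sender’s style eventually gets clicked.

```python
def click_probability(familiarity: float, suspicion: float) -> float:
    # familiarity and suspicion both in [0, 1]; a simple linear bias
    # model, clamped to a valid probability.
    p = 0.1 + 0.8 * familiarity - 0.5 * suspicion
    return max(0.0, min(1.0, p))

# A blatant phish (unfamiliar, suspicious) is essentially never clicked:
print(click_probability(familiarity=0.1, suspicion=0.9))  # 0.0
# Mail drifted to resemble a trusted sender usually is:
print(click_probability(familiarity=0.9, suspicion=0.2))
```

An attacker (or red-team agent) exploiting this model raises familiarity a little with each message while keeping suspicion low, which is exactly the drift experiment described above.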

What are you hoping to get out of Black Hat Europe this year?

Fernando: Our biggest hope is to stimulate some thinking, exchange ideas, and maybe meet some people with whom we could collaborate as we tackle the challenges ahead. I think many of us are at risk of buying into the hype about AI and may not realize its limitations and all the challenges that remain ahead of us. For example, behavioral models of the sort that can drive helpful synthetic cyberspace actors are in their infancy. We could really use a community approach to building this knowledge base so that synthetic cybersecurity agents can team with and enhance the performance of us humans. After all, we are in the business of building systems that model human expertise and, since that expertise has to come from somewhere, the more experts the better.

Black Hat Europe returns to the ExCeL in London December 3-6, 2018. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/black-hat-europe-speaker-qanda-soartechs-fernando-maymi-on-synthetic-humans/d/d-id/1333270?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

AI Poised to Drive New Wave of Exploits

Criminals are ready to use AI to dramatically speed the process of finding zero-day vulnerabilities in systems.

Artificial intelligence has significant potential for use in cybersecurity – on both sides of the security battle lines. And you don’t have to wait for scenarios out of “The Terminator” to see its impact.

According to Derek Manky, Fortinet’s chief of security insights and global threat alliances, AI’s use by attackers is a simple matter of economics. “Looking forward, cybercriminals will be looking at increasing their ROI. I think what we’ll start to see is the concept of AI fuzzing,” he says. “We’ve seen some interesting research on this.”

“Fuzzing” – among a series of predictions in Fortinet’s Q3 2018 “Global Threat Landscape Report” – is a technique that has its roots in software quality testing. The system (or component) being tested is given random input until it crashes, and then the crash is analyzed. From an attacker’s point of view, fuzzing can uncover vulnerabilities to exploit.
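In its simplest form, a fuzzer is just a loop that throws random bytes at a target and watches for crashes. A toy sketch (the “target” here is a stand-in for a real parser bug, not any particular product):

```python
import random

def target(data: bytes) -> None:
    # Toy system under test: "crashes" on one malformed header byte,
    # standing in for a real parsing bug.
    if len(data) > 3 and data[0] == 0xFF:
        raise ValueError("crash: unhandled header")

def fuzz(rounds=10_000, seed=1):
    # Feed the target random inputs; return the first crashing input found.
    rng = random.Random(seed)
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(8))
        try:
            target(data)
        except ValueError:
            return data  # crashing input = candidate vulnerability
    return None

crasher = fuzz()
print(crasher is not None)  # True: the loop quickly trips the bug
```

Real fuzzers (AFL, libFuzzer and the like) add coverage feedback and input mutation on top of this loop; the AI fuzzing Manky predicts essentially replaces the random generator with a model that learns which inputs get closer to a crash.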

“Discovering vulnerabilities is still quite manual, very human-driven,” Manky says, which makes finding them an expensive process. AI can dramatically speed up the fuzzing process, and it’s already been proved in other contexts. “Groups have applied this to gaming, using AI to hack the game and exploit a weakness,” Manky says. Moving that experience to finding security exploits is a natural next step.

Attackers can use AI to dramatically shorten the time from finding a problem to creating an exploit, as well. Groups will be “using AI to study code and systems to find vulnerabilities, and then using AI to find the best exploit of those vulnerabilities,” Manky says. “This is automatically creating zero-days.”

AI, in this context, is a tool for finding the vulnerabilities and exploits, not orchestrating attacks. As that tool is used by more criminal organizations, Manky sees detecting and exploiting zero-days becoming faster and easier, with the cost of those exploits becoming lower and lower on the black market. “From a cybercriminal perspective, the AI becomes a commodity with zero-day mining systems. Zero-days become less expensive and more accessible to hackers,” he explains.

Defending a more porous and exploited attack surface is a matter of getting all the basics right, Manky says. “From the CISO perspective, it becomes more important for your patch management to be good, and open collaboration becomes more important,” he says. In addition, designing zero-trust architectures that are thoroughly segmented is critical. “You don’t want a successful attack gaining access to the rest of the network,” Manky explains.

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, and top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/application-security/ai-poised-to-drive-new-wave-of-exploits/d/d-id/1333289?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

95% of Organizations Have Cultural Issues Around Cybersecurity

Very few organizations have yet baked cybersecurity into their corporate DNA, research finds.

These days, a sinister phenomenon called cybercrime-as-a-service is steadily growing, enabling malcontents with only basic technical skills to perpetrate massive IT disruption among companies of all sizes, everywhere. All they need to know is how to unleash firepower by hiring a cybercriminal or their services through one of the various marketplaces on the Dark Web — the shady underworld where demand meets supply.

Some may consider cybersecurity to be the sole purview of a company’s IT department, but that’s wrong. It’s essential for HR and IT to work hand-in-hand to train staff in online safety and write solid cybersecurity policies that collectively serve to entrench security in the corporate culture. 

Deeply Embedding Cybersecurity into the Organization’s DNA
According to the Information Systems Audit and Control Association’s (ISACA) Cybersecurity Culture Report, 95% of organizations admit that their current cybersecurity environments are far from the ones they’d like to have. In a poll of some 4,800 business and technology professionals, a mere 5% say their organizations’ cybersecurity culture is sufficient to safeguard the company against threats from both inside and outside. An overwhelming 87% of respondents think that establishing a stronger culture of cybersecurity would increase their organization’s profitability or viability.

The CMMI Institute, an ISACA enterprise commissioned to write the report, defines a cybersecurity culture as one that incorporates cybersecurity into every aspect of an organization’s operations. Rather than considering it as a cost item or afterthought, digitally savvy organizations deeply embed cybersecurity into their DNA and see it as a differentiating factor against the competition — simply because their services are more reliable, secure, and trustworthy than those of their rivals. While the need for a change might be obvious, it’s often much easier said than done. Getting to this happy place demands a major rethinking of the status quo and a different corporate mindset.

ISACA found that in organizations where employees are highly engaged in cybersecurity, 92% of respondents say their executive leaders have and share an excellent knowledge of potential cybersecurity problems. But 42% say their companies don’t have a cybersecurity culture management plan or policy. The study concludes that there’s a positive correlation between companywide employee involvement and organizations’ satisfaction with their cybersecurity culture. In fact, companies that feel they’re far from their ideal security culture spend 19% of their cybersecurity budget on tools and training; the ones that are more attentive to and supportive of cybersecurity expend far more (43%) on tools and training to improve staff knowledge and engagement.

Complex Policies Are Useless
Unfortunately, just because a company has a cybersecurity policy doesn’t necessarily mean that employees will adhere to it. As the research firm Clutch found, almost half (47%) of employees don’t pay much attention to their employers’ cybersecurity policies.

Most employees (64%) use a company-approved device for work, but only 40% are subject to rules governing the use of personal devices. Employees’ use of their own devices to transact company business exposes those companies to all varieties of online risk. Virtually all employees (86%) check email and more than two-thirds (67%) access shared documents using their devices, many of which may lack the protection needed to shut out hackers and other Internet intruders.

A big reason why internal cybersecurity practices can be ineffective is that it’s easy for staff to become overwhelmed by all the different rules and procedures they’re supposed to follow. It all becomes too much to swallow. Maarten Van Horenbeeck, writing in the Harvard Business Review, opines that “some of these rules often don’t work because they are simply too complex and drive people to take shortcuts that defeat their purpose,” suggesting that education, user-friendliness, and simplification are the factors that drive success.

Thus, simply having a policy isn’t enough. Companywide communication and careful training are needed and, in light of escalating security breaches, more necessary than ever. But the training needs to be easy to digest and follow up on.

Conclusion
Employees are typically on the front lines when cybersecurity incidents occur. However, many of them come into contact with their organization’s cybersecurity policies primarily through reminders and restrictions. Those who don’t know about them are caught off-guard and unprepared for attacks.

Employees follow cybersecurity best practices, even beyond the boundaries of their companies’ policies. But when companies don’t communicate their security policies in a way that connects with employees, or when their policies make everyday work processes more cumbersome or a hassle, employees are more likely to engage in risky behavior.

Companies need to recalibrate their cybersecurity approach from technology-based defenses to proactive steps that include processes and education. It takes laser focus, commitment, and an intelligent and forward-looking leadership suite to make cybersecurity a pillar of the corporate agenda. It also arms the IT department with the information it needs to customize security training and testing to individual employees. Such teamwork within the organization is the only way to change people’s habits and make a meaningful difference in safeguarding organizations against a rapidly evolving cyber-threat landscape.


Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/95--of-organizations-have-cultural-issues-around-cybersecurity/a/d-id/1333290?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Where to implant my employee microchip? I have the ideal location

Something for the Weekend, Sir? “Work out loud,” my prospective new employer tells me, adding that “we are a team, not a family”. Sister Sledge need not apply.

I try to keep my best poker face, but I can sense my left eyebrow rising by itself. When I first entered the work market in the 1980s, the prevailing language of corporate bullshit rolled its tongue around paradigm-shifting and envelope-breaking. Today, we talk about “high-bandwidth collaboration” and “it’s OK to fail”.

Come to think of it, my prospective employer just said something about “failing quickly and cheaply”. Earlier, they pontificated that “failure breeds success”. Clearly, failure is the key skill they’re looking for in an employee. I’m their man.

I come well-prepared for this onslaught of hipster interview gibberish: I grew some stubble, put on a lumberjack shirt, boned up on my IT certifications (just in case) and, most important of all, learnt the language of corporate culture decks. You too can master modern marketspeak for the digital era by reading Culture Decks Decoded by Brett Putter.

Unfortunately, the interviewer is now talking about “pseudo-harmony” and has just invited me to be “a no-ego doer”. My left eyebrow feels like it is travelling towards the back of my head.

It’s when he says “date the model, marry the mission” that I realise I couldn’t possibly keep up the pretence in such a workplace for more than five minutes. I can control it no longer. Visibly shaken by my sudden and uncontrollably explosive yell of laughter, my interviewer wishes me a good day. No worries, there are plenty of other organisations out there who’ll pay me handsomely to fail for them – quickly, cheaply and even frequently if that’s what’s required.

I am a recent convert to the Church of Failure. Previously, I regarded failure as undesirable and unnecessary if there was an option of not failing. My LinkedIn profile would list items under the “Experience” heading thus:

Provided consultancy to major newspaper group on how to maximise digital publishing productivity at minimal cost; was ignored; watched helplessly as six-figure sum poured needlessly into incompetent alternative system that inevitably failed; left company to work elsewhere; those who instigated embarrassing disaster received promotion.

Now I get the picture: bosses can forgive and even admire a brave failure, no matter how avoidable… but absolutely nobody likes a smart arse.

So I understand the practical need to fuck up, visibly and expensively if necessary, to further one’s career. My problem is that I don’t yet feel ready to embrace failure on a personal level.

For example, I’m not keen on the implantation of microchips into employees to avoid the need for security access cards and the like. I don’t have any particular social or moral objection, I just know for a fact that these microchips will inevitably go tits up. Sooner or later, probably sooner and without any doubt whatsoever, they will either stop working altogether or self-enable a hidden routine that switches the implanted microchip into Total Buggeration Mode.

Embracing failure is literal when the failed item is 4mm beneath the surface of your skin.

Do you use an RFID card to unlock security doors or release gates at your workplace? Do they work every time? Of course they bloody don’t. Half the time, you’re standing in front of the door flourishing your card impotently across the sensor from different directions again and again, watching the red light flash repeatedly with an accompanying ugly audio bleat, as you duly recite the workplace mantra: “Fucking open the fuck up you fucking fucked fucker”.

Waving my empty microchip-implanted hand over this sensor does not seem to offer any obvious advantage.

And unlike those stupid swipe cards that they give you in four-star hotels – the ones that fail after just two uses, assuming they work at all – you can’t just nip down to reception to get them to remagnetise the strip with your room number. Nor can you simply rock up to the security manager’s office and ask for a replacement ID card.

Instead, you’d have to join a lengthy queue for an appointment at the blood-splattered door of the workplace surgeon, who will gouge out the failed chip from between your thumb and forefinger with a pair of pliers, insert a new one using a bent coat hanger and sew your hand back up with dental floss. You’ll have to do this roughly every two weeks, in my experience of security access ID cards.

If I want proof of how glitchy such a system is, I just observe my cat going for a crap.

He is microchipped at the back of his neck, you see. The microchip triggers the electronic release mechanism on the backdoor cat flap, so that my cat is the only furry animal that can use it to enter and exit the house. But it doesn’t always work. Sometimes it takes him several headbutts before the bolt clicks back and lets him out.

This causes him to meow noisily, probably the feline equivalent of “fucking open the fuck up you fucking fucked fucker”, and jump onto the window ledge in the hope that I’ll let him out manually. One day he’ll give up and take a dump on my pillow instead.

I have a lot of sympathy for my cat because I have been in almost exactly the same situation myself, when the security card with which I’d been issued on a contracting job failed to let me through to the office toilets. I had been busy and perhaps let things, er, “mount up” before deciding that a visit to the Gents was too urgent to put off any longer. I ended up sprinting across the open-plan floor only to end up performing the squirm-dance with flailing arms at the exit as the RFID sensor determinedly refused to acknowledge my card.

Perhaps I should have acted like my cat: either pee out the nearest window or take a shit on the security manager’s chair.

Embedding such a failure-ridden technology semi-permanently into my body tissue is too insane to contemplate. Not only is the convenience value vastly overrated (since it only works some of the time), any perceived security benefits are purely imaginary. Sure, I’m less likely to leave my hand behind on a train seat or allow someone to steal my arm than I would lose an ID card, but I can’t just tuck my hands away when I’m not opening a door. Usually they are waggling in front of me quite a lot, doing other stuff.

As Martin Jartelius, CSO of Outpost24, put it: “The very location of a microchip in your hand may actually lead to increased exposure, as the hands form the basis of our physical interaction with our surroundings.”

In other words, there’s nothing secure about waving your unshielded security ID device around all day in public view – at work, at home, and while commuting between the two.

Not to worry. I’m sure someone will conjure an alternative, better hidden body location for my security implant. And if it’s to unlock the office toilet quickly with a simple gesture, I think I know just the place…


Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He was reminded of his time freelancing for PC Magazine when the editors refused to upgrade his ID photocard to unlock the door of the PC Mag hardware testing labs. After a few frustrating attempts to plead entry by hammering at the door and shouting, he found he could silently slip in and out unnoticed by pushing his unsophisticated old ID card through a gap between the Yale lock and the door frame. @alidabbs

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/16/where_to_implant_my_employee_microchip_i_have_the_ideal_location/

Super Micro chief bean counter: Bloomberg’s ‘unwarranted hardware hacking article’ has slowed our server sales

Super Micro Computer on Thursday reported net sales in the range of $952m to $962m for the first quarter of its fiscal 2019, which ended September 30, 2018. That’s higher than company guidance of $810m to $870m, and up roughly 40 per cent on the year-ago period.

The Silicon Valley server maker delivered GAAP EPS in the range of $0.35 to $0.39, and GAAP margins in the range of 13.2 to 13.4 per cent.

Though buoyed in after-hours share trading by the strong sales, despite news that the company will restate its past three fiscal years to correct accounting errors, the manufacturer’s stock has yet to recover from the precipitous fall it took in early October following Bloomberg’s claim that Super Micro’s server motherboards had been backdoored by malicious spy chips added at the behest of Chinese government snoops.

The organizations named in that report – Amazon, Apple, and Super Micro – have all asked Bloomberg to retract its story. However, the news organization continues to maintain its reporting was accurate: that a Chinese factory used by Super Micro was leaned on by Beijing to hide tiny chips on the motherboards, chips that allowed agents to spy on the machines. So far, however, no compromised hardware has surfaced, leaving the tech industry unable to figure out who’s telling the truth.


Amazon, unsatisfied with the lack of a retraction, reportedly pulled its fourth-quarter ad spending with Bloomberg, which publishes the magazine Businessweek and sells information terminals to traders. Apple’s terrible vengeance, it’s said, took the form of excluding Bloomberg from its October 30 media event in Brooklyn, New York. Welcome to the club, Bloomers.

Echoing a letter sent to customers last month, Super Micro CEO Charles Liang on a conference call today with financial investors briefly addressed the allegations, calling the claims “not only impossible but wrong.”

“We have never been contacted by any customer with regard to malicious chips,” he said. “We’ve never been contacted by US or foreign law enforcement alerting us to malicious chips on our hardware.”

Bloomberg has said it stands by its reporting.

Liang said the company will continue to work with customers to deal with concerns. We’ve heard anecdotally of IT staff at some buyers of Super Micro gear – particularly banks and similar corporations – tearing through their inventories and networks for signs of James Bond chips and unauthorized traffic. The effects of the Bloomberg report have also been felt by resellers.

Chief financial officer Kevin Bauer said the “unwarranted hardware hacking article” had slowed sales, though insisted customers have begun to return. It will take another quarter, he said, before the full extent of the article’s sales impact is clear. To reiterate, the above numbers are for the three months to the end of September; Bloomberg’s article landed a week later.

Asked to elaborate on how the biz interacted with its customers following the report, Bauer declined to go into details, allowing only that the manufacturer has corresponded directly with punters on the issue.

Liang stepped in to reiterate his previous denial. “We don’t believe there’s such a spy chip and most of our key customers don’t believe that too,” he said.

For its second fiscal quarter of 2019, Super Micro expects revenue somewhere between $830m and $890m. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/16/super_micro_q1_fy2019_chip_spy/