
Basic email blunder exposed possible victims of child sexual abuse

When it comes to mistakenly putting recipients’ email addresses in the “To” field instead of the “Bcc” field, happy endings aren’t common. But the mistake was particularly damaging when it was made by the UK’s Independent Inquiry into Child Sexual Abuse (IICSA), which sent out a bulk email that identified possible victims of child sexual abuse.

The Information Commissioner’s Office (ICO) said on Wednesday that it has fined the IICSA £200,000 (about $260,000) over the blunder.

The Inquiry covers England and Wales. It was set up in 2014 to investigate the extent to which institutions – specifically, according to the BBC, local authorities, religious organizations, the armed forces and public and private institutions – failed to protect children from sexual abuse.

The Inquiry’s failure to keep confidential and sensitive personal information secure is a breach of the Data Protection Act 1998, the ICO said.

According to the ICO, on 27 February 2017, an IICSA staff member sent a blind carbon copy (Bcc) email to 90 Inquiry participants telling them about a public hearing. After somebody spotted an error in the email, a correction was sent out. But in that correction, email addresses were mistakenly entered into the “to” field, instead of the “Bcc” field.

That mistake let recipients see each other’s email addresses, thereby identifying them as possible victims of child sexual abuse.

Fifty-two of the email addresses included participants’ full names, either in the address itself or in an attached email signature.

One of the recipients alerted the Inquiry to the breach. He or she entered two more email addresses into the “to” field, then clicked on “Reply All.”

It snowballed from there. First, the Inquiry sent out three emails, asking the recipients to delete the original email and not to circulate it any further. One of those emails generated 39 “Reply All” emails.

One recipient told the ICO he was “very distressed” by the security breach. In total, the Inquiry and the ICO received 22 complaints.

ICO Director of Investigations Steve Eckersley:

This incident placed vulnerable people at risk, which is concerning. IICSA should and could have done more to ensure this did not happen.

People’s email addresses can be searched via social networks and search engines, so the risk that they could be identified was significant.

The error could have been avoided with more staff training, a different email account, and a lot less trust in the IT company hired to manage the mailing list, the ICO said. Specifically, it found that:

  • The Inquiry failed to use an email account that could send a separate email to each participant.
  • The Inquiry failed to provide staff with any (or any adequate) guidance or training on the importance of double-checking that participants’ email addresses were entered into the “Bcc” field.
  • The Inquiry hired an IT company to manage the mailing list and relied on advice from the company that it would prevent individuals from replying to the entire list.
  • In July 2017, a recipient clicked on ‘Reply All’ in response to an email from the Inquiry, via the mailing list, and revealed their email address to the entire list.
  • The Inquiry breached its own privacy notice by sharing participants’ email addresses with the IT company without their consent.

What to do?

It’s not easy to muster good advice for people who make the To/Bcc mistake. The fact that it happens so regularly (if you haven’t done it, I bet you know somebody who has) suggests either that there’s a basic design flaw in email or that normal email clients are the wrong tool for the job.

If you’re sending sensitive emails you might want to look at hiding your email client’s “To” and “CC” fields so that you simply can’t enter email addresses in a way that allows them to be shared. Alternatively, you could use an email marketing platform that sends an individual copy of your email to every individual on a mailing list.
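As a rough illustration of that second approach, here is a minimal Python sketch that sends a separate message to each recipient, so no address ever appears in anyone else’s copy. The server name and addresses are placeholders, and a real deployment would add authentication, TLS and error handling:

    import smtplib
    from email.message import EmailMessage

    # Placeholder values for illustration; substitute your own server and addresses.
    SMTP_HOST = "smtp.example.org"
    SENDER = "updates@example.org"
    recipients = ["alice@example.com", "bob@example.com"]

    with smtplib.SMTP(SMTP_HOST) as server:
        for rcpt in recipients:
            msg = EmailMessage()
            msg["From"] = SENDER
            msg["To"] = rcpt  # each message names exactly one recipient
            msg["Subject"] = "Public hearing update"
            msg.set_content("Details of the hearing go here.")
            server.send_message(msg)  # no other address appears in any header

Because each recipient gets an individual message, there is no Bcc field to forget and no list of addresses to leak.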


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4OoT7ANTtNs/

Roblox says hacker injected code that led to avatar’s gang rape

“Roblox has made it almost impossible to rape people anymore,” a gamer complained in a YouTube video posted in September. He apologized for not posting a rape script video in over a year, blaming the company’s addition of more security to its games.

If any of you guys know how to make the rape script work on filtered enabled games, make sure to let me know.

Well, somebody clearly did figure it out, as a whole lot of people unfamiliar with gaming rape culture found out earlier this month, when a 7-year-old girl’s avatar was gang-raped on a playground by two male avatars in the hugely popular, typically family-friendly game.

Roblox is a multiplayer online gaming platform in which users can create their own personal avatar, embark on their own adventures and interact with each other in virtual worlds.

The girl’s mother, Amber Petersen, described in a 28 June Facebook post how she had seen her daughter’s character get attacked while she was playing Roblox on an iPad. Petersen shielded her daughter from seeing most of the attack, and she captured screenshots that she also posted.

At the time, Roblox traced the virtual violence to one “bad actor” and permanently banned them from the platform. As it was, at the time of the assault, Roblox already employed moderators who review images, video and audio before they’re uploaded to Roblox’s site, as well as automatic filters. After Petersen reported her daughter’s experience, the company put in yet more safeguards to keep it from happening again. It issued this statement:

Roblox’s mission is to inspire imagination and it is our responsibility to provide a safe and civil platform for play. As safety is our top priority – we have robust systems in place to protect our platform and users. This includes automated technology to track and monitor all communication between our players as well as a large team of moderators who work around the clock to review all the content uploaded into a game and investigate any inappropriate activity. We provide parental controls to empower parents to create the most appropriate experience for their child, and we provide individual users with protective tools, such as the ability to block another player.

The incident involved one bad actor that was able to subvert our protective systems and exploit one instance of a game running on a single server. We have zero tolerance for this behavior and we took immediate action to identify how this individual created the offending action and put safeguards in place to prevent it from happening again. In addition, the offender was identified and permanently banned from the platform. Our work on safety is never-ending and we are committed to ensuring that one individual does not get in the way of the millions of children who come to Roblox to play, create, and imagine.

Now, the company is blaming one or more hackers who attacked one of its servers and managed to inject code that enabled the assault.

TechCrunch reports that Roblox, which is experiencing vigorous growth (it recently said it expects to pay out double the sum it paid to content creators a year ago), was in the process of moving some older, user-generated games to a newer, more secure system when the attack took place. Multiple games could have been exploited in a similar way.

Following the incident, Roblox’s developers removed the other vulnerable games and asked their creators to move them to the newer, safer system. TechCrunch reports that most have done so, and those who haven’t won’t see their games back online until they do. None of the games now online are vulnerable to the exploit used by whatever hacker crawled out of Dante’s Seventh Circle of Hell to attack a 7-year-old’s avatar.

Petersen has lauded the company’s fast and thorough action. In her initial Facebook post, reeling with shock, disgust and guilt, Petersen had urged other parents to delete the app. But two weeks later, in a follow-up post on 11 July, Petersen said she’d edited that initial post: she now emphatically believes that the incident was not Roblox’s fault:

This was the fault of a HACKER, not the company. Shortly after I reported the abuse and wrote my Facebook post, Roblox quickly responded and determined that the offending avatars were hacked by an outside user. Immediately, the offender was permanently banned from the platform, the game was suspended, and Roblox engineers worked overtime through the weekend to tighten their platform to ensure this event would not happen again. Afterward, I revised my original post. Rather than calling for people to delete the app, I encouraged parents to double-check security settings on all their devices and make sure they are aware of what their children are playing.

Petersen is now urging parents to visit Roblox’s parents’ guide at https://corp.roblox.com/parents/.

Although she no longer thinks parents should delete Roblox, she still thinks that it’s vital for parents to closely supervise children’s activity, on any device, as “no form of technology is entirely safe from hackers,” she says.

And such hackers don’t limit themselves to sexual violence or aggression. On the Go Ask Mom Facebook page, one mother wrote, in response to the Roblox rape story, that she’s keeping her son off Roblox after learning about a game he was playing:

My son has not been allowed to play this since I walked into him playing and the mission was to kill yourself. Like he had to go around his character’s house and drink bleach or find a knife.

There’s just no way to protect kids from every single type of troubling content on games and social media. Rather than freaking out and stuffing the kids away in a Faraday cage, experts recommend that parents take certain precautions, foremost of which is keeping an eye on what their children are encountering online.

Larry Magid, CEO of Connect Safely, a nonprofit dedicated to educating technology users about safety, privacy and security, told WRAL that Petersen was doing pretty much everything right.

Namely, she …

  • …was sitting right next to her daughter, ready to step in to interrupt when things took a turn for the objectionable.
  • …had the privacy settings set so her daughter would only experience age-appropriate play. It’s not clear how those settings were reset: it might have happened when the app was deleted to save space and then reinstalled, for example. Regardless, it points to the importance of regularly rechecking privacy settings.

Magid and other experts offered additional steps that can help:

  • Select “curated content” only in the security settings: that will restrict the content to age-appropriate games. Check out Roblox’s site for more information on its curated content.
  • Let Roblox – or any game maker, for that matter – know immediately when unacceptable content appears.

Those are helpful tips. But for better or worse, gamers, and game hackers, are a creative bunch. That means that the list of threats keeps morphing, and the hackers are ever ready to pounce on any means possible to insert their idea of “fun” into a game. Just run a search on “Roblox rape” on YouTube to see what I mean.

Maybe it was just one bad actor responsible in this case. But even if it was, there are clearly plenty of people who think of that act as a win and who would happily do the same.

That rape script video upload I mentioned? It was a six-part series.

Keep an eye on the kids – it’s a world of nasty out there.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oFgzGeQDDRk/

Hackers hold 80,000 healthcare records to ransom

Data breaches tend to be mysterious affairs where organisations on the receiving end say as little as possible and the attackers remain safely in the shadows.

The breach of medical records at Canadian company CarePartners, which provides healthcare services on behalf of the Ontario Government, looks as if it is turning into an unwelcome exception to this rule.

CarePartners made the breach public in June, saying only that patient and employee health and financial data had been “inappropriately accessed by the perpetrators” without specifying the size or extent of the breach.

And so it would have remained had the attackers not decided to contact the Canadian Broadcasting Corporation (CBC) this week with more detail of their exploits. They also revealed the not insignificant nugget that they have demanded that CarePartners pay a ransom for them not to release the stolen data:

We requested compensation in exchange for telling them how to fix their security issues and for us to not leak data online.

To underscore the threat, the attackers sent CBC a sample data set which included thousands of medical records containing dates of birth, health numbers, phone numbers and details of past surgical procedures and medications.

Other files contained 140 patient credit card numbers complete with expiry dates and security codes, plus employee tax slips, social security numbers, bank account details and plaintext passwords.

The cache ran to thousands of records, said CBC, but the attackers claimed that hundreds of thousands of records were involved.

What’s concerning are discrepancies between CarePartners’ assessment of the breach and the new information the hackers have sent to CBC.

According to CBC, CarePartners said its forensic investigation had identified 627 patient files and 886 employee records that were part of the breach, with all affected individuals informed of the compromise.

And yet the sample sent by the hackers contained the names and contact information for more than 80,000 people.

When CBC’s journalists contacted a small sample of these individuals, none said they had been contacted by CarePartners.

According to the attackers, they gained access to the data after they discovered vulnerable software that hadn’t been updated in two years, adding:

This data breach affects hundreds of thousands of Canadians and was completely avoidable. None of the data we have was encrypted.

Beyond the fact that a serious breach has occurred, none of these details can be confirmed, of course.

Publicising a ransom demand made to a public body goes against the extortion playbook and is probably a sign of desperation on the attackers’ part.

The first rule of extortion is to keep it a secret on the basis that publicity can make it harder for organisations to pay up, and may even force them to report the matter to the police.

The fact that the hackers have broken this rule is not good news. If they’ve given up any hope of being paid, that makes it more likely that the data will be posted to a public server where it will join the ocean of other personal healthcare data that lives in the darker recesses of the internet.

As with every data breach, today’s headlines are only the beginning of a story that stretches many years into the future, its consequences hard to predict.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/39PJYjJSDg8/

UK’s Huawei handler dials back support for Chinese giant’s kit in critical infrastructure

A UK government-run oversight board has expressed misgivings about the security of telecoms kit from Chinese firm Huawei.

An annual report (PDF) from the Huawei Cyber Security Evaluation Centre (HCSEC) concluded that “shortcomings in Huawei’s engineering processes have exposed new risks in the UK telecommunication networks and long-term challenges in mitigation and management”.

Huawei kit is widely used on BT’s network backbone, so reduced confidence in the manufacturer’s equipment has profound implications unless steps are taken to restore it.

HCSEC warned: “Huawei’s processes continue to fall short of industry good practice and make it difficult to provide long term assurance.”

Concerns centre on two technical issues: the consistency of software builds of networking products from Huawei supplied to UK telecom network operators, and (more particularly) Huawei’s management of third-party components imported as part of a product build, both commercial and open source. “Security critical third party software used in a variety of products was not subject to sufficient control,” according to an evaluation by GCHQ that followed a technical visit to Shenzhen by NCSC, HCSEC and the UK telecom operators.

“Third party software, including security critical components, on various component boards will come out of existing long-term support in 2020, even though the Huawei end of life date for the products containing this component is often longer,” the report said.

“The lack of progress in remediating these is disappointing,” the report continued. “NCSC [National Cyber Security Centre] and Huawei are working with the network operators to develop a long-term solution, regarding the lack of lifecycle management around third party components, a new strategic risk to the UK telecommunications networks. Significant work will be required to remediate this issue and provide interim risk management.”

Doubts about Huawei’s engineering process have prompted the advisory board to water down its endorsement of the Chinese equipment maker’s technology.

Due to areas of concern exposed through the proper functioning of the mitigation strategy and associated oversight mechanisms, the oversight board can provide only limited assurance that all risks to UK national security from Huawei’s involvement in the UK’s critical networks have been sufficiently mitigated. We are advising the National Security Adviser on this basis.

Huawei has yet to respond directly to a request for comment from El Reg, but it told the BBC that “Cyber-security remains Huawei’s top priority, and we will continue to actively improve our engineering processes and risk management systems.”

Professor Alan Woodward, a computer scientist from the University of Surrey, told El Reg that Huawei needed to improve its procedures, particularly in assuring the security of its own supply chain.

“The authorities need to be totally convinced about the security of Huawei products before they are incorporated into our critical national infrastructure,” Woodward said. “The onus appears to be on Huawei to improve their processes to enable the UK to feel confident in giving the required assurances.

“The supply chain is becoming a classic attack vector so the UK needs to be sure not just about test examples of equipment, but that the processes then used to manufacture the equipment at scale are secure from interference.”

HCSEC was set up in November 2010 to “mitigate any perceived risks” associated with the use of Huawei’s equipment in the British telecoms network and elsewhere in the UK’s critical national infrastructure. The facility is run by techies from Huawei and the NCSC.

National security concerns meant that Chinese telecom giant ZTE was temporarily blocked from selling kit in the United States. US politicians have expressed similar concerns about using kit from Huawei, and the same worries have surfaced in Australia and elsewhere.

Woodward said: “It’s not difficult to see why the US and Australian governments have decided to walk away from using Huawei products in critical areas.”

Some security experts think that all major telecom equipment suppliers should face the same rigorous scrutiny.

SpyBlog commented: “There should also be an equivalent to the Huawei Cyber Security Evaluation Centre for other foreign government-influenced networking stuff on which the UK Critical National Infrastructure depends. e.g. Cisco.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/20/huawei_security_appraisal/

Cybercrooks slurp nearly $1m from Russian bank after pwning router at regional branch

Hackers stole almost $1m from a Russian bank earlier this month after breaching its network via an outdated router.

PIR Bank was looted by the notorious MoneyTaker hacking group, according to Group-IB, the Moscow-based security firm called in by the bank to handle incident response.

Funds were stolen on 3 July through the Russian Central Bank’s Automated Workstation Client (an interbank fund transfer system similar to SWIFT), transferred to 17 accounts at major Russian banks and cashed out. Cybercrooks tried to ensure persistence in the bank’s network through “reverse shell” programs in preparation for subsequent attacks, but these hacking tools were detected and expunged before further mischief could be wrought.

According to local reports, PIR Bank lost around $920,000 from its correspondent account at the Bank of Russia. Group-IB describes this as a “conservative estimate”.

After studying infected workstations and servers at the bank, Group-IB forensic specialists collected digital evidence implicating MoneyTaker in the theft. The digital footprints from the PIR Bank raid matched the tools and techniques of earlier attacks linked to MoneyTaker.

Group-IB confirmed that the attack on PIR Bank started in late May 2018 with the pwnage of a router used by one of the bank’s regional branches.

The router had tunnels that allowed the attackers to gain direct access to the bank’s local network. This approach has already been used by the group at least three times while attacking banks with regional branch networks, Group-IB said.

When the criminals hacked the bank’s main network, they managed to gain access to AWS CBR (Automated Work Station Client of the Russian Central Bank), generate payment orders and send money in several tranches to mule accounts prepared in advance. PowerShell scripts were used to automate some stages of the hacking process.

“On the evening of July 4, when bank employees found unauthorised transactions with large sums, they asked the regulator to block the AWS CBR digital signature keys, but failed to stop the financial transfers in time,” Group-IB reported. “Most of the stolen money was transferred to cards of the 17 largest banks on the same day and immediately cashed out by money mules involved in the final stage of money withdrawal from ATMs.”

Although the hackers attempted to erase logs and hide their tracks, enough digital evidence was left behind for Group-IB experts to point a finger towards the likely suspects. Recommendations for preventing similar attacks have been circulated to Group-IB’s clients and partners, including the Central Bank of Russia.


Cybercriminals are actively targeting Russian banks and the PIR Bank case is far from isolated, Group-IB said.

“This is not the first successful attack on a Russian bank with money withdrawal since early 2018,” said Valeriy Baulin, head of the digital forensics lab at Group-IB. “We know of at least three similar incidents, but we cannot disclose any details before our investigations are completed.”

The first attack by MoneyTaker was recorded in spring 2016, when they stole money from a US bank after gaining access to the card-processing system (FirstData’s STAR). The group then went quiet for several months before resurfacing in an ongoing series of attacks primarily targeting Russian, US and (occasionally) UK banking organisations.

According to Group-IB, up until December last year MoneyTaker had conducted 16 attacks in the US, five attacks on Russian banks and one attack on a banking software company in the UK. The average damage caused by one attack in the US amounted to $500,000. In Russia, the average amount of money withdrawn is $1.2m per incident. In addition to money, the cybercriminals habitually steal documents about interbank payment systems needed to prepare for subsequent attacks. ®

Bootnote

MoneyTaker isn’t the only group of cybercriminals targeting banks in Russia. Two others (Cobalt and Silence) have also been active this year, according to Group-IB.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/20/moneytaker_russian_bank_hack/

Why Security Startups Fly

What makes startups stand out in a market flooded with thousands of vendors? Funding experts and former founders share their thoughts.

Businesses want security against common and complex cyberthreats – and venture capitalists have their eyes on startups promising it. The latest funding rounds have filled security news: most recently, BitSight raised $60 million in Series D, Social SafeGuard pulled in $11 million in Series B, Preempt secured $17.5 million in Series B, and Agari raised $40 million in Series E.

What’s more, 2017 broke records for venture capital (VC) funding in cybersecurity, ending the year with 248 deals totaling $4.06 billion. Much of that money went to established firms including CrowdStrike and Exabeam, but plenty was also invested in relatively new entrants and startups.

The modern security market is “throbby and noisy and urgent,” says Scott Petry, co-founder and CEO of Authentic8 and founder of Postini, which was acquired by Google and folded into Gmail. “People are jumping into security because it’s a hot sector.”

It’s a relatively new problem for an industry unaccustomed to the spotlight. When he started Postini in 1999, Petry says, few people cared about security; most were focused on Web portals, applications, and data services. As a result, the company didn’t get much respect. Now, with cyberattacks escalating, the landscape has shifted. Security pros truly invested in defense are often balanced by people angling to get part of the ubiquitous VC funding.

“The challenge is, there’s an awful lot of technology being thrown at the security problem,” Petry says. But security’s problems often can’t be traced to a lack of tech: as more money is allocated toward security tools, the number of breaches keeps going up. Most breaches aren’t caused by gaps in technology but by oversights, he adds, such as Equifax leaving a Web server unpatched.

Right now, the security market is unhealthy, Petry explains. Vendors capitalize on customers’ fear and uncertainty, and customers hit with breaches will buy more tech to fix the problem instead of assessing its root cause. “It’s human nature,” he admits. “The same nature applies to venture capitalists and companies hoping to get funded.”  

So where are those dollars going, and what are they being used for? Why do some startups stand out from others? And what will happen to the market as hundreds of vendors enter each year?

Where Investors Are Investing
If the problem isn’t technology, where are the billions of investment dollars going?

“Overall, the demand for cyber services is growing quite robustly, but there are so many companies that have been funded in the space that most are struggling,” says Dave Cowan, partner at Bessemer Venture Partners. There are two major trends in today’s security market, he says. One is working, one is not.

The displacement of the antivirus (AV) market is successful, he notes. Companies are turning off older antivirus agents and replacing them with next-gen systems built with a combination of endpoint detection, remediation, and attack prevention. Cowan cites Carbon Black, CrowdStrike, Cylance, Endgame, and SentinelOne as examples of next-gen AV success stories.

George Kurtz, co-founder and CEO of CrowdStrike, agrees that the ripest area for security investment is in endpoint protection. The challenge most companies will face is portfolio scope, he says. Do they offer the full spectrum of endpoint security, or do they target a small part of the solution?

“Buyers have more choices than ever as new technologies and solutions continue to emerge,” Kurtz says. “Many companies are ready to replace their legacy AV with more effective and efficient solutions.”

What’s not working so well: artificial intelligence (AI) for cybersecurity.

“Most of the companies who have raised money from venture investors in the last few years have touted their algorithms as the basis for identifying attacks,” Cowan says. Back in 2014, when the industry saw a spike in security breaches, businesses realized the stakes were getting higher and wanted visibility to detect sophisticated malware and advanced persistent threats.

The most enticing pitch was the application of AI to identify anomalies that could indicate an attack. Many startups were founded to detect suspicious activity, sending thousands of alerts to SOC experts who could investigate only a dozen per day. But detecting anomalies has little value to a business unless it has enough people to dig through those alerts and determine which are legitimate, Cowan says. Most alerts entering the SIEM don’t even get seen.

However, Kurtz points out, startups focused on AI continue to appear on the market as founders aim to capitalize on the benefits of this technology. As they continue to explore use cases for AI, companies will continue to receive venture funding, Cowan adds.

Asheem Chandna, partner at Greylock Partners, anticipates the continued growth of technology including cloud-based solutions, solutions that combine on-premises with cloud, the application of machine learning and AI to security, and anything around identity. Identity analytics, identity governance, and new authentication techniques will be increasingly important in the future, he says.

What Makes Startups Stand Out
First things first: The technology has to be useful and business-appropriate.

“It’s important that a cyber company not only develop a strong defense, but develop one that works within enterprise organizations,” Cowan says, noting that it’s important for security leaders to also consider how useful a new tool might be. “Thinking about how the enterprise can actually use what you’re doing is an important factor to success.”

On a micro level, businesses building security tech should tackle smaller issues instead of trying to do everything. “What I’ve seen interesting, successful companies do is focus on solving a specific and narrow problem,” Petry explains. “Many companies are trying to take too big a bite of the apple.”

No single startup can solve all problems – the security landscape is incredibly diverse, he notes – but they can build expertise in one area. If it can solve a narrow problem quickly, acquire customers, and move on, a startup can build its business much more easily. “Solve a problem, do it well, and solve it for more people,” Petry sums up.

Successful startups employ people who know how to exploit a network, Cowan points out. It takes a hacker to stop a hacker, he says, and Silicon Valley doesn’t have many hackers. New companies aiming to deter and prevent major attacks, especially nation-state threats, need to build their products around the expertise of someone who has been in the attacker’s seat. It’s for their benefit and the benefit of their future customers.

Hiring the right financial expertise is also critical, Kurtz adds. Business is fundamentally a numbers game that relies on financial and hiring strategies. A CEO must hire employees who understand, and can perform against, the basic principle of good financial health.

Deciding Whether a Startup Is Worth the Money
A challenge for security leaders shopping in a market rife with vendors is deciding which technologies are worth their limited budgets. If you’re an IT manager and debating the pros and cons of testing a new tool, how can you tell whether the startup behind it is here to stay?

The first thing to consider is the quality of its technology team, Chandna says. It’s unlikely you’re going to get a world-class solution if the quality of the tech team isn’t “stellar,” he says, so look at the backgrounds of a startup’s founders. Where did they previously work? What did they last build?

Next, think about how the company markets its product. You want to work with one that explains its concept in a use-case-driven way that addresses your problem, and not as a technology looking for a problem to fix. In the security space, it’s important to build technology that fits with existing architecture as opposed to a tool that works in theory but is hard to use.

“Companies that are successful tend to be customer-centric and innovate in a customer-centric way,” Chandna says. “An important piece of that, for security companies, is being able to demonstrate a security solution … that works in combination with what the customer already has.” You don’t want a solution that will require you to overhaul your systems.

Finally, he says, consider the quality of the investor backing a startup. If a trusted VC has confidence the company will be around, it’s a good sign, Chandna explains.

Looking Ahead: If and When the Bubble Will Pop
The security market has thousands of vendors competing for customers and hundreds more entering each year. It seems the industry will maximize its capacity at some point. But will it?

Experts are undecided. Two things will keep the security bubble from popping, says Petry, and the first is ongoing security risk. Businesses will continue to lose data, meaning they will continue to spend more money on tools promising to prevent future incidents.

The second will be the limited capacity of major organizations to cover all of their bases. Established vendors spending hundreds of millions of dollars on security won’t have the resources to develop new systems in-house, so they’ll acquire smaller startups building them.

For startups, Kurtz advises committing to customer success, hiring top talent in a remote workforce, and creating a mission that employees are confident in. They should also get comfortable with failure, he explains, especially as tech continues to evolve. Those who succeed will be able to keep up with changes in technology, and businesses in the market for new tech should pay attention to them.

“The Silicon Valley mantra of ‘fail fast, fail often’ rings true for many tech entrepreneurs, but I believe it’s equally important to evolve even faster after failures,” he says. “While good companies are those that can excel quickly, the best companies are those that have a long-term vision and know where they are headed.”

Attackers’ changing strategies will also influence the shape of startups coming into the market, anticipates Gary Golomb, chief research officer at Awake Security. Companies that hard-code specific protections into their tech will have a harder time because they won’t be able to keep up with advanced attackers, as opposed to platforms that can accommodate new detections.

“The ability of attackers to shift tactics rapidly and intelligently based on a target’s security measures means that the startups that get funding and succeed will be those that have a platform approach where new detections can be added easily, whether by the startup or the customer,” Golomb says.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/endpoint/why-security-startups-fly---and-why-they-crash/d/d-id/1332339?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why Artificial Intelligence Is Not a Silver Bullet for Cybersecurity

Like any technology, AI and machine learning have limitations. Three are detection, power, and people.

A recent Cisco survey found that 39% of CISOs say their organizations are reliant on automation for cybersecurity, another 34% say they are reliant on machine learning, and 32% report they are highly reliant on artificial intelligence (AI). I’m impressed by the optimism these CISOs have about AI, but good luck with that. I think it’s unlikely that AI will be used for much beyond spotting malicious behavior.

To be fair, AI definitely has a few clear advantages for cybersecurity. With malware that self-modifies like the flu virus, it would be close to impossible to develop a response strategy without using AI. It’s also handy for financial institutions like banks or credit card providers who are always on the hunt for ways to improve their fraud detection and prevention; once properly trained, AI can heavily enhance their SIEM systems. But AI is not the cybersecurity silver bullet that everyone wants you to believe. In reality, like any technology, AI has its limitations.
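To make the fraud-detection point concrete, here is a toy sketch of the statistical anomaly spotting such systems rely on, written with scikit-learn’s IsolationForest. The feature set (login hour, megabytes transferred) and every number in it are invented for illustration; a production SIEM would train on far richer telemetry:

    # Toy anomaly-detection sketch; requires numpy and scikit-learn.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic "normal" login events: [hour of day, MB transferred]
    normal = np.column_stack([
        rng.normal(16, 2, size=500),   # most logins land in working hours
        rng.normal(5, 1.5, size=500),  # modest data transfer
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # -1 marks an outlier worth an analyst's attention; 1 means "looks normal"
    print(model.predict(np.array([[3.0, 250.0]])))  # 3am login moving 250MB -> [-1]
    print(model.predict(np.array([[15.0, 4.0]])))   # routine afternoon login -> [1]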

1. Fool Me Once: AI Can Be Used to Fool Other AIs
This is the big one for me. If you’re using AI to better detect threats, there’s an attacker out there who had the exact same thought. Where a company is using AI to detect attacks with greater accuracy, an attacker is using AI to develop malware that’s smarter and evolves to avoid detection. Basically, the malware escapes being detected by an AI … by using AI. Once attackers make it past the company’s AI, it’s easy for them to remain unnoticed while mapping the environment, behavior that a company’s AI would rule out as a statistical error. Even when the malware is detected, security already has been compromised and damage might already have been done.

2. Power Matters: With Low-Power Devices, AI Might Be Too Little, Too Late
Internet of Things (IoT) networks are typically low-power, with small amounts of data. If an attacker manages to deploy malware at this level, then chances are that AI won’t be able to help. AI needs a lot of memory, computing power, and, most importantly, big data to run successfully. There is no way this can be done on an IoT device; the data will have to be sent to the cloud for processing before the AI can respond. By then, it’s already too late. It’s like your car calling 911 for you and reporting your location at the time of the crash, but you’ve still crashed. It might report the crash a little faster than a bystander would have, but it didn’t do anything to actually prevent the collision. At best, AI might be helpful in detecting that something’s going wrong before you lose control over the device or, in the worst case, over your whole IoT infrastructure.

3. The Known Unknown: AI Can’t Analyze What It Does Not Know
While AI is likely to work quite well over a strictly controlled network, the reality is much more colorful and much less controlled. AI’s Four Horsemen of the Apocalypse are the proliferation of shadow IT, bring-your-own-device programs, software-as-a-service systems, and, as always, employees. Regardless of how much big data you have for your AI, you need to tame all four of these simultaneously — a difficult or near-impossible task. There will always be a situation where an employee catches up on Gmail-based company email from a personal laptop over an unsecured Wi-Fi network and boom! There goes your sensitive data without AI even getting the chance to know about it. In the end, your own application might be protected by AI that prevents you from misusing it, but how do you secure it for the end user who might be using a device that you weren’t even aware of? Or, how do you introduce AI to a cloud-based system that offers only smartphone apps and no corporate access control, not to mention real-time logs? There’s simply no way for a company to successfully employ machine learning in this type of situation.

AI does help, but it’s not a game changer. AI can be used to detect malware or an attacker in the system it controls, but it’s hard to prevent malware from being distributed through company systems, and there’s no way it can help unless you ensure it can control all your endpoint devices and systems. We’re still fighting the same battle we’ve always been fighting, but we — and the attackers — are using different weapons, and the defenses we have are efficient only when properly deployed and managed.

Rather than looking to AI as the Cyber Savior, we need to keep the focus on the same old boring problems we’ve always had: the lack of control, lack of monitoring, and lack of understanding of potential threats. Only by understanding who your users are and which devices they have for what purposes and then ensuring the systems used actually can be protected by AI can you start deploying and training it.


Tomáš Honzák serves as the head of security, privacy and compliance at GoodData, where he built an information security management system compliant with security and privacy management standards and regulations such as SOC 2, HIPAA and U.S.-EU Privacy …

Article source: https://www.darkreading.com/threat-intelligence/why-artificial-intelligence-is-not-a-silver-bullet-for-cybersecurity/a/d-id/1332336?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Either my name, my password or my soul is invalid – but which?

Something for the Weekend, Sir? Try as I might, it won’t go in.

I have entered pretty much everything else so far but this time I’m getting a definitive “no”. I respect that, of course, but it leaves me jolly frustrated. Despite all my powers of persuasion, I’m left standing in the cold with one hand on my lock.

Yes, lock. The site keeps rejecting my password, you see.

Hang on, maybe the password isn’t the problem, maybe it’s my username. It’s difficult to tell because whatever I’m doing wrong is triggering some uppity JavaScript message coloured a demonstratively angry red to tell me my “username and/or password is not recognised”.

Growing bored of retyping likely alternative usernames and/or passwords repeatedly in various combinations, I begin typing random characters and/or bollocks into both fields just to see if this produces a different kind of response. Maybe the error message will get angrier and/or redder?

By the way, I haven’t forgotten my login credentials: I am registering with a new service as a new user but for some reason it doesn’t like what I’m typing. Who knows, perhaps it doesn’t like the way I’m typing. I try typing lightly. I try typing forcefully. I try typing while hunched and laughing maniacally. I try typing with big campy flourishes. (I bet you wish I’d captured all this on my webcam.) No luck.

Ah now, I seem to remember something like this happening while working on-site at one of my old newspaper clients. It was one of those places where the CTO would be systematically replaced every year and each fresh-faced, middle-aged jock would insist on heaving his seniority-enhanced paunch into everyone’s faces for a few weeks upon arrival before getting everything wrong, messing everything up and eventually being systematically replaced 11 months later.

One of these just-passing-through guys insisted on a hurried rejig of the Active Directory sign-ins to force us all to change our passwords on a monthly basis. Annoying, yes, but I was prepared to go with the flow in the interests of corporate security. Joking aside, this stuff matters when the livelihoods of thousands of staff worldwide are at stake.

For example, despite the valid criticism thrown at British banks for their historic laxity when it comes to personal login credentials, I give credit to Barclays for its recent TV campaign explaining how easy it is for customers to sabotage their own security via social media.


Unfortunately, the AD changes at my client were rushed through by a harassed IT Support Desk still struggling with the public shame of being rechristened Customer Delight Providers by the latest short-term tenant of the glass office in the corner with the nice view over London. As well as expiring every calendar month, the passwords were now expected to have a minimum 12-character length and include at least two upper-case letters, two numbers, a special character, a Japanese hiragana, a Cyrillic consonant, a typographical thin space and any emoji representing a sexually suggestive root vegetable.

Oh, and the new password system had been set up to automatically reject – again without explaining why – recognisable strings resembling dates, surnames, local streets, Beatles song titles (I kid you not) and, worst of all, the names of all nearby pubs.

Not a problem, I hear you cry. Well, it is if no one got around to adding these rules into the New Password prompt. Again and again we’d type in new but not-quite-right passwords only to be told they were invalid – but not why. The poor sods on the ex-IT Support hotline spent the next 48 hours Providing non-stop Delight to their Customers until someone got around to updating the password prompt.

With this memory still stinging in my mind, I phone a friend for assistance. He tells me it’s my own fault because my kind of email address is “wrong”.

Er yeah OK bye. Idiot.

I should have known he’d come up with a daft suggestion like that. This is the bloke who would casually sabotage his own monthly New Password prompts by changing his password 11 times immediately and, for the twelfth, reset it to his old one again so he could carry on as before. He even kept his 11 non-passwords on a sticky note attached to his display bezel so that he could run through the same routine in the same order every month.

Why should I be surprised when research suggests that 45 per cent of infosec professionals, who really ought to know better, reuse the same passwords across multiple accounts? It’s not a lack of awareness, it’s a clear admission from within the security industry itself what a pain in the arse it is to sign in again and again dozens of times a day with different credentials.

And don’t get me started on two-factor authentication, as this invariably means little more than two-password authentication: if you can bypass one, you can bypass another. This is especially so if the second factor is merely a detoured PIN sent to your smartphone: all a thief has to do is nick your phone, then sit and wait for the second password to light up in front of him.

Nor am I sure about biometric ID such as those built into EU passports to speed up airport security checks. If I’m facially scarred in a road accident, for example, my biometric passport will no longer work. I’d have to apply for a new one – by submitting a birth certificate, a utility bill and other such conventional, easily faked paperwork.

Perhaps we need to go full-DNA, as nothing short of being bitten by a radioactive spider or being locked in an Intrinsic Field Subtractor is going to alter the arrangement of my chromosomes. Take a swab, darling! Need a specimen to unlock the door? No problem! From where? Ooh missus. Love is the key, I suppose…

Demonstrations of commercial DNA identity products such as Parabon’s Snapshot certainly look like they can work magic. Or too much like magic?

Nope, I’m no Doctor Manhattan. I certainly don’t fancy standing at passport control after a vacation on Mars sporting an ultra-violet tan, rippling thermodynamic muscles and my knob hanging out.

With a sigh, I turn back to my website sign-up. Hmm. A thought. Why not?

I type in a different email address for my login ID. This works and the registration process is soon completed. It turns out my friend was sort-of correct: there is nothing wrong with my usual email address except that the online service I have been trying to register with has been designed not to recognise strings of fewer than four characters before the ‘@’. Nor will it accept any kind of ID other than an email address.

In other words, it wasn’t that my username and/or password was invalid. It’s that the site is mistaken and/or fucked. ®


Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He apologises to new and recently conscripted readers for not uploading any more photos of himself holding things against his face for your amusement, as he did last week. But if you’re really patient, he might concoct a complete My Guy-style photostory from the dungeons of El Reg after the August break. It depends on various factors (at least two). @alidabbs


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/20/either_my_name_my_password_or_my_soul_is_invalid_but_which/

Adobe on internal systems security hole: Panic not. It isn’t critical

Adobe has attempted to play down the significance of a vulnerability in its internal systems.

Bug hunters at an outfit called Vulnerability Laboratory claimed they had discovered a remote code execution hole in one of the Photoshop giant’s main staff-only database systems – a weakness that was only corrected on Saturday. Remote code execution flaws are almost invariably rated critical.

In response to queries from El Reg on the matter, though, Adobe claimed the flaw was a far less severe class of vulnerability.

“This was a cross-site scripting bug in a form used for event marketing registration,” an Adobe spokeswoman told El Reg today. “We have since implemented a fix.”

Vulnerability Laboratory has disputed Adobe’s take and stands by its own assessment of the severity of the flaw, which, if correct, would rate a score of 6.4 under the Common Vulnerability Scoring System.

“At the beginning the engineers thought this [was] only affecting the marketing system by XSS [cross-site scripting] but [ultimately] it was not,” Vulnerability Laboratory’s Benjamin Kunz Mejri told El Reg.

“[Many] domains [were] affected; the email service was affected; parts of the backend w[h]ere the data was processed [were affected]. The [scheme showing how it works] was delivered at the end to ensure that Adobe understands the impact of the attack.”

Mejri added: “An arbitrary code inject, results for sure – at several parts in their infrastructure – in a code execution.” He told The Reg that, during its investigation, the Vulnerability Lab team did not, of course, attempt to illegally access Adobe’s servers; however, they believed it would be possible for miscreants to do so via the bugs they found.

How to attack the Adobe internal systems vulnerability, according to Vulnerability Laboratory

Vulnerability Lab first notified Adobe about the issue in February, and has been working with the vendor in the five months since. Adobe resolved the flaw on Saturday, July 14, allowing Vulnerability Lab to finally go public with its findings on Thursday. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/19/adobe_internal_systems_bug/

Declassified files reveal how pre-WW2 Brits smashed Russian crypto

Files detailing British boffins’ efforts to break Russian cryptographic cyphers in the 1920s and 1930s have been declassified, providing fascinating insights into an obscure corner of the history of code breaking.

America’s National Security Agency this week released papers from John Tiltman, one of Britain’s top cryptanalysts during the Second World War, describing his work in breaking Russian codes [PDF], in response to a Freedom of Information Act request.

The Russians started using one-time pads in 1928 – however, they made the grave cryptographic error of allowing these pads to be used twice, the release of Tiltman’s papers has revealed for the first time.

By reusing one-time pads, Russian agents accidentally leaked enough information for eavesdroppers in Blighty to figure out the encrypted missives’ plaintext. Two separate messages encrypted with the same key from a pad could be compared to ascertain the differences between their unencrypted forms, and from there eggheads could, using stats and knowledge of the language, work out the original words.
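The weakness is easy to demonstrate. The toy Python sketch below uses byte XOR; Soviet systems actually combined text with the pad by modular addition, but the key-cancellation principle is the same. Combining two ciphertexts that share a key makes the key vanish, leaving only the combination of the two plaintexts for analysts to pick apart:

    # Toy demonstration of the "two-time pad" weakness. Soviet systems used
    # modular addition over character alphabets rather than byte XOR, but the
    # key-cancellation principle is identical.
    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"MEET AGENT AT DAWN"
    p2 = b"SEND FUNDS TO PARIS"[:len(p1)]  # trimmed to equal length for the demo

    key = os.urandom(len(p1))  # a genuine one-time pad key...
    c1 = xor(p1, key)
    c2 = xor(p2, key)          # ...fatally reused for a second message

    # The eavesdropper never sees the key, yet XOR cancels it out entirely:
    assert xor(c1, c2) == xor(p1, p2)

    # Guessing a likely word (a "crib") in one message exposes the other:
    print(xor(xor(c1, c2)[:4], b"MEET"))  # -> b'SEND'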

However, even though using one-time pads twice was a critical and exploitable blunder, it was still better than the weak ciphers and code books the Russians had used previously.

The practice of reusing one-time pads continued into the Cold War, and helped Brit spies unravel the contents of supposedly secret Kremlin communications, as a blog post by Cambridge University computer scientist Ross Anderson explained this week. Anderson wrote:

The USA started Operation Venona in 1943 to decrypt messages where one-time pads had been reused, and this later became one of the first applications of computers to cryptanalysis, leading to the exposure of spies such as Blunt and Cairncross.

The late Bob Morris, chief scientist at the NSA, used to warn us enigmatically of “The Two-time pad”. The story up till now was that the Russians must have reused pads under pressure of war, when it became difficult to get couriers through to embassies. Now it seems to have been Russian policy all along.

Anderson speculated that the development of decryption techniques to exploit the Russians’ use of two-time pads may have fueled post-WWII work by Claude “the father of information theory” Shannon on the mathematical basis of cryptography [PDF].


In response to Anderson’s post, veteran computer scientist Mark Lomas floated the difficult-to-verify but tantalising theory that bureaucratic problems with the pad printers might have led to Russia’s crypto-gaffe. Rather than difficulties in getting enough code-making materials through to spies and soldiers in the field, it could be that the printers produced the same one-time pads multiple times and supplied copies to Russia’s two main intelligence agencies, the KGB and the GRU.

“They both selected a secure printing works that usually produced banknotes and gave strict instructions that only two copies of each pad should be printed,” Lomas commented. “The printers decided to print four copies of each pad then send two each to the KGB and GRU. Neither the KGB nor the GRU reused the pads they received, except perhaps because of occasional operator error.”

“Venona was able to determine where a KGB message had used the same key as a GRU message. Subtracting one message from the other cancelled out the unknown key to produce a synthetic message that was the difference between the two original messages. These could then be picked apart using a combination of statistics and predictable words” to decrypt the contents, he added. ®
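That subtraction can be sketched in a few lines of Python. This toy version works over letters mod 26 purely to show how the shared key drops out; real Venona traffic consisted of codebook groups enciphered with additive digit pads, so treat it as the shape of the attack rather than a reconstruction:

    # Simplified sketch of the Venona-style subtraction, using letters mod 26.
    A = ord("A")

    def add_key(msg: str, key: str) -> str:
        return "".join(chr((ord(m) + ord(k) - 2 * A) % 26 + A) for m, k in zip(msg, key))

    def subtract(x: str, y: str) -> str:
        return "".join(chr((ord(a) - ord(b)) % 26 + A) for a, b in zip(x, y))

    kgb_msg, gru_msg = "ATTACKATDAWN", "DEFENDTHEFOR"
    shared_key = "XMCKLQPOZYWV"  # the same pad copy issued to both agencies

    c_kgb = add_key(kgb_msg, shared_key)
    c_gru = add_key(gru_msg, shared_key)

    # Subtracting one ciphertext from the other cancels the unknown key,
    # leaving the difference of the two plaintexts for analysts to pick apart:
    assert subtract(c_kgb, c_gru) == subtract(kgb_msg, gru_msg)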


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/19/russia_one_time_pads_error_british/