STE WILLIAMS

FCC repeals net neutrality

As I write this, the Federal Communications Commission (FCC) is going through the motions, live streaming its commissioners as they (mostly) express support for what turned out to be the inevitable killing of net neutrality: the 3-year-old landmark rule, imposed during the Obama administration, that prevents internet service providers (ISPs) from favoring some sites over others by slowing down connections or charging customers a fee for streaming and other services.

…at least, the FCC had been going through the motions until around 12:51pm, when the room was evacuated and bomb-sniffing dogs were led through it by their handlers.

Commissioners were let back into the room around 1pm after it had been cleared by security. Within minutes, the room, the internet, and the telecom industry had also been cleared of net neutrality.

There has been much gnashing of teeth.

Clearly, this has been a contentious few months of debate: on one side, telecom giants like AT&T, Charter, Comcast and Verizon have been urging the repeal, which was put forward and championed by Republican FCC Chairman Ajit Pai. They view it as a major victory that will peel back what they see as onerous government regulation.

Getting rid of net neutrality is going to be great for innovation, Pai has been saying – though the line “blaring from every computer screen in the nation” actually comes from a joke news piece in The Onion.

Robert Reich, founding fellow of The Sanders Institute – a nonprofit, educational organization founded last year by Jane Sanders, wife of Sen. Bernie Sanders, I-Vt., to help raise awareness of “enormous crises” facing Americans – called industry claims that net neutrality hurts consumers because it discourages investment in their networks “rubbish.”

Since Net Neutrality was adopted, investment has remained consistent. During calls with investors, telecom executives themselves have even admitted that Net Neutrality hasn’t hurt their businesses.

This is what cable companies can inflict on us in the absence of net neutrality, Reich predicts:

  1. Drive up prices for internet service. Broadband providers could charge customers higher rates to access certain sites, or raise rates for internet companies to reach consumers at faster speeds. Either way, these price hikes would be passed along to you and me.
  2. Give corporate executives free rein to slow down and censor news or websites that don’t match their political agenda, or give preference to their own content – for any reason at all.
  3. Stifle innovation. Cable companies could severely hurt their competitors by blocking certain apps or online services. Small businesses that can’t afford to pay higher rates could be squeezed out altogether.

No, says former FCC Chairman Michael K. Powell: that’s the rubbish.

Powell, now a lobbyist for the cable and telecom industry, came out with an opinion piece in which he declared that opponents’ protests amount to “hyperbole, demagoguery and even personal threats.”

More from his article, which was published by Recode on Wednesday:

New-age Nostradamuses predict the internet will stop working, democracy will collapse, plague will ensue and locusts will cover the land.

The biggest threat to Silicon Valley innovation and improving consumer experiences isn’t net neutrality, he says; it’s “an internet that stalls and doesn’t get better.”

Powell says that the “vibrant and open internet” that Americans cherish “isn’t going anywhere.” Not for days, not for weeks, not for years: we’ll still be merrily shopping online for the holidays, oversharing our photos on Instagram, harping on about our political grievances on Facebook, and asking Alexa for the score of the game. Everything is going to be Just Fine, and the internet Will Not Blow Up.

Why the confidence? Because ISPs value the principles of net neutrality and the open internet more than activists would have you believe, Powell says. After all, it’s easier to make money with an open internet:

A network company makes the most money when its pipe is full with activity. The more consumers use, the more profitable the business. With new, compelling services, consumer demand rises for higher speeds. Degrading the internet, blocking speech and trampling what consumers now have come to expect would not be profitable, and the public backlash would be unbearable. Economic self-interest and the pursuit of profits tilts decidedly toward an open internet.

His optimism is not mirrored throughout the internet.

Michael Fauscette, chief research officer at G2 Crowd, a review website for business software, says that letting an industry self-regulate hasn’t gone well in the past, either for the businesses or the public.

Nor is this struggle over. Fauscette predicts that “there will be plenty of lawsuits attempting to put the protections back in place.” Beyond whatever happens in the courts, there are moves in Congress to restore net neutrality by passing a law to protect it. On Tuesday, Sen. John Thune (R-SD) asked net neutrality supporters on “both sides of the aisle” to work with him on a legislative solution.

Would such a law pass anytime soon, given the makeup of the Republican-majority House and Senate? Maybe not, but “soon” might come sooner rather than later, given Democrat Doug Jones’ upset victory to become senator in conservative Alabama, plus the fact that influential Republican Ted Cruz is seen as the next conservative in Democrats’ cross-hairs.

In the meantime, take your pick between competing views of the near future: either everything will be hunky dory, per Powell, or we can all start reaching for our wallets to pay for internet fast lanes or kicking back with a beer as we get shunted onto slow lanes.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/C0AvCqg7R2Q/

How MP Nadine Dorries could have shared her passwords securely

Last week, British MP Nadine Dorries admitted doing something with her email password that a lot of people thought sounded a bit crazy.

Passwords are supposed to be top secret but Dorries, it seems, pays little attention to any of that and simply hands it out to her office staff so they can help manage her bulging email inbox of political correspondence.

She described this novel password system in a tweet.

That’s it – if staff such as interns want her password, she tells them (or perhaps they tell her, we’re not sure).

Twitter’s unforgiving vox populi were unimpressed.

But what Dorries and others like her may like to know is that there are ways to share access to your resources safely, and there are even safe ways to share passwords with colleagues when shared access isn’t an option.

Sharing access has become essential in many offices where employees work behind company profiles on sites like Facebook and LinkedIn, or, as in Dorries’ case, where multiple employees need access to a single email account or calendar.

The best way to do this is with something like delegated access, available in both the Microsoft Exchange and Google G Suite business environments, where each user has an individual account.

Sometimes, sharing passwords is the only option though. It is essential for services like Twitter that don’t provide ways for multiple accounts to access a single profile, for example. A password manager such as LastPass is the best option here.

In both cases, access can be granted without secondary users ever seeing the passwords, which means they can’t leak out in plaintext or be re-purposed in credential stuffing attacks. Access can also be revoked at any time. (Revoking access where everyone simply knows the boss’s password means changing the password every time somebody leaves – and since that inconveniences everybody who uses it, it often doesn’t happen.)

Some might have qualms about doing this for email accounts (attackers might in theory compromise the secondary user’s PC and abuse its access) but it would still be better than writing down a password, shouting it out in the office where it can be overheard, or anything else that increases the number of people who have to display good password hygiene to keep a system secure.

It also avoids what Dorries might call the ‘Damian Green defence’ of plausible deniability – that MPs can’t be held responsible for what is downloaded to their computers because other people were accessing their email account from the same machine and might have been responsible.

Secure sharing makes clear that each person can access an account from their own computer, sidestepping the issue.

But perhaps simply finessing password sharing is to miss the bigger takeaway from the great Dorries password debate of 2017: that there is an urgent need to stop relying on passwords alone and move to better authentication.

On that score, a blog post by the Parliamentary Digital Service analysing the cyberattack on Parliament’s email system last summer noted that the recent roll-out of multi-factor authentication (MFA) to new MPs had helped reduce the effects of the breach.

Said PDS director, Rob Greig, on the sudden importance of MFA:

What was going to be a planned and careful roll-out designed to tackle legacy systems going back years, became an intense period of activity to get every user account secured.

Presumably, the PDS will have read of the password-sharing shenanigans of Dorries and her fellow MPs and immediately moved all of them up the MFA priority list.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_VZqY0ldz-o/

Brit film board proposed as overlord of online pr0nz age checks

The British Board of Film Classification will be responsible for regulating age checks for UK users of online porn websites, if the government gets its way.

The UK’s Ministry of Fun* has proposed the BBFC as the regulator for ensuring sites are using age-verification controls.

These checks were made mandatory by the Digital Economy Act, and will require residents wishing to access porn sites to prove they are 18 or over.

The government said the BBFC had “unparalleled expertise” in classifying content and a “proven track record of interpreting and implementing legislation” as the authority responsible for video age ratings.

But campaigners have questioned the choice, saying that it hands over too much control to the BBFC, making it a de facto censor of online porn websites – as it can block sites that don’t comply by telling UK ISPs to restrict access to them.

“While BBFC say they will only block a few large sites that don’t use age verification, there are tens of thousands of porn sites,” said Jim Killock, director of the Open Rights Group.

“Once MPs work out that age verification is failing to make porn inaccessible, some will demand that more and more sites are blocked. BBFC will be pushed to block ever larger numbers of websites.”

Killock was also sceptical of the BBFC’s ability to ensure age verification is safe, secure and anonymous, saying it is “powerless to ensure people’s privacy” and the development of age-verification products is “out of BBFC’s hands”.

There are particular concerns over how data collected from viewers will be used. Information Commissioner Elizabeth Denham previously warned that information like passport details could be “vulnerable to misuse and/or attractive to disreputable third parties”.

The new law, experts say, could also encourage users to be less security conscious, leaving them susceptible to dodgy actors setting up fake sites. As security researcher Alec Muffet said on the issue:

Fake porn sites (especially outside the UK) abound; with this mechanism you are training people to give their phone numbers to untrusted websites (whilst still “rewarding” them with porn) and the websites can sell/give these numbers onward to marketing companies.

In addition, there are concerns about the impact the law will have on the porn industry, which is already dominated by MindGeek, the owner of mainstream porn sites like PornHub and RedTube, and video producers such as Brazzers.

That means MindGeek’s own product, AgeID – effectively an aggregator of verification solutions with a federated login – is likely to dominate the market.

Critics, such as independent pornographer Pandora/Blake, argue this will allow MindGeek to expand its monopoly, if it licenses AgeID to other porn sites.

On their blog, Pandora/Blake said the end result was “regulatory capture”, as “smaller sites like mine will effectively have to pay a ‘MindGeek tax’ to our biggest competitor”.

Security concerns are not helped by MindGeek companies’ poor security history. PornHub users suffered a malvertising campaign this year and in 2012 a YouPorn data breach spilled 1 million users’ details. ®

*The Department for Digital, Culture, Media & Sport – aka the fun stuff…

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/15/brit_film_board_proposed_as_regulator_of_online_p0rn_agechecks/

Russia could chop vital undersea web cables, warns Brit military chief

The head of the British Armed Forces, Air Chief Marshal Sir Stuart Peach, has warned that Russia could cut off the UK by severing undersea communications cables.

In a speech made to military think-tank the Royal United Services Institute last night, the air marshal said: “There’s a new risk to our way of life, which is the vulnerability of cables which criss-cross the sea beds. Can you imagine a scenario where those cables are cut or disrupted? Which would immediately and potentially catastrophically affect our economy and other ways of living if they were disrupted.”

Peach was giving the annual Chief of the Defence Staff Lecture, in which he talks about topical defence, security and geopolitical issues. He specifically highlighted Russia as the most likely nation state that might go around cutting cables and causing chaos.

“In response to the threat posed by the modernisation of the Russian navy, both nuclear and conventional submarines and ships, we, along with our Atlantic allies, have prioritised missions and tasks to protect the sea lines of communication,” he said, specifically mentioning the role of NATO.

The air marshal also joked about bringing back the Railway Squadron of the Royal Logistics Corps, which drove military-manned trains to and from West Berlin during the Cold War, as well as beefing up Britain’s military hackers with a “reservist and contractor”-led cyber force.

A stagnant defence budget, allied to possible inflation-driven cuts to internal spending, means the Royal Navy is facing decades of severe overstretch. Peach’s speech ought to be read (or watched, if you’ve an hour of free time – the Russian comments are all in the first five minutes) with the military need to put pressure on politicians for extra funding in mind.

Laying cable

Peach’s warning comes in the context of Russian naval renewal over the last few years and increasing naval activity by Moscow’s armed forces, as well as a recent report highlighting potential legal vulnerabilities around cables and their landing stations. The basic argument goes that as everyone knows where they are, they are uniquely vulnerable.

Without doubt, this is true. It is also true that in our increasingly interconnected world, even “the baddies” like Russia and Iran are also coming to depend on communications over these cables. In spite of conspiracy theories around Russian spy ships interfering with undersea cables, the greater threat to global connectivity appears to be the West, which has inserted eavesdropping capabilities into a large number of cables around the world. Both the US and UK have the advanced technologies necessary to do this sort of work while underwater.

Russia, meanwhile, seems to like trolling professional Western observers by sailing along cable routes, raising watchers’ blood pressure all the while. Rather than some kind of high-tech interference, the main fear is that the Russians will simply drop anchor over a cable route and drag it along the seabed to sever the cable – as happened accidentally off the coast of Jersey last year thanks to the careless crew of an Italian-flagged gas tanker.

As we previously reported, naval gazers reckon the Russian spy ships may be looking for so-called dark cables used for dedicated defence and intelligence communications. The idea is that by cutting dedicated links, spies and other snoopers’ comms are forced onto public cables – where they can then be re-routed into areas where hostile states can collect and analyse them at leisure. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/15/russia_cable_chop_warning_chief_of_defence_staff/

We need to talk about mathematical backdoors in encryption algorithms

Security researchers regularly set out to find implementation problems in cryptographic algorithms, but not enough effort is going towards the search for mathematical backdoors, two cryptography professors have argued.

Governments and intelligence agencies strive to control and bypass or circumvent cryptographic protection of data and communications. Backdooring encryption algorithms is considered the best way to enforce cryptographic control.

In defence of cryptography, researchers have set out to validate technology that underpins the secure exchange of information and e-commerce. Eric Filiol, head of research at ESIEA’s operational cryptology and virology lab, argued that only implementation backdoors (at the protocol/implementation/management level) are generally considered. Not enough effort is being put into looking for mathematical backdoors or by-design backdoors, he maintains.

During a presentation at Black Hat Europe last week, titled By-design Backdooring of Encryption System – Can We Trust Foreign Encryption Algorithms?, Filiol and his colleague Arnaud Bannier explained how it is possible to design a mathematical backdoor.


In it, the two researchers presented BEA-1, a block cipher algorithm which is similar to the AES and which contains a mathematical backdoor enabling an operational and effective cryptanalysis. “Without the knowledge of our backdoor, BEA-1 has successfully passed all the statistical tests and cryptographic analyses that NIST and NSA officially consider for cryptographic validation,” the French crypto boffins explain. “In particular, the BEA-1 algorithm (80-bit block size, 120-bit key, 11 rounds) is designed to resist linear and differential crypto-analyses. Our algorithm [was] made public in February 2017 and no one has proved that the backdoor is easily detectable [nor] have shown how to exploit it.”

How they did it

During the Black Hat talk, Filiol and Bannier went on to lift the lid on the backdoor they had deliberately planted and how to exploit it to recover the 120-bit key in around 10 seconds with only 600kB of data (300kB of plaintexts + 300kB of corresponding ciphertexts). This was a proof-of-concept exercise, they added, saying that more complex backdoors might be constructed.

“There is a strong asymmetry (based on the mathematics) between inserting a backdoor into an algorithm (what we did and which is supposed to be feasible and easy, at least from a computational aspect) and being able to prove its existence, detect and extract a backdoor,” Filiol told El Reg. “In a sense we have to create some sort of conceptual one-way function.”
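The asymmetry Filiol describes can be illustrated with a deliberately simple toy – emphatically not BEA-1’s actual construction, which hides its trapdoor in the cipher’s internals. In the hypothetical sketch below (all names and parameters are invented for illustration), a cipher’s innocuous-looking “resync header” silently escrows the session key under an RSA-style trapdoor, so only the designer can recover it – loosely analogous to how Dual_EC_DRBG’s outputs let the holder of a secret relation recover internal state:

```python
import hashlib

# Toy trapdoor parameters, known only to the cipher's designer.
# (Illustration only: real parameters would be thousands of bits.)
P, Q = 2**61 - 1, 2**31 - 1                # two Mersenne primes
N = P * Q
E = 65537                                  # exponent baked into the published cipher
D = pow(E, -1, (P - 1) * (Q - 1))          # trapdoor exponent, never published

def keystream(key: int, length: int) -> bytes:
    """Hash-counter keystream, standing in for the cipher's real core."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key.to_bytes(16, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt(key: int, plaintext: bytes):
    # The "resync header" shipped with every message secretly encrypts
    # the session key under the designer's trapdoor.
    header = pow(key, E, N)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    return header, ct

def backdoor_recover_key(header: int) -> int:
    # Only the holder of D can invert the header back to the session key.
    return pow(header, D, N)

session_key = 123456789                    # must be below N for the toy to work
header, ct = encrypt(session_key, b"attack at dawn")
assert backdoor_recover_key(header) == session_key
```

To everyone without D, the header looks like random noise and the scheme passes casual inspection; proving that no such hidden relation exists in a real design is the intractable part.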

The researcher has been looking into the topic of mathematical backdoors in crypto algorithms for years. His previous work has included a paper looking into possible issues in block encryption algorithms, which was published earlier this year.

Why, even in these circles, maths is uncool

“Research on mathematical backdoors is much more difficult (mathematical stuff) – and does not attract researchers that need to publish quickly and regularly on fashionable topics,” Filiol added. “This is the reason why this kind of research is essentially done in the R&D labs of intelligence agencies (GCHQ, NSA…) and [is designed] more for designing backdoors than detecting them.”

Revelations from papers leaked by former NSA sysadmin Edward Snowden that the NSA paid RSA Security $10m to use the weak Dual_EC_DRBG technology by default in its cryptographic toolset show that concerns about mathematical or by-design backdoors are far from theoretical. The Dual_EC_DRBG example is not isolated, according to Filiol.

“There are a lot of examples but only a few are known,” Filiol said. “This was precisely the purpose of the ‘History’ part in my slides [PDF].

“I am convinced that all export versions of encryption system contain backdoors in one way or another. This is a direct constraint from the Wassenaar agreement. In this respect, Crypto AG and other companies (revealed by the Hans Buehler case) are the best examples. There are other, less known [examples].

“In this context, and when analysing the different documents and the standardisation process, the Dual_EC_DRBG precisely IS a known and certain case,” he added.

How many mathematical backdoors are out there?

Filiol admitted it was difficult to know or even gain some sense of the mix between the prevalence and importance of implementation backdoors (at the protocol/implementation/management level) versus mathematical backdoors.

“This is a difficult question to answer, since proving that there may be a backdoor is an intractable mathematical issue,” Filiol responded. “Analyzing the international regulations clearly proves that at least export versions contain backdoors.

“What is more concerning is that now we have to fear that [this] is also the case for domestic use, in the context of population [level] and mass surveillance.”

Asked whether the peer-review process weeded out mathematical backdoors, Filiol argued for reform.

“Defending (proving security) is far more difficult than attacking (proving insecurity),” Filiol said. “And the big issue lies in the fact that academic ignorance [of it has] had as [its] result that we consider the absence of proof of insecurity as a proof of security.

NSA mathematicians and proving a negative

“We are in a realm where the attacker does not publish everything they can do (especially in cryptography where the activity of intelligence entities is still prevalent). So the experts and academics can only work with the known attacks as a working reference. Just imagine what the NSA (300 of the most brilliant mathematicians working for nearly four decades) can have produced: a mathematical corpus of knowledge.”

Filiol does not accept the industry-standard and widely reviewed AES algorithm is necessarily secure, even though he doesn’t have evidence to the contrary at hand.

“If I cannot prove that the AES has a backdoor, no one can prove that there is none,” Filiol told El Reg. “And honestly, who would be mad enough to think that the USA would offer a strongly secure, military grade encryption algorithm without any form of control?”

He added: “I do not. The AES contest has been organised by the NIST with the technical support of the NSA (it is public knowledge). Do you really think that in a time of growing terrorist threat, the USA would have been so stupid not to organise what is known as ‘countermeasures’ in conventional weaponry? Serious countries (USA, UK, Germany, France) do not use foreign algorithms for high-security needs. They mandatorily have to use national products and standards (from the algorithm to its implementation).”

Filiol concluded that reforms were needed in the way that cryptographic algorithms are selected, analysed and standardised. “It should be a fully open process mainly driven by the open crypto community,” he maintains. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/15/crypto_mathematical_backdoors/

Is Your Security Workflow Backwards?

The pace at which information security evolves means organizations must work smarter, not harder. Here’s how to stay ahead of the threats.

If you’re like me, you typically make a list of items you need before you visit the supermarket. Sometimes you end up with a few more items than you planned. But in general, what you leave the supermarket with is about what you expected you would leave with. This is a fairly logical and straightforward way to approach a shopping trip, and so it is no surprise that many people shop this way.

Imagine, if you will, a different approach. What if you went to the supermarket, bought one of every item the store carried, paid for it all, searched through the items you purchased for the items you actually need, and subsequently returned the remaining items to the store? Sounds pretty inefficient and time-consuming, doesn’t it?

At this point, you’re likely asking yourself what this supermarket-based thought exercise has to do with security. I would argue: all too much. You see, if we look at the security operations workflow of many security organizations, it more closely resembles the second supermarket example than the first.

Unfortunately, many security organizations still follow a fairly inefficient and time-consuming workflow. What do I mean by this? Let’s enumerate (at a high level) how security organizations typically build their security operations workflow:

  • Sensing technologies, whether network-based, endpoint-based, or intelligence-based, are deployed around the enterprise.
  • Signature sets and detection algorithms are developed internally or leveraged from external sources.
  • An alert cannon ensues, with tens or hundreds of thousands of alerts blasted to the organization’s unified work queue on a daily basis.
  • Analysts try to sift through the pile of alerts, looking for those of the highest fidelity, highest priority, and of the utmost urgency.
  • In a time-consuming process, the vast majority of alerts are “returned to the supermarket” (closed as false positives).
  • Rinse and repeat each day.

It may be a bit unnerving and uncomfortable to see this workflow presented so starkly and bluntly. Those who know me know I am a fan of directness, and sometimes it is the best way to get the message across. If you’ve worked in security operations and incident response for a little while, you know all too well the pain and somewhat illogical nature of the cycle of alert fatigue I’ve described above.

So what can organizations do to end the absurdity and work in a more logical and efficient manner? They can start by turning their entire security operations workflow on its head. I’ll explain.

If we look at the second supermarket example and compare it with the security operations workflow enumerated above, there is a common thread that runs through them both. Instead of prioritizing at the beginning of the workflow, which would allow us to focus, define, and reduce the data set we subsequently need to work with, we prioritize at the end. Of course, the supermarket example illustrates the absurdity of this approach quite clearly. This is something that is much harder for most of us to see when we look at our respective security operations workflows.

So how can organizations prioritize at the beginning of the workflow, and what does that modified workflow look like? Here’s an example:

  • Identify and prioritize risks and threats to the organization.
  • Identify assets and prioritize their criticality.
  • Identify where sensitive, critical, and proprietary data resides.
  • Develop targeted, precise, and incisive alert logic to identify activities of concern based on the results of the above three bullet points.
  • Give each resulting alert a priority and criticality score based on the threat it poses to the organization and the criticality of the assets and data it affects.
  • Send the prioritized alerts with associated background information regarding the assets and data they are associated with to the unified work queue.
  • Review the alerts in descending order, from highest priority to lowest.

As I hope you can see, the workflow enumerated here is far more efficient than the one I enumerated earlier. Of course, it takes a bit of an up-front investment in time to prioritize at the beginning of the workflow rather than the end. But this investment pays large dividends: analysts can focus on investigation, analysis, and response, rather than spending their time sifting through piles of false positives and noise.

In addition to allowing an organization to run security operations better and more efficiently, this approach also saves money. How so? Here are a few of the ways:

  • Expensive analyst resources are focused on the highest-value work, which increases team productivity with no additional labor cost.
  • Technology is acquired strategically, efficiently, and precisely — exactly where operational needs dictate and nowhere else.
  • Hardware resources can be optimized to fit the streamlined workflow of the organization, effectively doing more with less.

I don’t know too many organizations that have an endless supply of time and money. The pace at which information security evolves means organizations must work smarter rather than harder. Attacking and optimizing the security operations workflow is one of the best ways an organization can improve its security posture.

Related Content:

Josh is an experienced information security leader with broad experience building and running Security Operations Centers (SOCs). Josh is currently co-founder and chief product officer at IDRRA. Prior to joining IDRRA, Josh served as vice president, chief technology officer, …

Article source: https://www.darkreading.com/risk/is-your-security-workflow-backwards-/a/d-id/1330619?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mobile Device Makers Increasingly Embrace Bug Bounty Programs

Samsung is the latest to join a small group of smartphone makers to cast their net wide on catching vulnerabilities in their devices.

With the rise of mobile threats and ubiquitous use of smartphones, mobile device makers are increasingly throwing their resources toward bug bounty programs to shore up the security of the devices.

Samsung, which holds the largest market share for Android devices, launched a bug bounty program earlier this year, offering up to $200,000 per vulnerability discovered, depending on its severity. It joined Apple, which launched its bug bounty program in 2016, as well as Google, which kicked off its Android Security Rewards Program in 2015. Silent Circle, which offers Blackphone, was the first mobile company to hold a bug bounty back in 2014.

“Is this a sign that mobile device makers are taking security more seriously? Absolutely,” says Alex Rice, co-founder and chief technology officer of HackerOne. “It raises the tide for everyone, and the ones that don’t do it will look like outliers.”

Bug bounties, which reward ethical hackers for finding vulnerabilities in software and hardware, have been around since Netscape kicked off the first one in 1995, but only recently have mobile device makers joined the pack.

Bug bounty programs can be offered and managed by companies that want to find vulnerabilities in their own products, or can be outsourced to a bug bounty company, such as HackerOne or Bugcrowd, to manage. Some bug bounty programs are public, while others are private invite-only affairs.

Catalyst for Change

It has taken mobile device makers a while to offer bug bounty programs because they have had to wait for the mobile ecosystem to mature, Rice says.

“Mobile device makers are inter-connected with other partners,” Rice explains. “They don’t have control over the entire attack surface … If you’re the manufacturer, you want to only offer a bug bounty program for something you can fix.”

But with more partners in the mobile device stack offering bug bounty programs, such as chipset maker Qualcomm and Google’s Android, it is easier for mobile device manufacturers to do the same, he says.

“Although only vulnerabilities that are specific to Samsung mobile devices or its apps are eligible for its program, at least now they have a cohesive story to where they can redirect [bug hunters] to other partners in their stack.”

Another challenge for mobile bug bounty programs is finding enough researchers to participate in the programs, says Casey Ellis, founder and chief technology officer of Bugcrowd.

Three or four years ago, Bugcrowd had to actively encourage the ethical hacker community to focus on mobile devices and get in on the ground floor, Ellis recalls. But, in some respects, it has been a tough sell.

“Mobile devices are harder targets than web and mobile apps, so in the bounty context, the hacker return on investment can draw folks away from them. That said, it’s an extremely valuable skill set and a rapidly growing attack surface,” Ellis notes.

Mobile bug bounty hunters also face a challenge in getting access to all of the components in a device, which adds another layer of complexity and work, Rice says.

Hard Money, Soft Money

Hardware vulnerability discoveries tend to pay out more than software flaws in bug bounty programs, bug bounty experts say.

“Finding vulnerabilities in hardware often requires more research, time, and a rarer set of skills than bug finding in applications,” Ellis says. “Because of this, hardware bugs are typically priced higher to reflect their impact and to incentivize talented researchers to join the hunt.”

Rice noted that vulnerabilities allowing remote code execution in trusted environments also tend to yield the largest bounty payments.

Although Samsung, Google, and Apple all offer bounty rewards upwards of $200,000, depending on the severity of the vulnerabilities discovered, a HackerOne report notes the average payout for mobile critical vulnerabilities ranges from $383 per bug for the telecom industry to $2,015 for the technology industry.

“I expect to see the amount of bounties rise,” Rice says. “I predict we’ll see more players in the future and more coverage with the bounty programs for mobile devices and apps.”

Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s … View Full Bio

Article source: https://www.darkreading.com/mobile/mobile-device-makers-increasingly-embrace-bug-bounty-programs/d/d-id/1330651?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

TRITON Attacker Disrupts ICS Operations, While Botching Attempt to Cause Physical Damage

TRITON malware is discovered after an attack on a safety monitoring system accidentally triggered the shutdown of an industrial process at an undisclosed organization.

Cyberattacks that cause physical damage to critical infrastructure—like the Stuxnet campaign that destroyed nearly 1,000 centrifuges at an Iranian uranium enrichment facility in 2010—have been relatively rare because of how difficult they are to carry out. That may be changing.

A threat actor with possible nation-state backing recently disrupted operations at a critical infrastructure facility while trying to use custom malware to reprogram a system that monitors the safety of industrial control systems (ICS) at the location.

The incident, described in a report from FireEye this week, is one of few in recent years involving the use of a tool specifically developed to exploit weaknesses in industrial control systems. The only other publicly known examples are Stuxnet and Industroyer, a malware sample used by the Russia-backed Sandworm Team to attack Ukraine’s electric grid last year.

FireEye, which investigated the latest incident, did not disclose the identity of the targeted organization or its location. But comments Thursday from two other security vendors, Symantec and CyberX, suggest the victim is based in the Middle East, possibly Saudi Arabia.

“We’re sharing this information in the hopes that operators will take action to improve their security,” says John Hultquist, director of intelligence analysis at FireEye. “It is very concerning that the attacker targeted a safety system which is in place to protect people, the environment, and the equipment at the facility,” he says.

FireEye said its Mandiant unit was recently called in to investigate an incident in which an attacker had deployed malware for manipulating systems that provided an emergency shutdown capability for industrial processes at the plant.

Mandiant’s investigation led to the discovery of TRITON, a malware tool designed to modify the behavior of a so-called Triconex Safety Instrumented System (SIS) from Schneider Electric. Many industrial plants use SIS to independently monitor critical systems to ensure they are working within acceptable safety thresholds and to automatically shut them down when those thresholds are exceeded. TRITON was disguised as a legitimate application used by Triconex SIS to review logs.
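
The SIS monitoring described above (check process readings against safety thresholds, and force a shutdown when they are exceeded) can be sketched as a toy model. This is purely illustrative Python, not Triconex logic; the threshold values and function name are hypothetical:

```python
# Toy illustration of a Safety Instrumented System (SIS) check loop.
# All thresholds and readings are hypothetical; a real SIS runs this
# kind of logic on a dedicated controller, not in application code.

SAFE_RANGE = (10.0, 85.0)  # acceptable pressure band, arbitrary units

def sis_check(readings, safe_range=SAFE_RANGE):
    """Return 'RUN' while all readings stay in range, else 'SHUTDOWN'.

    A real SIS would trip actuators (valves, breakers) the moment a
    reading leaves the safe band; here we just report the decision.
    """
    low, high = safe_range
    for value in readings:
        if not (low <= value <= high):
            return "SHUTDOWN"
    return "RUN"

print(sis_check([42.0, 55.3, 60.1]))  # all readings within the band
print(sis_check([42.0, 91.7]))        # one reading out of band: trip
```

A real SIS runs this logic on an independent controller precisely so that a compromise of the ordinary process-control network cannot silently disable the safety check, which is what makes TRITON's targeting of the SIS itself so concerning.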

In the incident that FireEye reported this week, the attacker apparently managed to gain remote access to a Triconex SIS workstation running Windows and installed TRITON on it in a bid to reprogram application memory on SIS controllers. During that process, some of the SIS controllers entered a failed safe mode that prompted an automatic shutdown of the industrial process, according to FireEye.

The shutdown appears to have been triggered inadvertently. But the broader goal seems to have been to find a way to cause physical damage to plant equipment by reprogramming the SIS controllers.

Such a compromise would have allowed the attacker to manipulate the SIS so it would let an unsafe condition persist and cause system failures. Alternatively, the attacker could have used the compromised system to trigger incessant shutdowns through false alarms.

In an advisory, Symantec said it was aware of TRITON targeting SIS since at least this September. It works by infecting Windows systems that could end up being connected to a SIS workstation or device. “The malware then injects code modifying the behavior of the SIS device.” Symantec said the company is still investigating the kind of damage that TRITON can do, but noted the malware has the potential to create severe disruptions at targeted organizations.

Several clues suggest a nation-state actor is behind the attack, FireEye said. For one thing, the attackers did not appear motivated by monetary gain at all and seemed interested in a high-impact attack via the SIS. TRITON was deployed almost immediately after the attacker gained access to the SIS, indicating the tool had already been developed and tested on proprietary equipment and tools not normally available to common cybercriminals.

Phil Neray, vice president of industrial cybersecurity at CyberX, said the company has evidence pointing to Saudi Arabia as the likely target of the attack, which would make Iran a potential attacker. Iran is believed responsible for an attack on Saudi Aramco a few years ago, which destroyed thousands of PCs.

FireEye refused to divulge how the attackers might have gained access to the workstation, citing client confidentiality. But the company noted that, ideally, safety instrumented systems should be segregated from process control and information system networks.

Over the past few years, many organizations have integrated these systems with other distributed control systems (DCS) that give human operators a way to monitor and manage critical systems. TRITON highlights the kind of risk that organizations run when allowing communication between DCS and SIS networks, FireEye noted.

“There have been several recent incidents where we have found Russian, Iranian, and North Korean hackers seeking to compromise industrial control systems with the ultimate goal of preparing for an attack at the time of their choosing,” Hultquist says.

Recently, there have been multiple incidents in which Russian actors were found inside the networks of nuclear facilities and utility companies in the US and Europe. North Korea, too, has been attempting to breach US critical infrastructure.

“This shutdown, however accidental, demonstrates the danger of these efforts,” Hultquist notes. “An adversary probing these critical systems can make a mistake that can have much larger consequences.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/triton-attacker-disrupts-ics-operations-while-botching-attempt-to-cause-physical-damage-/d/d-id/1330650?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Starbucks Wi-Fi hijacked customers’ laptops to mine cryptocoins

What would you like with your latte? Cocoa? Cinnamon? Sprinkle of cryptocurrency mining piggybacking off your free Wi-Fi?

Recent visitors to a Buenos Aires Starbucks didn’t actually have a choice: instead, a 10-second delay was foisted on them when they connected to the coffee shop’s “free” Wi-Fi, as their laptops’ power secretly went to mine cryptocoins (of which the Starbucks customers received nary one slim dime, of course).

The mining was noticed by Stensul CEO Noah Dinkin, who took to Twitter on 2 December to ask Starbucks if it was aware of what was going on. He included a screenshot of the code.

Dinkin said in his tweet that the code was mining bitcoins, but it was actually CoinHive code, which offers a JavaScript miner for generating a cryptocurrency called Monero that’s an alternative to Bitcoin.

Unauthorized cryptocurrency mining has been around for years, typically showing up in malware. And this isn’t the first time we’ve seen uninvited cryptominers that specifically generate Monero, which is similar to Bitcoin but designed for even greater privacy. That privacy has reputedly made it popular on the dark web, and it’s why the WannaCry authors preferred it to their bitcoins.

Another recent case: one or more malware creators made around $63,000 in five months by invading unpatched IIS 6.0 servers to install their miners. To install the miner, they first hijacked the servers by exploiting the CVE-2017-7269 vulnerability: a good example of the importance of keeping up with patches.

It’s one way to make money. In fact, the torrent site The Pirate Bay, in true pirate fashion, recently planted CoinHive JavaScript code on visitors’ browsers, mining search pages to generate Monero without asking for permission or informing them.

When visitors smelled a cryptomining rat, an admin ‘fessed up. The rationale: hey, it’s this or ads, we gotta make rent money somehow!

We really want to get rid of all the ads. But we also need enough money to keep the site running. Do you want ads or do you want to give away a few of your CPU cycles every time you visit the site?

At any rate, Starbucks confirmed the mining on Monday, saying that it took the issue up with its internet provider to make sure its customers’ processing power isn’t siphoned off any longer.

Judging by the “it’s not our Wi-Fi” statement a Starbucks spokesperson gave Motherboard, it sounds like Starbucks wasn’t knowingly on board with the CPU sucking:

Last week, we were alerted to the issue and we reached out to our internet service provider – the Wi-Fi is not run by Starbucks, it’s not something we own or control. We want to ensure that our customers are able to search the internet over Wi-Fi securely, so we will always work closely with our service provider when something like this comes up.

What to do?

  • Watch your CPU. Check Activity Monitor on a Mac or Task Manager on Windows. If your laptop has fans, you might hear them revving up to deal with the extra heat generated by a heavily loaded CPU chip.
  • Consider a plugin to control JavaScript. Security-conscious Naked Security commenters regularly mention NoScript, a popular free tool that lets you keep control over intrusive JavaScript, Flash, and Java in your browser.
  • Find out if your anti-virus detects coinmining tools. For example, Sophos products classify browser-based coinminers as PUAs (potentially unwanted applications). PUAs aren’t malware – they can be blocked or allowed as you choose.
  • Patch promptly. Crooks who can break into your servers could add cryptomining code to leech ‘free money’ from all your website visitors, leaving you to bear the brunt of any complaints.
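
The “Watch your CPU” advice can also be automated. On Linux, for example, overall CPU utilization can be computed from two samples of the aggregate `cpu` line in /proc/stat. A minimal sketch (the sample strings below are hypothetical jiffy counts, not real readings):

```python
def parse_cpu_line(line):
    """Split the aggregate 'cpu ...' line of /proc/stat into (idle, total) jiffies."""
    fields = [int(x) for x in line.split()[1:]]
    # Field 3 is idle time; field 4 (iowait), when present, also counts as idle.
    idle = fields[3] + (fields[4] if len(fields) > 4 else 0)
    return idle, sum(fields)

def cpu_percent(sample_a, sample_b):
    """CPU utilization (0-100) between two samples of the /proc/stat 'cpu' line."""
    idle_a, total_a = parse_cpu_line(sample_a)
    idle_b, total_b = parse_cpu_line(sample_b)
    total_delta = total_b - total_a
    if total_delta == 0:
        return 0.0
    return 100.0 * (1 - (idle_b - idle_a) / total_delta)

# Two hypothetical samples taken about a second apart:
a = "cpu 100 0 100 800 0 0 0 0 0 0"
b = "cpu 150 0 150 900 0 0 0 0 0 0"
print(round(cpu_percent(a, b), 1))  # half the jiffies in the interval were busy
```

To use it live, read the first line of /proc/stat twice with a short pause between samples. A browser tab that keeps utilization pinned near 100% on an otherwise idle machine is a hint that something like a coinminer may be running.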


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MH7ixYUpth8/