STE WILLIAMS

Embrace the Machine & Other Goals for CISOs

Here are five ways we can become more effective for our organizations.

Depending on how you look at it, the past year was either tough for security professionals or it showed the world how complex and interesting this field really is. After all, we’re not working to identify some deterministic software bug — we’re combatting real adversaries who are constantly testing our defenses.

Like many of you, I spend a lot of time talking to customers, partners, and other security professionals, and there is clearly a lot we can do to become more effective for our organizations. Here is my take on what the security community should resolve to accomplish or overcome as we move forward.

1. Embrace the machine.
We have access to programmable technology today that is compatible with other systems, and capable of massive correlations using data from many sources — logins, proximity card data, Web behaviors, locations. We have agents on users’ machines that log information about process execution. And we have rich, intelligent sources of threat information from third-party vendors and other experts.

The ability to almost instantaneously correlate all that information means that today’s expert systems are doing things humans used to do, but much faster. Machines can calculate those correlations in near-real time, build a picture of what happened, and prioritize events for an analyst to review.

Taking it a step further, today’s machines are good enough at making correlations to know instantly that the identified activity is malicious. The challenge is to let go and allow the machine itself to loop back into firewalls, endpoint security, and applications, and actively mitigate the threat.

Embracing AI in this way can reduce response times from months to milliseconds, produce more relevant logs, and expose APIs that let the larger systems act on its findings.
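The correlation-and-prioritization loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the event records, weights, and threshold are all invented for the example), not any vendor's actual engine: signals from multiple sources are scored per user, ranked for analyst review, and anything above a threshold is flagged as eligible for automated mitigation.

```python
from collections import defaultdict

# Hypothetical event records from different sources (login system,
# endpoint agent, web proxy). Weights are invented for illustration.
events = [
    {"user": "alice", "source": "login", "signal": "impossible_travel", "weight": 40},
    {"user": "alice", "source": "endpoint", "signal": "unsigned_binary", "weight": 35},
    {"user": "bob", "source": "web", "signal": "rare_domain", "weight": 10},
]

def prioritize(events, auto_block_threshold=70):
    """Correlate per-user signals and rank users for analyst review.

    Scores at or above the threshold are candidates for automated
    mitigation (looping back into firewalls or endpoint agents);
    everything else lands in the human review queue.
    """
    scores = defaultdict(int)
    for event in events:
        scores[event["user"]] += event["weight"]
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(user, score, score >= auto_block_threshold) for user, score in ranked]
```

Here `prioritize(events)` returns `[("alice", 75, True), ("bob", 10, False)]`: the correlated signals push alice over the threshold for automatic response, while bob's lone low-weight signal waits for a human.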

2. Consume farm-to-table security data.
CISOs need to understand the difference between primary data and secondary data, and get as close to the source as possible when automating systems. The closer our data points are to the user, the less risk we run of bad modeling.

The key is to capture logs at the time of creation so, unless the event logging system itself is compromised, you’re going to get unfiltered truth. If you go back to a machine after a bad guy has cleaned up his toolset and deleted the log, the tracks may be covered.

To this end, you have to constantly evaluate log sources to see how quickly the data is logged, what the source is, whether there is redundancy — and identify the correlation points that enable a true picture of what’s happening with each machine on the network.
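The "capture at creation" idea above amounts to shipping each log record off-host the moment it is emitted, so an intruder who later scrubs the local logs cannot erase the remote copy. A minimal sketch using Python's standard logging module, with an in-memory list standing in for the remote collector (a real deployment would forward to syslog, a SIEM, or similar):

```python
import logging

class ShipAtCreationHandler(logging.Handler):
    """Forward every record the moment it is emitted, so a later cleanup
    of the local log file cannot erase the off-host copy. The 'remote'
    sink here is just a list standing in for a real collector."""
    def __init__(self, sink):
        super().__init__()
        self.sink = sink

    def emit(self, record):
        self.sink.append(self.format(record))

remote_sink = []
logger = logging.getLogger("farm-to-table")
logger.setLevel(logging.INFO)
logger.addHandler(ShipAtCreationHandler(remote_sink))

logger.info("process started: /usr/bin/nc")
# Even if an attacker now deletes the local logs, remote_sink
# still holds the primary, unfiltered record.
```

The design point is simply that the handler runs synchronously with the event: there is no window between creation and collection for an attacker to cover their tracks.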

3. Give back to the community.
On both a human and machine level, getting better at security is an iterative process. When an intrusion analyst identifies something, engineering should feed that knowledge into the correlation engine. Eventually, this process will allow you to automate what the analyst does, in a feedback loop between the machine, engineering, and the network’s defenses that makes every piece more effective.

Now it’s time to share what you’ve learned. Ideally, that information should go to a major threat intel vendor to be correlated with other data so the broader security community can benefit as well.

4. Let analysts analyze.
Information security pros and analysts are expensive. If machines can suppress a host of routine alerts, those human resources are freed to add value elsewhere, rewarding the C-suite for the investments it has made in security.

And believe it or not, this is also a retention mechanism. Why? Because now only the really hard problems are turned over to analysts, which makes them happy. This is ultimately why many of us go into the security industry in the first place. We’re dealing with human adversaries who are actively and continually adjusting their software and tactics to get into your network. It’s a battle of wits and knowledge. That part of the job is much more compelling than poring over extensive activity logs.

5. Prove your value — and the value of future investments.
CISOs are great at a lot of things, but demonstrating our value isn’t always one of them. For many years, security was neglected. Only in the last decade has it come into its own, and only in the last couple of years has it really entered the broader public consciousness. Now we need to take another step toward connecting the dots between risk and value.

When we hear that competitors, customers, or peers have experienced breaches, we should alert management. If a company similar to yours lost customer data or intellectual property, or was hacked because of software you have in common, brief management on that too. Build a case study or a presentation to demonstrate how your architecture can (or did) prevent a similar attack.

Ditto when things happen in your own network. When your defenses detect a ransomware attack, it demonstrates the value of management-approved investments. The endpoint security software you bought detected the attack within 100 milliseconds. Your AI correlation engines booted the fix back into the email filtering system. The backup system just paid for itself because you were able to recover the lost work and the copy was only three hours old. The system worked. You won.

And if you didn’t win, what mitigations could have prevented the loss? Management should know that too, so they have a clear understanding of where to invest next.

Commit to Making It Happen
So what’s the point of all this? First, you need time to close the gap. Going 200 days until detection of an intrusion isn’t acceptable when it’s possible to detect many threats in 150 milliseconds and fan out a protection to every machine in the enterprise in another 150 milliseconds.

And second, organizations can only achieve that level of effectiveness when the CISO and upper management commit to embracing automation. Yes, it takes engineering, technical knowledge, and the right gear. But in the end, it’s the commitment by the organization that makes it all work.


Mike Convertino has nearly 30 years of experience in providing enterprise-level information security, cloud-grade information systems solutions, and advanced cyber capability development.

Article source: http://www.darkreading.com/threat-intelligence/embrace-the-machine-and-other-goals-for-cisos/a/d-id/1328433?_mc=RSS_DR_EDT

Are you undermining your web security by checking on it with the wrong tools?

Your antivirus and network protection efforts may actually be undermining network security, a new paper and subsequent CERT advisory have warned.

The issue comes with the use of HTTPS interception middleboxes and network monitoring products. They are extremely common and are used to check that nothing untoward is going on.

However, the very method by which these devices skirt the encryption on network traffic, through protocols such as SSL and, more recently, TLS, opens the network up to man-in-the-middle attacks.

In the paper [PDF], titled The Security Impact of HTTPS Interception, the researchers tested out a range of the most common TLS interception middleboxes and client-side interception software and found that the vast majority of them introduced security vulnerabilities.

“While for some older clients, proxies increased connection security, these improvements were modest compared to the vulnerabilities introduced: 97 per cent of Firefox, 32 per cent of e-commerce, and 54 per cent of Cloudflare connections that were intercepted became less secure,” it warns, adding: “A large number of these severely broken connections were due to network-based middleboxes rather than client-side security software: 62 per cent of middlebox connections were less secure and an astounding 58 per cent had severe vulnerabilities enabling later interception.”

Of the 12 middleboxes the researchers tested – ranging from Checkpoint to Juniper to Sophos – just one achieved an “A” grade. Five were given “F” fail grades – meaning that they “introduce severe vulnerabilities” – and the remaining six got “C” grades. In other words, if you have a middlebox on your network and it’s not the Blue Coat ProxySG 6642, pull it out now.

Likewise, of the 20 client-side pieces of software from 12 companies, just two received an “A” grade: Avast’s AV 11 for Windows (not Mac), and Bullguard’s Internet Security 16. Ten of the 20 received “F” grades; the remaining eight, “C” grades.

How does it happen?

TLS and SSL encrypt comms between a client and server over the internet by creating an identity chain using digital certificates. A trusted third party issues the server’s certificate, which the client uses to verify that it is connected to a trusted server.

In order to work, therefore, an interception device needs to install its own trusted root certificate on client devices and present certificates signed by it – or users would constantly see warnings that their connection was not secure.

Browsers and other applications use this certificate to validate encrypted connections, but that introduces two problems: first, the client can no longer verify the web server’s own certificate; and second, more importantly, the connection between the inspection product and the web server becomes invisible to the user.

In other words, the user can only be sure that their connection to the interception product is legit, but has no idea whether the rest of the communication – to the web server, over the internet – is secure or has been compromised.

And, it turns out, many of those middleboxes and interception software suites do a poor job of security themselves. Many do not properly verify the certificate chain of the server before re-encrypting and forwarding client data. Some do a poor job forwarding certificate-chain verification errors, keeping users in the dark over a possible attack.

In other words: the effort to check that a security system is working undermines the very security it is supposed to be checking. Think of it as someone leaving your front door wide open while they check that the key fits.

What’s the solution? According to CERT, head to the website badssl.com to verify whether your inspection product is doing proper verification itself. And of course, check out the SSL paper and make sure you’re not running any of the products it flags as security fails on your network. ®
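To see concretely what the middleboxes are getting wrong, it helps to look at what proper client-side TLS verification consists of. A minimal sketch using Python's standard ssl module: a correctly configured client context verifies the certificate chain and the hostname, and an interception proxy must replicate exactly those checks on its upstream leg or the user silently loses them. (The `broken` context below illustrates what a botched middlebox effectively does; it is not code from any of the tested products.)

```python
import ssl

# A properly configured TLS client context: the certificate chain is
# verified against trusted roots AND the hostname is checked.
ctx = ssl.create_default_context()
# ctx.verify_mode is ssl.CERT_REQUIRED, ctx.check_hostname is True.

# What a broken interception product effectively does on its upstream
# connection to the real web server -- accept anything:
broken = ssl.create_default_context()
broken.check_hostname = False          # must be disabled first
broken.verify_mode = ssl.CERT_NONE    # no chain verification at all

# Wrapping a socket with `broken` will happily complete a handshake
# with a man-in-the-middle presenting any certificate whatsoever.
```

Sites such as badssl.com host deliberately broken endpoints (expired, wrong-host, self-signed certificates); a client or proxy doing its job should refuse to complete a handshake with them, which is exactly the check CERT recommends above.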

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/17/are_you_undermining_your_web_security_by_checking_on_it_with_the_wrong_tools/

Judge issues search warrant for anyone who Googled a victim’s name in an entire US town

A US judge has granted cops a search warrant to direct Google to provide personal details about anyone searching for a specific name in the town of Edina, Minnesota.

Tony Webster, who describes himself as a web engineer, public records researcher, and policy nerd, published a portion of the warrant out of concern that administrative subpoenas and search warrants are being used for what amounts to fishing expeditions.

Under the Fourth Amendment, searches and seizures must be reasonable and as such are generally limited in their scope, to balance privacy expectations. At issue is whether a warrant for the Google account data of anyone searching for a given term is unconstitutionally broad.

For Hennepin County Judge Gary Larson at least, the warrant was adequate.

According to the warrant, seen in full by The Register, the case involves bank fraud in which an unknown party used the victim’s name to wire $28,500 from Spire Credit Union to Bank of America. The credit union relied on a faxed copy of the victim’s passport to verify the transaction, but the document was faked.

The search warrant, filed by Edina Police Detective David Lindman, says that when investigators searched Google Images for the victim’s name, they found the photo used to make the fake passport – an image of someone who resembled the victim but was not the same person. This led police to believe that the person responsible searched Google for the victim’s name.

Searches of Bing and Yahoo did not produce results, the warrant says.

Reached by phone, Edina police lieutenant Timothy Olson told The Register an earlier report that investigators had sought the Google searches of everyone in Edina was “blatantly inaccurate.” Olson declined to discuss what he characterized as an active case, but said the warrant was related to a felony that had been reported and that it outlined probable cause.

The warrant asks for Google user information related to Google searches of the victim’s name, qualified by location – specifically, “the city or township of Edina, County of Hennepin, State of Minnesota.” Edina has a population of about 50,000.

Edina Police warrant seeking Google data

A copy of the order obtained by The Register. We’ve redacted the full name of the victim, and highlighted the extent of the search

The warrant seeks, “any/all user or subscriber information related to the Google searches of” four variants of the victim’s name over a five-week period, from December 1, 2016 through January 7, 2017.

It describes the information sought as including, but not limited to: “name(s), address(es), telephone number(s), dates of birth, social security numbers, email addresses, payment information, account information, IP addresses, and MAC addresses of the person(s) who requested/completed the search.”

According to the warrant, Google rejected an administrative subpoena from the court. “Though Google Inc.’s rejection of the administrative subpoena is arguable,” the warrant states, “[the Edina Police Department] is applying for this search warrant so that the investigation of this case does not stall.”

Google may not cooperate, however. The internet king has an interest in fending off overreaching governments and police to avoid becoming an on-demand data dispensary.

“We aren’t able to comment on specific cases, but we will always push back when we receive excessively broad requests for data about our users,” a Google spokesperson said in an email to The Register.

Stephanie Lacambra, EFF criminal defense staff attorney, in an email to The Register, expressed skepticism that the warrant as described will be able to survive Constitutional scrutiny.

“As a former public defender for over a decade, I can say that this kind of warrant appears both unusual, in that it was approved in the first place without first establishing as a threshold matter that the suspect perpetrator used Google to obtain the photo used in the fraudulent passport in the first place, and overbroad, in that it calls for a dragnet production of the private information of all querents who searched a particular name within a five week time frame without any further limiting factors,” she said. ®

Editor’s note: After obtaining a full copy of the warrant, we are happy to update this story to clarify that police are seeking from Google information on people within Edina, Minnesota.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/16/police_google_search_warrant/

News in brief: Yahoo ‘was spear-phished’; McDonald’s Twitter hijacked; Samsung moots face recognition for payments

Your daily round-up of some of the other stories in the news

Yahoo hack ‘probably the result of spear-phishing’

Spear-phishing emails to a “semi-privileged” Yahoo employee were probably the Achilles heel that led to the exposure of half a billion users’ details, the FBI told reporters in a follow-up briefing to the unsealing of the indictment against four men alleged to be behind the 2014 attack.

Malcolm Palmore of the FBI told Ars Technica that spear-phishing “was the likely avenue of infiltration” that led to the gang stealing the credentials of an “unsuspecting employee”, allowing them access to Yahoo’s internal networks.

Once inside the network, the four hackers discovered a tool that meant they could forge cookies for user accounts so that they could access them without changing the passwords.

The piece from Ars is a fascinating read, reporting that the hackers went after prominent Russian journalists, employees of a Russian security company and Russian and US government officials. It notes that the FBI believes the hack possibly goes back to the Kremlin, though agent John Bennett told reporters that they were unsure how far up the Russian chain the hack went.

McDonald’s Twitter account ‘compromised’

Just a day after many Twitter users’ accounts were compromised by hackers who exploited the access of a third-party app to post ugly swastika-splattered tweets in support of Turkish president Recep Erdoğan, the official Twitter account of the McDonald’s burger chain was apparently hijacked and used to post an abusive tweet to Donald Trump, the US president.

The tweet was deleted not long after its appearance, and McDonald’s subsequently tweeted that Twitter had notified them that the account had been “compromised”, and added: “We deleted the tweet, secured our account and are now investigating this.”

Samsung to use face recognition for payments

Samsung’s next flagship mobile phone, the S8, will apparently feature facial-recognition technology to authenticate mobile payments via the South Korean manufacturer’s own Samsung Pay app, Bloomberg reported on Thursday.

It’s increasingly difficult for mobile phones to stand out in a crowded marketplace in which the devices are in many ways largely the same, and observers expect security features to be one way that device manufacturers seek to differentiate their phones.

Samsung is also up against the reputation-shredding experience of having to withdraw its previous flagship device, the Note 7, after only a matter of weeks, when the device’s batteries turned out to be behind the phones bursting into flames.

At Naked Security we like close attention to improving security, so we’ll be keeping an eye on how Samsung’s plans to add facial recognition to its payment services pan out. Samsung is said to be working with banks to help them roll out facial recognition systems, too.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iNj2C65zxOc/

Barrister fined after idiot husband slings unencrypted client data onto the internet

A barrister has been fined by the UK Information Commissioner’s Office after client information was accidentally uploaded to the internet.

According to the monetary penalty notice [PDF] issued against the senior lawyer, who is unnamed, she was stung for only £1,000. The notice was published today.

We’re told information belonging to up to 250 people, including vulnerable adults and children, was uploaded to the internet. The cockup occurred when her husband backed the documents up using an online file directory service while he was updating software on the couple’s home computer.

Andy Lee, a senior associate at Brandsmiths, told The Register that according to the Bar Standards Board Code of Conduct [PDF] a barrister’s sixth core duty is to “keep the affairs of each client confidential” while the tenth core duty is to “take reasonable steps to manage your practice, or carry out your role within your practice, competently and in such a way as to achieve compliance with your legal and regulatory obligations.”

According to the ICO, some 725 unencrypted documents — which were created and stored on the computer — were temporarily uploaded to an internet directory as a back-up during the software upgrade.

They were apparently “visible to an internet search engine and some of the documents could be easily accessed through a simple search”, despite six of the files containing confidential and highly sensitive information relating to people who were involved in proceedings in the Court of Protection and the Family Court.

Steve Eckersley, head of enforcement at the ICO, said today: “People put their trust in lawyers to look after their data – that trust is hard won and easily lost. This barrister, for no good reason, overlooked her responsibility to protect her clients’ confidential and highly sensitive information.”

“It is hard to imagine the distress this could have caused to the people involved – even if the worst never happened, this barrister exposed her clients to unnecessary worry and upset,” Eckersley concluded.

Lee told The Register that considering the legal responsibilities of barristers, in addition to the data protection issues which the ICO handled, it was fair to say that “by reason of logic security measures must be taken and must be reasonable.”

As to what counts as appropriate security measures, there is no real hard-and-fast guidance, Lee said, but one can answer the question by looking at how the breach occurred: whether it resulted from there being no security measures in place (in which case the answer is relatively clear) or, for example, from inadequate measures, which may be a little more difficult to judge. If client information is stored in the cloud, for instance, the very least one would expect is that access to the cloud server is secure and password protected, and/or that the documents are encrypted or password protected.

The Bar Council’s advice on information security stresses that the onus is on barristers to “protect the confidentiality of each client’s affairs, except for such disclosures as are required or permitted by law or to which your client gives informed consent” and encourages them to encrypt everything.

Further advice on reporting security breaches in such incidents is available to barristers too, although neither advisory is “guidance” in the official sense.

The Bar Standards Board, which regulates barristers in England and Wales, told The Register that it does not comment as to whether or not individual barristers are the subject of a complaint or a disciplinary investigation.

If complaints are received they are usually treated confidentially unless they result in a listing for a Disciplinary Tribunal hearing. Such listings are published on the Bar Tribunals Adjudication Service website and hearings are held in public. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/16/barrister_fined_over_data_breach/

UK’s National Cyber Security Centre bungles simple Twitter Rickroll

The National Cyber Security Centre has ineptly tried – and failed – to Rickroll someone taking the piss out of them on Twitter.

A question and answer session organised by the NCSC on Twitter featured, for reasons nobody understands, “sociotechnical security experts” responding to the great unwashed’s inane queries on the controversy-stricken microblabbing website.

“What the fuck is a sociotechnical security expert and where do I get this made up qualification?” asked one user, who goes by the in-joke handle of Bobby ‘Tables.

“Great question!” replied the NCSC. “Here’s a useful link”.

As the picture below shows, this did not go to plan.

The NCSC's Rickroll fail

How not to rickroll someone on Twitter

For anyone who’s been living under a rock (or waiting for a ZX Spectrum Vega+), Rickrolling is when you trick somebody into clicking on a link to Rick Astley’s Never Gonna Give You Up. The key part is that it has to be a trick. Revealing the link’s destination before your target clicks it is an epic fail, as da yoof say.

Evidently the NCSC was trying to play a harmless prank. That’s not a problem – but if the monolithic state infosec organisation’s “sociotechnical security experts” don’t understand how straightforward social media sites work, what hope for national web security when they’re let loose on real technology? ®

Tablenote

Little Bobby ‘Tables has a long and entirely dishonourable history amongst those who break databases for business or pleasure. Some joker registered a firm with Companies House back at the end of last year with this name:

; DROP TABLE "COMPANIES";-- Ltd
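The joke, of course, is that a careless system which splices that company name directly into an SQL string will execute the embedded `DROP TABLE`. The standard defense is a parameterized query, which passes the name as inert data rather than SQL. A minimal sketch using Python's built-in sqlite3 module (the `companies` table is invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE companies (name TEXT)")

# The payload registered at Companies House:
payload = '; DROP TABLE "COMPANIES";-- Ltd'

# Unsafe would be: conn.executescript('INSERT ... VALUES (' + payload + ')')
# Safe: the ? placeholder binds the payload as a plain value, so the
# embedded DROP TABLE is never interpreted as SQL.
conn.execute("INSERT INTO companies (name) VALUES (?)", (payload,))

row = conn.execute("SELECT name FROM companies").fetchone()
# The table survives, and the payload is stored verbatim as a string.
```

After the insert, `row[0]` is the payload string itself and the table still exists, which is precisely the point of parameter binding.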

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/16/ncsc_twitter_rickroll_fail/

Canada’s privacy watchdog probes US border phone seizures

The Canadian privacy commissioner has opened an investigation into the Canadian border police and a recent uptick in phone seizures.

The commissioner has received a number of complaints from Canadian citizens about their phones being taken from them at the US border and wants to know exactly what the border police are doing with those phones.

The investigation follows a request last week by the commissioner to the Canadian government to press the US government to add Canada to a list of countries that are exempted from US president Donald Trump’s executive order on “enhancing public safety” – an order that strips privacy rights from all non-US citizens.

It also comes following a slew of recent reports of customs officers in both the United States and Canada taking people’s phones and requiring people to unlock them if they wish to cross the border. The phones are then taken away and the owners are not informed why they were singled out or what the customs officers did with their phones.

What are they up to?

The concern is that the border police are using a legal grey area in which they are given extraordinary powers to go beyond what is necessary, such as cloning phones and keeping copies.

Almost nothing is known about what the border police do with private data taken from electronic devices: how much data they take; where they store it; how long they keep it; who they share it with; whether anything is secretly installed on the phone; and so on.

In theory, however, this personal and sensitive information is only supposed to be collected and used for assessing someone’s access to the country, and any sharing of data can only happen legally if there is a criminal case, a concern over illegal immigration, or a national security issue. How the Canada Border Services Agency (CBSA) chooses to define those exceptions is likely to be a key part of the investigation.

A spokeswoman for the privacy commissioner confirmed to the National Post that data retention was likely to play a part.

Currently, we are not aware of a similar investigation being held in the United States despite an alarming number of individuals being held and ordered to hand over and unlock their phones in recent weeks. But it may only be a matter of time, with lawyers currently tied up challenging Trump’s travel ban. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/16/canadian_privacy_commissioner_to_investigate_border_phone_seizures/

Intel touts bug bounties to hardware hackers

Intel has launched its first bug bounty program, offering rewards of up to $30,000.

The chip maker has partnered with specialist bug bounty outfit HackerOne to create a scheme that aims to encourage hackers to hunt for flaws in Intel’s hardware, firmware and software. Intel will pay up to $30,000 for critical hardware vulnerabilities (less for firmware or software holes). The more severe the impact of the vulnerability and the harder it is to mitigate, the bigger the payout.

Find bugs, bag rewards

Bug bounties have become a familiar part of the infosec ecosystem over recent years, with software vendors such as Google and Microsoft leading the charge. Over time, a greater range of vendors have joined in.

Intel Security (McAfee) products are not in-scope of the Intel bug bounty program. Flaws in third-party products and open-source code are also beyond the compass of the scheme. Intel’s web infrastructure has also been excluded.

More details of the program, announced at the CanSecWest security conference, can be found in a blog post by HackerOne here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/16/intel_bug_bounty/


Ubiquiti network gear can be ‘hijacked by an evil URL’ – thanks to its 20-year-old PHP build

Security researchers have gone public with details of an exploitable flaw in Ubiquiti’s wireless networking gear – after the manufacturer allegedly failed to release firmware patches.

Austrian-based bods at SEC Consult Vulnerability Lab found the programming cockup in November and contacted Ubiquiti – based in San Jose, California – via its HackerOne-hosted bug bounty program. Ubiquiti first denied this was a new bug, then accepted it, then stalled issuing a patch, we’re told. After repeated warnings, SEC has now shed light on the security shortcomings.

Essentially, if you can trick someone using a Ubiquiti gateway or router into clicking on a malicious link, or embed the URL in a webpage they visit, you can inject commands into the vulnerable device. The networking kit is administered through a web interface that has zero CSRF protection, which means attackers can perform actions as logged-in users.

A hacker can exploit this blunder to open a reverse shell to a Ubiquiti router and gain root access – yes, the built-in web server runs as root. SEC claims that once inside, the attacker can then take over the entire network. And you can thank a very outdated version of PHP included with the software, we’re told.

“A command injection vulnerability was found in ‘pingtest_action.cgi.’ This script is vulnerable since it is possible to inject a value of a variable. One of the reasons for this behavior is the used PHP version (PHP/FI 2.0.1 from 1997),” SEC’s advisory today states.

“The vulnerability can be exploited by luring an attacked user to click on a crafted link or just surf on a malicious website. The whole attack can be performed via a single GET-request and is very simple since there is no CSRF protection.”
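The missing CSRF protection is the crux: because the device honors any GET request carrying a logged-in user's session, a link or hidden image on an attacker's page fires with the victim's credentials. The common fix is a per-session token that an attacker cannot forge. A minimal, generic sketch (the function names and session ID are invented for illustration; this is not Ubiquiti's code) using an HMAC over the session ID:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side key, never sent to clients

def csrf_token(session_id: str) -> str:
    """Derive a token bound to the session; the server embeds it in every
    admin form, and browsers on other sites have no way to read it."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid_request(session_id: str, submitted_token: str) -> bool:
    """Honor a state-changing request only when the submitted token matches.
    A link crafted by an attacker cannot include the right token, so the
    cross-site request is rejected even though the session cookie is sent."""
    expected = csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)

sid = "session-abc123"      # hypothetical logged-in admin session
good = csrf_token(sid)      # token the real admin UI would embed
```

With this check in place, `is_valid_request(sid, good)` succeeds for the genuine form submission while a forged request with a guessed token fails. (State-changing actions should also require POST rather than GET, which on its own would have blunted the single-GET attack SEC describes.)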

Here’s a video of an example exploitation:

Youtube Video

The SEC team tested the attack against four Ubiquiti devices, and believes another 38 models are similarly vulnerable. All the affected equipment, according to SEC, is listed in the above advisory. Proof-of-concept exploits were not published as there is still no patch available for the insecure firmware.

Ubiquiti had no comment at time of publication.

This isn’t the first time Ubiquiti customers have been left with an unfixed security cockup by their supplier. A previous flaw was finally patched by a third party back in 2015 after the company failed to fix it in time, despite proof of concept code being in wide circulation.

Then again, security doesn’t seem to be Ubiquiti’s strong point. The firm lost $46.7m in 2015 when it fell prey to an invoice scammer and sent the money – most of which it couldn’t recover – to banks in Asia. Ubiquiti’s chief accounting officer resigned shortly afterwards. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/16/ubiquiti_networking_php_hole/