STE WILLIAMS

Lauri Love extradition appeal judges reserve decision

The High Court has reserved judgement on the extradition of accused hacker Lauri Love after hearing this morning that his appeal should be granted because conditions in the US prisons he may be sent to are “unconscionable”.

“For this particular appellant, going to MDC [the Metropolitan Detention Centre in Brooklyn, New York], there is a serious risk it will result in a serious health deterioration, or death,” Love’s barrister, Edward Fitzgerald QC, told the Lord Chief Justice of England and Wales, Lord Burnett of Maldon, and Mr Justice Ouseley.

Love sat in the same spot in the well of Court 4 as he had done yesterday, flanked once again by his parents and girlfriend and wearing his sombre, tieless suit. He was noticeably less tense than he was at the previous day’s hearing. The light in the courtroom was almost natural as the low winter sun filtered in through the skylight.

Key to Love’s appeal is the forum bar, formally known as section 83A of the Extradition Act 2003. This was introduced after the Gary McKinnon case, where an accused British hacker was eventually not extradited from the UK to the US.

This morning Fitzgerald drew heavily on the case of Haroon Aswat, an alleged jihadist who was extradited from the UK to America.

Aswat, a paranoid schizophrenic, was extradited in spite of pleading to the High Court that sending him abroad for trial would breach his rights under Article 3 of the Human Rights Act, namely that he would be subject to inhuman or degrading treatment.

Love and girlfriend Sylvia Mann seen leaving the Royal Courts of Justice. Pic: Richard Priday for The Register

Advancing a similar argument, Fitzgerald told the court that the district judge who previously approved Love’s extradition “failed to address the point that the mere fact of extradition to the US away from home, our style of environment, would almost inevitably create a serious deterioration in his health”.

This, said Fitzgerald, continuing the previous day’s theme, would result in Love being placed at an increased risk of suicide, particularly if he was separated from the support of his family.

“The district judge misdirected herself that the risk to his health was conjecture. He is fit now but anything could happen,” thundered Fitzgerald, causing both judges to momentarily pause in their methodical note-taking.

“There’s a high risk of suicide in MCC [New York’s Metropolitan Correction Centre] and MDC,” he added. “We submit he will be exposed to the ‘unconscionable’ conditions that the women’s judges’ report refers to,” said Love’s barrister, citing a report on conditions in those two prisons produced by American judges who inspected the women’s half of each jail.

“There is a substantial ground for real risk that he will be subjected to Article 3 inhumanity” – Edward Fitzgerald QC, Lauri Love’s barrister

Peter Caldwell, barrister for the Crown Prosecution Service and appearing on behalf of the US government, briefly responded to some points of law made by Fitzgerald, before the judges shuffled their papers and sat up. Earlier he had told the court that District Judge Tempia’s decision to extradite Love was “not wrong”.

The Lord Chief Justice announced he was reserving both verdict and full judgement to a later date and said he would “let the parties know in the usual way” when judgement was ready to be handed down. It is expected that this will take place in early 2018.

Outside court, Love’s father, the Reverend Alexander Love, said: “To be born in this country is to win the lottery of life. We trust in the justice system, and in God.” Love’s solicitor, Kevin Kendridge, added: “We are happy with how things went, and we trust the judiciary.”

Love himself said he was “glad” that people had taken an interest in his case. ®

The view from the public gallery

The public gallery in Court 4 of the Royal Courts of Justice this morning was barely a third full. But those present were all listening keenly to the final arguments.

Some were taking notes, typing on their phones or scribbling in notepads. Others were quietly discussing proceedings. The most interesting person there, however, was a man I shall call “Sign Guy”. He arrived a few minutes late, sporting a grey suit, dishevelled hair and a notebook. It was clearly not for reporting, though, as his pen was a thick board marker rather than the standard-issue biro.

After scribbling for a moment, he propped up his notebook on the edge of the gallery, which bore the message “Trial at Home”. When a spectator behind him leant over and informed him that this was not a wise course of action, Sign Guy wrote a new message, unseen by your reporter, which elicited a few giggles from those sat nearby.

An usher then walked over to Sign Guy and said a few words to him. His reply was quite gruff, like how one might speak to someone loudly rustling their popcorn in the cinema. The usher then fetched a member of security for backup, only to find Sign Guy had decided to have a lie down on the bench.

After some time, he sat up again and wrote Love’s name on his knuckles. The security guard had by this point moved to sit beside Sign Guy, but he appeared unfazed as he continued to work on his body art.

Sign Guy finally ran out of patience when Mr Fitzgerald finished speaking. He sidled his way past the security guard and into the corridor beyond.

Richard Priday reports

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/30/lauri_love_extradition_appeal_judgement_reserved/

The Good News about Breaches: It Wasn’t You this Time

Somewhere in every application there is a vulnerability waiting to be exploited. You can attack the problem by having the right mindset and answering two simple questions.

This year, it seems like you can hardly turn around without bumping into some commentary on a breach. There’s expert analysis on every blog. The trade press eats up controversy stirred up by responses. Twitter trends. My inbox fills up with quotes and offers to hear more about the breach.

It’s all bad news, so it seems. But every cloud has its silver lining, and in the face of most breaches, the good news is pretty simple: It wasn’t you.

The reality is that a significant number of high-profile (and target-rich) organizations rely on the same vulnerable software that caught whoever the latest victim was with its security pants down. Many more outside that list probably do, too. You, for example.

But even if you have already inventoried your applications and breathed a sigh of relief at not finding anything potentially dangerous lurking under the surface, you aren’t out of the proverbial woods. There are tons of research reports about vulnerabilities and, no matter which one you look at, there is always a gap between the time a vulnerability is disclosed and when an organization applies the appropriate patches.

Some of those vulnerable systems are likely to be running in your data center right now.

Go ahead, I’ll wait while you check.

While you’re checking, you should also take stock of your application protection strategy. You see, at the heart of many of these vulnerabilities lies the truth that exploitation could have been prevented by a comprehensive application protection strategy. We can argue all day about the existence of the vulnerability and the security practices employed by open-source software, but the reality is that a whole lot of software — third-party, open-source, closed-source, custom developed — is vulnerable. And we’ve seen many of those vulnerabilities lie dormant in software for years before being exploited. Shellshock, anyone?

In fact, just as you ought to assume you will be subjected to a DDoS attack, you ought to operate on the assumption that somewhere in an application there is a vulnerability waiting to be exploited. With that in mind, you can attack the problem by having the right mindset and answering two questions:

  1. How do I prevent the vulnerability from being exploited?
  2. If it is exploited, how do I detect it?

Prevention: Proactive Security
The answer to question one is to embrace a proactive approach. Patching is table stakes. It’s probably past time you audited your policies and processes with respect to patching, so if you haven’t done that lately, do it now. In addition, you should also be putting into place protections that help you strictly adhere to Security Rule Zero:

THOU SHALT NOT TRUST USER INPUT. EVER.

We’ve seen too many breaches that were ultimately triggered by unfiltered, unsanitized user input. In other words, the developers simply pass user input to frameworks, libraries, and even other code of their own without ever giving it a second glance. That’s how SQL injection happens. That’s how cross-site scripting happens. That’s how breaches happen.
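A minimal sketch of the difference, using Python’s built-in sqlite3 module (the schema and credentials are illustrative, not from any real system):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: user input is pasted straight into the SQL string.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: the driver treats input as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# Classic injection payload: the SQL comment cuts off the password check.
payload = "alice' --"
print(login_unsafe(payload, "wrong"))  # True: logged in with no password
print(login_safe(payload, "wrong"))    # False: payload treated as a literal name
```

The unsafe version passes user input to the database engine without giving it a second glance; the safe version is the same query with the trust removed.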

The number of breaches that can be traced to the violation of Security Rule Zero boggles my mind. To prevent it, you need to seriously focus on secure development practices and—because we know that can fail, too—employ a Web application firewall (WAF) to assist in sanitizing data. Here’s a helpful tip from Lori: the WAF has to be active. Learning mode is great for, well, learning the app and fine-tuning policies. But if you don’t actually let the thing do its job, you’re not mitigating risk.  You’re just helping to encourage a false sense of security.

Be proactive. Adopt Security Rule Zero and enforce it everywhere.

Detection: Reactive Security
Question two (If it is exploited, how do I detect it?) assumes that even if you faithfully follow Security Rule Zero, still somehow an attacker made it through and has managed to obtain sensitive data. That’s data like account numbers and personally identifiable information (PII) such as Social Security numbers and login credentials.

There is still time to stop the breach from happening because a breach doesn’t actually happen until that data leaves your network. Inspecting outbound data is a critical component of a comprehensive app protection strategy: scan outbound responses for sensitive data before they leave. That’s the job of data leak prevention solutions. A WAF can do it. A programmable proxy can do it. And I’m sure there are other solutions out there that lie in the data path and can detect the presence of sensitive data indicative of a breach in progress.

Inspecting outbound responses for size of content can also be a boon in detecting a successful exploit. If you know the response to a given URL is supposed to return exactly one record with approximately 4K of data, then a content response size of 64K ought to raise an alarm somewhere. Again, a WAF or programmable proxy can detect this kind of anomalous behavior and more importantly, do something about it.
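Both checks can be sketched in a few lines. This is a hypothetical policy, not any particular WAF’s feature set; the SSN pattern and the 4x size tolerance are illustrative values:

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US Social Security number shape

def inspect_response(body: str, expected_size: int, tolerance: float = 4.0):
    """Flag an outbound response that looks like a breach in progress.

    Hypothetical policy: alert on SSN-shaped strings, or on a body more
    than `tolerance` times the expected size for this URL.
    """
    alerts = []
    if SSN_RE.search(body):
        alerts.append("sensitive-data-pattern")
    if len(body) > expected_size * tolerance:
        alerts.append("anomalous-response-size")
    return alerts

print(inspect_response("id=42 name=Bob", expected_size=4096))   # []
print(inspect_response("ssn=078-05-1120", expected_size=4096))  # ['sensitive-data-pattern']
print(inspect_response("x" * 65536, expected_size=4096))        # ['anomalous-response-size']
```

A real deployment would sit in the response path of a proxy or WAF and act on the alerts (block, log, page someone) rather than just report them.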

The good news is (hopefully) you weren’t breached, and you have time to do something about the existential possibility that you will be at some point. Not being a “high profile” site doesn’t protect you anymore. Automation and the rise of botnets means attackers are getting more efficient in seeking out and exploiting more and more organizations because, these days, it costs them almost nothing to scan and attack.

You absolutely should patch everything you can. But just as importantly, you should have a serious and actionable application protection strategy that covers both directions — in and out. Security Rule Zero is not an option anymore. 

Be proactive. Be reactive. Most of all, be active and involved in securing your apps and the platforms they rely on.

Get the latest application threat intelligence from F5 Labs.

Lori MacVittie is the principal technical evangelist for cloud computing, cloud and application security, and application delivery and is responsible for education and evangelism across F5’s entire product suite. MacVittie has extensive development and technical architecture …

Article source: https://www.darkreading.com/partner-perspectives/f5/the-good-news-about-breaches-it-wasnt-you-this-time/a/d-id/1330378?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Qualys Buys NetWatcher Assets for Cloud-based Threat Intel

The cloud security company plans to add threat detection, incident response, and compliance management to its platform.

Qualys has agreed to acquire certain assets of NetWatcher for an undisclosed amount of cash, the company announced Nov. 29. It plans to integrate NetWatcher’s technology into the Qualys Cloud Platform.

NetWatcher offers a system built on open-source components, which combine asset discovery, vulnerability management, intrusion detection, behavioral monitoring, SIEM, log management, compliance reporting, and continuous threat intel. Over the next 12 months, Qualys plans to leverage these tools in its cloud security system.

The NetWatcher team will join Qualys as part of the acquisition. CEO Scott Suhy will become vice president of strategic alliances and business development; CTO and founder Kenneth Shelton will become vice president of engineering and real-time threat correlation platform.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/qualys-buys-netwatcher-assets-for-cloud-based-threat-intel/d/d-id/1330523?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

5 Free or Low-Cost Security Tools for Defenders

Not all security tools are pricey.

INSECURITY CONFERENCE 2017 – Washington, DC – Defending the enterprise is getting increasingly complex: cloud, mobile, and IoT services are expanding the potential attack surface, yet IT security budgets remain constrained to address new threats, Arun DeSouza, CISO and privacy officer with Nexteer Automotive, said in a presentation here today.

But a number of free, or low-cost, tools exist to help security teams deliver a strong defense and remain within their budget, according to DeSouza, who offered tips on this topic. “We are in the fourth industrial revolution and there is great opportunity, but also more risk,” DeSouza said.

Here are five free, or low-cost, security tools to extend a security budget:

  • BloodHound, a new open source pen test tool for Microsoft’s Active Directory environment. “This is a cool tool that identifies attack paths, so you can see how to shut them down,” DeSouza said.
  • Nikto, an open source Web server scanner. “It’s a powerful tool that can scan over 65,000 known vulnerabilities,” DeSouza said.
  • Reputation Monitor, a free service from AlienVault that conducts threat analysis. “If your public IPs and domains are compromised, it will alert you,” DeSouza said.
  • Ghostery, a free browser extension for private web browsing. “It’s another way to sanitize data,” DeSouza noted.
  • Google Authenticator, a two-step verification code generator. “Companies that don’t have an identity management framework in place can get a higher level of protection with this,” DeSouza said.
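Google Authenticator generates those two-step codes using the TOTP algorithm from RFC 6238. As a sketch of how a six-digit code is derived (standard-library Python only, using the test key published in the RFC):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    # RFC 6238: HMAC-SHA1 over the 30-second time counter, then
    # dynamic truncation down to the last `digits` decimal digits.
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: this key at T=59 yields 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because both sides derive the code from a shared secret and the current time, the server can verify a login without any network round-trip to the phone.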


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/mobile/5-free-or-low-cost-security-tools-for-defenders/d/d-id/1330520?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hey girl, what’s that behind your Windows task bar? Looks like a hidden crypto-miner…

Miscreants have found a way to continue running cryptocurrency-crafting JavaScript on Windows PCs even after netizens browse away from the webpage hosting the code.

Researcher Jerome Segura of Malwarebytes said on Wednesday his team discovered scumbags had written some custom code to keep Coin Hive‘s freely available in-browser Monero miner running even after someone closes the tab or surfs to another site: it’s a low-tech trick that web ads have employed for years – yes, it’s a pop-under window.

The idea, said Segura, is that when you visit a site, a small hard-to-spot window is opened up. That pop-under window runs the actual mining code, rather than the main page, and is tucked under the Windows task bar.

Because of this, the site owner – or hackers who injected the code – can continue to use the victim’s CPU to mine alt coins even after they have navigated away from the page or closed the main browser window entirely.

“The trick is that although the visible browser windows are closed, there is a hidden one that remains opened,” Segura explained in a blog post.

Crypto-jackers enlist Google Tag Manager to smuggle alt-coin miners

READ MORE

“This is due to a pop-under which is sized to fit right under the taskbar and hides behind the clock. The hidden window’s coordinates will vary based on each user’s screen resolution, but follow this rule.”
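The geometry is simple enough to sketch. The real miner does this in JavaScript with window.open; the sketch below just models the arithmetic in Python, and the 100x40 window size is an assumption for illustration, not Malwarebytes’ published figures:

```python
def popunder_position(screen_w: int, screen_h: int, win_w: int = 100, win_h: int = 40):
    # Park a tiny window flush with the bottom-right corner of the screen,
    # so it sits behind the clock, tucked under the Windows taskbar.
    # Window dimensions are assumed values, chosen for illustration.
    return screen_w - win_w, screen_h - win_h

print(popunder_position(1920, 1080))  # (1820, 1040) on a 1080p display
```

Whatever the user’s resolution, the offset from the bottom-right corner stays constant, which is why the window reliably ends up hidden behind the taskbar.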

Malwarebytes says that in addition to using the hidden pop-under windows, the miner also tries to skirt detection by limiting its CPU use so as to avoid slowing down the machine enough to alert users. The sites hosting the miners, via embedded ads, are also designed to avoid ad-blocking tools, making it even harder to stop the illicit crypto-mining.

“Unscrupulous website owners and miscreants alike will no doubt continue to seek ways to deliver drive-by mining, and users will try to fight back by downloading more adblockers, extensions, and other tools to protect themselves,” wrote Segura.

“If malvertising wasn’t bad enough as is, now it has a new weapon that works on all platforms and browsers.”

There are, however, ways to catch the covert coin contraptions. Malwarebytes notes that the Windows Task Manager will show the activity as a browser process that can be ended, and the Windows taskbar will show that the browser is still running after all windows have been closed.

Once the browser application itself has been fully closed, the crypto-mining session will cease. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/30/crypto_mining_persistent/

First US Federal CISO Shares Security Lessons Learned

Greg Touhill’s advice for security leaders includes knowing the value of information, hardening their workforce, and prioritizing security by design.

INSECURITY CONFERENCE – Washington, DC – Greg Touhill encouraged his audience of security leaders, whom he dubbed “the cyber neighborhood watch,” to swap war stories and lessons learned during his keynote at Dark Reading’s inaugural INSecurity conference, held this week in Washington, DC.

As the first CISO of the US federal government, and with an extensive background in government cybersecurity and the military, Touhill has several stories of his own. Drawing from years of experience, the Cyxtera president shared his own lessons learned to kick off an event created to bring cyber defenders together so they can discuss problems and challenges.

One of the biggest problems is explaining to the business how cybersecurity is a risk management issue. Most security pros struggle to communicate with business leaders, who “speak a different language than we do,” he explained.

“I keep on hearing executives talk about cybersecurity being a technology problem, and they keep pouring money into buying new stuff,” said Touhill as an example. The enterprise instinct to buy new protective tools often distracts them from the core problem of managing risk.

One of Touhill’s lessons was to avoid chasing fads. Sometimes new doesn’t mean improved, he noted. Security leaders need to keep tech current, not buy every new tool. They should do their homework and base their product decisions on both risk potential and business value.

Knowing the value of corporate information is a key part of evaluating and managing risk. Business leaders know their data exists but can’t explain what it means or how much it’s worth. It’s tough to know where to prioritize security if you don’t know which data is most valuable.

“Information is one of the most valuable assets any business, any operation has,” Touhill emphasized. “Look at your infrastructure, look at how you architect. Know the value of your information and don’t try to defend everything. Defend what you need to defend.”

Security leaders must also prioritize security by design, he continued, using the transition to the cloud as an example. “A lot of folks jumped into the cloud without knowing about the tall, craggy mountains on the other side of that cloud,” he pointed out.

Touhill’s lessons extended to security employees. “Humans fail all the time,” he said, but you can bring down the risk of catastrophic events by training people and making sure they’re appropriately resourced. Hardening the workforce is “critically important.”

“People are your weakest link but also your greatest assets,” Touhill continued. It’s up to security leaders to make the business case for additional training, which is necessary but expensive. The need for education will never go away. Team members, and colleagues across the enterprise, should be taught to “think like a hacker” and “be very suspicious.”

The sentiment extended to another lesson: have a zero-trust model. Most security pros haven’t taken a full inventory of all the trust relationships they have, he argued, encouraging the audience to look at where their trust lies and “be skeptical.” Knowing and remembering the value of information will be critical as a new wave of professionals enters the workforce.

“We’re raising a generation of folks who are freely surrendering their privacy – your privacy – by giving up information and not recognizing the value of it,” Touhill said.

Other lessons touched on security fundamentals. He urged the audience to identify where they aren’t mastering basics or being consistent. “How many times has someone gotten breached and left the backdoor open?” he asked, relating his advice back to thinking like a hacker.

Attackers will go for the underbelly, Touhill continued. They will check every door and window to make sure they are locked. And if they’re not, they will take advantage of it.

Ultimately, along with protective measures and strategies, leaders must also “be prepared for a really bad day,” he concluded. Security teams identify risk and threats, protect against them, and often build response plans but rarely exercise them to practice for a real incident. Those who need to practice the most often don’t.

In the best organizations, everyone participates in cyber exercises and drills – even the boards and the CISOs. “A bad day is going to come for each and every one of us,” Touhill emphasized.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/attacks-breaches/first-us-federal-ciso-shares-security-lessons-learned/d/d-id/1330519?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple closes that big root hole – “Install this update as soon as possible”

Yesterday we wrote about a publicly-disclosed problem in Apple’s macOS 10.13, better known as High Sierra.

For reasons that aren’t yet clear, you could trick macOS into letting you authenticate as root – the all-powerful system administration account that you aren’t even supposed to use – with a password of…

…nothing. Blank. Empty. Just press [Enter].

Even though you couldn’t exploit this hole remotely, at least by default, it was an astonishing lapse by Apple.

At first, the Twitter user who publicised this flaw was criticised by some people, who considered his tweet to be “irresponsible disclosure”, because he didn’t report the bug to Apple privately so that the hole could be closed first and only disclosed once a patch was ready.

But others soon realised that this was not a brand new discovery – indeed, it had been discussed more than two weeks ago on Apple’s own support forum.

Ironically, the support forum thread, a community discussion that seems to have gone unnoticed by Apple itself, was about losing administrator access after updating to High Sierra – and this very bug was presented as a handy hack to restore things to normal.

Apple’s official policy of saying nothing about security issues until a fix is out meant that there wasn’t much to go on once the news broke, except to assume that Apple’s programmers were frantically coding up a fix…

…and, fortunately, that turns out to have been true.

Apple just published HT208315, entitled Security Update 2017-001, patching this very hole.

There isn’t anything in the way of detail in the security bulletin, just a deadpan remark that says:

Description: A logic error existed in the validation of credentials. This was addressed with improved credential validation.

Some logic error! Some improvement!

This is the first time we’ve seen the App Store tagging an update as bluntly as this:

Install this update as soon as possible.

No by your leave or if you please – just a simple and unambiguous imperative: install this update.

We agree, and while we’re about it, we want to say, “Well done to Apple for acting quickly.”

Maybe the “irresponsible disclosure” served its purpose after all?

Note. To get the update or to check if it’s already installed, go to the Apple Menu (top left hand corner of the screen) and choose About This Mac, press the [Software Update...] button and then click on the Updates icon on the top of the App Store window that appears.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mZhgDbxcN3s/

As Apple fixes macOS root password hole, here’s what went wrong

Code dive Apple has emitted an emergency software patch to address the trivial-to-exploit vulnerability in macOS High Sierra, version 10.13.1, that allowed miscreants to log into Macs as administrators without passwords and let any app gain root privileges.

The Cupertino iPhone giant kicked out the fix, Security update 2017-001, today after word of the bug and methods to exploit it ran wild over the internet. It was discussed on Apple’s developer support forums two weeks ago, and hit Twitter on Tuesday.

The patch addresses a flaw in its operating system that allows anyone sitting at a Mac to gain administrator access by entering “root” as the username and leaving the password box blank in authentication prompts. This works when altering system settings, logging into the machine, and accessing it remotely via VNC, RDP, screen sharing, and so on. It can also be used to log into system accounts, such as _uucp, and via the command line, which is useful for malware seeking to gain superuser privileges.

If you’re running High Sierra, you’re urged to install the update as soon as possible.

“An attacker may be able to bypass administrator authentication without supplying the administrator’s password,” read Apple’s description of the flaw. “A logic error existed in the validation of credentials. This was addressed with improved credential validation.”

Now, let’s take a look at this so-called #IAmRoot flaw, and how it affected High Sierra. The problem starts with the fact that the powerful root account is disabled by default. Essentially, it appears to be a fumble in some internal error code handling, leading to the enabling of root with a blank password.

When the OS tries to authenticate a user, in this case root, the security daemon opendirectoryd calls an internal function called odm_RecordVerifyPassword. This attempts to retrieve the shadow hash password for the account to check against the supplied password. Since root is disabled, and has no shadow hash entry, the subroutine returns a fail code. So far, so good.

Seeing as that shadow hash lookup failed, opendirectoryd next tries to retrieve and check a crypt password for the account using od_verify_crypt_password. Weirdly, that function returns the value 0x1, signaling it was successful, and rather than bail out and deny access, the code falls through to function calls that upgrade the crypt password to a shadow hash and stores it for the account.

That means the blank password is stored as the root account’s password. This works for any system account that has its login disabled.
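A toy model of that fall-through, based purely on the article’s description of the disassembly (this is not Apple’s code; the names, structure, and return values are illustrative):

```python
# Disabled system account: login off, so no shadow hash is stored.
accounts = {"root": {"shadowhash": None}}

def verify_password_buggy(user: str, password: str) -> bool:
    acct = accounts[user]
    if acct["shadowhash"] is not None:
        # Normal path: compare against the stored shadow hash.
        return password == acct["shadowhash"]
    # No shadow hash, so fall back to the legacy crypt-password check.
    # The bug: the legacy check reports success (0x1) instead of failing...
    legacy_ok = 0x1
    if legacy_ok:
        # ...so whatever password was supplied gets "upgraded" and saved
        # as the account's new credential -- including an empty string.
        acct["shadowhash"] = password
        return True
    return False

print(verify_password_buggy("root", ""))  # True: blank password accepted and stored
```

After the first attempt, the model’s root account has the empty string as its saved credential, which mirrors why the real exploit sometimes needed a second try before logging straight in.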

Reverse-engineered opendirectoryd code showing the heart of the bug, with od_verify_crypt_password returning 0x1 and screwing up the 0x0 check … Source: Patrick Wardle

Mac security specialist and Synack chief researcher Patrick Wardle explained the programming cockup in more detail, summarizing it as:

For accounts that are disabled (i.e. don’t have ‘shadowhash’ data) macOS will attempt to perform an upgrade. During this upgrade, od_verify_crypt_password returns a non-zero value. The user (or attacked) specified password is then ‘upgraded’ and saved for the account.

It appears that od_verify_crypt_password should fail (maybe it does and the check of the return code for 0x0 is just inverted?) Or perhaps the call to odm_RecordVerifyPassword assumes can only be called in a validated/authenticated context?

Fortunately, Apple has addressed the vulnerability. The fact that it was able to slip into production, however, could give fans, particularly in the enterprise market Apple is so keen to grab, pause when it comes to deploying macOS 10.13.

“We greatly regret this error and we apologize to all Mac users,” Apple told Reuters. “Our customers deserve better. We are auditing our development processes to help prevent this from happening again.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/29/apple_macos_high_sierra_root_bug_patch/

Wondering why your internal .dev web app has stopped working?

Network admins, code wranglers and other techies have hit an unusual problem this week: their test and development environments have vanished.

Rather than connecting to private stuff on an internal .dev domain to pick up where they left off, a number of engineers and sysadmins are facing an error message in their web browser complaining it is “unable to provide a secure connection.”

How come? It’s thanks to a recent commit to Chromium that has been included in the latest version of Google Chrome. As developers update their browsers, they may find themselves booted out of their own systems.

Under the commit, Chrome forces connections to all domains ending in .dev (as well as .foo) to use HTTPS via an HTTP Strict Transport Security (HSTS) header. This is part of Google’s larger and welcome push for HTTPS to be used everywhere for greater security. Essentially, you have to use HTTPS to connect to .dev websites, and if you haven’t bothered configuring secure HTTP on your internal .dev work servers, your browser won’t connect.
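In effect the browser now rewrites the scheme before the request ever leaves the machine. A simplified model of that preloaded-HSTS upgrade (an illustration of the behaviour, not Chromium’s actual implementation):

```python
from urllib.parse import urlsplit, urlunsplit

# TLDs the browser ships with a baked-in HSTS policy for,
# per the Chromium commit described above.
HSTS_PRELOADED_TLDS = {"dev", "foo"}

def upgrade_to_https(url: str) -> str:
    """Rewrite http:// to https:// for preloaded TLDs, before any request is sent."""
    parts = urlsplit(url)
    tld = parts.hostname.rsplit(".", 1)[-1] if parts.hostname else ""
    if parts.scheme == "http" and tld in HSTS_PRELOADED_TLDS:
        return urlunsplit(("https",) + parts[1:])
    return url

print(upgrade_to_https("http://myapp.dev/login"))   # https://myapp.dev/login
print(upgrade_to_https("http://myapp.test/login"))  # http://myapp.test/login (untouched)
```

If your internal server only speaks plain HTTP, the upgraded https:// connection simply fails, which is exactly the “unable to provide a secure connection” error developers have been seeing.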

Why on Earth would Google start breaking internal domains by insisting on unnecessary security measures for a top-level domain that doesn’t exist on the public internet?

Ah, well, that’s the thing: .dev does exist on the public internet.

In fact, the .dev global top-level domain is owned by Google. And even though it has only made one domain live so far – the contractually obliged nic.dev – the search engine giant has the ability to add whatever .dev domains it wants to the public internet at any time it wants.

Gone

Which means that since December 16, 2014 your custom-slider.dev or standard.dev could have vanished at any point, overridden by a Google-owned property, depending on your DNS settings, of course.

In fact, probably the only reason that hasn’t happened before now was Google’s decision to keep the top-level domain all to itself, combined with the scaling back of what was once a huge domain expansion plan, dropped after Google became constrained by investors and was reorganized into Alphabet.

The use of .dev domains is pretty common for internal software and web app testing: an alternative to .localhost, .local and .test. But as we noted long ago, under ICANN’s top-level domain expansion program, Google applied for and secured ownership of the generic top-level domain .dev.

Unlike .local, .test, and .example, .dev is not on a list of specially protected names. No one lodged a complaint with ICANN to ring-fence the gTLD while the Chocolate Factory was applying for it in 2012 – most likely because very few people in the web development community engage with the DNS overseer.

In fact, both Google and Amazon applied for .dev, and Google got hold of it when it cut a deal with Amazon where the online retailer was given control of .book and .talk in return for Google having .dev and .drive.

Protection

In case you were wondering, there are actually quite a few protected domain names. There are the 32 special use domains reserved by IANA under RFC 6761 which are mostly to do with internet routing but also include domains that internet engineers use a lot – like example.com.
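For the curious, a quick way to check a name against the registry – the set below is a hand-picked sample of IANA’s special-use entries, not the full list of 32:

```python
# Partial sample of IANA special-use domain names (RFC 6761 and related);
# the full registry has more entries, many of them under .arpa.
SPECIAL_USE = {"localhost", "test", "invalid", "example", "local",
               "example.com", "example.net", "example.org"}

def is_special_use(name: str) -> bool:
    """True if the name, or any parent suffix of it, is special-use."""
    labels = name.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in SPECIAL_USE for i in range(len(labels)))
```

Run it against `myapp.test` and you get a reassuring True; run it against `myapp.dev` and you get False – which is exactly the problem.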

Then there are the 11 IANA-managed reserved domains that were created to test the use of non-English languages as domain names. That was back in 2007, when ICANN finally recognized (or, more accurately, was forced to recognize) the importance of other languages on the internet and realized it had to make sure it didn’t break the web. (Internationalized domain names, or IDNs, are becoming increasingly important for many online, although they still have compatibility problems with the broader internet.)

And then there are the literally hundreds of domain names that ICANN has been forced to protect at the second-level after international organizations made a huge fuss during the gTLD expansion program. Those names won’t affect web devs, however.

Oh, and then there are the 25 names that ICANN has decided can never be approved at the top level, and that it has contractually barred gTLD operators from including in their zone files – almost all of which are purely internal names, like “gnso”, which stands for Generic Names Supporting Organization, a part of ICANN’s internal structure.

But despite these dozens of protected and reserved names, the .dev top-level domain was not on the right people’s radar and so was sold to Google for $185,000. Now Google can do whatever it likes with it.

Solution

What does this mean if you are a web developer and have been using .dev domains? Well, it means you have two options: either self-sign certificates for your own domains, or shift to a different top-level domain. While you can grab free HTTPS certs from Let’s Encrypt, that’s not feasible for internal .dev domains, since Let’s Encrypt can only validate names that resolve on the public internet.

If you want to avoid the ad giant’s touch, shift everything over to one of the protected names – .invalid, .local, .localhost, .test, .example. Or if you want to revel in the peculiarities of the domain name system, why not build your internal development environment on ‘test.icann’? ®
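If your .dev hostnames live in an /etc/hosts-style file, the rename can be scripted. A minimal sketch – the `migrate_hosts` helper is hypothetical, written for this example:

```python
import re

def migrate_hosts(hosts_text: str, old_tld: str = "dev",
                  new_tld: str = "test") -> str:
    """Rewrite hostnames ending in .dev to .test in /etc/hosts-style text.
    A sketch only: comment lines and IP addresses are left untouched."""
    pattern = re.compile(
        r"\b([A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)*)\." + re.escape(old_tld) + r"\b")
    out_lines = []
    for line in hosts_text.splitlines():
        if not line.lstrip().startswith("#"):
            line = pattern.sub(r"\1." + new_tld, line)
        out_lines.append(line)
    return "\n".join(out_lines)
```

Feed it `127.0.0.1 myapp.dev api.myapp.dev` and you get the same line with `.test` in place of `.dev`; you would still need to update your server configs and bookmarks to match.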

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/29/google_dev_network/

Why Security Depends on Usability — and How to Achieve Both

Any initiative that reduces usability will have consequences that make security less effective.

Security and usability are a zero-sum game. Effective security has to come at the expense of usability, and vice versa. That’s the way it always has been, and so it always will be. Right?

Well, not necessarily — in fact, not at all. In the real world, any security initiative that degrades usability will lead to unintended consequences — user workarounds, rejection by the business — that make it less effective. The highest levels of security can be achieved only with an equally high level of usability.

Rapidly improving technologies and methods are making it more feasible than ever to achieve the best of both worlds: effective security with a better user experience. The first step is to reframe your objectives. “Perfect” security isn’t perfect if it makes the product or solution unusable. Instead, your goal should be to satisfy your security requirements while maximizing usability — and thus ensure that your methods will deliver effective protection as intended. Here’s how.

The Department of Yes
Security is rightly seen as a mission-critical requirement — something that must be built into your products and your business. Traditionally, this has led to security teams operating as the Department of No. Developers submit their code to the application security team and get back a list of no-no’s to be cleaned up. Business users seeking access to cloud services, new software, and third-party integrations are told “No — it’s not secure.” The message is clear: security is a constraint, not an enabler. If you want to get something done, the last thing you want to do is ask permission. The result is shadow IT, where users (and developers) do whatever they want without working through official channels, and without the scrutiny of the Department of No.

It’s easy to see how this leads to a much less secure state of affairs. Operating with the default stance of “no” creates a scenario where users, operators, and developers actively avoid the help and influence of those who best understand security requirements. 

Customers and Users Care about Features — not Security
The past is littered with examples of developers and businesses that placed a premium on security without considering usability and ended up with failure. For example, look no further than the ubiquitous login and password.

Login-based security has been around for decades and has been the target of hackers for nearly as long. Its familiarity makes it easy to overlook its poor design, but in reality, it’s one of the worst authentication methods currently available. Even before the advent of mobile devices, users routinely weakened the passwords they used in order to be able to type them quickly or remember them easily. They chose simple-to-guess words or strings (which were equally simple to brute-force), and reused them everywhere.
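The back-of-the-envelope math shows why those shortcuts matter. Assuming illustrative alphabet sizes of 26 (lowercase only) and 94 (printable ASCII):

```python
def keyspace(alphabet_size: int, length: int) -> int:
    """Number of possible passwords of a given length over an alphabet."""
    return alphabet_size ** length

# A short, lowercase-only password...
weak = keyspace(26, 6)     # 308,915,776 combinations
# ...versus a longer one drawn from upper, lower, digits, and symbols.
strong = keyspace(94, 12)  # roughly 4.8e23 combinations

print(f"weak keyspace: {weak:,}")
print(f"strong keyspace is {strong // weak:,}x larger")
```

A keyspace of a few hundred million falls in seconds to modern offline cracking hardware, which is why convenience-driven password choices quietly erase most of the scheme’s theoretical strength.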

With the introduction of mobile keyboards, typing speed has dropped drastically. On a full-size laptop keyboard, the average person types approximately 38 to 40 words per minute (WPM). On an iPad, that number drops, and on an iPhone it falls further still, to around 20 WPM. It’s getting harder to enter text as passwords, making it even more tempting to use the shortest, simplest password possible.

Fortunately, we’re finally seeing better alternatives for user authentication. But how many hacks on logins have been executed in the meantime?

The Goal: Security Controls That Enhance the User Experience
Layering in usability as a principle of security design helps ensure that controls aren’t bypassed, ignored, or dumbed down in exchange for convenience, increasing the level of effective security. One way to connect security controls with usability is by leveraging data and context. Again using authentication as an example, there has been significant innovation in combining contextual data such as location and device fingerprint with biometrics such as retina scans, eye tracking, fingerprints, and even heartbeat patterns to enable more user-friendly authentication.

The goal of any security control should be the best security that can be implemented while enhancing the user experience for the technology being secured. Often this means security that is invisible to the user; sometimes it’s simply a matter of lessening the interface burden. People may balk at memorizing an endless number of long, complex passwords, but a quick biometric input, complemented behind the scenes with device, time, and location analytics, can be accepted painlessly without a second thought.
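To make the “behind the scenes” part concrete, here is a toy risk-scoring sketch in Python – the signals, weights, and thresholds are all invented for illustration, not taken from any real product:

```python
def risk_score(known_device: bool, usual_location: bool,
               usual_hours: bool) -> float:
    """Toy risk score: each unfamiliar signal adds weight (weights invented)."""
    score = 0.0
    if not known_device:
        score += 0.5
    if not usual_location:
        score += 0.3
    if not usual_hours:
        score += 0.2
    return score

def auth_requirement(score: float) -> str:
    """Map risk to friction: low risk gets the frictionless path."""
    if score < 0.3:
        return "biometric only"
    if score < 0.6:
        return "biometric + one-time code"
    return "full step-up authentication"
```

The design point is that the familiar case – known device, usual place and time – stays nearly frictionless, while extra friction is spent only where the context looks unusual.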

Minimum Viable Security
To help product designers, developers, and business owners create secure solutions while focusing on usability, I propose a concept I call “Minimum Viable Security.” 

Source: Signal Sciences

Let’s look at security as a continuum from “no security” to “perfect security” and overlay that with two circles. In one circle are the levels of security that people will accept in their products before usability degrades to an unacceptable degree. In the second are the levels of security deemed acceptable by security stakeholders. The overlap between the two represents the point where customers and users will accept the solution, and security team members will be satisfied that it meets their requirements. This is the target we’re aiming for: the point of minimum viable security, where the solution provides both viable security and maximum usability.

By embracing this concept, businesses can build solutions, applications, and products that provide effective security while maximizing value and utility for customers and users. Security teams can respond more meaningfully to requests for new technologies and services; instead of simply saying no, they can provide guidance for meeting security requirements without impairing usability. When the focus shifts from perfect security to viable security, and usability is treated as an equally vital design priority, organizations can meet the needs of users and the business.

Perhaps most importantly, this more nuanced approach can help reaffirm the credibility of the security team within the organization. Developers and users become more willing to work through proper channels, increasing visibility and control for security teams by reducing the drivers of shadow IT. The security strategy becomes fully operational, not aspirational — and the business gets the protection it needs. 


Tyler Shields is Vice President of Marketing, Strategy, and Partnerships at Signal Sciences. Prior to joining Signal Sciences, Shields covered all things application, mobile, and IoT security as a distinguished analyst at Forrester Research. Before Forrester, he managed mobile … View Full Bio

Article source: https://www.darkreading.com/risk/why-security-depends-on-usability----and-how-to-achieve-both/a/d-id/1330485?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple