STE WILLIAMS

Brave browser explains Facebook whitelist to concerned users

Privacy-conscious web browser company Brave was busy trying to correct the record this week after someone posted what looked like a whitelist in its code allowing its browser to communicate with Facebook from third-party websites.

Launched in 2016, Brave is a browser that stakes its business model on user privacy. Instead of just serving up user browsing data to advertisers, its developers designed it to put control in the users’ hands. Rather than allowing advertisers to track its users, the browser blocks ad trackers and instead leaves users’ browsing data encrypted on their machines. It then gives users the option to receive ads by signalling basic information about their intentions to advertisers, but only with user permission. It rewards users for this with an Ethereum blockchain-based token called the Basic Attention Token (BAT). Users can also credit publishers that they like with the tokens.

Brave’s FAQ explains:

Ads and trackers are blocked by default. You can allow ads and trackers in the preferences panel.

Yet a post on Y Combinator’s Hacker News site reveals that the browser has whitelisted at least two social media sites known to be aggressive about slurping user data: Facebook and Twitter. The post points to a code commit in Brave’s GitHub repository from April 2017 that includes the following code:

const whitelistHosts = ['connect.facebook.net', 'connect.facebook.com', 'staticxx.facebook.com', 'www.facebook.com', 'scontent.xx.fbcdn.net', 'pbs.twimg.com', 'scontent-sjc2-1.xx.fbcdn.net', 'platform.twitter.com', 'syndication.twitter.com', 'cdn.syndication.twimg.com']

The code was prefaced with this:

// Temporary whitelist until we find a better solution

The whitelist was in an archived version of the repository, but also turns up in the current master branch.
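To see how such a list slots into a blocker’s decision logic, here is a minimal sketch, using hypothetical function names rather than Brave’s actual code, of a request filter that lets whitelisted hosts through while still blocking other known trackers:

// Hypothetical sketch only – not Brave's implementation.
const whitelistHosts = ['connect.facebook.net', 'platform.twitter.com' /* ... */]

function shouldBlockRequest (requestUrl, isKnownTracker) {
  const host = new URL(requestUrl).hostname
  // Whitelisted hosts are let through so that third-party login and share
  // widgets keep working, even if the host also appears on tracker lists.
  if (whitelistHosts.includes(host)) {
    return false
  }
  return isKnownTracker
}

console.log(shouldBlockRequest('https://connect.facebook.net/en_US/sdk.js', true)) // false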

Brave staff have commented on the issue in several threads. CTO Brian Bondy responded directly on Hacker News, saying:

There’s a balance between breaking the web and being as strict as possible. Saying we fully allow Facebook tracking isn’t right, but we admittedly need more strict-mode like settings for privacy conscious users.

He added that Brave’s Facebook blocking is “at least as good” as uBlock Origin, a cross-platform ad blocker.

So if the entries in the whitelist aren’t ad trackers, what are they?

Brave’s director of business development Luke Mulks dived deeper, calling stories in the press about whitelisting Facebook trackers inaccurate. He explained that the browser has to allow these JavaScript events through to support basic functionality on third-party sites.

The domains listed in the article as exceptions are related to Facebook’s JS SDK that publishers implement for user auth and sharing, likes, etc.

Blocking those events outright would break that Facebook functionality on a whole heap of sites, he said.

Along with Bondy, he cited GitHub commits from three weeks ago that updated the browser’s ad-blocking lists, explicitly blocking Facebook requests used for tracking.

So, these JavaScript exceptions can’t be used to track people? That’s right, according to Brave co-founder Brendan Eich. He weighed in on Twitter and in the Reddit forums, arguing that the Facebook login button can’t be used as a tracker without third-party cookies, which the browser blocks.

Mind you, data slurps like Facebook can also track people by fingerprinting their browsers and machines. Eich doesn’t think that’s enough for reliable tracking. He said:

A network request does not by itself enable tracking – IP address fingerprinting is not robust, especially on mobile.

The company used the whitelist when it was relatively small because it didn’t have the resources to come up with a more permanent solution, he said, adding that Brave will work to empty the list over time.

Eich has a solid track record in the tech business, having invented JavaScript and co-founded Mozilla. He was eager to avert any user doubt over Brave’s privacy stance – after all, privacy-conscious users might well take their browsing elsewhere if they feel that Brave is deliberately deceiving them. He added:

We are not a “cloud” or “social” server-holds-your-data company pretending to be on your side. We reject that via zero-knowledge/blind-signature cryptography and client-side computation. Can’t be evil trumps don’t be evil.

Eich and his team could have opted to break things like Facebook likes and Facebook-based authentication on third-party sites, but that would have left users wondering why hordes of sites didn’t look the same in Brave as they did in other browsers. That would have been a big risk for a consumer-facing browser trying to gain traction, and it was one that Brave was understandably unwilling to take.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/a-5irms-6oA/

Kids as young as eight falling victim to online predators

Barnardo’s, a major children’s charity in the UK, has found that children as young as eight are being sexually exploited online via social media. In prior years, the youngest respondents to the Barnardo’s survey were 10, suggesting an unfortunate downward trend in progress.

The newest draw for young children, and sadly those who prey on them, is live streaming. Barnardo’s says that video streaming apps like TikTok, as well as streaming within already-popular apps like Instagram, are both extremely popular and very hard to moderate. When you add in the real-time comments posted directly to the person streaming, unfortunately you have an environment that’s ripe for exploitation.

Just last year, Barnardo’s ran a survey via YouGov in the UK and found 57% of 12-year-olds surveyed and 28% of 10-year-olds had live-streamed content on apps that are supposed to be used only by over-13s. In addition, about a quarter of the 10 to 16-year-olds surveyed said they regretted something they had posted online via live streaming.

Barnardo’s chief executive Javed Khan said:

It’s vital that parents get to know and understand the technology their children are using and make sure they have appropriate security settings in place. They should also talk to their children about sex and relationships and the possible risks and dangers online so children feel able to confide in them if something doesn’t feel right.

Contrary to some popular opinion on the subject, Barnardo’s says that based on the children they have helped, there’s no typical profile of a child who tends to fall victim to sexual exploitation online. The stereotype of the child from a troubled home being a ripe target for exploitation online doesn’t appear to hold true.

Any child can become a victim of sexual exploitation. All children are vulnerable and there is no stereotypical ‘at risk’ profile for victims of any type of sexual abuse or exploitation.

Protecting kids online

As with most security issues, there’s no one-size-fits-all solution to protecting kids online – there need to be layers of protection in place to address the complexities at hand.

Parents, some questions to ask yourself:

  • Do you have protective measures in place on the technology your children use? If they have social media, are their profiles locked down from public view?
  • Have you talked to your child about not sharing their accounts or passwords? (Passwords should stay secret, and accounts should never be shared with anyone else, not even your closest buddies.)
  • Have you talked to your children about appropriate behavior online, what kind of sharing is okay and what kind is not, and why?
  • Do you know who your child is interacting with online? Are they only people they know in person?
  • Do you and your children know what kinds of questions can be red flags? Not just obvious things like asking for their name and address, but also where they go to school, what kinds of landmarks they might live near, their parents’ names, even problems they’re having – predators use this kind of information to establish trust and try to meet in person.
  • Do your children feel safe talking to you about what they’re experiencing online, and do they feel comfortable telling you if something feels wrong?

It is incumbent on parents not only to put protective measures in place and to establish trust with their children, but also to know how to spot the warning signs of grooming, exploitation, or bullying.

According to “Enough is Enough,” a US-based nonprofit that works to stop online child predation and wants to make the internet safer for children, changes in a child’s behavior around online activity should be a red flag. For example:

  • If your kid becomes angry (or angrier than normal) about not being able to go online.
  • If your child is suddenly secretive about what they’re doing online, to the point of trying to hide what they’re doing, like suddenly putting the phone away or closing a laptop screen.
  • If your child withdraws from being around family or friends in order to be online.

If your solution is to simply “turn the internet off” at home, remember that your kids have ways to access the internet at school or with friends when you’re not around. It would be wonderful if the solution to all this was to keep children in a walled garden until they are 18, but it does them a disservice to not give them the tools they need to make good decisions and protect themselves.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tTGjrgeEPt8/

Apple sued for ‘forcing’ 2FA on accounts

New York resident Jay Brodsky has filed a class action lawsuit against Apple, claiming that the company forces users into a two-factor authentication (2FA) straitjacket that they can’t shrug off, that it takes up to five minutes each time users have to enter a 2FA code, and that the time suck is causing “economic losses” to him and other Apple customers.

The lawsuit, filed on Friday in Newport Beach, California, is accusing Apple of “trespass,” based on Apple’s “locking [Brodsky] out” of his devices by requiring 2FA that allegedly can’t be disabled after two weeks.

From the filing:

Plaintiff and millions of similarly situated consumers across the nation have been and continue to suffer harm. Plaintiff and Class Members have suffered economic losses in terms of the interference with the use of their personal devices and waste of their personal time in using additional time for simple logging in.

The reference to two weeks comes from a support email that Apple sometimes sends to Apple ID owners after 2FA is enabled. That email contains what the lawsuit describes, with italicized emphasis, as an unobtrusive last line saying that owners have two weeks to opt out of 2FA and go back to their previous security settings.

The suit claims that around September 2015, Brodsky’s Apple devices – including an iPhone and two MacBooks – were updated to have 2FA turned on, “without [his] knowledge or consent,” thus “[locking] up access” to Brodsky’s own devices and making them “inaccessible for intermittent periods of time.”

How dare you smear security all over my device

The main gist of Brodsky’s claim: it’s my device, you didn’t ask me if I wanted 2FA in the first place, using it is a pain, and you don’t give users the right to stop using 2FA.

The suit spells out what it claims is the onerous slog of logging in:

Logging in becomes a multiple-step process. First, Plaintiff has to enter his selected password on the device he is interested in logging in. Second, Plaintiff has to enter password on another trusted device to login. Third, optionally, Plaintiff has to select a Trust or Don’t Trust pop-up message response. Fourth, Plaintiff then has to wait to receive a six-digit verification code on that second device that is sent by an Apple Server on the internet. Finally, Plaintiff has to input the received six-digit verification code on the first device he is trying to log into. Each login process takes an additional estimated 2-5 or more minutes with 2FA.

Apple is causing injury to class members by “intermeddling” with the use of their devices and not letting them choose their own security level or “freely enjoy and use” their gadgets, the suit claims.

Also, by “injecting itself in the process by requiring extra logging steps,” Apple is allegedly violating California’s Invasion of Privacy Act – Section 637.2 of the California Penal Code. A third count is allegedly violating California Penal Code section 502: California’s Computer Crime Law (CCL). A fourth count is that Apple allegedly violates the Computer Fraud and Abuse Act (CFAA) by accessing people’s devices without authorization.

Finally, count five: Unjust Enrichment. By better-securing people’s devices, Apple has the gall to make money off all this, be it by selling devices or because it…

… received and retains information regarding online communications and activities of Plaintiff and the Class.

The suit wants Apple to knock it off with the 2FA. It’s also seeking disgorgement of Apple’s “ill-gotten gains,” payable to Brodsky and other class members.

What the what, now?

Where to start? When Apple introduced 2FA for Apple ID with iOS 9 and OS X El Capitan, it did so on an opt-in basis. The feature became available first for iCloud, after a spate of celebrity iCloud hacking incidents, and was extended soon after to secure Apple devices more broadly.

Enabling 2FA requires an explicit, multiple-step opt-in procedure in which users give their consent. However, 2FA is, in fact, required to take advantage of some of Apple’s services, such as Home Sharing and HomeKit hubs.

As for Brodsky’s claim that logging in with 2FA eats up 2-5 minutes of his time, well, user mileage may vary. Apple Insider reports that it “hasn’t been randomly presented” with 2FA authentications, even following OS updates to an iPhone XS Max, an iPhone X, and two sixth-generation iPads. However, the publication managed to force the issue on a new device.

Apple Insider’s Mike Wuerthele whipped out a stopwatch and found that the resulting 2FA time sink was 22 seconds.

Of course, even if Apple didn’t force users into 2FA, it certainly isn’t shy about nudging them into it… for good reason.

2FA: It’s not perfect, but it’s good

2FA – particularly older forms that use SMS to deliver the code – isn’t an impenetrable shield. Way back in 2016, the US National Institute of Standards and Technology (NIST) updated its official “rules for passwords”, announcing that phone-based 2FA would no longer be considered satisfactory, at least as far as the public sector goes.

More recently, we’ve seen new methods to attack 2FA: Last month, researcher Piotr Duszyński published a tool called Modlishka (Polish: “Mantis”) capable of automating the phishing of one-time passcodes (OTPs) sent by SMS or generated using authentication apps.

If you’re worried about the risks of SMS-based 2FA for your own accounts, consider switching to an app-based authenticator instead, such as the one built into Sophos Free Mobile Security (available for Android and iOS).

Of course, the security of an authenticator app depends on the security of your phone itself, because anyone who can unlock your phone can run the app to generate the next code you need for each account.
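For the curious, the codes most authenticator apps generate follow the TOTP standard (RFC 6238): an HMAC-SHA1 computed over a 30-second time counter, then truncated to six digits. Here is a minimal sketch in Node.js; the shared secret is made up for illustration:

const crypto = require('crypto')

// Minimal TOTP sketch (RFC 6238): HMAC-SHA1 over a time counter, then
// "dynamic truncation" (RFC 4226) down to a 6-digit code.
function totp (secret, step = 30, digits = 6) {
  const counter = Buffer.alloc(8)
  counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / step)))
  const hmac = crypto.createHmac('sha1', secret).update(counter).digest()
  const offset = hmac[hmac.length - 1] & 0x0f
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits
  return String(code).padStart(digits, '0')
}

// Your phone and the server each derive the same code from the shared secret.
console.log(totp(Buffer.from('made-up shared secret')))

Because both sides compute the code locally from the current time, nothing travels over SMS for an attacker to intercept.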

What else causes “economic losses”?

You can’t really argue with people over 2FA being a bit of a bother. It does take more time to enter a second authentication factor, for sure. But whether it takes up 22 seconds of your life or the two to five minutes of Brodsky’s, how much time, and potentially money, does it take to untangle a hijacked bank account, or to win back a kidnapped Facebook or Twitter account?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wObYvgrIDEU/

Russian ISPs plan internet disconnection test for entire country

At a time and date during 2019 yet to be confirmed, Russia’s major ISPs will in unison temporarily disconnect their servers from the internet, effectively cutting the country off from the outside world.

From the point of view of Russian internet users, everything will appear normal – as long as they are connecting to websites hosted in Russia, which will still work. Anything beyond the country’s borders will suddenly become unavailable, presumably with a message telling them why. It’s not clear how long the test disconnection will last.

According to a translated report by Russian news agency RosBiznesKonsalting (RBK), the aim will be to test the feasibility of a concept dubbed the “sovereign RuNet”, or the Russian Internet.

A draft law proposing such a scheme reached Russia’s parliament in December, and since then the implications of the test disconnection, however temporary, have been dawning on nervous local ISPs.

ISPs want more money to help with the test, as well as guarantees they won’t be saddled with the bill to implement a separate proposed system of control in which internet traffic will be routed via the country’s telecom regulator, Roskomnadzor.

Reading between the lines, however, it’s clear that the ISPs are worried about an idea that goes against the grain of what the internet is.

On the one hand, not implementing the disconnection would upset the Russian politicians who came up with the idea, as well as Roskomnadzor and the Russian Government, to which the idea of an internal Russian internet-intranet sounds dandy.

On the other, suddenly not routing internet protocols on this scale risks upsetting a lot of foreign ISPs to whom uptime is somewhere between a matter of pride and a religious commandment.

Internet in splinters

Russian sources have been nattering about the RuNet since 2015, although barely anyone outside the country has paid much attention.

RuNet looks like a plan to kill several birds with one rather hefty stone.

On a technical level, the Government perhaps wants a way to counter serious DDoS attacks aimed at the country’s infrastructure, although disconnecting the whole country is an extreme way to achieve such a thing.

More likely, RuNet is a way of exerting the sort of control on internet traffic – and content – that is becoming more fashionable in a growing number of countries.

Part of that ambition is to create a backup DNS system that could be used by the BRICS countries (Brazil, Russia, India, China, and South Africa) as a counterweight to US domination of internet technologies and standards.

At home, meanwhile, the Russian Government has been fighting an ongoing battle to control encrypted apps such as Telegram, used by some to evade censorship.

The concept of hiving off bits of the internet into national intranets has been around almost since the internet began. However, arguably the only country that has managed to make the idea work is China with its Great Firewall.

China started with one big advantage over Russia in that respect – the Great Firewall was integrated from the start as the country expanded its internet infrastructure. Retrofitting a similar concept in Russia might take longer, cost a lot more money and, some suspect, still not work.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qXTfMQEvP0o/

Linux container bug could eat your server from the inside – patch now!

If you’re a fan of retro gaming, you’ve probably used an emulator, which is a software program that runs on computer hardware of one sort, and pretends to be a computer system of its own, possibly of a completely different sort.

That’s how your latest-model Mac, which has an Intel x64 CPU, can run original, unaltered software that was written for a computer such as the Apple ][ or the Nintendo Game Boy.

One advantage of emulators is that even though the running program thinks it’s running exactly as it would in real life, it isn’t – everything it does is controlled, instrumented, regimented and mitigated by the emulator software.

When your 1980s Sinclair Spectrum game accidentally corrupts system memory and crashes the whole system, it doesn’t really crash anything at the hardware level, and the crashed pseudo-computer doesn’t affect the operating system or the hardware on which the emulator is running.

Similarly, if you’re an anti-virus program “running” some sort of malicious code inside an emulator, you get to watch what it does every step of the way, and even to egg it on to show its hand…

…without letting it actually do anything dangerous in real life, because it’s not accessing the real operating system, the real computer hardware or the real network at all.
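To make that concrete, here is a toy fetch-decode-execute loop for an imaginary three-instruction CPU (not any real instruction set), showing how every “machine” operation happens entirely under the emulator’s control:

// Toy emulator core: fetch, decode, execute for an imaginary 3-instruction CPU.
function run (program) {
  const state = { pc: 0, acc: 0, halted: false }
  while (!state.halted) {
    const [op, arg] = program[state.pc++] // fetch and decode
    switch (op) {                         // execute, fully under our control
      case 'LOAD': state.acc = arg; break
      case 'ADD': state.acc += arg; break
      case 'HALT': state.halted = true; break
      default: throw new Error('bad opcode: ' + op)
    }
  }
  return state.acc
}

console.log(run([['LOAD', 2], ['ADD', 40], ['HALT']])) // 42

A crash inside run() is just a JavaScript exception: the pseudo-computer falls over, while the host carries on regardless.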

Another advantage of emulators is that you can run several copies of the emulator at the same time, thus turning one real computer into several pseudo-computers.

For example, if you want to analyse multiple new viruses at the same time, all without putting the real computer – the host system – in any danger, you can run each sample simultaneously in its own instance of the emulator.

One disadvantage of true emulators – those that simulate via software every aspect of the hardware that the emulated programs think they are running on – is the performance overhead of all the emulation.

If you’re pretending to be a 1990s-era Game Boy on a 2020-era MacBook Pro, the modern-day host system is so much faster and more capable than any hardware in the original device that the relative slowness of the emulator is irrelevant – in fact, it typically needs slowing down so it runs at a similar speed to the original.

But if you are trying to emulate a full installation of Linux, where the emulator is running the very same version of Linux on the very same sort of hardware, true emulators are generally too slow for anything but specialised work such as security analysis and detailed bug testing.

In particular, if you want to run 16 instances of Linux – for example, to host websites for 16 different customers – on a single physical server, then running 16 copies of an emulator just isn’t going to cut it.

Enter virtualisation

These days, the most popular way to split a computer between multiple different customers is called virtualisation, which is a hardware trick that’s a bit like emulation, but in a way that gives each virtualised computer – called a guest – much closer access to the real hardware.

Most modern processors include special support for virtualisation, where the host computer remains in overall control, but the guest systems run real machine instructions directly in the real processor.

The host computer, the host operating system and the virtualisation software are responsible for keeping the various virtual computers, known as VMs – short for virtual machines – from interfering with each other.

Without proper guest-to-guest separation, cloud-based virtual computing would be a recklessly dangerous proposition.

For all you know, the VM running your company’s web server could inadvertently end up running directly alongside a competitor’s VM on the same physical host computer.

If that were to happen, you’d want to be sure (or as sure as you could be) that there was no way for the other guys to influence, or even to peek at, what your own customers were up to.

Pure-play virtualisation, where each VM pretends it’s a full-blown computer with its own processor, memory and other peripherals, usually involves loading a fresh operating system into each VM guest running on a host computer.

For example, you might have a host server running macOS, hosting eight VMs: one running a guest copy of macOS, three running Windows, and four running Linux.

All major operating systems can run as guests on, or act as hosts for, each other. The only spanner in the works is that Apple’s licensing prohibits the use of macOS guests on anything but macOS hosts, even though running so-called “hackintoshes” as guests on other systems is technically possible.

But even this sort of virtualisation is expensive in performance terms, not least because each VM needs its own full-blown operating system setup, which in turn needs installing, managing and updating separately.

Enter containerisation

This is where containerisation, also known as lightweight virtualisation, comes in.

The host system provides not only the underlying physical hardware, but also the operating system, files and processes, with each guest VM (dubbed, in this scenario, a container) running its own, isolated application.

Popular modern containerisation products include Docker, Kubernetes and LXC, short for Linux Containers.

Many of these solutions rely on a common component known very succinctly as runc, short for run container.

Obviously, a lot of the security in and between containerised applications depends on runc keeping the various containers apart.

Container segregation not only stops one container messing with another, but also stops a rogue program bursting loose from its guest status and messing with the host itself.

What if the container bursts open?

Unfortunately, a serious security flaw dubbed CVE-2019-5736 was found in runc.

This bug means that a program run with root privileges inside a guest container can make changes with root privilege outside that container.

Loosely put, a rogue guest could get sysadmin-level control on the host.

This control could allow the rogue to interfere with other guests, steal data from the host, modify the host, start new guests at will, map out the nearby network, scramble files, unscramble files…

…you name it, a crook could do it.

Precise details of the bug are being withheld for a further six days to give everyone time to patch, but the problem seems to stem from the fact that Linux presents the memory space of the current process as if it were a file called /proc/self/exe.

Thanks to CVE-2019-5736, accessing the memory image of the runc program that’s in charge of your guest app seems to give you a way to mess with running code in the host system itself.

In other words, by modifying your own process in some way, you can cause side-effects outside your container.

And if you can make those unauthorised changes as root, you’ve effectively just made yourself into a sysadmin with a root-level login on the host server.
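You can see the mechanism at the heart of this on any Linux box: /proc/self/exe is a kernel-provided symbolic link to the running program’s own binary image. A harmless Node.js illustration (just inspecting the file, not an exploit):

const fs = require('fs')

// /proc/self/exe is a magic symlink to the binary of the current process.
console.log(fs.readlinkSync('/proc/self/exe')) // e.g. /usr/bin/node

// Opening it yields a handle onto the running program itself – it was this
// sort of handle, aimed at the host's copy of runc, that CVE-2019-5736 abused.
const fd = fs.openSync('/proc/self/exe', 'r')
fs.closeSync(fd)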

For what it’s worth, the runc patch that’s available includes new program code intended to stop containers from messing indirectly with the host system’s running copy of runc, something like this:

static int is_self_cloned(void) {...}
static int parse_xargs(...) {...}
static int fetchve(...) {...}
static int clone_binary(...) {...}
static int ensure_cloned_binary(...) {...}

/*---*/

void nsexec(void) {
    . . .
    /*
     * We need to re-exec if we are not in a cloned binary. This is necessary
     * to ensure that containers won't be able to access the host binary
     * through /proc/self/exe. See CVE-2019-5736.
     */
    if (ensure_cloned_binary() < 0)
        bail("could not ensure we are a cloned binary");
    . . .
}

What to do?

Any containerisation product that uses runc is probably vulnerable – if you have a version numbered runc 1.0-rc6 or earlier, you need to take action.

Docker users should check the Docker release notes for version 18.09.2, which documents and patches this bug.

Kubernetes users should consult the Kubernetes blog article entitled Runc and CVE-2019-5736, which explains both how to patch and how to work around the bug with hardened security settings.

As the Kubernetes team points out, this flaw is only exploitable if you allow remote users to fire up containers with apps running as root. (You need root inside your container in order to acquire root outside it – this bug doesn’t allow you to elevate your privilege and then escape.)

You typically don’t want to do that anyway – less is more when it comes to security – and the Kubernetes crew is offering a handy configuration tip on how to ensure your guests don’t start out with more privilege than they need.
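As a rough sketch of what such a setting looks like, a pod specification can insist that its containers never run as root. Kubernetes accepts manifests in JSON form, shown here as a JavaScript object so the key fields can carry comments; the names and image are made up for illustration:

// Hypothetical pod spec fragment: refuse to run the container as root.
const pod = {
  apiVersion: 'v1',
  kind: 'Pod',
  metadata: { name: 'customer-site' },
  spec: {
    containers: [{
      name: 'web',
      image: 'example/web:1.0',
      securityContext: {
        runAsNonRoot: true,             // kubelet refuses to start the container as UID 0
        allowPrivilegeEscalation: false // no setuid route back to root
      }
    }]
  }
}

console.log(JSON.stringify(pod, null, 2))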

Of course, you may be a container user without even realising it, if you have some of your software running in a cloud service.

If in doubt, ask your provider how their service works, whether they’re affected by this bug, and if so whether they’re patched.

Quickly summarised:

  • Patch runc if you’re using it yourself.
  • Stop guest containers running as root if you can.
  • Ask your provider if they’re using runc on your behalf.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZvfxGwpS95M/

First they came for Equifax and we did nothing because America. Now they are coming for back-end systems and we’re…

A company that develops and supports software for consumer reports and background checks has admitted to exposing thousands of people’s information to an unknown hacker.

Imag-I-Nation Technologies (not to be confused with the chipmaker of the same name) said that in November of last year, someone was able to access a database containing records it stores for its consumer reports service. The software developer/service provider disclosed the incident on January 30 and is now in the process of letting the victims know.

Based out of North Carolina, Imagination is a subsidiary of FRS, a software developer specializing in consumer information reports, background checks and human resources products.

The consumer report database in question was accessed some time around November 1 and the intrusion was discovered and locked down on November 14.

In the meantime, the mystery hacker would have had access to sensitive information, including the full name, date of birth, home address, and social security numbers of those in the database.

“Upon discovering this incident, we immediately conducted an investigation to determine how this incident occurred and who was impacted. We have retained a forensic IT firm to conduct an analysis and remediation of our system,” victims are being told.

“We have also reviewed our internal data management and protocols and have implemented enhanced security measures to help prevent this type of incident from recurring. We are also notifying the three major credit bureaus, Experian, Equifax and TransUnion, to advise them that your personal information may have been improperly accessed and that they should take appropriate action.”

While the exact number of people exposed in this breach is unknown (Imagination did not return a request for comment), the number is well into the thousands, if state reports are anything to go by. In Washington alone, the company says it will have to notify some 3,695 citizens their details were among the information lifted by the attacker. A similar notification was filed in Vermont.

So far, Imag-I-Nation said it was not aware of anyone selling or misusing the pilfered information. Still, it would be a good idea for anyone who does receive a notification letter to keep a close eye on their bank accounts and consider signing up for a credit monitoring service or placing a credit freeze.

The incident is similar to (but far less severe than) 2017’s mega-breach that occurred with credit reporting giant Equifax. In that incident, millions of Americans and Brits had the personal information used for credit checks lost to hackers. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/12/consumer_reporting_firm_hacked/

Q: What’s a good thing to put outside a building of spies? A: A banner saying ‘here we are!’

The Huawei Cyber Security Evaluation Centre (HCSEC) has a giant banner hanging outside declaring its purpose to the world.

The British source code inspection facility is part-staffed by GCHQ spies and its location is only ever given as “Banbury, Oxfordshire”.

But HCSEC HQ, which would otherwise have fitted The Guardian’s description of it as “deceptively humdrum”, has a great big banner draped over its main entrance of the sort you’d find hanging outside a pub.

Spotted by the eagle-eyed Alan Turnbull of secret-bases.co.uk, the banner reads, simply: “Huawei Cyber Security Evaluation Centre”. Ever the helpful chap, Alan also found it on (where else?) Google Street View.


HCSEC’s banner, proudly proclaiming who they are and what they do. Pic: Google Street View

If you want to view it yourself and have a virtual drive around the Banbury Office Village, just click here.

HCSEC rather breaks the mould for secretive British governmental functions by advertising its presence – though as HCSEC is administratively part of Huawei, despite being part-staffed by people from eavesdropping agency GCHQ, this may not be such a big breach of protocol after all.

The London office of GCHQ’s cuddly National Cyber Security Centre arm, named Nova South by the agency, has no external or even internal signs that it is a GCHQ outpost; lots of trendy (and not-so-trendy) Londoners walk past the building, opposite Victoria Station, every day without realising what’s inside it.

In other Huawei news, the American Secretary of State (foreign minister), Mike Pompeo, repeated calls made by the US envoy to the EU, Gordon Sondland, warning Western companies and countries alike against using Huawei technology. By contrast, EU cybersecurity functionary Despina Spanou told The Register last week that the EU Commission, the bloc’s ruling body, has seen no evidence Huawei poses a security threat. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/12/huawei_gchq_hcsec_banner/

2019 Security Spending Outlook

Cybersecurity and IT risk budgets continue to grow. Here’s how they’ll be spent.

Image Source: Pixabay

With data breach numbers continuing to creep upward and digital transformation efforts exposing enterprises to new cybersecurity risks across numerous business dimensions, it’s no surprise that analysts expect another big year for security spending worldwide. Dark Reading takes a look at some of the recent projections to help security leaders wrap their arms around the spending outlook for 2019.

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/2019-security-spending-outlook/d/d-id/1333826?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Black Hat Asia Business Hall Sessions Offer New Cybersecurity Insights

Don’t overlook these promising Business Hall Sessions in Singapore next month. They’re short, sweet, and open to all Black Hat Asia 2019 passholders.

As you lock in your schedule for Black Hat Asia in Singapore next month, make sure not to overlook all the practical sponsored sessions happening in the Business Hall. Presented by leading researchers and security experts, these sessions are short, sweet and open to all Black Hat Asia 2019 passholders — so don’t miss out!

“Browser Isolation That Actually Works” will show you how the team at Garrison has engineered a unique, hardware-enforced approach to browser isolation that is being deployed at large scale into commercial businesses and sensitive government networks around the world. This session introduces their approach along with their global service, which is worth studying if you want to protect businesses and governments from falling prey to the Internet’s worst denizens.

If you’re more curious about cloud security, check out “Protecting Your Data from Data Exfiltration in a Cloud Environment,” which aims to teach you best practices for plugging one of the most widely used attack vectors: DNS. Expect some discussion of marrying network and threat intelligence to achieve more successful incident response. You’ll also see examples of DNS-based threats and hear an argument about why a one-size-fits-all approach to threat response isn’t effective.

“The Digital Risk Dilemma: How To Protect What You Don’t Control” will outline tools, tactics, and best practices to safeguard your entire digital footprint and prevent malicious lookalikes on everything from social networks to criminal marketplaces. You’ll learn why you must shift security priorities from prevention to detection and remediation, and how to align activities to the three core steps of digital risk protection (map, monitor, mitigate). The practical tools and techniques on offer include sample classifications and decision-tree diagrams that you can use in your own work.

In addition to great sessions like these, the Black Hat Asia 2019 Business Hall provides ample opportunities to expand your infosec knowledge and network. It’s a great place to connect with expert security practitioners and cutting-edge solution providers, discover new open-source tools at Arsenal, meet with your peers and relax in the Networking Lounge, and pick up some new Black Hat gear in the merchandise store.

Black Hat Asia returns to the Marina Bay Sands in Singapore March 26-29, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/black-hat-asia-business-hall-sessions-offer-new-cybersecurity-insights/d/d-id/1333843?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Identifying, Understanding & Combating Insider Threats

Your organization is almost certainly on the lookout for threats from outside the company. But are you ready to address threats from within?

Most, if not all, organizations are vigilant about safeguarding against threats that can penetrate endpoint systems via email, websites, and other known and unknown pathways. But what about threats that come from within your organization? Even worse, from those who you assume can be trusted?

What Are Insider Threats?
“Insider threats” are far from monolithic. Although we tend to think of vengeful former employees or contractors on the lookout for ill-gotten gains, the truth is that many insider breaches are wholly unintentional. CA’s Insider Threat 2018 Report states that companies should be at least as worried about the 51% of data breaches that are accidental or unintentional —  caused by user carelessness, negligence, or compromised credentials — as they are about the slightly smaller percentage caused by deliberate malicious insider activity (47%).

This year, 90% of organizations are estimated to be vulnerable to insider threats, and over 50% have experienced an insider attack in the past year.

When it comes to preventing mishaps that lead to unintentional breaches of organizational systems, user education is a top priority. Make security training an essential part of the onboarding process. When systems are updated, or new processes or procedures put in place, education and training sessions will lessen the likelihood of attacks from within due to errors or negligence.

But how can you anticipate malicious attacks from within and prevent them and the damage they cause before it’s too late? 

Identify the Early Warning Signs
Business owners, IT staff, and cybersecurity professionals must always be alert to the possibility of insider threats because keeping data safe from those who have permission to access it depends on rapid identification of and response to breaches. Malicious insiders have the upper hand; they’re already past your defenses. With authorized system access, they have sensitive data at their fingertips, know the organization’s weaknesses, and can easily acquire valuable assets. The Insider Threat 2018 Report indicated that both regular employees (56%) and privileged IT users (55%) pose the greatest insider security risk to organizations, followed by contractors (42%).

Attend to Unhappy Employees
Are employees dissatisfied? Remember that unhappy employees might be easily tempted. Keeping an eye on employees and considering their state of mind is more than just good HR practice — it is a good cybersecurity policy as well.

So, reach out to your employees. Meet with them, speak with them. Try to understand how they feel about the state of your organization. Fixing issues before they boil over into malicious cyberactivity can save the company from the trouble that would result from an insider breach.

Revoke Access Immediately upon Termination
Former employees who still have access to company networks and data pose a significant security threat — those who were let go from their positions pose particular risks. For example, in 2014, when Sony Pictures was hacked, researchers from Norse Corporation found that a group of six people, including one ex-employee, were directly involved. The individual who had a previous relationship with Sony was laid off just a few months prior to the attack. Coincidence? More like revenge.

Institute IT procedures for employee termination and adhere to them thoroughly and promptly. Notify IT immediately when a user’s employment has ended and ensure that all access privileges to networks, data, and computer equipment are quickly revoked.

Keep an Eye Out for Financial Distress
Insider breaches have been known to be motivated by financial need — and greed. Whether an employee is experiencing a credit squeeze, didn’t receive an anticipated promotion or raise, or is facing unexpected pressures from a health crisis or other cause, outside pressures may lead an individual to generate much-needed cash however he or she can.

In addition to posing a cybersecurity threat, financial stress affects employee productivity and health. HR professionals should ensure that managers are aware of the signs of financial stress and alert them to worrisome employee behaviors that may result.

Watch for Sudden, Unexplained Changes in Interests or Behaviors
Is an individual suddenly working late or odd hours? Be aware of out-of-the-ordinary, unexplained behaviors. Employees who have a sudden interest in accessing classified material or information outside of their current assignments should obviously raise a red flag. Investigate why they might be interested. Don’t assume you can trust them, no matter what their role in the company might be.

Consider Edward Snowden. The former CIA agent and US government contractor publicly released records on government surveillance before fleeing the country and to this day says he has no regrets.

Fighting Back with Least-Privilege Access
Best practices for preventing insider threats start with standard practices for overall employee risks: Be aware of suspicious or erratic behavior. Follow termination procedures to the letter. Notice dissatisfaction and try to address it.

Today, to reduce the attack surface for insider threats, organizations are increasingly limiting employee permissions based on the principle of least-privilege access. Each user has access privileges only for the systems and data needed for the job. Likewise, each system has the least authority necessary to perform its assigned duties.
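In code terms, least privilege usually boils down to default-deny permission checks: a user holds exactly the grants listed for them, and anything not granted is refused. A minimal sketch with hypothetical names and data:

// Default-deny permission check – hypothetical users and permissions.
const grants = new Map([
  ['alice', new Set(['reports:read'])],                // regular employee
  ['bob', new Set(['reports:read', 'reports:admin'])]  // privileged IT user
])

function authorize (user, permission) {
  const perms = grants.get(user)
  if (!perms || !perms.has(permission)) {
    throw new Error('denied: ' + user + ' lacks ' + permission)
  }
}

authorize('alice', 'reports:read') // ok
// authorize('alice', 'reports:admin') would throw – default deny.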

The zero-trust concept takes the least-privilege approach to combating insider threats still further, based on the principle that no person or device should be trusted, whether inside or outside the network perimeter. It requires every person and/or device to be validated and authenticated to access each resource. Microsegmentation stops the spread of malicious agents, should they get in.

As a result, fewer employees can take malicious actions, fewer accounts can be hacked, and fewer people can make errors that result in breaches.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Ilan Paretsky leads the global marketing activities of Ericom. He is responsible for positioning, demand generation, go-to market planning, strategic market direction research, online and offline marketing planning, and execution for Ericom’s broad portfolio of security, …

Article source: https://www.darkreading.com/endpoint/identifying-understanding-and-combating-insider-threats/a/d-id/1333830?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple