STE WILLIAMS

Black Hat Q&A: Understanding NSA’s Quest to Open Source Ghidra

National Security Agency researcher Brian Knighton offers a preview of his upcoming Black Hat USA talk on the evolution of Ghidra.

The National Security Agency (NSA) made a splash in the cybersecurity industry this year when it released its Ghidra software reverse-engineering framework as open source for the community to use. Now that the tool is in the public’s hands, NSA senior researcher Brian Knighton and his colleague Chris Delikat will present a talk at Black Hat USA about how Ghidra was designed, and the process of rendering it open source.

We recently sat down with Brian to learn more about Ghidra and his Black Hat Briefing.

Alex Wawro: Can you tell us a bit about who you are and your recent work?

Brian Knighton: I’ve worked at NSA for about 20 years. The past 18 years I’ve been a member of the GHIDRA team, developing various aspects of the framework and features. My focus these days is applied research, utilizing Ghidra for cybersecurity and vulnerability research of Internet of Things (IoT) devices from smartphones to autonomous and connected vehicles.

My educational background includes a BS in Computer Science from University of Maryland and an MS in Computer Science from Johns Hopkins University.

Alex: What are you planning to speak about at Black Hat, and why now?

Brian: I’m going to use this opportunity to discuss some implementation details, design decisions, and the evolution of Ghidra from version 1.0 to version 9.0, and of course open source.

Alex: Why do you feel this is important? What are you hoping Black Hat attendees will learn from your presentation?

Brian: It’s important to describe how Ghidra came about, why certain things are implemented the way they are, why we selected Java, and why it’s called a framework. In the end, I hope it will allow the community to better utilize Ghidra for cyber-related research.

Alex: What’s been the most interesting side effect, so far, of taking Ghidra from internal tool to open-source offering?

Brian: The entire team is amazed and humbled by the overwhelming interest and acceptance of Ghidra. I knew it would be well received, but I’m surprised by how much. I feel honored to have been a part of it. For me personally, two specific things jump out.

The first was being on the floor at RSA and experiencing the energy, the excitement, and the positive interactions with so many folks during the three-day conference. The second was delivering a Ghidra lecture at a local university. One of the many reasons for releasing Ghidra was to get it into the hands of students and ultimately help advance cyber proficiency, and now I was actually doing it first-hand.

For more information about this Briefing check out the Black Hat USA Briefings page, which is regularly updated with new content as we get closer to the event! Black Hat USA returns to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/threat-intelligence/black-hat-qanda-understanding-nsas-quest-to-open-source-ghidra/d/d-id/1335123?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Intelligent Authentication Market Grows to Meet Demand

Confidence in user identity is critical to prevent fraud and theft, and companies are looking for new ways to get the necessary assurance.

It’s 2019 and we still don’t know who the users are. That’s a problem both IT executives and growing security companies are eager to see solved. And according to a report from Research and Markets, that eagerness should drive the advanced authentication market to a 12% compound annual growth rate (CAGR) from 2019 to 2024.

The real issue in authentication is increasing confidence in the user’s identity while decreasing the time and effort required for legitimate users to get through the authentication process. It’s a complex problem that has seen proposed solutions as diverse as Google’s Android-based two-factor authentication, Auth0’s support for Sign In with Apple, and Arkose Labs’ challenge-and-response mechanism. Companies are investing in developing winning authentication strategies for a simple reason: Billions of dollars are at stake.

Jeremiah Grossman, founder of WhiteHat Security and chief of security strategy for SentinelOne, has joined the advisory board of Arkose Labs. He says the companies developing advanced authentication strategies are trying to change the basic economics with which the criminals work. Today, he says, “If you give any company a million dollars to spend on computer security, they’re not going to be able to do very much with it because an adversary might have to spend a thousand dollars to counteract their millions. The only way that we’re going to make ground in computer security is by reversing it, meaning every thousand we spend they have to spend a million to beat us. Then we’ll get somewhere.”

That “somewhere” would seem to involve a place in which it’s more difficult to steal and use credentials — especially credentials for accounts with elevated privileges in the network and application infrastructure. A breach at cloud service provider PCM Inc., revealed by Krebs on Security in mid-June, illustrates the importance of enhanced authentication routines.

The credentials taken by the criminals in this case were for administrative accounts used to manage Office 365 installations for PCM’s customers. Once the customer accounts were breached, the criminals then used individual user information to perpetrate gift card fraud, an increasingly common way for criminals to monetize their activities without involving banks or other mainstream financial institutions.

“To avoid suffering the same fate as PCM, enterprises must implement security solutions that scan and monitor all assets and detect vulnerabilities that could be exploited — like PCM’s lack of multifactor authentication or other identity verification features within its Office 365 system,” says Jonathan Bensen, CISO of Balbix. “By failing to secure its Office 365 with tighter controls and therefore putting its clients’ bottom lines at risk due to gift card fraud, PCM and its customers stand to suffer significant damage.”

In response to the PCM breach and similar crimes, Krebs on Security reports that Microsoft will now require multifactor authentication for all its managed service providers offering Office 365. It’s not a new technology solution, but it is now being applied by contractual force.
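It’s worth noting how little machinery that second factor needs: the time-based one-time password (TOTP) scheme behind most authenticator apps is standardized in RFC 6238 and fits in a few lines. Here is a minimal sketch of the core derivation (an illustration of the standard, not Microsoft’s actual implementation):

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The authenticator app and the server each hold the shared secret and run the same computation, so a stolen password alone no longer suffices: the code changes every 30 seconds.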

The sheer size of the damage is finally getting the attention of the enterprise, though. According to a new report by Industry Research, the global fraud detection and prevention market was valued at $13.59 billion in 2018 and is expected to reach $31.15 billion by 2024, a CAGR of 16.42%.

Grossman says that the willingness to apply a solution is as critical as the technology involved. “If we look at the vast majority of breaches over the last 10 or 20 years, with rare exceptions, infosec knew how to prevent the break-in,” he explains. “In every one of the cases, we had technological solutions and controls that we could have put in to stop everything except zero days.”

What has been lacking, Grossman says, is the financial incentive to build in security. “Those in the best position to do something about it aren’t necessarily incentivized to do something about it. It’s why we have identity theft and not loan fraud, because the incentives were in the wrong place.”


Black Hat USA returns to Las Vegas with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/endpoint/authentication/intelligent-authentication-market-grows-to-meet-demand/d/d-id/1335155?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

UK Forensics Firm Paid Ransom in Cyberattack

Victim firm Eurofins Scientific handles more than 70,000 criminal cases per year in the UK.

UK forensics giant Eurofins Scientific reportedly paid ransom to the attackers behind a “highly sophisticated” ransomware infection that hit the organization last month.

BBC News reported today that Eurofins, which handles computer forensics, DNA testing, toxicology analysis, and firearms testing for UK police departments, paid the ransom in order to regain control of its data that was locked down in the attack.

Just how much ransom was paid was not known as of the BBC posting, but the National Crime Agency (NCA) told the media outlet it’s up to victims whether they pay up or not in a ransomware attack.

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/uk-forensics-firm-paid-ransom-in-cyberattack/d/d-id/1335156?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

5 tips to stay secure on social media

Here at Naked Security, we’re well aware that social networks aren’t for everyone, and if you’ve decided to stay away from them, we’re good with that.

After all, the best way to prevent privacy blunders and data breaches is simply not to give out the data in the first place – or, if you’re a vendor, not to pressurise people into sharing things that they don’t need to give you and that you’ll probably never use anyway.

But we’re not killjoys, either.

We enjoy spending time on social media – it’s a fun and effective way to keep in contact with our followers and to spread the word about cybersecurity without relying entirely on written articles.

We think you can be part of the social media scene and yet keep enough of your life and lifestyle private that you end up enjoying the benefits without being squashed by the risks…

…but you do need to follow some simple guidelines, both to protect yourself from online rogues, and to stop those same online rogues abusing your account to attack your friends.

Anyway, last weekend was #SocialMediaDay, which was meant to be a way to celebrate all the cool things that social networks let you do, but NOT a call to throw all caution to the winds and start sharing everything with everyone!

So we thought we’d take to social media ourselves, and give you 5 social media security tips to help you get the balance right:

(Watch directly on YouTube if the video won’t play here.)


By the way, if you’d like to learn more about staying safe online, why not sign up for one of our live podcasts next week?

We’re doing one audio webinar a day, Monday to Friday, on the topics of phishing and privacy, social media, web security, the cloud, and threat detection.

Next Tuesday’s episode (2019-07-09 at 14:00 UK time, 9am East Coast) features Naked Security’s own Mark Stockley talking about Social Media – How To Be In It and Win It [external registration link].

Mark is a veritable fountain of great social media advice – he’ll give you techniques you can use to keep yourself, your business, and your kids safe online without cutting yourself off altogether.

Join us to learn how to do it!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/G-AMVjouez0/

Wide of the net: Football Association of Ireland says player, manager data safe after breach

The Football Association of Ireland (FAI) has confirmed it suffered a security breach of its payroll systems, which was discovered last month, saying no staff data had been compromised.

It was previously feared that hackers could have stolen bank details for leading FAI employees and officials, like Ireland manager Mick McCarthy, and staff were told to monitor their bank accounts for unusual activity – but it looks like cyber-crooks failed to exfiltrate their bounty.

The FAI confirmed that the source of the recent hacking attempt was a malware infection targeting payroll systems at its Abbotstown headquarters, discovered over the June Bank Holiday.


The organisation previously told the Irish Independent that these systems stored names, salaries, contact details, bank account details and Personal Public Service numbers of staff.

“Upon becoming aware of the incident, the FAI immediately engaged external computer forensic experts to assist with investigating the incident,” the org said in a statement issued on Wednesday to all current and former staff.

“These investigations found malware on a payroll server but the FAI have assured staff, and former staff, today that there is no evidence of any of their data being extracted from the server.”

In the latest statement, the football body noted that all payment data was actually stored off-site, while details relating to ticket sales were handled by a third party, and so neither was affected. Nor was the FAInet system that handles player registration details, introduced in 2016, it said.

“The FAI have treated this matter very seriously and are focused on closing out this incident and preventing any further security incidents,” the org added.

The episode can serve as a great example of how to comply with GDPR: the FAI got in touch with the Irish Office of the Data Protection Commission as soon as the breach was discovered – even though there was a chance it could turn into a huge PR disaster. It also informed the police service.

“The Office of the Data Protection Commission has been notified of the incident as well as our efforts to ensure that no data subjects were adversely impacted,” the FAI said. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/05/fai_says_player_manager_data_safe_after_recent_breach/

Deepfake revenge porn now a crime in Virginia

Virginia has expanded its ban on nonconsensual pornography to include deepfakes, or what it’s calling the distribution of “falsely created” images and videos.

The updated law, which went into effect on Monday, expands an existing law that says anyone who shares or sells nude or sexual images and videos to “coerce, harass, or intimidate” is guilty of a Class 1 misdemeanor. Violators could face up to 12 months in jail and up to $2,500 in fines.

Here’s the new rule, with the added language about deepfakes in bold:

Any person who, with the intent to coerce, harass, or intimidate, maliciously disseminates or sells any videographic or still image created by any means whatsoever, including a falsely created videographic or still image, that depicts another person who is totally nude, or in a state of undress so as to expose the genitals, pubic area, buttocks, or female breast, where such person knows or has reason to know that he is not licensed or authorized to disseminate or sell such videographic or still image is guilty of a Class 1 misdemeanor.

It’s not just Virginia. Deepfakes are getting plenty of people plenty worried. It started out as a way to do artificial intelligence (AI)-generated face-swaps on porn stars, politicians and other celebrities, but it’s since evolved. The latest iteration came out last week, in the form of DeepNude: a $50 app that automatically undressed a photo of any woman with a single click, swapping the clothes for breasts and a vulva.

After Motherboard reported on DeepNude, the internet threw up at the thought, prompting the app’s anonymous creator, who goes by the name Alberto, to shut it down (or so he claimed).

Despite the safety measures adopted (watermarks) if 500,000 use it the probability that people will misuse it will be too high. […] The world is not yet ready for DeepNude.

Oh, Alberto, the world is sooooo ready for DeepNude. It’s so ready, it’s spread on file-sharing networks across the land regardless of your purportedly shutting it down. That horse didn’t just escape that barn before you shut the door. It tucked a copy of your compiled code into its saddle bag, hopped on a transatlantic and decided to backpack across Siberia for summer break.

As The Register reported on Tuesday, the DeepNude app, sold in Windows, Linux, and Android versions, was downloaded by “hordes of internet pervs,” who are now sharing the packages on file-sharing networks.

They’re not just sharing it: they’re reverse-engineering it, and they’ve reportedly even improved on its creators’ buggy first release, and have also reportedly removed the automatic placing of watermarks on generated images that labeled the doctored photos as “fake.” Not that those watermarks couldn’t be rinsed off with Photoshop in a nanosecond, mind you.

At any rate, although Virginia may be the first to get a deepfakes ban enacted, it’s looking like it won’t be the last.

Here’s a list of pending bills to regulate deepfakes:

  • There’s a bipartisan effort going on in Congress: Sen. Ben Sasse (R-NE) and Rep. Yvette Clarke (D-NY) have both introduced bills that would regulate deepfakes.
  • Texas passed its own anti-deepfakes law, which goes into effect on 1 September 2019. However, it addresses election manipulation, not nonconsensual porn.
  • New York is considering a bill that would ban creating “digital replicas” of people without their consent – a bill that the Motion Picture Association of America is leery about, given that it would “restrict the ability of our members to tell stories about and inspired by real people and events.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8ak9ak33zfE/

Mannequin Challenge videos teach computers to see

Remember the Mannequin Challenge? It was a short-lived 2016 phenomenon where groups of people would stand still in elaborate poses while someone moved around and filmed them. It looked a lot like the ‘bullet time’ visual effect, famously used in the Matrix movies, where people would seem to stop in mid-air as bullets whizzed around them. Some of the submissions were elaborate and took a lot of effort. Just look at James Corden’s:

Like lots of other online content, the thousands of Mannequin Challenge videos that made their way onto YouTube have been repurposed.

A team of Google researchers has collected these videos to train an AI system that will help computers see 3D scenes the way people do.

In their paper, the scientists explain that our understanding of object persistence lets us keep track of how far apart objects are in 3D space, even when they move around and pass behind each other, and even when we have one eye shut (which switches off stereoscopic depth perception). That’s harder for computers to do.

Computers use AI to learn this kind of thing, but they need lots of data to learn from. In this case, what they needed were videos of static objects with a camera that moves around them.

Thanks to the crazy place that is the internet, they surfaced thousands of Mannequin Challenge videos to help. The videos were just what the researchers needed to teach computers about the depth and ordering of objects. They said:

We found around 2,000 candidate videos for which this processing is possible. These videos comprise our new MannequinChallenge (MC) Dataset, which spans a wide range of scenes with people of different ages, naturally posing in different group configurations.

Because the people in the videos are static, the researchers can match their key features across multiple frames and use them to compare depth. The data wasn’t all clean, and they had to do some cleanup for things like camera blur. They also had to remove parts of the video with synthetic background (like posters, say) or people that just had to scratch an ear as the camera moved past.
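The paper’s actual pipeline feeds those matched frames into a neural network, but the underlying geometric signal is plain two-view triangulation: a static point’s apparent shift (parallax) between two camera positions is inversely proportional to its depth. A toy sketch of that relationship, using made-up camera parameters:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic two-view depth estimate: nearer points shift more between views."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: a 1000-pixel focal length, camera positions 0.5 m apart.
near = depth_from_disparity(1000, 0.5, 50)   # large shift between frames
far  = depth_from_disparity(1000, 0.5, 5)    # small shift between frames
```

This is why the frozen poses matter: if a “mannequin” moved between frames, the apparent shift would mix real motion with parallax and the depth signal would be wrong.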

The results were positive, although there were some limitations. The technique is good at recognizing depth and ordering between humans, but not so good at non-human subjects, like cars.

Like all technology research, an AI that lets computers judge the distance between people using a single lens could have many applications. You could envisage its use in smartphone cameras, making them better at shooting people, or in monocular hunter-killer drones, making them better at, um, shooting people.

That raises a question: should people have a say in whether their image or other personal data is used in AI training? The participants in those YouTube videos couldn’t have known what an obscure Google research team would use them for, and now have no say in where that research goes or how it’s used. Surely this is something that GDPR is there to protect, with its demand that companies explain exactly what personal data will be used for?

This isn’t the first time people’s data has been co-opted for AI datasets. IBM compiled a dataset of one million faces, harvested from the Flickr photo sharing site, to improve the diversity of its facial recognition system. In March, NBC discovered that those people had not given permission.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XNi1TxTmoZ0/

Bitcoin eats as much energy as Switzerland

Bitcoin is drawing about seven gigawatts of electricity, according to a new tool from the University of Cambridge’s Centre for Alternative Finance, called the Cambridge Bitcoin Electricity Consumption Index (CBECI).

That’s a bit more than the entire country of Switzerland is using, according to the CBECI – a number that’s admittedly hard to visualize, which is why researchers provided this set of comparisons.

It’s equal to 0.21% of the world’s electricity supply. In science fiction terms, that’s the amount of power required to send approximately six DeLoreans carrying Marty and Doc Back to the Future. In nonfiction terms, it’s the amount of power generated by seven Dungeness nuclear power plants, the BBC notes.
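A note on the units: a gigawatt is a rate of consumption, not an amount of energy, so the country comparison comes from assuming that draw runs continuously for a year. The back-of-the-envelope conversion (illustrative figures only):

```python
HOURS_PER_YEAR = 365.25 * 24   # about 8,766 hours

def annual_twh(gigawatts: float) -> float:
    """Convert a continuous power draw in GW into annual energy use in TWh."""
    return gigawatts * HOURS_PER_YEAR / 1000   # GWh -> TWh

print(round(annual_twh(7), 1))   # 61.4 -- in the same ballpark as Switzerland
```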

Some fun facts to put that energy consumption into even more recognizable terms:

Always-on but inactive home devices in the USA consume enough electricity per year to power the Bitcoin network for 4 years. That’s right: somebody needs to figure out how to harness the power of the always-on recirculation pumps in the nation’s goldfish bowls, our TVs, our computers, our printers, and our game consoles and get those suckers crypto-mining for us.

But wait, is that the main takeaway? Or is it that all this energy use is going to contribute to climate change? In other words…

Is Bitcoin boiling the oceans?

One study estimated that the electricity used in Bitcoin production produces about 22 megatons of CO2 emissions annually – a level that’s somewhere between that produced by the nations of Jordan and Sri Lanka, or roughly as much as Kansas City in the US.

The authors of that study, which appeared in the scientific journal Joule in June, said that the work of Bitcoin to validate transactions via a decentralized data protocol requires “vast” amounts of electricity, which translate into “a significant level of carbon emissions.”

Our approximation of Bitcoin’s carbon footprint underlines the need to tackle the environmental externalities that result from cryptocurrencies.

Michel Rauchs, one of the co-creators of the CBECI tool, told the BBC that his team wants such comparisons to help people judge for themselves whether Bitcoin is gobbling up too much energy and producing too much in the way of emissions:

Visitors to the website can make up their own mind as to whether it seems large or small.

The Cambridge researchers say that there’s currently “little evidence suggesting that Bitcoin directly contributes to climate change.”

Even when assuming that Bitcoin mining was exclusively powered by coal – a very unrealistic scenario given that a non-trivial number of facilities run exclusively on renewables – total carbon dioxide emissions would not exceed 58 million tons of CO2, which would roughly correspond to 0.17% of the world’s total emissions.

The Joule researchers stress that their own work underlines the need to tackle “the environmental externalities that result from cryptocurrencies” and highlights the necessity of cost/benefit trade-offs for blockchain applications in general.

We do not question the efficiency gains that blockchain technology could, in certain cases, provide. However, the current debate is focused on anticipated benefits, and more attention needs to be given to costs.

They think that policy makers need to pay attention to aspects of Bitcoin production such as the fact that global electricity prices don’t reflect the future damage caused by today’s carbon emissions, in spite of the fact that cryptocurrencies cause “a relatively small fraction of global emissions”.

The Cambridge researchers agree: in spite of Bitcoin only creating a small portion of global emissions, that’s no reason to ignore environmental concerns about Bitcoin’s energy consumption:

There are valid concerns that Bitcoin’s growing electricity consumption may pose a threat to achieving the United Nations Sustainable Development Goals in the future.

However, current figures should be put into perspective: available data shows that even in the worst case (i.e. mining exclusively powered by coal), Bitcoin’s environmental footprint currently remains marginal at best.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uQupV9p54mE/

OpenPGP experts targeted by long-feared ‘poisoning’ attack

Somebody out there has taken a big dislike to Robert J. Hansen (‘rjh’) and Daniel Kahn Gillmor (‘dkg’), two well-regarded experts in the specialised world of OpenPGP email encryption.

It’s not known who launched the attacks in late June 2019 (Hansen says he has suspects in mind), but it’s the nature of the campaign against them that has people in this corner of encryption worried – a “poisoning” attack against their personal certificate signatures held on the OpenPGP Synchronizing Key Server (SKS) network.

It sounds arcane but the effects of this on the sizeable number of people using implementations of the OpenPGP protocol – GnuPG, Sequoia PGP, OpenPGP.js – are to varying degrees potentially very serious. Daniel Kahn Gillmor blogged last week:

My public cryptographic identity has been spammed to the point where it is unusable in standard workflows.

The most disconcerting thing about these attacks is how easy they were to launch: simply spam large numbers of fake certificate signatures to the keyservers, effectively burying the genuine signatures on the two men’s certificates under tens of thousands of bogus additions.

This sort of attack has been feared for a decade, with smaller attacks recorded a year ago fulfilling that prediction. What’s novel this time, however, is the scale and highly targeted nature of the campaign. As Hansen sums it up in his own reaction:

To have my own certificate directly spammed in this way felt surprisingly personal, as though someone was trying to attack or punish me, specifically.

And it really is a flood – comprising 55,000 fakes directed at Daniel Kahn Gillmor and twice that number at Hansen. This causes problems (see below) but what matters is that the pair now fear the attack will be used against others, expanding its scope in ways that will be very hard to counter.

“Spammed into oblivion”

Almost from the moment Phil Zimmermann invented Pretty Good Privacy (PGP) in the early 1990s, securing emails with this type of public or asymmetric key cryptography struggled with the problem that to communicate with someone, you had to have their public key.

The solution was to publish an individual’s certificates using a distributed keyserver directory that anyone could check and verify offline for themselves so they’d know that the person using it was who they said they were.

But in order to battle the possibility that governments might try to delete or falsify signatures, the SKS infrastructure was set up to be append-only: certificates could have information added to them, but neither those additions nor the certificate itself could ever be deleted.

This required making the key server infrastructure distributed – if a certificate was deleted or removed on one server, that would be quickly noticed through a comparison with the information held by the others.

This resilience created what were then hypothetical weaknesses, including the one now being exploited in the attacks on Robert J. Hansen and Daniel Kahn Gillmor, which allows attackers to keep adding more and more data.

The OpenPGP protocol allows for up to 150,000 signatures for each user’s certificate, but popular implementations such as GnuPG can effectively be broken by fakes, notes Hansen:

Any time GnuPG has to deal with such a spammed certificate, GnuPG grinds to a halt. It doesn’t stop, per se, but it gets wedged for so long it is for all intents and purposes completely unusable.

Fixing things

It seems that fixing this weakness isn’t going to be quick or easy, assuming it’s possible to fix it at all.

According to Hansen, the keyserver design itself is the main barrier. The algorithm SKS uses to perform reconciliation requires specialised knowledge, as does the obscure dialect of OCaml in which the software was written by a single developer decades back.

The complicated nature of this software has resulted in it being unmaintained, which in turn has stymied the development necessary to solve larger issues such as how a deliberately federated system could even agree and implement security reforms in the first place.

Hansen’s advice is that high-risk users should stop using the keyserver network until some kind of mitigation is worked up.

That might take some time. As Daniel Kahn Gillmor puts it:

This is a mess, and it’s a mess a long time coming.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/UOIPdXFyQ9s/

Worried about hackers? Catch a lifeline with this month’s Sophos SOS cybersecurity podcasts

Promo If worries about cybersecurity threats to your business and private data are keeping you awake at night, soothe your nerves by tuning into the Sophos SOS Security Week series of podcasts from 8-12 July, and find out what you need to know to be better prepared.

Five top experts from the UK encryption and antivirus company will share their experience and advice in a series of 40-minute interviews with Sophos senior technologist Paul Ducklin. The podcasts cover everything from phishing to common web, cloud, and social media dangers.

Tune in and brush up your knowledge of the following topics:

Phishing and privacy online

Any information you reveal about yourself could help crooks bypass your defences, guess your passwords, and sneak into your life or your company’s network. Sophos senior sales engineer James Burchell explains how to avoid the pitfalls in clear and entertaining language.

Make the best of social media

Sites such as Twitter, Facebook, Instagram, and Snapchat are great for building your brand – though they can also tempt you into giving away more than you should. Join Mark Stockley, the man who keeps the Naked Security site running, to learn how to keep both your business and your family safe on social media.

Don’t let the cybercriminals in

From your apps in the cloud to the valuable online services you offer, you need to make sure criminals don’t get round your security measures. Sales engineer Benedict Jones knows how hard they try to get at your business data, and how to stop them succeeding.

How to love the cloud

Moving to the cloud is great: someone else will look after your servers and provide you with all the apps you need. But do you know who else is in the server room? Are you up to date with the latest patches? Who’s been working on the configuration files? Security expert and keen honeypot researcher Matthew Boddy offers guidance.

The tricks and traps of malware

The knowledgeable Fraser Howard of SophosLabs explains how to deal with multi-stage, multi-pronged malware attacks.

As a bonus, five lucky attendees at every podcast will receive a zip jacket and a pair of Sophos socks.

All the details you need are here.

Sponsored by Sophos.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/05/sophos_sos_podcasts/