
25 Years Later: Looking Back at the First Great (Cyber) Bank Heist

The Citibank hack in 1994 marked a turning point for banking — and cybercrime — as we know it. What can we learn from looking back at the past 25 years?

The banking industry was at a crossroads 25 years ago, marking the beginning of the digital world we know today. Banks were struggling to lower costs while improving customer access, and physical branches and human tellers were giving way to ATMs and electronic services.

It was also the time when Citibank fell victim to what many consider one of the first great cybercrimes. Vladimir Levin made headlines in 1994 when he gained access to the bank’s dial-up wire transfer service and siphoned some $10 million from several large corporate customers. Levin transferred the money to accounts set up in Finland, the United States, the Netherlands, Germany, and Israel. He was eventually caught, and Citibank ultimately recovered most of the money.

Looking back, it may have been the first successful penetration of the systems that transfer trillions of dollars a day around the globe. The moment not only captured the attention of the world, it also captured the imagination of my teenage self, inspiring my curiosity — and eventually a career — in the world of cybersecurity.

My young mind struggled to comprehend how something so seemingly simple could baffle the defenses of one of the world’s largest financial institutions. As the Los Angeles Times reported in 1995, “The incident underscores the vulnerability of financial institutions as they come to increasingly rely on electronic transactions. … But as they seek to promote electronic services — and cut the high costs of running branch offices — they face risks.”

I think we could easily say we’re in a similar situation today.

From Bonnie and Clyde to Black Hat
When I first learned of the heist through a documentary on local British television, I was shocked that someone could take money from a bank without even having to step into a branch. It was armchair fraud — the perpetrator never left a physical fingerprint, all while essentially penetrating the impenetrable.

The 1990s and 2000s were abuzz with the excitement of the Internet and proliferation of access to Internet browsers. As we welcomed this new and wild World Wide Web, banks began to digitize their storefronts. However, the Internet wasn’t inherently designed with digital security in mind. The framework of the Internet was born in academia, an altruistic environment built around trust and exploration.

But with every gain, there was someone trying to game the system for a variety of reasons. Some were just curious what was accessible in this digital frontier. Others, like Levin, had more nefarious goals in mind.

Fast forward 20-plus years, and while we are in an entirely unrecognizable digital world, we’re still facing a similar battle. Rather than spoofing dial-up systems, we have industrial and government-level cybercrime, unpredictable intelligent bots, and vast amounts of computing power to deal with. Yet while there are similarities, there are a few important differences:

  • Scalability: While fraudsters were sophisticated for their time, scalability is what really affects how we understand fraud today. In the past, there were thousands of smaller banks and just a handful of people around the world with the capability to “digitally” break in and make off with the loot. Now there are fewer — but larger — banks to steal from, yet with today’s digital resources, fraudsters can maximize the footprint of their criminality. They target governments or large enterprises, or they simply get out of the robbery business and make their riches selling the tools globally across the Dark Web, which allows anyone with a computer, an Internet connection, and a few hundred dollars to become a cybercriminal.
  • The rate of change: There was massive acceleration from the Industrial Revolution to the digital revolution. While it’s well known that the rate of change in the Industrial Revolution was swift, today’s rate of change is unmatched. Change inherently brings risk, and with the finance industry rapidly transforming, threats often move faster than the solutions that target them. This new rate of change has transformed the job of the CISO, who now must think strategically, and even abstractly, about protecting what isn’t even known yet.
  • Digital identity: The concept of digital identity wasn’t on the radar 25 years ago. But today, we have hundreds of websites where we must manage our identity, even if only about 10 are actually important. Consider Facebook, where nearly one-third of the global population logs on and also uses the same credentials to access millions of other accounts and services. In the digital world, you can become anyone as long as you can get a hold of their credentials — whether that is a password, a Social Security number, or a fingerprint.

Don’t Fight Fraud Alone
Today’s solutions must absolutely be comprehensive, involving much more than simple cross-industry collaboration. Regulations and frameworks can provide guidance and foster a productive global conversation about the issues at hand, but they take time to put into place and can’t adapt as quickly as the threats they are meant to mitigate. Fighting fraud today requires real-time intelligence. For security executives, it means a continual education on the latest tools, trends, and trials of the cybersecurity market.

The financial services industry has adapted to this new age of fraud by promoting strategic partnerships — often between financial technology companies (fintechs) and banks. While banks bring to the table many decades of refined, robust security measures and regulatory knowledge, fintechs offer their innovative initiatives, agility, and scalability to develop even more sophisticated methodologies for fighting fraud. Every organization is facing an uphill battle as the “what” to protect and “who” to protect it from are rapidly changing. Fortunately, these partnerships offer the right mix of expertise, experience, and innovation to quickly adapt and respond to changes in the cyber ecosystem, often providing a blueprint for others to follow suit.

Will There Be a Great Bank Heist of 2024?
Today, we have a better understanding of what comprises our digital assets, but it remains a constant battle to determine how best to secure them. The monetary losses financial institutions suffer from fraud and theft are staggering. Worse, the threat landscape is maturing in more insidious directions, suggesting we need to reconsider the value placed on different assets. Unlike a traditional bank theft, when those digital assets fall into the wrong hands, the damage reaches the livelihoods of many more citizens and strikes at the backbone of our modern society and economy.


Zia Hayat is CEO of Callsign, a company that specializes in frictionless identification. Zia has a PhD in information systems security from the University of Southampton and has worked in cybersecurity for both BAE Systems and Lloyds Banking Group. He founded Callsign in … View Full Bio

Article source: https://www.darkreading.com/perimeter/25-years-later-looking-back-at-the-first-great-(cyber)-bank-heist/a/d-id/1333502?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

What happens when a Royal Navy warship sees a NATO task force headed straight for it? A crash course in Morse

Boatnotes What’s it like aboard a warship? Aside from the glamorous bits when Russian jets are whizzing past and there’s lots to do? El Reg not only went aboard HMS Enterprise to find out – we scored a trip to the Arctic Circle courtesy of the Royal Navy.

As related in previous instalments, your correspondent was lucky enough to be invited aboard the seabed survey ship at the end of October by the Ministry of Defence.

So far we’ve seen Enterprise’s seabed survey and data-gathering gear and we’ve had a sneaky look at the ship’s collection of mid-2000s operating systems. But what was it actually like, sailing towards the Arctic Circle?

Shipboard routine is fairly, well, routine. A pipe (Tannoy announcement) is made at 07:00 to wake everyone up with the traditional bosun’s whistle, “Call the Hands”. For the officers, with whom your correspondent ate and relaxed, breakfast was served between 07:00 and 08:00. A traditional affair, this consists of a help-yourself buffet featuring the usual British fare of bacon, sausage, eggs, beans and toast, or cereal and milk.

The ship’s wake as we sailed through the Norwegian fjords

After 08:00 the work day begins for most. All of the ship’s company stand four-hour watches at least once a day; when your correspondent joined the ship in Kristiansund, Norway, some of the junior officers were jokingly moaning about having been on defence watches, which is a routine of six hours on, six hours off. “Most people take a big sleep in one [off period] and do their other work in the other,” one sub-lieutenant told me.

Still a warship: A dramatic rocky outcrop is framed by HMS Enterprise’s superstructure and one of her covered 20mm guns

After breakfast I wandered up to the ship’s bridge. This is where the whole show is run. In charge is the officer of the watch (OOW), or, if he’s around, the captain, Commander Phil Harper. For now it is one of the ship’s clutch of sub-lieutenants, who are all recent graduates from Britannia Royal Naval College Dartmouth, the Royal Navy’s training school for new officers.

The OOW is assisted on the bridge by around half a dozen other crew members. One of these, a rating, sits in the main control position, with the ship’s wheel and the central throttles to hand, while the others are scattered around the bridge as lookouts.

HMS Enterprise’s bridge while at sea

At various points during the day, the ship’s canteen is opened. This is announced to all through a cheery one-word pipe consisting of the word “Caaaaanteen!” Upon receipt of the signal, everyone not immediately busy tends to rush to the little serving hatch to buy snacks and the like.

Lunchtime for the officers, this time served in the wardroom (officers’ mess), is between noon and 13:30 ship’s time, broken into two sittings. You pick your options in advance at some point in mid-morning; the wardroom steward was good enough to hunt your correspondent down to ensure my order was received, though the choice between “chicken” and “meat kebab” was a pretty clear one.

Like a good hotel, there are two stewards who serve the officers their meals and clear the table. Though this might seem a bit antiquated, I had already seen one of the stewards on the bridge as we left harbour, monitoring instruments and calling out changes, so they’re not just waiters for the upper class. Manners are everything in the wardroom at meal times, though otherwise off-duty officers lounge in deep, comfortable leather sofas and watch films on a TV bigger than Elon Musk’s ego.

Enterprise’s funnel, pictured against the stunning Norwegian fjords near Kristiansund

This being a ship crossing the North Sea in November, it was choppy. As I wrote this, the deck was rolling and pitching. Not too badly – for the nautically inclined reading this, we were in sea state 2 or 3, which equated to a movement of around 10-15 degrees from vertical and horizontal. Sea sickness tablets (Stugeron, for those wanting to do as the Navy do) were available, and their use was strongly encouraged by all the crew.

The kit

In the afternoon other tasks take place as part of the ship’s programme. One day there were simulated machinery breakdowns. I followed these from the machinery control room (MCR) and down in the machinery spaces (engine rooms) themselves. Enterprise’s machinery spaces are unmanned during normal running, with a high level of automation that lets supervisors in the MCR remotely close and open valves, or start and stop the three generators that power the ship’s two main azipod thrusters.

The machinery control room, with human in position

Even the MCR is normally unmanned, with duty personnel carrying bleepers that alert them to rush to their posts if needed.

When something goes wrong with the ship’s main machinery, however, a human has to go in and fix the problem.

HMS Enterprise’s main machinery room, or engine room

During the breakdown drill, Stan the stoker did a good job of fault-finding the simulated failure (a half-open valve to a cooling system), with “Kenny” (think of a 20th century entertainer) the supervising petty officer doing an equally good job of pretending to check vital equipment for your correspondent’s benefit before debriefing Stan.

Danger, danger! High voltage! Enterprise’s main high voltage electrical cabinets

The azipods themselves are, as the ship’s upbeat marine engineer officer (who wore, entirely incongruously, a far-from-pristine white boiler suit) explained, two large DC electrical motors mounted under the ship’s stern on an arrangement similar to a tank turret.

The selectors for the azipods on the port bridge wing

The propellers can be run in forward or reverse; in addition, the azipods can also be individually rotated through 360 degrees to move the ship in any desired direction. Enterprise’s top speed is around 15-16kts. On the flip side, if anything goes wrong with the azipods the ship has to be dry docked, “which costs time and money”.

One of the azipods aboard the Enterprise. Note thick electrical cabling at bottom right, feeding the main motor

Powering the azipods, and the ship’s bow thruster, are three great big diesel generators plus an auxiliary harbour generator. The level of automation designed into Enterprise is evident: as one watchkeeping lieutenant explained, the ship’s command sets what speed they want and the ship herself works out how many generators she needs running to generate the required motor power and achieve that speed. If the “spinning reserve” of excess kilowatts above the ship’s power draw falls below a certain level, the ship starts another generator.
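
For the technically minded, that logic boils down to something like the toy sketch below. The generator rating and reserve threshold are invented for illustration; Enterprise’s real figures weren’t shared with your correspondent.

```python
# Toy sketch of the "spinning reserve" rule described above. The kilowatt
# figures are assumptions, not the ship's actual ratings.
GENERATOR_KW = 1500      # assumed output of one diesel generator
MIN_RESERVE_KW = 400     # assumed minimum spare capacity before starting another set
MAX_GENERATORS = 3       # Enterprise carries three main generators

def generators_needed(power_draw_kw: float) -> int:
    """Return how many generators keep the spinning reserve above the threshold."""
    running = 1
    while running * GENERATOR_KW - power_draw_kw < MIN_RESERVE_KW and running < MAX_GENERATORS:
        running += 1
    return running

for draw in (900, 1800, 3400):
    print(f"{draw} kW demand -> run {generators_needed(draw)} generator(s)")
```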

Towards the end of the day comes dinner. Once again served in the wardroom, at around 6.30pm, the chicken and chorizo served over spaghetti really hit the spot. It wasn’t cordon bleu (sorry, chef) but your correspondent was quite happy to wolf it down. Others at the table, sadly, seemed to be feeling the effects of the ship’s constant motion and just picked at their food.

After excusing myself from the table I wandered up to the bridge. Naval tradition has it that you ask the officer of the watch’s permission before entering: aboard the Enterprise, the full phrase “Permission to come on the bridge, please, officer of the watch?” has morphed into “bridge, please, officer of the watch!” Nonetheless, it is still a question to be answered and not a formality.

Commander Phil Harper, captain of HMS Enterprise, on the bridge as we left Kristiansund. AB Richards is at left, with “Navs”, Lt Kyle O’Regan, leaning over the compass

Up on the bridge, now we’re out in the open ocean, things have just become interesting. A NATO carrier task force is sailing at us, comprising the US Navy ships New York (a 25,000-ton amphibious warfare ship), Iwo Jima (40,000 tons of flat-topped helicopter carrier), a US Naval Service supply ship and the Polish Navy destroyer General Pulaski.

Cdr Harper later tells me that according to the nautical “rules of the road” we have right of way – but it certainly doesn’t feel that way when looking out of the bridge windows. The OOW, Sub-Lieutenant “Deeps” (she wants to join the submarine service after finishing her training aboard Enterprise, geddit), is looking strained, to put it mildly. Enterprise slows, alters course, alters again. Off-duty officers start appearing on the bridge, supporting the OOW and subtly rubbernecking at the task force.

The WECDIS (naval navigation software) display of the task force

“They keep altering course, sir,” said Deeps as the captain arrived.

“Perhaps they’re zig-zagging,” replied Cdr Harper, applying 28 years’ seagoing experience. Having the Enterprise appear precisely in the middle of the task force’s intended course is not the easiest of things to deal with, especially with the Americans constantly changing course – only the General Pulaski is passing far enough in front of us, around 6 or 7 nautical miles, that her activities don’t matter too much.

The Polish destroyer ORP General Pulaski, seen in the Arctic

I moved outside to the port bridge wing (door on the left that leads to an outside platform running behind the bridge) and found two signals ratings operating an Aldis signal lamp. One was pressing the key, opening and closing the shutters in front of the lamp to send a Morse code message to the New York, while the other had his binoculars to hand.

Tap tap – clunk, tap. Tap tap – clunk, tap. Morse code letter R.

The signallers doing their thing with an Aldis lamp

A light, a little concentrated white dot of it, surprisingly bright in the afternoon Arctic sun, shone back at us from the New York’s bridge.

“Dash, dash, dot… G,” says the rating with the binos. “Means repeat.”

Tap tap – clunk, tap, goes his mate on the lamp. G, comes the answering sequence of flashes.

“Look at this, I’ve got the script here” – the lamp rating shows me a piece of laminated card covered in code letters arranged in challenge and response sequences – “we send this, they reply with their callsign, we send ours, and that’s how it’s meant to go.”

I looked up in time to see the New York’s signaller tapping out G again. “Good luck with it,” I said, wandering to the rear of the bridge to rubberneck at the rest of the task force.

A rather grainy blown-up picture of the USS New York, as we saw her in the Arctic at the end of NATO exercise Trident Juncture

Surely, in this day and age, I asked Cdr Harper, there’s no need for such antiquated communication methods? The CO enthusiastically replied, referring to the Morse code signalling routine: “It’s secure because it’s line-of-sight only; we can encrypt it; it’s very difficult to eavesdrop on it unless you’re inside line-of-sight; it’s a valuable naval skill.”

He had a point. I got the distinct impression that the Royal Navy values Morse code highly as a matter of professional pride.
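
For readers whose Morse is as rusty as your correspondent’s, the lamp exchange boils down to encoding single letters as dots and dashes. A purely illustrative sketch follows; the callsign is made up, and the real challenge and response letters live on that laminated card.

```python
# Minimal Morse encoder, for illustration only. "ENT" is a made-up callsign;
# real callsigns and challenge/response letters come from the signal book.
MORSE = {
    "A": ".-", "B": "-...", "E": ".", "G": "--.", "N": "-.",
    "O": "---", "P": ".--.", "R": ".-.", "S": "...", "T": "-",
}

def to_morse(text: str) -> str:
    return " ".join(MORSE.get(ch, "?") for ch in text.upper())

print(to_morse("R"))    # .-.     a single acknowledging letter
print(to_morse("G"))    # --.     "repeat", the letter the New York kept flashing back
print(to_morse("ENT"))  # . -. -  hypothetical callsign
```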

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/02/hms_enterprise_embed_life_at_sea/

Open-source devs: Wget off your bloated festive behinds and patch this user cred-blabbing bug

Happy New Year! Oh, and if you include GNU’s wget utility in software you write, pull down the new version released on Boxing Day and push out updates to your users.

The popular utility retrieves internet-hosted HTTP/HTTPS and FTP/FTPS content, and some years ago it began recording each download’s source URL in extended file attributes on disk.

On Christmas Day, security researcher Gynvael Coldwind (@voltagex) noted on Twitter that the stored attributes can include user credentials.

Though only stored locally, user IDs and passwords weren’t protected, and as Hanno Böck pointed out on the OSS-Sec mailing list, URLs can even contain “secret tokens” used for external services like file hosting.

“The URL of downloads gets stored via filesystem attributes on systems that support Unix extended attributes,” Böck wrote, and they are easily readable by anyone logged in to the machine using the getfattr command.
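
On Linux, the same attributes can be read without getfattr. Below is a minimal sketch, assuming a file previously fetched with an affected wget build (pre-1.20.1) on an xattr-capable filesystem; the file path is a placeholder.

```python
# Read extended attributes from a wget-downloaded file (Linux only).
# "downloaded_file.html" is a hypothetical path; point it at a real download.
import os

path = "downloaded_file.html"
for attr in os.listxattr(path):
    value = os.getxattr(path, attr).decode(errors="replace")
    # Affected wget versions store the full source URL here, which may
    # include user:password@ credentials or secret tokens.
    print(f"{attr} = {value}")
```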

The bug has been designated CVE-2018-20483 and could be present in other systems as noted by the Mitre entry. “This also applies to Referer information in the user.xdg.referrer.url metadata attribute. According to 2016-07-22 in the Wget ChangeLog, user.xdg.origin.url was partially based on the behavior of fwrite_xattr in tool_xattr.c in curl.”

Böck said the same behaviour has been reported to the Chrome team and is awaiting a fix. Hector Martin tweeted that the stored information survives being moved to a different filesystem, so someone wanting to harvest stored URLs can copy files from the target’s hard drive to a USB key with no trouble.

Wget dev Tim Rühsen wrote that the utility stopped using xattrs by default in the newly issued version 1.20.1. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/02/wget_bug_fix/

How to secure your Instagram account using 2FA

With our archives full to bursting with stories of hijacked social media accounts, it’s a very good idea to set up two-factor authentication (2FA) on all the platforms you use. 2FA combines your password with something else – a text message to your phone, a code generated by an authenticator app, or a physical key.

Although Instagram is part of Facebook, and Facebook supports several 2FA methods, the 2FA setup process isn’t exactly the same as it is for Facebook, so if you need a bit of help on how to get two-factor authentication on your Instagram account, we’ve outlined the steps in detail below.

While you can browse Instagram and use some Instagram features from a web browser, it’s really meant to be accessed within the Instagram app. To follow the steps below, you’ll need to be logged into the Instagram app on your smartphone or tablet.

    • Go to your Profile by tapping the person icon in the bottom right of the app.
    • Open the “hamburger” menu in the top right of the screen. Tap Settings at the very bottom of that menu.
    • Scroll down to the Privacy and security section and open it up.
    • Under the Security section you’ll find the Two-factor authentication option.
    • Instagram will now show you a screen with a basic introduction to 2FA and the methods it supports: text message-based 2FA and authenticator app-based 2FA. Again, since Instagram is primarily app-based, authentication methods that play nicely with smartphones are what Instagram supports. (USB key-based 2FA devices like a YubiKey wouldn’t work in a mobile context.)
    • On the next screen, you can choose the method(s) you’d like to use for two-factor authentication. While you can choose to enable both Text message and Authentication app-based 2FA, it may make things needlessly complicated for you – unless you’re confident you need both options at once, it’s best to stick with just one of these methods.
    • The more secure of the 2FA options is to use an Authentication app. You’ll need to install a free app like the Google Authenticator or Duo Mobile app to complete the initial 2FA setup on Instagram, and you’ll also need to keep it installed to log in to Instagram afterward. So if you don’t have an authenticator app installed, go ahead and install one right now.
    • Back on the Instagram 2FA setup screen, select the Authentication app option and tap Next, and you’ll be prompted to have Instagram work with your authentication app automatically – which takes care of some of the annoying setup legwork for you, so hit yes. You can use whatever trustworthy authenticator you prefer.
    • Your phone will then switch you over to your authenticator app, and you’ll be asked if you want to add the token attached to your Instagram user name. Hit yes, and you’ll see your Instagram account name within the authenticator app, with a 6-digit numerical code underneath it. That code is your authentication token, and it changes at very frequent intervals (typically every 30 seconds), so copy it and quickly go back to Instagram, which is waiting for you to input it as your confirmation code. (A short sketch of how these rotating codes are generated appears just after this list.)
    • Paste the code in and you should get a confirmation from Instagram that app-based 2FA is now set up.
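
For the curious, those rotating 6-digit codes are time-based one-time passwords (TOTP, RFC 6238). Here is a minimal, illustrative sketch of how an authenticator app derives them; the base32 secret below is made up, whereas the real one is provisioned by Instagram during setup.

```python
# Minimal RFC 6238 TOTP generator, for illustration only. The secret is a
# made-up example; a real authenticator stores the secret Instagram issues.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32.upper().replace(" ", ""))
    counter = int(time.time()) // period               # 30-second time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # prints a fresh 6-digit code every 30 seconds
```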

You’re not 100% done just yet. The next screen will show you your Recovery codes, which are sort of like an emergency escape hatch if you can’t get 2FA to work – say if you lose your phone and can’t use the authentication app, but need to log in to your account.

In the wrong hands, these codes would also let someone bypass your 2FA protections, so you want to keep them confidential and in a safe place. Some people take a screenshot of the codes and email it to themselves, save it to cloud photo storage, or print the codes out and keep them in a locked safe — whatever works for you, as long as the chances of the codes falling into the wrong hands are minimal.

Once 2FA is set up on your account, Instagram will send you an email confirming that this new security measure is in place, and it will also email you if 2FA is ever disabled on your account.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pVOZd-tARas/

How to secure your Twitter account

Intrusions into your Twitter account might range from mild annoyance, to a serious PR fail, to an international political gaffe.

Regardless of how you use it, there’s no need to make it easier for someone who wants to hijack your Twitter account. It’s quite easy to improve the security of your Twitter account and it only takes a few minutes.

Enable two-factor authentication (2FA)

Having a strong, unique password is an important first step to securing your account, but passwords can be guessed, cracked, or stolen by an attacker, so by themselves they’re not enough to stop someone in their tracks.

Your best bet to keep someone out of your account is to also enable two-factor authentication, which means you’ll need a second factor – like a numerical code or physical key – to prove it’s you when you log in to your account. It’s extremely unlikely that someone trying to break into your account has both your password AND access to your unlocked phone, so enabling two-factor authentication significantly reduces the chance of an account break-in.

How to do it: To enable 2FA on your Twitter account, log in and click your profile icon, then go to Settings and privacy. Scroll down to Login verification, which is what Twitter calls two-factor authentication.

Twitter begins the setup with a text message (SMS) code, but once you have 2FA set up you have the option to stick with an SMS code, use a physical security key, or use a mobile authenticator app. Many people prefer to use SMS as it’s easiest, but this method has its own security flaws, so we recommend using an authenticator app on your phone.

For good measure, you may also wish to enable password reset verification, which will require you to confirm your email or phone number if someone (hopefully you) asks to reset your password.

Screen who can contact you

Twitter is great as a big, open platform where anyone can join in the conversation. But that openness can also be a bit of a pain, as harassers and crooks love the platform’s openness too. There’s a very simple way to make sure you aren’t bothered by lazy spammers who are just out to blast Twitter accounts with links to malware as quickly as possible: Screen who can contact you via direct message or by public reply.

You can opt to only allow people you follow to send you a direct message (a private message that does not have a character limit, unlike standard tweets), and you can also opt to enable quality filters on regular tweets that you receive, so tweets by profiles of “low quality” will never reach you. This means that if someone with a phony account tries to send you a potentially phishy link – which can and does happen on Twitter, so always click with caution! – they’ll have to do a lot more work just to set up their account and get past basic quality filters, and most spammers won’t bother.

How to do it: To only allow people you follow to send you a direct message, go to Settings and select Privacy and safety from the left-hand menu, and then deselect Receive direct messages from anyone.

To enable the Twitter quality filters, go to your Settings and select Notifications from the left-hand menu. Under Advanced, select Quality filter.

On this page you can also opt to Mute notifications from people who have a default profile photo and haven’t confirmed their email address, which will filter Twitter accounts that haven’t finished their basic profile setup.

Check your connected apps

Do you remember which apps you’ve authorized to have full access to your Twitter account? It’s painlessly easy to sign up to a service using Twitter, but how long do you want that service to have that kind of access? It’s worth reviewing your connected apps to see what’s still lingering in there, and if you see something you don’t remember authorizing or haven’t used in a while, it’s time to revoke its permission to your account.

How to do it: In your Settings, select Apps and devices from the menu and take a look at the apps that are listed as connected to your account. Hit the revoke option for any app that you no longer need or want.

The nuclear option: protect your tweets

While the idea behind Twitter is that the conversation is public and open to everyone, you can opt to protect your account, which makes your tweets visible only to people that you’ve opted to follow.

Twitter itself notes that if you have tweeted publicly and then later change your account to “protected,” it’s very possible those initially-public tweets will continue to live on publicly in perpetuity – so protecting your account is not an “oops” button for erasing tweets you’ve regretted sending, but it is a good way to make sure you know exactly who’s reading your words. It’s the nuclear option for sure, but if you want control over who’s reading you, it’s the right option for you.

How to do it: In Settings, select Privacy and safety. Under Tweet privacy check Protect your Tweets. (You can always un-protect your tweets and make your tweets public if you ever change your mind!)

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nWCTXE3B3Bo/

How to protect your Facebook account: a walkthrough

Those of you who have joined team #DeleteFacebook may avert your eyes. There are some of us – okay, many of us – who remain on the ubiquitous social media platform, and if you’re one of them, there are some things you can do to make your account more secure from prying eyes.

Here we walk you through the important settings you can change and behaviors you can implement to lock down your privacy on the social network.

Note: To change many of the settings below, Facebook will ask you to input your password. It’s a good reminder that if your password isn’t strong or unique to the site, now is the perfect time to change it!

Enable 2FA

If you only do one thing on the list in this article, do this: enable two-factor authentication (2FA). This means someone trying to break into your Facebook account needs more than just your password; they also need a second token that you own, be it a code or a physical key. The chances of someone having this in their possession are pretty small, so this step will stop most intruders in their tracks.

Facebook will walk you through the steps to enable 2FA on your account to help you get set up. You have a few options available to you for how you want to authenticate: you can choose to use a code sent to you by text message, which is easiest but not completely secure, or to use a code generated by an authenticator app on your phone, which takes a little more setup work.

If you’re really savvy and browsing using the website on a computer, Facebook also supports U2F keys like YubiKey, which is a physical key you plug into your computer’s USB port as your authentication token.

How to do it on your desktop: Go to your Facebook Settings and select Security and Login from the menu on the left. Next to Two-Factor Authentication click Edit and then Get Started.

How to do it in the app: Open Privacy shortcuts from the hamburger menu in the bottom left. Scroll down to the Account Security section and tap Use two-factor authentication. Choose whether you want to set up SMS 2FA or use an authenticator app.

You can turn on 2FA for your account from either the website or the app, you don’t have to do it in both places.

Get login alerts

If someone does manage to get into your Facebook account, you’ll want to know about it as soon as possible. If requested, Facebook can alert you to any strange-seeming logins to your account. You can be alerted via email, text message, Facebook message or even a Facebook in-app notification. It’s a little peace of mind and a very simple measure to set up.

How to do it on your desktop: In your Facebook settings, select Security and Login and scroll down to Setting up Extra Security. Hit the Edit button on Get alerts about unrecognized logins and customize how you’d like to be notified.

How to do it in the app: Open Privacy Shortcuts from the hamburger menu in the bottom left. Scroll down to the Account Security section and tap Receive alerts about unrecognised logins.

Check your connected apps

That quiz you took years ago about your star sign that you promptly posted and forgot about? All these years it’s had permission to see your profile, posts, and friends’ posts in perpetuity, so why does it still have this access?

You could have any number of apps like this quietly sniffing your information in the background. There’s an easy way to check what apps you might still have enabled, and disable them if you like. It’s best to have as few apps enabled as possible – and definitely remove permissions for any apps that you don’t recognize or remember using.

How to do it on your desktop: In your settings, go to Apps and Websites. Check the apps in your Active and Expired categories and remove any or all of them.

How to do it in the app: Open Settings from the hamburger menu in the bottom left. Scroll down to the Security section and tap Apps and Websites. Open Logged in using Facebook and check the apps in your “Active” and “Expired” categories and remove any or all of them.

Note, there is also a Business Integrations section, separate to Apps and Websites, that you might want to check for connected services too.

Be discriminating in how people find and contact you

The whole idea of Facebook is to reach out to friends and family and grow your network, but spammers and fake profiles seem to be some of the most enthusiastic users of the platform lately.

If you’re tired of getting suspicious Facebook friend invitations, or would rather not invite the risk of getting a phishy or malicious link on your Facebook wall, be discriminating in who you befriend. We suggest limiting who can contact and find you on the platform to “Friends of friends,” and to limit email and phone lookups to “Friends of friends” as well.

How to do it on your desktop: In settings, select Privacy. Modify your preferences for how you can be found on Facebook under the How people can find and contact you section.

How to do it in the app: Open Settings from the hamburger menu in the bottom left. Scroll down to the Privacy section and hit Privacy settings. Scroll down to How people can find and contact you. 

Call for backup: Choose friends to help if you’re locked out

If you’ve had issues in the past with your account being compromised – say if you’re a public figure or just very unlucky – Facebook has an option to let you select three to five people in your friends list who you can call on to help you gain control over your account if you’re ever unable to log in (say, because someone else has locked you out.)

This is not a feature that everyone will need, so if you don’t think it’s going to be that big a deal if you’re locked out of your account, feel free to skip this one. But if Facebook is your primary means for earning a living, or communicating with customers or your fanbase, this setting is worth your consideration.

The people you choose to be your backup – which Facebook calls your “trusted contacts” – should be people you know will be tech-savvy enough to know how to help you quickly (so, ideally someone who knows how to use a smartphone), and they should also know ahead of time that you’re choosing them to be a trusted contact, as Facebook will notify them that you’ve tapped them for this ‘honor’.

At no point will any of your trusted contacts have access to your Facebook account personally, nor will they be able to commandeer it at any time – they will be able to send you a code and a URL to help you log back into your account in case of an emergency.

How to do it: In Settings, go to Security and Login and scroll down to Setting up extra security. Hit edit on Choose 3 to 5 friends to contact if you get locked out and follow the instructions.

How to do it in the app: Open Settings from the hamburger menu in the bottom left. Under Security, tap Security and login and scroll down to Setting up Extra Security. Hit Choose 3 to 5 friends to contact if you are locked out.

Face recognition and tag privacy

Facebook maintains that it has face recognition capabilities for our own benefit – so we can know if we’re in a photo but haven’t been tagged, and someone can’t impersonate us by using our profile photo (we’re wise to your tricks, spambots!). But many of us also find this kind of tech creepy and intrusive. If you don’t want Facebook to proactively find you and identify you in photos, you can disable face recognition.

How to do it on your desktop: In Settings, select Face Recognition and then choose No.

How to do it in the app: Open Settings from the hamburger menu in the bottom left. Scroll down to Privacy and open Face recognition. Select No.

Note that face recognition isn’t the same as when people you know tag you in photos. If you don’t want people to tag you in photos or posts without your approval first, there’s another setting you’ll want to enable.

How to do it on your desktop: In Settings, go to Timeline and tagging and then choose On for both options in the Review section.

How to do it in the app: Open Settings from the hamburger menu in the bottom left. Scroll down to Privacy and open Timeline and tagging. Scroll down to Review and ensure both are set to On.

Keep your posts friends-only

You wouldn’t leave your front door open all the time. Why make the details of your personal life open and public for all the cybercriminals in the world to mine? Leaving your posts all public-facing is a gold-mine for criminals looking for details to try and guess security questions, or impersonate you to scam friends or family.

There’s a really easy solution here: Keep your Facebook posts out of the public eye and make the default privacy level friends-only. That way only the people you have approved and friended can see what you’re up to.

How to do it on your desktop: In settings, select Privacy. Under Your Activity set Who can see your future activity? to Friends, and click Limit past posts to retroactively make all your previous posts Friends-only as well.

How to do it in the app: Open Settings from the hamburger menu in the bottom left. Scroll down to Privacy and open Privacy settings. Under Your Activity set Who can see your future activity? to Friends, and also go back a step and turn on Limit who can see past posts too.

Be discriminating in what you do

Unfortunately, the risks to Facebook users no longer come just from external forces trying to break into your account. We’ve learned in the last year or so that there have been a few Facebook-approved data miners, like Cambridge Analytica, that were given unfettered access to what Facebook users were up to behind the garden walls.

So the steadfast internet advice applies here as anywhere: Mind what you post, and remember that the internet is forever. Even content you post behind the friends-only filter on Facebook is not an ironclad guarantee of privacy, so use discretion and if your gut is telling you to not hit that “post” button, it’s best to listen.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hTzjZsQS5Zo/

The Coolest Hacks of 2018

In-flight airplanes, social engineers, and robotic vacuums were among the targets of resourceful white-hat hackers this year.

It was a year where malicious hackers waged shockingly bold – and, in some cases, previously unimaginable – false flag attacks, crypto-jacking, social engineering, and destructive malware campaigns. But even with this backdrop of more aggressive and nefarious nation-state and cybercrime attacks in 2018, security researchers still found creative breathing room to pre-empt the bad guys with some innovative hacks of their own.

White-hat hackers – including “tweenagers” – this year cracked into high-profile targets such as in-flight airplane satellite equipment and simulated US election websites, as well as robotic vacuums. They also pwned social engineers and phishers by turning both their verbiage and artificial intelligence (AI) against them in the hopes of beating the bad guys at their own game and exposing the holes before they could be abused.

So forget about that failed bitcoin mining experiment, the Russians in your home router, and the weaponized PowerShell lurking in your network. Instead, take a few minutes to peruse some of the most innovative (aka cool) hacks by security researchers that we covered this year on Dark Reading.

Hacker on a Plane
It took four years, but Ruben Santamarta finally proved his theory that the major vulnerabilities he first discovered in the firmware of satellite equipment and reported in 2014 could be abused to weaponize it. To do so, the IOActive researcher, from the ground, cracked into on-board Wi-Fi networks, saw passengers’ Internet activity, and reached the planes’ satcom equipment, all of which he had concluded in his previous research would be possible – conclusions that had been met with some skepticism by experts.

“Everybody told us it was impossible. But basically, it’s possible, and we [now] have proof,” Santamarta told Dark Reading prior to presenting his new findings at Black Hat USA in August.

Santamarta said he found an alarming array of backdoors, insecure protocols, and network misconfigurations in satcom equipment affecting hundreds of commercial airplanes flown by Southwest, Norwegian, and Icelandair airlines. Although the vulnerabilities could allow hackers to remotely gain control of an aircraft’s in-flight Wi-Fi, Santamarta was reassuring on one point: there were no safety threats to airplanes, given the way the networks are isolated and configured.

In addition, while scanning the Wi-Fi network on a Norwegian Airlines flight from Madrid to Copenhagen in November 2017, Santamarta revealed at Black Hat that he stumbled on actual malware: A backdoor was running on the plane’s satellite modem data unit, and a router from a Gafgyt Internet of Things (IoT) botnet was reaching out to the satcom modem on the in-flight airplane and scanning for new bot recruits. Luckily, none of the satcom terminals on the plane were infected, but it was a wakeup call for possible threats to come for airlines.

Semantics Expose Phishers
Social engineering is one of the easiest and most foolproof ways to infect Patient 0 in a cyberattack, and not all phishing emails get trapped in a spam filter. So a pair of researchers devised a way to detect social engineers/phishers by “hacking” the language attackers use in their text: They built a tool that runs a semantic analysis to determine malicious intent, using natural language processing to identify sketchy behavior.

Ian Harris, professor at the University of California, Irvine, and Marcel Carlsson, principal consultant at Lootcore, basically exposed the attackers via the language they used in their text and spoken words converted to text. Harris and Carlsson’s phisher-hacking tool detects in emails both questions looking for private data and nefarious commands – which typically are signs of a possible social engineering attack. The tool can be used to flag malicious text messages and phone calls, too.

This word-hacking tool of sorts compares verb-object pairs extracted from the text against a blacklist of pairs drawn from a sample of phishing emails, analyzing semantics and word choice.

“The reason why social engineering has always been an interest … it’s sort of the weakest link in any infosec conflict,” Carlsson told Dark Reading. “Humans are nice people. They’ll usually help you. You can, of course, exploit that or manipulate them into giving you information.”
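
To make the approach concrete, here is a toy sketch in the same spirit. It is not Harris and Carlsson’s tool: the pair extraction and the blacklist below are deliberately naive stand-ins for real natural language processing.

```python
# Toy verb-object flagging, loosely inspired by the approach described above.
# Both the blacklist and the "parser" are simplistic placeholders.
BLACKLIST = {
    ("send", "password"), ("verify", "account"), ("confirm", "ssn"),
    ("provide", "credentials"), ("reset", "password"),
}

def verb_object_pairs(text: str):
    words = [w.strip(".,!?").lower() for w in text.split()]
    # Naively pair each word with the next few words in a short window.
    for i, verb in enumerate(words):
        for obj in words[i + 1:i + 4]:
            yield (verb, obj)

def looks_like_social_engineering(text: str) -> bool:
    return any(pair in BLACKLIST for pair in verb_object_pairs(text))

print(looks_like_social_engineering("Please verify your account and send the password"))  # True
print(looks_like_social_engineering("Lunch will be served in the canteen at noon"))        # False
```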

Playing Mac-A-Mal
The old adage of the Apple Mac’s immunity to viruses – propagated, in part, by marketing on Apple’s own website until 2012 – has fallen to the reality of malware writers increasingly targeting MacOS.

Pham Duy Phuc, a malware analyst with Netherlands-based Sfylabs BV, and Fabio Massacci, a professor at the University of Trento in Italy, decided to hack the painstakingly manual process of detecting and analyzing the growing ecosystem of malicious code targeting Macs. They developed a framework called Mac-A-Mal that blends static and dynamic code analysis to find and unmask the inner workings of Mac malware – even the stealthiest variants.

Their tool can operate undetected while it grabs malware binary behavior patterns, such as network traffic, evasion methods, and file operations. “It takes actual behavioral data of malware samples, executions, inside a sandbox,” Phuc said.

The pair has discovered hundreds of new Mac malware samples with the tool. Half of all Mac malware on VirusTotal in 2017 were backdoors, they found, and most of the variants were adware. 

‘God Mode’
Hardware hacking was hot in 2018. In a year that began with the revelation of the now-infamous Spectre and Meltdown flaws in most modern-day microprocessors and a mass scramble to mitigate their abuse, a researcher this summer revealed his chilling hack of a CPU security feature.

Researcher Christopher Domas found a way to break the so-called “ring-privilege model” of modern CPUs, giving him kernel-level control of the machine and bypassing software and hardware security. He demonstrated this at Black Hat USA during his “God Mode Unlocked: Hardware Backdoors in X86 CPUs” talk.

Domas shared the details on how he cracked into the ring and obtained “God mode” control of the machine via a hardware backdoor found in some machines and embedded x86 microprocessors. The backdoor was enabled by default on some systems, which he exploited to obtain kernel control. The good news: Domas said he believed only VIA C3 CPUs were vulnerable to this attack and not later generations of the processor.

His tool, Project Rosenbridge, is on GitHub for other researchers to experiment with. “This work is released as a case study and thought experiment, illustrating how backdoors might arise in increasingly complex processors, and how researchers and end-users might identify such features. The tools and research offered here provide the starting point for ever-deeper processor vulnerability research,” he wrote on the site.

Robotic Vacuums Hoover Data
First your fridge and now your vacuum cleaner.

Researchers from Positive Technologies discovered flaws in the Dongguan Diqee 360 robotic vacuum that could turn it into a mobile surveillance device able to eavesdrop on consumers’ conversations or spy on them via its built-in webcam or smartphone-controlled navigation feature.

A remote code execution bug let an attacker gain superuser rights on the device after authenticating against its weak default login. Another flaw the researchers found, in the firmware-update process, could be exploited by an attacker with physical access inserting a malicious microSD card.

The obvious dirty little secret: An attacker could use the vacuum cleaner as a hub for stealing information from consumers and spying on them – or even commandeer it for an IoT botnet army. It’s yet another example of consumer IoT devices coming equipped with Internet access and little to no security.

AI as a Weapon
One way to beat adversaries is to think like them. That’s what inspired researchers from Cyxtera Technologies to build an algorithm that simulated how bad guys could weaponize AI for more foolproof phishing attacks.

DeepPhish is all about learning how attackers ultimately could use AI and machine-learning tools to bypass security tools that spot malicious behavior and content. Alejandro Correa, vice president of research at Cyxtera, said that by the end of the year, more than half of phishing attacks will have come via sites with malicious TLS Web certificates. “There is no challenge at all for the attacker to just include a Web certificate in their websites,” he said.

Correa and his team took URLs that had been manually created by attackers and then built a neural network that learned which URLs got past blacklists or other defenses. From there, they could generate new phishing URLs with the best chance of success for attackers. In one test, an attacker that previously had a success rate of 0.7% improved to 20.9% with the DeepPhish AI tool.

“[It will] enhance how we may start combatting and figuring out how to defend ourselves against attackers using AI,” Correa said.

Script Kids
Two 11-year-olds at DEF CON this year pointed SQL injection code at a website replicating the look and feel of the Florida Secretary of State site. Within 15 minutes, they broke in and altered the vote count reports.

Emmett Brewer, aka @p0wnyb0y, was first to crack the simulated state website, in 10 minutes, followed five minutes later by his contemporary Audrey, who changed the vote counts on the simulated Florida Division of Elections site. Brewer awarded himself all of the vote counts and then tweeted: “I think I won the Florida midterms.”

The good news was that the website wasn’t the exact duplicate of the state’s website. The bad news was that all it took for the kids to hack the model website was reading a handout on SQL injection and how to use it – information the organizers gave them and other kid hackers at the R00tz kids’ event within DEF CON.

Jake Braun, co-founder and organizer of the DEF CON Voting Village, said the voting and election hacking events as well as R00tz weren’t meant to be a “gotcha” moment. “The most vulnerable part [of the election system] are these websites,” he said.
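
The technique itself is worth a moment of explanation. The hedged, self-contained illustration below (the table and inputs are invented) shows why pasting untrusted input into SQL text lets an attacker rewrite the query, and why a parameterized query treats the same input as plain data.

```python
# Why SQL injection works, and the standard fix. The schema and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (county TEXT, votes INTEGER)")
conn.execute("INSERT INTO results VALUES ('Duval', 100)")

user_input = "nowhere' OR '1'='1"   # attacker-controlled string

# Vulnerable: the input becomes part of the SQL statement itself.
rows = conn.execute(
    f"SELECT * FROM results WHERE county = '{user_input}'").fetchall()
print("string-built query returned:", rows)        # every row comes back

# Safe: the placeholder keeps the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM results WHERE county = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)       # nothing matches
```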


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/the-coolest-hacks-of-2018/d/d-id/1333520?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Start Preparing Now for the Post-Quantum Future

Quantum computing will break most of the encryption schemes on which we rely today. These five tips will help you get ready.

Search on the phrase “quantum computing,” and you’ll find a furious debate. On the one hand, you’ll read breathless articles predicting groundbreaking advances in artificial intelligence, genomics, economics, and pretty much every field under the sun. On the other, you’ll find the naysayers: It’s all hype. Large-scale quantum computers are still decades away — if they’re possible at all. Even if they arrive, they won’t be much faster than standard computers except for a tiny subset of problems.

There’s one area, however, where you’ll find all sides agree: Quantum computing will break most of the encryption schemes on which we rely today. If you’re responsible for your organization’s IT or security systems, and that sentence made the hair on the back of your neck stand up, good. To get ready for a post-quantum world, you should be thinking about the problem now.

So Long, Encryption
Much of the debate around what quantum computers can do remains speculative, but there are a few areas where we know they’ll excel. Back in 1994, mathematician Peter Shor developed a quantum algorithm that can perform certain types of calculations, such as finding the prime factors of huge numbers, far more quickly than classical computers. Unfortunately, today’s most widely used public key encryption systems rely on the difficulty of exactly those calculations.

According to the Cloud Security Alliance’s Quantum Safe Security Working Group (emphasis added):

Large-scale quantum computers will be able to use Shor’s algorithm to break all public key systems that employ RSA (integer factorization-based), Diffie-Hellman (finite field discrete log-based), and Elliptic Curve (elliptic curve discrete log-based) Cryptography. These algorithms underpin essentially all of the key exchange and digital signature systems in use today. Once reasonably sized quantum computers capable of operating on tens of thousands of logic quantum bits (qubits) exist, these public key algorithms will become useless.

For the moment, quantum computing at those scales is still hypothetical. Current quantum computers, like those being developed by IBM and Google, can process a limited number of qubits. But researchers are pushing those limits every day.

“It might still cost an enormous amount of money to build,” says one of those researchers, MIT’s Isaac Chuang. “But now it’s much more an engineering effort, and not a basic physics question.”

Time Is Not on Your Side
So, breaking RSA and other common encryption schemes sounds pretty bad. But if large-scale quantum computers are still 10 to 15 years away, as even optimistic researchers believe, we have plenty of time to develop post-quantum cryptography solutions, right? Not really. There are two issues.

First, if you accept that 10- to 15-year window, products shipping right now will still be in the field when the first large-scale quantum computers come online. Consider Internet of Things (IoT) devices like connected cars, smart power and water meters, and control systems for major power and transportation infrastructure. Many of those devices are designed to operate for a decade or longer. Almost all of them use RSA.

Second, while some of the world’s brightest minds are working on “quantum-safe” encryption mechanisms, the process will take time. Implementing the new standards they ultimately recommend will take even longer.

Think about every process and device in your organization that relies on public key systems: Email. Authentication. Every online financial transaction. How long will it take to change and update those systems? Years, most likely. If you’re in a heavily regulated industry like financial services, with complex and specific compliance requirements, expect the process to take even longer.

“It has taken almost 20 years to deploy our modern public key cryptography infrastructure,” notes the National Institute of Standards and Technology (NIST) in its “Report on Post-Quantum Cryptography.” “It will take significant effort to ensure a smooth and secure migration from the current widely used cryptosystems to their quantum computing resistant counterparts. Therefore, regardless of whether we can estimate the exact time of the arrival of the quantum computing era, we must begin now to prepare our information security systems to be able to resist quantum computing.”

Take Action
It may take a while for industry groups to settle on the best approaches to post-quantum encryption and authentication, but you don’t have to wait. There are steps you can take now to prepare:

  • Keep an eye out: Monitor the development of both quantum computers and post-quantum standards and protocols, especially when designing IoT devices with a 10-year-plus lifespan.
  • Double key sizes: If you think your current systems will still be around when quantum computing debuts, double your key sizes for symmetric algorithms. A good place to start is AES-256, which is not much less efficient than the shorter key versions. For collision-resistant hash functions, use SHA-512.
  • Embrace the hash: Hash-based signatures are a viable quantum-safe trust mechanism you can use in the near future, with NIST expected to standardize them in 2019 (the sketch after this list shows the core idea). These signatures can also be used to securely deploy more advanced quantum-safe technologies in the future.
  • Mix and match crypto: Some in the financial industry are exploring hybrid cryptography, which combines conventional RSA or elliptic-curve cryptography with one or more of the new “quantum-resistant” algorithms. In this model, cracking a key exchange would require an attacker to break multiple encryption schemes at once.
  • Talk to your provider: Make sure you’re talking to your cryptography provider about their plans for quantum-resistant cryptography, particularly if you’re producing IoT-enabled products with long operating lives. An experienced provider should be able to help you build quantum-resistant crypto into your deployments, such as certificate-based authentication using public key infrastructure.
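
To make the key-size advice concrete, here is a minimal sketch in Python, assuming the widely used cryptography package is installed. It encrypts with AES-256-GCM and hashes with SHA-512, the sizes recommended above; the key handling and messages are purely illustrative.

```python
# Minimal sketch: AES-256 for symmetric encryption, SHA-512 for hashing.
# Key management here is illustrative only -- in production, keys come from
# a KMS or HSM, and a nonce must never be reused with the same key.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key; Grover roughly halves effective strength
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce for GCM

ciphertext = aesgcm.encrypt(nonce, b"meter reading: 42 kWh", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"meter reading: 42 kWh"

digest = hashlib.sha512(b"firmware image bytes").hexdigest()
print(len(key) * 8, "bit key; SHA-512 digest:", digest[:16], "...")
```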
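
Hash-based signatures are easiest to grasp with a toy example. The sketch below is a bare-bones Lamport one-time signature, whose security rests only on the strength of the hash function rather than on factoring or discrete logarithms. It illustrates the principle only; real deployments should use standardized stateful schemes (such as LMS or XMSS), and a Lamport key must never sign more than one message.

```python
# Toy Lamport one-time signature: security depends only on the hash function.
import os
import hashlib

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret from each pair, chosen by the message-hash bits
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
msg = b"quantum-safe firmware update"
sig = sign(sk, msg)
print(verify(pk, msg, sig))          # True
print(verify(pk, b"tampered", sig))  # False
```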
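
Finally, a rough sketch of the hybrid model described above: the session key is derived from both a classical X25519 exchange and a post-quantum KEM’s shared secret, so an attacker has to break both. The pq_shared_secret bytes are a hypothetical stand-in for whatever quantum-resistant KEM your provider supports; only the combining step is shown.

```python
# Rough sketch of hybrid key establishment: classical ECDH + placeholder PQ secret.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Classical half: ordinary X25519 Diffie-Hellman
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# Post-quantum half: placeholder bytes standing in for a real KEM's shared secret
pq_shared_secret = os.urandom(32)

# Combine both secrets; compromising either one alone is not enough
session_key = HKDF(
    algorithm=hashes.SHA512(),
    length=32,
    salt=None,
    info=b"hybrid key exchange demo",
).derive(classical_secret + pq_shared_secret)
print(len(session_key), "byte session key")  # 32-byte key suitable for AES-256
```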

The debate around quantum computing will likely rage on, and we may not have clear answers to the biggest questions for several years. But smart IT and cybersecurity professionals are taking a proactive approach. By starting to prepare now for a post-quantum world, you can make sure that when the wave comes, you’re able to ride it — instead of getting crushed.

Timothy Hollebeek has 19 years of computer science experience, including eight years working on innovative security research funded by the Defense Advanced Research Projects Agency. He then moved on to architecting payment security systems, with an emphasis on encryption and … View Full Bio

Article source: https://www.darkreading.com/perimeter/start-preparing-now-for-the-post-quantum-future/a/d-id/1333517?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Petroleum Employee Charged with Stealing Trade Secrets for Chinese Firm

Longtime US resident allegedly stole information for petroleum firm in China that had offered him a position.

A Chinese national was arrested in the US last week for allegedly stealing intellectual property from the US petroleum company where he was employed. Hongjin Tan, 35, is charged with pilfering trade secrets worth some $1 billion on behalf of a Chinese petroleum firm that had offered him a position.

The stolen data was for the manufacture of a “research and development downstream energy market product,” according to a US Department of Justice criminal complaint filed in the case. According to the complaint, Tan downloaded hundreds of data files from the US petroleum company in the alleged theft.

“The theft of intellectual property harms American companies and American workers. As our recent cases show, all too often these thefts involve the Chinese government or Chinese companies. The Department recently launched an initiative to protect our economy from such illegal practices emanating from China, and we continue to make this a top priority,” said US Assistant Attorney General for National Security John C. Demers in a statement. 

US government officials last week also indicted two members of a Chinese nation-state hacking team known as APT10.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/us-petroleum-employee-charged-with-stealing-trade-secrets-for-chinese-firm/d/d-id/1333564?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

It’s the end of 2018, and this is your year in security

The 2018 calendar year saw an interesting mix of technical and strategic questions, as engineers were met with new problems and execs were forced to cope with stark new realities.

Here are a few of the most interesting and memorable stories to break over 2018.

Meltdown menace

It’s a bit anticlimactic for the biggest news of the year to erupt in its first days, but that is exactly what happened when, on January 4, The Register broke word on security vulnerabilities present in one form or another in nearly all modern desktop, mobile, and server chips.

Known as Meltdown and Spectre, the design flaws would allow an attacker to pull information from the processor’s kernel memory, potentially exposing things like passwords and decryption keys.

Fallout from the disclosure was swift and severe, with Intel, AMD, Microsoft, and virtually every other major vendor forced to spend months dealing with the flaws.

US election hackers tell voters to get stuffed

Normally, America’s mid-term election years hardly draw any attention outside of Washington DC. However, with a heated political climate and investigations of the 2016 presidential election still going on, the 2018 vote became one of the biggest ongoing security stories of the year.

First, there was talk that both foreign and domestic hackers could be looking to sway public opinion by targeting certain groups or by spreading misinformation. By the summer, security companies were already warning that Russian crews were trying to infiltrate campaigns in order to steal and leak sensitive information and tip the scales toward one candidate or another.

Then, there was the possibility that the machines themselves could be hacked. These fears were underscored in August when the Defcon voting village exhibit showed how even a novice hacker could break into a machine and do everything from erasing vital data to changing the actual vote tally.

Fortunately for the voters, and in spite of the best efforts of Congress, the elections went off with very little drama and only one instance of outright voter fraud.

Considering the mess we were looking at earlier in the year, that wasn’t such a bad outcome.

Docket to Russia

Speaking of elections, 2018 also saw US authorities begin efforts to bring to justice the Russian groups who oversaw the 2016 election meddling.

In February, Republican-appointed special counsel Robert Mueller charged 13 members of the Russian Internet Research Agency (IRA), a well-known troll factory, with conspiring against the United States in 2016.

The charges allege that the trolls stole the identities of American citizens and used fake companies to create a front for their disinformation campaign. This included moving millions of dollars from Russia to the US in order to fund bogus rallies and purchase promoted posts and tweets.

Of course, the IRA crew have yet to even be apprehended in Russia, let alone extradited to the US. There is also no indication that Moscow will ever cooperate, as the Kremlin has said that the 13 professional trolls would be protected by diplomatic immunity.

Cambridge Analytica shows that Facebook hasn’t learned a damn thing

Facebook not giving a crap about data privacy and getting hammered for it has become something of an evergreen story these last few years.

2018, however, saw perhaps the worst of the social network’s scandals when it was revealed that influence-peddling research company Cambridge Analytica had harvested tens of millions of Facebook profiles to gather information and covertly shape public opinion.

This would eventually lead to the end of Cambridge Analytica as a going concern and would force Mark Zuckerberg to once again get up in front of the world and say how really, truly, absolutely sorry he was that, yet again, he and his company had made a huge profit by selling out people’s personal lives.

How sincere was that apology? Well, it only took a few months for Facebook to do the same thing all over again.

Malware goes cuckoo for cryptocoins

The year saw the emergence, or better yet, explosion, of a new type of malware: the cryptocoin miner. Lured by the promise of big payoffs and low risks, malware writers began to load up their payloads with scripts that use the compute cycles of infected machines to mine cryptocurrency for the attacker.

Eventually, cryptocoin miners and wallet-stealing trojans would make their way into everything from in-the-wild exploits to injected scripts on charity websites and even otherwise legitimate software packages.

Sadly (or perhaps not sadly) for the malware writers, the year also saw the price of Bitcoin and other cryptocurrencies plummet from all-time highs in January to depressing lows in December that matched pre-boom levels.

Summer of the leaky buckets

Of all the head-pounding security cockups to occur this year, perhaps none were as consistently frustrating as the data leaks created by poorly secured cloud storage buckets.

Armed with little more than Shodan and a lot of spare time, researchers have made a career out of rooting out AWS S3 storage instances that were not adequately walled off from public access.

When the buckets are left open to the internet, the result is often the mass exposure of private business information and, in many cases, the personal info of customers and citizens. Victims this year included political campaigns, robocalling companies, and social networking strategists.

Amazon has done what it can to get a handle on the issue, including placing stricter default settings on S3 buckets and giving administrators more control over when and where data can be shared. Ultimately, however, the responsibility will lie with the admins themselves, and companies will need to tighten up their practices if they want to avoid a repeat of this issue in 2019.

Equifax breach finally gets its post-mortem

A report more than a year in the making was finally issued in December 2018, when Congress delivered its formal account of the 2017 breach at Equifax, which saw 145 million Americans’ personal data leaked to hackers.

In the 96-page writeup, investigators condemned the credit agency’s mega-breach as “entirely preventable” and savaged Equifax for, among other things, taking more than a year and a half to spot the lapsed security certificate that left its network vulnerable to the hackers.

The breached application itself was also faulted: found to be woefully out of date and connected to dozens of external databases it no longer needed access to, the system allowed the hackers to get at tens of millions of customer records they otherwise would not have been able to reach.

Equifax has since disputed portions of that report.

It wasn’t only Equifax’s IT operation that caught heat in 2018. The year also saw one of the executives who profited from the incident brought to justice (sort of).

Software development boss Sudhakar Reddy Bonthu was given eight months of home confinement, fined $50,000, and forced to turn over $75,000 in gains. Upon learning of the breach, he had purchased options on Equifax stock that let him turn a quick profit when the incident became public and the share price plummeted.

Mirai saga carries on

The Mirai botnet has emerged as one of the largest and most influential botnets in recent memory as it showed just how vulnerable, and effective, unsecured IoT devices can be.

In 2018, the Mirai story took a number of new twists. In May, researchers calculated the cost of the attack and found that each infected internet thing cost its owner about $13.50 in power, bandwidth, and repairs. In June, criminals hung some more flesh on the framework bones of Mirai to create a nasty new group of IoT botnet menaces.

In September, we learned that the three crooks behind Mirai would escape jail time thanks to a deal that saw them work on behalf of the FBI. And two months later, we found out that Mirai had become even more dangerous, thanks to code tweaks that let it infect Linux servers alongside IoT hardware.

Clearly, Mirai is the awful, unwanted, infectious gift that keeps on giving.

Windows 10 can’t get out of its own way

This one wasn’t exactly unexpected, but it was still noteworthy.

In 2018, as it has in previous years, Microsoft released a major new update to Windows. And as it had in previous years, that update managed to cause all sorts of strange and wondrous new problems for PC owners once it was installed.

For Microsoft, the big cockup came in October when the Windows 10 October 2018 Update landed. Within days, customers began reporting that the new build was prone to randomly wiping files, prompting Microsoft to pull the update just four days into its availability.

The problems would not end there. Even after it was re-released, Windows 10 continued to be beset by errors, and by the end of the month El Reg had declared the Fall release “officially a shit show.”

But the complaints didn’t stop there. In late November, Microsoft was still adding new bugs to its advisories on the release, and as the year rolled to a close, new reports surfaced of unwanted data collection.

So all in all, it was a pretty standard year for Windows. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/27/2018_the_year_in_security/