STE WILLIAMS

Got an SMS offering $$$ refund? Don’t fall for it…

SMS, also known as text messaging, may be a bit of a “yesterday” technology…

…but SMS phishing is alive and well, and a good reminder that KISS really works.

If you aren’t familiar with the acronym KISS, it’s short for “keep it simple, stupid.”

Despite the rather insulting tone when you say the phrase out loud, the underlying ideas work rather well in cybercrime.

Don’t overcomplicate things; pick a believable lie and stick to it; and make it easy for the victim to “figure it out” for themselves, so they don’t feel confused or pressurised anywhere along the way.

Here’s an SMS phish we received today, claiming to come from Argos, a well-known and popular UK catalogue merchant:

You have a refund of £245. Request refund and allow 3 days for it to appear in your account.
http://argos.co.uk.XXXXXXX.shop/login

The wording here probably isn’t exactly what a UK retailer would write in English (we’re not going to say more, lest we give the crooks ideas for next time!), but it’s believable enough.

That’s because SMS messages, of necessity, rely on a brief and direct style that makes it much easier to get the spelling and grammar right.

Ironically, after years of not buying anything from Argos, we recently purchased a neat new phone for our Android research from an Argos shop – the phone we mentioned in a recent podcast, in fact – so we weren’t particularly surprised or even annoyed to see a message apparently from the company.

We suspect that many people in the UK will be in a similar position, perhaps having done some Christmas shopping at a genuine Argos, or having tried to return an unwanted gift for a genuine refund.

The login link ought to be a giveaway, but the crooks have used an age-old trick that still works well: register an innocent-looking domain name, such as online.example, and tack the domain name you want to phish onto the start.

This works because once you own the domain online.example, you automatically acquire the right to use any subdomain, all the way from http://www.online.example to some.genuine.domain.online.example.

Because we read from left-to-right, it’s easy to spot what looks like a domain name at the left-hand end of the URL and not realise that it’s just a subdomain specified under a completely unrelated domain.
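You can see the trick in a few lines of code. Here's a minimal Python sketch that checks a hostname from the right-hand end, which is the only end that matters (the phishing domain below is a made-up placeholder, since the real one is redacted above):

```python
from urllib.parse import urlsplit

def looks_like(url: str, trusted_domain: str) -> bool:
    """Return True only if the URL's hostname is the trusted domain
    itself or a genuine subdomain of it -- checked from the RIGHT."""
    host = urlsplit(url).hostname or ""
    return host == trusted_domain or host.endswith("." + trusted_domain)

# The phishy link *starts* with the real name but *ends* somewhere else:
assert not looks_like("http://argos.co.uk.example.shop/login", "argos.co.uk")

# A genuine subdomain of the real site passes:
assert looks_like("https://www.argos.co.uk/account", "argos.co.uk")
```

Note that real-world checkers also need the Public Suffix List to handle multi-label suffixes such as .co.uk correctly; the sketch above assumes you already know the exact trusted domain.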

These crooks chose the top-level domain (TLD) .shop, which is open for registrations from anywhere in the world.

Although .shop domains are generally a bit pricier than TLDs such as .com and .net, we found registrars with special deals offering cool-looking .shop names starting under $10.

What if you click through?

What harm in looking?

Well, the problem with clicking through is that you put yourself directly in harm’s way.

Visiting the link provided takes you to a pretty good facsimile of the real Argos login page, shown below on the left (the real page is on the right):

There’s not much fanfare, just a realistic clone of exactly the sort of content you’d expect to see, except for the lack of HTTPS and the not-quite-right domain name.

Getting free HTTPS certificates is pretty easy these days, so the crooks could have taken this extra step if they’d wanted.

Perhaps they were feeling lazy, or perhaps they figured that anyone who’d take care to check for the presence of a certificate might also click through to view the certificate, which would only serve to emphasise that it didn’t belong to Argos?

If you do fill in a username and password, you have not only handed both of them to the crooks, but also embarked on a longer phishing expedition, because the next page asks for more:

We didn’t try going any further than this, so we can’t tell you what the crooks might ask you next – but one thing is clear: by the time you get here, you’ve already given away far too much.

What to do?

  • Check the full domain name. Don’t let your eyes wander just because the server name you see in the link starts off correctly. What matters is how it ends.
  • Look for the padlock. These days, many phishing sites have a web security certificate so you will often see a padlock even on a bogus site. So the presence of a padlock doesn’t tell you much on its own. But the absence of a padlock is an instant warning saying, “Go no further!”
  • Don’t use login links in SMSes or emails. If you think you are getting a refund, find your own way to the merchant’s login page, perhaps via a bookmark, a search engine, or a printed invoice from earlier. It’s a bit slower than just clicking through but it’s way safer.

Here’s to a phish-resistant 2019!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VR8LnxSyl9c/

Politicians who block social media users are violating First Amendment

Keep your hands off that “block” button, an appeals court told a government bureaucrat who temporarily blocked a constituent who’d posted criticism on one of her Facebook Pages. Blocking is unconstitutional, the court declared, given that aspects of the page in question “bear the hallmarks of a public forum.”

The ruling was handed down on Monday by the US Court of Appeals for the Fourth Circuit and could serve as a precedent, given that it’s the first decision from an appellate court that addresses the applicability of the First Amendment to social media accounts run by public officials.

A similar case, in which a New York judge banned President Donald Trump from blocking Twitter users on the grounds that it’s a violation of free-speech rights, is pending appeal in the Second Circuit Appeals Court.

The Trump Administration argues that the @realDonaldTrump account is a personal one, meaning that the First Amendment doesn’t apply. The appeal is due to be heard soon. The case could wind up in the Supreme Court, as US courts deal with the question of what constitutes an “official” account on social media.

The difference between a personal vs. an official social media account was at the crux of the case decided on Monday.

That case was about what Phyllis Randall, Chair of the Board of Supervisors in Loudoun County, Virginia, got up to with one of her Facebook pages.

As the ruling describes it, Randall has three Facebook profiles: her personal account, a page devoted to her campaign, and another page that she created the day before she was sworn in as chair: the “Chair Phyllis J. Randall” Facebook page.

Profiles and pages

As Facebook itself describes it, pages aren’t like personal Facebook profiles. Rather, pages such as Randall’s Chair’s Facebook page are designed to “help businesses, organizations, and brands share their stories and connect with people.” As such, Randall designated the Chair’s Facebook page as a “governmental official” page.

She and her chief of staff use this page to communicate about things like upcoming meetings and what would be discussed; community news, such as when a student fell from a water tower or which streets needed to be plowed after a big snow storm; news about her job-related travel; and advisements about official Loudoun Board action, such as funding new breathing apparatus for local firefighters.

On her campaign page, Randall characterized the Chair’s Facebook page as her “county Facebook page”, inviting input from one and all:

I really want to hear from ANY Loudoun citizen on ANY issues, request, criticism, complement [sic] or just your thoughts. However, I really try to keep back and forth conversations (as opposed to one time information items such as road closures) on my county Facebook page (Chair Phyllis J. Randall) …

But one of her Loudoun constituents is the active, outspoken Brian Davison. At a contentious public meeting, Davison accused Board members of having financial conflicts of interest. He continued that line of criticism on Randall’s Facebook page. After briefly interacting with Davison, Randall decided to delete the thread and to block Davison from the page.

Violation of rights

By the next day, second thoughts made her reverse course. After 12 hours, she unblocked Davison, but the damage was already done. A furious Davison sued, claiming that his due-process and free-speech rights had been violated. The Loudoun Board and Randall argued that the page wasn’t an official municipal page and hence Davison’s rights hadn’t been affected, but a district court sided with Davison.

Randall and the Board appealed, but the appeals court upheld the district court’s ruling, saying that the page was effectively a public forum for the Chair’s job as a public official. Hence, the First Amendment obligation applies.

The court wasn’t impressed by the fact that Randall lifted the ban after half a day, given that she’s insisted that the First Amendment doesn’t apply to the situation. Randall has argued that it’s her right to ban anyone she likes from the page, based on their views. Her persistence in this belief makes it a “credible threat” that she could and would cut off Davison in the future, the appeals court ruled.

The court’s decision shows that it’s clearly aware that this is new, uncharted legal territory:

Although neither the Supreme Court nor any Circuit has squarely addressed whether, and in what circumstances, a governmental social media page – like the Chair’s Facebook Page – constitutes a public forum, aspects of the Chair’s Facebook Page bear the hallmarks of a public forum.

Given the Trump Administration’s appeal, the Supreme Court could weigh in on the issue of what constitutes a public forum sooner rather than later. In the meantime, the Knight Institute, which argued the appeal on behalf of Davison and which is also behind the case against Trump’s blocking of Twitter users, is considering Monday’s decision to be a First Amendment victory against government censorship.

Katie Fallow, the lawyer who led Davison’s appeal:

Today’s decision confirms that the First Amendment prohibits government censorship on new communications platforms.

Public officials, who increasingly use social media accounts as public forums to foster speech and debate among their constituents, have no greater license to suppress dissent online than they do offline.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GmkIEWwV7y8/

How to share photos – without using Facebook

Ah, Facebook. We don’t know how to quit you.

Some pundits tell us blithely: Just delete the app, delete your account! Extricate yourself forever. If only it was that easy.

Many of us who want to quit would do it in a heartbeat if we could replicate the functionality that keeps us coming back. Aside from its massive user base, the other major hook Facebook has for many of us is that it’s a one-stop shop for a number of different tasks.

One of the key features that keeps people going back to Facebook seems to be photo sharing. Especially with far-flung family and friends, folks with little kids and/or much-loved pets at home really, really want to share those photos with their adoring grandmama. And since grandmama is on Facebook, and all the aunties and uncles are too, photo sharing there is the path of least resistance.

But there are other options for photo sharing that don’t hand over every pixel to the Facebook megamind. These options fall under a few categories, so let’s explore:

Smartphone-native photo sharing apps

Smartphone makers have their own solutions to the problem of photo sharing as well, as both Apple and Google have ways to share photo albums stored on their phones with other people — iOS Photos for Apple and Google Photos for Android.

They work pretty well if you’re sharing with fellow Apple iOS or Google Android users; however, rather predictably, they get a little clunky if you try to share your photos across platforms. Generally, if you’re sharing a photo album from one smartphone OS to another, the recipient will get an email invitation to view the photos in their browser. It works well enough, though it might get a little daunting for some people to manage a larger volume of photos this way.

Cloud-based file storage services

Photo sharing exists well outside of Facebook and the smartphone walled garden. There are a lot of choices out there, from photo-specific services like SmugMug or Flickr to cloud storage services like Dropbox, all of which offer photo sharing with varying levels of security – the ability to view photos is often invite-only, or in some cases requires creating an account with the service.

They’re usually easy enough to use, though in many cases you’re limited in how much you can upload before you need to buy a subscription plan – so unless you’re already paying for these services, they may not be the best option.

Third-party photo sharing apps

There is a burgeoning category of photo-sharing apps out there, many of which are marketed at new parents who undoubtedly will soon have their phones bursting with snaps to share with family. The majority of these apps tout not only how easy it is to share photos straight from the phone, but also that the photos are ‘private’ to the recipients, not blasted on a social network.

Most of these apps work in a straightforward manner: The person sharing the photos (let’s say a parent) sends invitations to people they want to send the photos to (let’s say grandparents and friends). The photos are sent via the app to the recipients only. As the parent takes photos, they choose the photos to send via the app, and off they go to the recipients.

Some of the popular apps in this category know that they’re often used as a replacement for Facebook photo sharing, so they often take pains to mention that they have no interest in monetizing your photos or selling your personal information – instead, they tend to make money through add-on services like photo prints or albums, or charge a small fee to users who want to remove in-app ads (if any exist).

The benefit of this kind of service is that it’s very friendly to those who aren’t super tech-inclined: many of these services work outside the smartphone app itself, running in a web browser or sending the photos to recipients by plain old email. For folks looking for something that’s operating-system-agnostic and easy to use for both sender and recipient, these third-party apps can be the easiest Facebook alternative.

Email

Yep, this may not be a stylish option but everyone nowadays has email. Setting up a BCC list in your email client doesn’t take much time, and if you only send a few photos every once in a while this can work well. If you tend to do massive photo blasts though, you’re likely to hit your email provider’s file size limit pretty quickly.

Oh yeah, and…

SMS and text messaging apps remain an option for folks who don’t care much about organizing photos in the long-term. The burden is on the recipient to keep track of the photos they want to keep. (Also, if your goal is to stay off of Facebook, using WhatsApp means you’re unfortunately still using Facebook!)

If you don’t want to risk any service or server getting a copy of your family’s photos, keep them off the internet entirely and go properly old-school: You can always print them out yourself and mail them to people. Yes, in the mail! It’s by far the most time-consuming and expensive of all the options, but it’s also the most private.

One size doesn’t fit all

Personally, I use a mixed approach in sharing my family photos, as I don’t post them on Facebook and many of my family and friends were never on Facebook to begin with. In my case, I use my smartphone’s native app (Apple’s Photos) for sharing with family who are also on Apple and who tend to want frequent updates, email for family who don’t have smartphones or inexpensive data plans, and printed photos for once-a-year correspondence with family or friends who just want to know generally how we’re all doing and don’t need a play-by-play. It’s a hybrid approach that works well for my family, but it’s not for everyone.

Anyone who wants to share family photos off Facebook should take a look at how many photos they want to share and who they want to send them to, and decide accordingly what method makes the most sense.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OfUiiLIcqbU/

Zerodium’s waving fatter payouts for zero-day bug hunters

Exploit buyer and seller Zerodium has once again jacked up what it’s willing to pay for zero-days.

On Monday, it announced new, bigger payouts, including up to $2 million for remote iOS jailbreaks and a doubled bounty, now $1 million, for remote code execution (RCE) vulnerabilities in chat apps such as WhatsApp and iMessage, or in other SMS/MMS apps.

The less user interaction an exploit requires, the fatter the payout. The maximum payout of $2m (up from the previous $1.5m) – for the remote iOS jailbreak – is reserved for an exploit that requires no clicks. Another that requires minimal user interaction – one click – is now fetching $1.5m, which is up from $1m.

Those figures might be eye-poppers, but this isn’t the first time, by any means, that we’ve seen exploit merchants buying zero-days for multiple times what manufacturers pay out. In August 2016, Exodus Intelligence was offering 2.5 times what Apple would pay for serious iOS exploits.

They can afford it, given what they’re selling those zero-days for. Around that time, a report from NSS Labs said that Exodus Intelligence’s customers were paying annual subscription fees starting at around $200,000 for access to its exploit database.

The report quoted Exodus Intelligence co-founder Aaron Portnoy as saying that Exodus was interested in delivering the nastiest of the nasties:

We try to make them as nasty and invasive as possible. We tout what we deliver as indicative of or surpassing the current technical capabilities of people who are actually actively attacking others.

…which can mean that exploit brokers’ customers could be on the side of the good guys – say, antivirus vendors who want to protect people from newly discovered holes – or that they could be on the offensive, interested in using undisclosed exploits to target systems themselves.

Zerodium has a similar business model: the US company, founded in 2015, says that its customers are mainly government organizations “in need of specific and tailored cybersecurity capabilities and/or protective solutions to defend against zero-day attacks” in the form of its Security Research Feed product.

As of September 2015, Zerodium was spending between $400,000 and $600,000 per month on vulnerability acquisitions and expected to spend around $1m per month before the end of the year – above and beyond the $1m it was offering for an iOS 9 bug at the time – according to what founder Chaouki Bekrar told eWEEK at the time.

More recently, with regards to the new payouts for zero-days it announced on Monday, Bekrar told SecurityWeek that the company’s customers are using the zero-days for good: for example, it’s acquired high-end Tor exploits that Zerodium customers have used to “fight crime and child abuse, and make the world a better and safer place for all.”

That sounds good, doesn’t it? If only we could be certain that the lucrative market for zero-days was only serving the good guys. In fact, Bekrar said, only a very limited number of governments and corporations can acquire the zero-days the firm peddles.

In the hands of intelligence agencies

As we’ve written about in the past, there are two sides to the debate over how much vulnerability information US intelligence agencies, for one, should hoard, as well as how they should use the vulnerabilities found in software used by their own citizens.

One side holds that a crucial element of protecting the homeland is for intelligence agencies to maintain a secret stash of vulnerabilities in order to intercept the communications or cyber weapons of criminals, terrorists and hostile governments. The other side of the coin is that those secrets don’t always stay secret. Vulnerabilities that fall into the hands of malicious actors can be exploited to attack millions of innocent users, or critical systems, before they’re patched: not a great outcome for protecting the homeland.

That’s not just theoretical: it happened in 2016, when the Shadow Brokers hackers released a cache of top-secret cyber attack tools, presumed to belong to the National Security Agency (NSA).

Another problem with zero-days being hoarded, or sold, or inadvertently leaked, or otherwise not being reported to the product manufacturers whose priority it is to close the holes and keep their customers and their data safe, is that zero-days can be used to install backdoors or malware.

As Motherboard reported in August when covering the exploit purchaser and Zerodium competitor Crowdfense, governments use zero-days and other exploits unknown to software manufacturers in order to install malware on devices. Think of the FBI’s ongoing zeal to cripple iOS encryption, in order to install backdoors onto iPhones so agents can intercept messages before they get encrypted, or to remotely turn on a device’s microphone and turn it into a surreptitious recording device.

Crowdfense’s platform is making it easier for researchers to submit and sell individual exploits, piecemeal, without the need for a full exploit chain. Zerodium’s bounties are getting as fat as calves ready for the slaughter, and it’s easy to see why: there are plenty of governments, law enforcement agents and organizations with cash in hand, willing and able to snap up those security bugs, be it for good or evil.

To all of you bug hunters who choose to pass up huge payouts in order to instead ethically disclose vulnerabilities to the manufacturers, we don’t say this often enough, but we’re saying it now: thank you.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rpLfQXEAsAY/

Some Android apps are secretly sharing your data with Facebook

Android apps have been secretly sharing usage data with Facebook, even when users are logged out of the social network – or don’t have an account at all.

Advocacy group Privacy International announced the findings in a presentation at the 35th Chaos Computer Congress late last month. The organization tested 34 apps and documented the results, as part of a downloadable report.

The investigators found that 61% of the apps tested automatically tell Facebook that a user has opened them. This is accompanied by other basic event data, such as the app being closed, along with information about the user’s device and their suspected location based on language and time settings. Apps have been doing this even when users don’t have a Facebook account, the report said.

Some apps went far beyond basic event information, sending highly detailed data. For example, the travel app Kayak routinely sends search information including departure and arrival dates and cities, and numbers of tickets (including tickets for children).

Language learning app Duolingo was among several apps that the report called out for sharing extra data, including “how the app is used, which menus the user has visited, and other interaction information”.

The occasional message telling someone that you’ve opened a language learning app and decided to brush up on your German may seem harmless enough, but it still has Privacy International worried. The report said:

If combined, data from different apps can paint a fine-grained and intimate picture of people’s activities, interests, behaviors and routines.

Moreover, the report says that this basic SDK data could cross over into a special category of user data specially protected under GDPR. If you open a medical or religious app and that data is sent to Facebook, it could include data about the user’s health or religious beliefs, it says.

This is more likely when apps send this information with a unique Google advertising ID (AAID), which according to the report they often do. Many advertising technology companies sync AAIDs across different devices so that they can build a better profile of a user’s activities across mobile and desktop.

What could Facebook use such information for? Some possible uses highlighted by the report include matching contacts and building targetable audiences. The social network has also been known to track application usage in the past to gain market intelligence about which apps people are using, as it did with the Onavo VPN product that it purchased and subsequently removed from Apple’s app store.

Facebook provides opt-out mechanisms that are supposed to allow people without Facebook accounts to control the ads they see. However, using those opt-outs doesn’t stop the apps sharing users’ usage data, the report alleged. Neither do the enhanced controls governing how apps collect data that Google included in Android 6.0 and up.

Apps share this event data via a software development kit (SDK) that developers must use if they want their apps to interact with the social network. The report says that while developers have been able to restrict the event data that they send for a while, the SDK still sent the basic data about opening apps as part of an initialization process that developers couldn’t control.

The default data collection could put Facebook in violation of Europe’s General Data Protection Regulation (GDPR), according to Privacy International. The inability to stop their own apps sending data to Facebook led several developers to contact Facebook raising concerns about compliance.

The report warns that automatically giving up user event data via the SDK may contravene GDPR’s consent rules, adding that even if the user agreed to blanket terms and conditions when installing an app, they couldn’t easily revoke that consent later. It said:

…under the default implementation of the SDK, personal data is transmitted to Facebook before an individual has had the opportunity to be provided with further information or to consent to such data sharing.

Facebook released version 4.34 of the SDK on 28 June, which it said allowed developers to delay sending SDK initialization data until the developer had gained the user’s consent. However, that SDK release came 35 days after GDPR came into effect. Even now, developers must still opt to delay the SDK sending that data.
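The fix Facebook shipped amounts to a consent gate: hold events on the device until the user has agreed, and only transmit afterwards. This toy Python sketch illustrates the pattern generically – it is not the Facebook SDK’s actual API, just the shape of the design the newer SDK makes possible:

```python
class AnalyticsSDK:
    """Toy stand-in for an analytics SDK (hypothetical, for illustration).
    Events are queued locally and only released after explicit consent."""

    def __init__(self):
        self.consented = False
        self._queue = []   # events held on-device, not yet transmitted
        self.sent = []     # events that have actually left the device

    def log_event(self, name: str) -> None:
        if self.consented:
            self.sent.append(name)
        else:
            self._queue.append(name)   # hold locally; transmit nothing

    def grant_consent(self) -> None:
        self.consented = True
        self.sent.extend(self._queue)  # flush only once the user agrees
        self._queue.clear()

sdk = AnalyticsSDK()
sdk.log_event("app_open")   # under the old default, this left immediately
assert sdk.sent == []       # here, nothing has been transmitted yet
sdk.grant_consent()
assert sdk.sent == ["app_open"]
```

The GDPR complaint in the report is precisely that the old default inverted this: the "app_open" event went out before the user ever saw a consent screen.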

The report suggests that the SDK as it stands may well violate GDPR’s principle of data protection by design and by default, which requires companies to gather only the data they need for specific purposes:

…the design of the Facebook SDK together with the default Facebook SDK implementation does exactly the opposite, namely automatically (by default) transferring personal data to Facebook for unspecified purposes.

Should Facebook be responsible for how third-party developers pass on user data? Privacy International thinks so, asserting that they share responsibility:

Facebook cannot simply shirk responsibility for the data transmitted to it via Facebook’s SDK by imposing contractual terms on others such as App developers or providers.

Some developers have already responded to the Privacy International report. Skyscanner, which was using a pre-June version of the SDK, said that it had updated its app to use a newer version and would audit its consent tracking.

Privacy International’s research project couldn’t have come at a more sensitive time for Facebook. The Irish Data Protection Commissioner is already investigating the company’s data breach last year, which saw up to 50 million accounts compromised, to see if it violated the GDPR.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5rKrJSPKKUA/

IoT weaknesses leave hot tub owners in deep water

For decades hot tubs were simple water-bearing garden luxuries that owners looked forward to relaxing in of an evening.

More recently, manufacturers started adding exciting Internet of Things (IoT) features that product marketing departments worked themselves into a lather promoting as the next must-have.

These IoT-enabled hot tubs look identical to the old ones except that owners can now remotely adjust things such as water temperature using a smartphone app.

No prizes for guessing what’s coming next – according to UK security outfit Pen Test Partners, it looks as if at least one hot tub maker left robust security off the to-do list.

In a video filmed from a hot tub, founder Ken Munro explains how his company was tipped off to look more closely at the authentication design of the app used to control hot tubs or spas made by Balboa Water Group (BWG).

What they found reads like a useful definition of how not to do IoT security.

The app communicates directly with a Wi-Fi interface on the company’s hot tubs, or over the internet using an API. The access point (AP) built into the tub…

…is open, no PSK [pre-shared key], so anyone can stand near your house, connect their smart phone to your hot tub and control it. Your friendly neighbourhood hacker could control your tub.

And that’s not all – the API has no authentication but connects to a cloud service called iDigi, which uses a static password. Reaching out to a specific tub requires an ID, and that turns out to be… a padded version of the Wi-Fi access point’s MAC address!
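To see why a MAC-derived ID is no secret at all, consider how trivially it can be reproduced. The exact padding layout below (a Digi-style 64-bit device ID with FF-FF spliced into the middle of the 48-bit MAC) is an assumption for illustration; the point is that anyone who can see the access point’s MAC address can derive the cloud ID:

```python
def device_id_from_mac(mac: str) -> str:
    """Pad a Wi-Fi MAC address into a Digi-style device ID.
    The padding scheme shown is assumed for illustration -- the
    article only says the ID is 'a padded version' of the MAC."""
    octets = mac.replace(":", "").replace("-", "").upper()
    # Splice FF-FF into the middle of the MAC, zero-pad to 64 bits:
    return "00000000-00000000-" + octets[:6] + "FF-FF" + octets[6:]

# A hypothetical access-point MAC yields a fully predictable ID:
assert device_id_from_mac("00:40:9d:12:34:56") == \
    "00000000-00000000-00409DFF-FF123456"
```

An ID that is a pure function of a broadcast identifier provides no security at all: it is, in effect, a username with no password.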

Sniffing out Wi-Fi networks is easy and popular – so easy and so popular that giant databases and maps of the globe with MAC addresses plotted on them are just a click away. And, as anyone who’s used Google’s Location Services will know, Wi-Fi networks can be used for geo-location very effectively too.

Would you mind if anyone could locate your hot tub on a map? Perhaps not, but most users would mind some of the other security problems revealed by this app.

At this point, the researchers decided to coin a special name for this kind of device – the “hackuzzi” (in honour of the US brand Jacuzzi, which is unaffected by this vulnerability).

In hot water

Now for the pièce de résistance – fiddling with the water temperature.

According to the researchers, this might not be dangerous per se but would allow a hacker to cause the tub to consume excessive electricity or to make it unusably cold. It’s also possible to control the blowers or water spouts:

Blowers are also only turned on when someone is in the tub, so the hacker can figure out if you’re in the tub at the time. Creepy.

There is a serious side to this finding beyond the woeful IoT security of a product used to control an estimated 26,000 hot tubs. When Pen Test Partners contacted Balboa, it received no response until the BBC got in touch ahead of a TV feature on the story.

Pen Test Partners claimed that BWG asked for the Christmas broadcast to be delayed to allow for the holidays.

Said Pen Test Partners:

It’s yet another example of an IoT disclosure train wreck.

Until the app and/or API is updated, their advice for owners is not to use the remote control function and, if really worried, to physically remove the Wi-Fi module that enables it.

Hopefully, Balboa will offer an update soon. However, given that the most recent update to the Android app (v2.2.7) was in July 2013, it’s probably best to assume this might not be imminent.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vVagp1oJJ8A/

6 Ways to Beat Back BEC Attacks

Don’t assume your employees know how to spot business email compromises – they need some strong training and guidance on how to respond in the event of an attack.

Business email compromise (BEC) campaigns have become a serious business for fraudsters – and companies need to train their employees how to respond.

Just how large a threat are BECs? The FBI Internet Complaint Center (IC3) reported last summer that from October 2013 to May 2018, total losses worldwide for known BEC scams hit $12.5 billion.

Companies are starting to take note by including training on BECs in their security awareness programs. While BECs are typically social engineering crimes – in which threat actors trick people, via phishing emails, phone calls, or a combination of both, into making wire transfers or handing over sensitive documents – there are some situations in which technology can help.

Here are some key insights into BECs and how to prepare for them – and how to respond if one of your users falls for one and you get attacked. 


Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/6-ways-to-beat-back-bec-attacks/d/d-id/1333606?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Great, you’ve moved your website or app to HTTPS. How do you test it? Here’s a tool to make local TLS certs painless

A Google cryptoboffin is close to releasing a tool that will hopefully make all of us more secure online.

Now that most web traffic travels over HTTPS and browser features increasingly expect security, developers really should be creating and testing apps in an HTTPS environment.

Doing so requires installing a TLS/SSL certificate locally, but the process isn’t as easy as it might be. With a bit of effort, devs can generate their own certificate, self-signed or signed by a local root CA, and install it. Various online tutorials offer ways to do so. There are also projects like minica that aim to ease the pain.
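To give a sense of the manual route, a throwaway self-signed cert for local testing can be produced with a single openssl invocation (the filenames, the 30-day validity, and the rsa:2048 key choice below are just example settings):

```shell
# Generate a self-signed certificate and private key for localhost,
# valid for 30 days, with no passphrase on the key (-nodes).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout localhost-key.pem -out localhost-cert.pem \
    -days 30 -subj "/CN=localhost"

# Confirm what was issued.
openssl x509 -in localhost-cert.pem -noout -subject
```

The catch is that browsers will still warn about a cert like this until it, or a local root that signed it, is explicitly trusted – which is the friction projects like minica try to reduce.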

But it could be easier still, along the lines of Let’s Encrypt, the free certificate authority that lets public websites handle HTTPS traffic through automated certificate issuance and installation.

On Monday, Filippo Valsorda, a cryptographer who works at Google, said he’s almost done with his open source project called mkcert, which allows devs to create local certificates without fuss.

That’s desirable, says Valsorda, because testing web apps via insecure HTTP can obscure mixed content issues that might break an HTTPS site in production.


“mkcert is a simple by design tool that hides all the arcane knowledge required to generate valid TLS certificates,” said Valsorda in a blog post. “It works for any hostname or IP, including localhost, because it only works for you.”

Rather than creating a self-signed certificate, mkcert generates certificates signed by a private Certificate Authority (CA) belonging to the user – a more involved process that’s generally better than self-signing, because once the CA itself is trusted, every cert tied to it is too.

The result is a cert that some browsers still represent with a green padlock icon, though Chrome last year changed how it displays web security.

mkcert works with Linux (Arch, CentOS, Debian, Fedora, RHEL, and Ubuntu), macOS, and Windows, as well as Firefox (macOS and Linux), Chrome and Chromium, and Java. With a few extra steps, it also works with Android and iOS. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/09/certs_resh_security/

Jeep hacking lawsuit shifts into gear for trial after US Supremes refuse to hit the brakes

A class-action lawsuit claiming Fiat-Chrysler knew about, but failed to fix, significant cybersecurity holes in its cars will go to trial in America later this year.

This week, the US Supreme Court refused to hear [PDF] the company’s appeal in a lawsuit that was filed after security researchers revealed, back in 2015, that they were able to take over a Jeep’s operation because of clumsy coding in its entertainment software.

Since then the lawsuit has been rumbling through the courts, with the plaintiffs arguing that Fiat-Chrysler knew about the problem for three years and failed to fix it, and the car company claiming that since none of the car owners were directly impacted by the hole, they have no right to sue.

The Jeep owners say they would never have bought the cars in the first place had they known about the security risks, and claim that the cars’ resale value has been significantly impacted as a result of the saga. They are seeking $50,000 per affected car.

The case is a little unusual in that Fiat-Chrysler patched the security hole soon after it was revealed by security researchers. Chris Valasek and Charlie Miller had found they could wirelessly snatch control of engine management systems in some cars by exploiting a security hole in Fiat-Chrysler’s uConnect software, which connects vehicles and their internal Wi-Fi to the public internet via the cellular network, allowing people to go online while on the move.

That ability was dramatically demonstrated by the researchers when they put a tech reporter in a Jeep and then took over his car while he was driving it. The subsequent article in Wired magazine woke up millions of car owners to the potential risk that comes with modern network-connected cars and resulted in Chrysler recalling 1.4 million vehicles to upgrade their software and fix the hole.

The lawsuit, filed against the US subsidiary of Fiat-Chrysler and the manufacturer of the uConnect software, followed shortly after and is being carefully watched as it could open up companies to liability for failing to secure their products, even if no customers are directly affected.

Mo’ problems

A year later, the same researchers found a way around the software update but it required physical access to the car and so consumers were less freaked out.

Since then, however, Fiat-Chrysler has repeatedly been caught up in further embarrassing cybersecurity incidents. It recalled a further 8,000 SUVs in September 2015 thanks to the same software flaw, and in May last year recalled an extraordinary 4.8 million vehicles in the US to fix a software bug that could lock the vehicle’s cruise control. It was also investigated by the Department of Justice over different software – this time designed to cheat on emissions tests.


In short, it’s not been a good few years for Fiat-Chrysler when it comes to cybersecurity, and this week’s decision by the Supreme Court not to hear its appeal is only going to add to those woes.

If the case does move forward to trial – it was due to start in March but has been pushed back to October over scheduling issues – we are likely to hear in much more detail exactly what the car manufacturer knew and did not know about the safety of its vehicles, and what it did in response.

The two researchers who identified the original issue told reporters at Black Hat that they told the car company about the security situation but heard little back. It was only when they announced plans to give a talk on the topic that the auto maker got into gear on the issue.

Based on events so far, those details could prove extremely embarrassing for a company that expects people to trust that their hurtling metal boxes are a wonderful form of personal transport rather than a death trap waiting for a hacker.

Fiat Chrysler said it looked forward to presenting its case. “None of the more than 200,000 class members in this lawsuit have ever had their vehicles hacked, and the federal safety regulators at NHTSA have determined that FCA US has fully corrected the issues raised by the plaintiffs,” it said in a statement. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/08/jeep_hacking_supreme_court/

Welcome to 2019: Your Exchange server can be pwned by an email (and other bugs need fixing)

Patch Tuesday Microsoft has released the first Patch Tuesday bundle of the year, patching up 49 CVE-listed security vulnerabilities and issuing two advisories.


The January edition of Patch Tuesday includes critical fixes for Windows 10, Exchange Server, and Hyper-V.

Among the 49 bug fixes were patches for remote code execution flaws in DHCP (CVE-2019-0547) and an Exchange memory corruption flaw (CVE-2019-0586) that Trend Micro ZDI researcher Dustin Childs warns is particularly dangerous as it can be exploited simply by sending an email to a vulnerable server.

“That’s a bit of a problem, as receiving emails is a big part of what Exchange is meant to do,” Childs explained.

“Microsoft lists this as Important in severity, but taking over an Exchange server by simply sending it an email puts this in the Critical category to me. If you use Exchange, definitely put this high on your test and deploy list.”

Just one of the vulnerabilities had been publicly disclosed before the patches landed. That flaw, CVE-2019-0579, concerns a remote code execution vulnerability in the Windows Jet Database engine that could be exploited by tricking the victim into opening a specially crafted file.

Also high on the priority list should be the patches for CVE-2019-0550 and CVE-2019-0551, a pair of remote code execution vulnerabilities in Windows Hyper-V. Both flaws could allow code running in a guest VM to execute on the underlying host machine.

Reg readers will already know of the Skype vulnerability behind CVE-2019-0622. As we warned last week, the Android version of Skype was found to allow users to bypass the lock screen and access things like photos and contact details. Discovery was credited to researcher Florian Kunushevci.

As usual, the bulk of Microsoft’s critical fixes concerned remote code execution vulnerabilities in the scripting engines for the Edge and Internet Explorer browsers. Jet Database was also a popular target this month, with a total of 10 remote code execution flaws (including the above-mentioned CVE-2019-0579) being patched.

Office once again sees fixes for a remote code execution flaw in Word (CVE-2019-0585) as well as an information disclosure bug in Exchange (CVE-2019-0588) and three cross-site scripting vulnerabilities in SharePoint.

Flash! Na-ahhhh…

A round of applause for Adobe, who didn’t need to put out a single security fix for Flash today. Instead, the internet’s screen door will see a handful of performance and stability fixes for the Mac, Windows, Linux, and Chrome OS versions of the multimedia plug-in.

Adobe also pushed out security updates for an information disclosure bug in Digital Editions for Windows, Mac, iOS, and Android, as well as a patch for a token exposure flaw in Connect. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/08/patch_tuesday_january/