
Car parking mobile apps are vulnerable to hacking, say infosec folk

Mobile parking apps are often insecure, according to an investigation by security researchers at NCC Group.

Firms running paid-for parking schemes across the UK are introducing mobile applications as an alternative to paying with coins and/or card at the parking meter. Parking vendors generally cater for customers using Apple iOS and Android.

NCC’s investigation focused on Android parking apps, placing six (unnamed) applications under the microscope. The researchers wanted to highlight the sort of security vulnerabilities that commonly affect these apps in general rather than throwing praise or scorn on particular apps.

Security assessment was limited to the attack surface available on the smartphone itself, which included the APK distributed by the vendor and any data stored on the phone as a result of interaction with supporting servers on the internet. No attempts were made to probe for problems by manipulating data sent to the server, so the exercise omitted steps that would be carried out during a full penetration test.

NCC’s team concluded that nearly all the apps it looked at were “affected by security vulnerabilities – some more serious than others”, with mediocre cryptographic implementations being one common thread, as a blog post by NCC explains.

All of the vendors appeared to recognise the need for some form of encryption when transmitting sensitive data to the server. The reason this is important is that data sent without encryption may potentially be intercepted or altered by an attacker connected to the same network.

The majority of the applications used Transport Layer Security (TLS). However, none of the apps verified the certificate used by the server, which meant that Man-in-the-Middle attacks were still possible using an intercepting proxy tool.
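
NCC’s post doesn’t show the offending code, but the mistake typically looks something like the following Java sketch – purely illustrative, not taken from any of the apps tested. A do-nothing TrustManager and HostnameVerifier are installed, so any certificate, including one minted on the fly by an intercepting proxy, is accepted.

    import java.security.GeneralSecurityException;
    import java.security.SecureRandom;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.HostnameVerifier;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSession;
    import javax.net.ssl.TrustManager;
    import javax.net.ssl.X509TrustManager;

    class InsecureTls {
        // DO NOT SHIP: disables all certificate and hostname checks for HTTPS connections.
        static void disableCertificateChecks() throws GeneralSecurityException {
            TrustManager[] trustEverything = new TrustManager[] {
                new X509TrustManager() {
                    public void checkClientTrusted(X509Certificate[] chain, String authType) { }
                    public void checkServerTrusted(X509Certificate[] chain, String authType) { }
                    public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                }
            };
            SSLContext context = SSLContext.getInstance("TLS");
            context.init(null, trustEverything, new SecureRandom());
            HttpsURLConnection.setDefaultSSLSocketFactory(context.getSocketFactory());
            HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
                public boolean verify(String hostname, SSLSession session) { return true; }
            });
        }
    }

With the checks neutered like this, the connection is still encrypted – but it may be encrypted to the attacker rather than to the parking vendor.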

Man-in-the-middle attacks occur when the attacker has some control over the network accessed by a vulnerable device. Most of the time, parking applications will be used over a mobile data connection rather than an unsecured public Wi-Fi network, which reduces the likelihood of such attacks but does not eliminate it: as NCC notes, it may be possible for an attacker to create a fake GSM base station.

Much more seriously, one of the vendors assessed chose not to use industry-standard TLS, opting to “roll their own” encryption scheme instead. “Unless you have extensive experience with developing cryptographic algorithms and implementing them in software this is generally a bad idea,” said Chris Spencer, a senior security consultant at NCC.

Sure enough, in the case in point, the keys used to “encrypt” credit card details and passwords were stored in the application code and “easily retrieved by decompiling the app”. That left it open for a potential hacker to “recover credit card details from network traffic they may have intercepted during the registration process”.
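
The blog post doesn’t name the scheme, but the class of flaw is easy to illustrate with a hypothetical Java fragment – the key, class and method names below are invented for illustration, not recovered from any real app. A fixed key baked into the APK means anyone who decompiles the app can copy it and decrypt captured traffic.

    import java.security.GeneralSecurityException;
    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    class CardCrypto {
        // The "secret" ships inside the app, so it is secret from nobody.
        private static final byte[] SECRET_KEY = "0123456789abcdef".getBytes();

        static byte[] encryptCardNumber(byte[] plaintext) throws GeneralSecurityException {
            SecretKeySpec key = new SecretKeySpec(SECRET_KEY, "AES");
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding"); // ECB mode adds weaknesses of its own
            cipher.init(Cipher.ENCRYPT_MODE, key);
            return cipher.doFinal(plaintext);
        }
    }

Run such an APK through a decompiler like jadx and the key literal falls straight out, along with the algorithm and mode it is used with.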

NCC also discovered more subtle vulnerabilities lurking in some of the apps, in areas such as the storage of PINs and passwords. For example, many of the apps offered the user a facility for saving their password or PIN locally on the device, to enable subsequent ‘auto-login’. One of the applications stored the password for the system, unencrypted, in the application’s private data directory on the smartphone. Attackers can’t get at this directly, but if they managed to infect the Android device with malware, the information would be exposed.
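
A minimal sketch of a safer pattern – storing only a salted, stretched hash of the PIN and checking the user’s input against it – is shown below; the parameter choices are illustrative assumptions, not anything taken from the apps reviewed.

    import java.security.GeneralSecurityException;
    import java.security.SecureRandom;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    class PinStore {
        // Generate a random salt once, when the user first sets a PIN, and store it alongside the hash.
        static byte[] newSalt() {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            return salt;
        }

        // Derive a slow, salted hash of the PIN; persist the hash, never the PIN itself.
        static byte[] hashPin(char[] pin, byte[] salt) throws GeneralSecurityException {
            PBEKeySpec spec = new PBEKeySpec(pin, salt, 10000, 256);
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            return factory.generateSecret(spec).getEncoded();
        }
    }

At login the app recomputes hashPin() over whatever the user typed and compares it with the stored value; malware that reads the private data directory learns only a salt and a hash, not the credential itself.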

Despite the problems it uncovered, NCC reports that many of the application developers had taken steps to secure their apps against trivial attacks, for example through the correct use of hashing algorithms to store representations of sensitive data. Spencer nonetheless remains critical of the overall quality of the apps he and his team reviewed.

“We saw that using parking applications on Android could potentially put some of our sensitive data at risk, and potentially allow an active attacker to compromise your phone,” Spencer concludes. “This isn’t good, and vendors clearly have some work to do in order to provide better security for their users.”

NCC Group contacted vendors of apps affected by serious vulnerabilities and offered full details of the flaws the UK-based security consultancy found before it went public with its findings today.

The security of the mobile parking apps rated worse than that of apps typically available from major online shopping sites, gambling sites or gaming studios; NCC rated them as roughly on par with apps offered by some of the smaller games developers.

NCC’s blog offers top tips to help developers remediate the types of flaws it discovered when putting the apps through their paces. Using the latest Android API to develop apps; applying securely configured TLS, with certificate pinning to mitigate man-in-the-middle attacks on connections; avoiding the export of Android components where possible; and protecting data at rest using an appropriate hashing algorithm are all recommended.
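
One way to implement the certificate-pinning advice on Android is via OkHttp’s CertificatePinner, sketched below; the hostname and pin value are placeholders, and other HTTP stacks offer equivalent mechanisms.

    import okhttp3.CertificatePinner;
    import okhttp3.OkHttpClient;

    class PinnedClientFactory {
        static OkHttpClient create() {
            return new OkHttpClient.Builder()
                    .certificatePinner(new CertificatePinner.Builder()
                            // Replace with the real API host and the SHA-256 hash of its public key.
                            .add("api.parking-vendor.example",
                                 "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                            .build())
                    .build();
        }
    }

Requests made with this client fail unless the server presents a chain containing a public key matching the pin, so an intercepting proxy is rejected even if it carries a certificate signed by a CA the device otherwise trusts. ®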

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/11/mobile_parking_apps_audit/

No root for you! Google slams door on Symantec certs

The four-month row between Google and Symantec over SSL certificate issuing has just gone nuclear, with the Chocolate Factory making good on its threats and beginning a blockade.

“Over the course of the coming weeks, Google will be moving to distrust the ‘Class 3 Public Primary CA’ root certificate operated by Symantec Corporation, across Chrome, Android, and Google products,” said Google software engineer Ryan Sleevi.

“Symantec has decided that this root will no longer comply with the CA/Browser Forum’s Baseline Requirements. As these requirements reflect industry best practice and are the foundation for publicly trusted certificates, the failure to comply with these represents an unacceptable risk to users of Google products.”

Sleevi said that Symantec had informed Google that the root certificate would be used for purposes other than publicly trusted connections, but isn’t saying what else it might be used for. As a result, it’s on Google’s naughty list.

“Symantec has indicated that they do not believe their customers, who are the operators of secure websites, will be affected by this removal,” Sleevi said. “Further, Symantec has also indicated that, to the best of their knowledge, they do not believe customers who attempt to access sites secured with Symantec certificates will be affected by this.”

That’s far from certain, so he was kind enough to provide a link to Symantec Enterprise technical support, who are most likely having a rather unpleasant Friday morning.
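
For admins who want to check for themselves, the chain a server presents can be inspected with a few lines of Java – a rough sketch, with a placeholder hostname. Note the root itself usually isn’t sent over the wire, but the issuer of the last certificate printed names the root the chain relies on.

    import java.net.URL;
    import java.security.cert.Certificate;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.HttpsURLConnection;

    class ChainCheck {
        public static void main(String[] args) throws Exception {
            HttpsURLConnection conn =
                    (HttpsURLConnection) new URL("https://www.example.com/").openConnection();
            conn.connect();
            // Print subject and issuer for each certificate the server sends, leaf first.
            for (Certificate cert : conn.getServerCertificates()) {
                if (cert instanceof X509Certificate) {
                    X509Certificate x509 = (X509Certificate) cert;
                    System.out.println("Subject: " + x509.getSubjectX500Principal());
                    System.out.println("Issuer:  " + x509.getIssuerX500Principal());
                }
            }
            conn.disconnect();
        }
    }

If the chain tops out at the soon-to-be-distrusted ‘Class 3 Public Primary CA’, Chrome, Android and other Google products will eventually stop trusting the connection.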

But, according to Symantec, Google is overblowing the whole situation. Michael Klieman, SVP of product management at Symantec, told The Register that Symantec regularly retires certificates and that Google has made a mountain out of a molehill.

“Google’s post on this was surprising because it came across as alarmist, that this was something out of the ordinary,” he said.

“We’ve been in business a long time and have lots of roots embedded in different clients. In this case we notified all browsers to say here’s an old root we’ve removed.”

In this case, the 1024-bit RSA roots were no longer acceptable, and Mozilla had ceased support for the root back in September 2014.

As for the uses of the root certificate, Klieman said that they were mostly used for internal testing and for customers stuck on legacy systems, and were not intended for browser use.

“We’ve had a back and forth with Google,” he said. “The roots are only for internal or private purposes, we can’t even enumerate what all those use cases are.”

Klieman said that, as far as he is aware, this is nothing to do with Symantec’s ongoing feud with Google over SSL certification. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/11/google_slams_symantec_certs/

France says ‘non’ to Wi-Fi and Tor restrictions after terror attack

The French Prime Minister Manuel Valls has ruled out introducing restrictions on public Wi-Fi and access to Tor as a response to the Paris terrorist attacks.

Earlier this month, documents leaked to Le Monde suggested that the French police were asking for powers for the following (among others):

  • Curtail public Wi-Fi
  • Enforce GPS tracking of rented cars
  • Allow the use of cellphone collection stations
  • Authorize the eradication of Tor

But the socialist leader has told the gendarmes to ficher le camp (clear off, in other words).

“Internet freedom is a great way to communicate with people, that’s a plus for the economy,” said Valls, saying it was “also a way for terrorists to communicate and spread their totalitarian ideology.”

“The police look at all the aspects that better fight against terrorism, of course, but we must take effective measures,” he said.

When it comes to Tor, the internet anonymizing service, Valls said there were also no plans to block the software or monitor its use. He said he had seen no proposals for such a scheme.

It seems the so-called cheese-eating surrender monkeys can still teach the rest of the world a thing or two about how not to let the terrorists erode core principles. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/11/no_wifi_tor_restrictions_after_france_terror_attack/

Latentbot: A Ghost in the Internet

Malware’s multiple layers of obfuscation make it almost invisible, FireEye says.

In what is quickly becoming a familiar pattern, security researchers have discovered a dangerous new malware threat that is notable largely because of how difficult it is to spot in the wild.

Threat actors have been using the malware, called Latentbot, since mid-2013 to target organizations in at least nine countries, including the US, United Kingdom, Brazil, United Arab Emirates and Canada.

During this time, the malware has operated almost invisibly and has managed to leave barely any traces of its existence on the Internet, security vendor FireEye said in an alert on the threat issued Friday.

The vendor described Latentbot as malware capable of taking complete control of systems, stealing data and surreptitiously watching its victims. Among other things, the malware is capable of completely corrupting a hard disk to make an infected system useless.

What makes the malware interesting is the manner in which it implements multiple layers of obfuscation to hide its tracks.

To start with, the malware uses a convoluted approach to infect a system. Victims are first targeted with an email containing a malicious Word attachment. When the attachment is opened it triggers an executable, which beacons out to a server that in turn downloads a secondary malware tool on the infected system.

FireEye said it identified the secondary malware as LuminosityLink, a previously known remote access Trojan designed to steal data and passwords, record keystrokes, and surreptitiously turn on any attached webcam or microphone. LuminosityLink itself is enough to take complete control of the infected system.

But it is only at this stage that a second command and control server drops Latentbot as a camouflaged .Net binary on the infected system. The binary in turn contains yet another similarly obfuscated fourth stage payload that is used to plant malicious code in system memory. The malware uses similar obfuscation to drop fifth and sixth stage payloads as well.

Daniel Regalado, a senior malware researcher at FireEye, describes Latentbot as having multiple interesting features. The real malicious code, for instance, is only present in memory for a short period of time and is very hard to figure out.

“Latentbot won’t expose its internal workings [easily] due to its multiple layers of obfuscation and multiple injections into processes in memory,” Regalado says. “So, basically, an analyst must fully trace Latentbot in memory and have a proper response from the [CC server] in order to understand how it works.”

Even then it is not an easy task, because decrypted strings in memory are removed after use. Callback traffic, APIs, Registry keys and other typical indicators of compromise are decrypted dynamically, making them hard to spot. Latentbot also has a feature to wipe the master boot record of an infected system clean to remove all traces of its existence, a feature that is not common in malware of this sort, the security researcher says.

Another unique feature in Latentbot is its use of a hidden Virtual Network Computing (VNC) process in memory that allows attackers to remotely monitor victims without being noticed, he said. Finally, the malware’s highly modular plugin architecture makes it relatively easy for threat actors to enable multiple features and add new ones as needed, Regalado says.

“In order to know exactly what it is doing, multiple layer of obfuscations needs to be circumvented [and] a live communication to a C2 is required to download the malicious plugins,” he says. “If you run Latentbot and the C2 is not responding, you will end up with a piece of malware showing nothing about its internal operations.”

Researchers have seen online sandboxes running samples of Latentbot since 2013 but have not been able to figure out how it works. “It is like a ghost in [the] Internet,” he says.

Latentbot marks the third time in recent weeks that security researchers have warned about malware capable of evading detection for lengthy periods.

In November, RSA issued an alert on a so-called zero-detection threat dubbed GlassRAT that threat actors have been using nearly invisibly for the past three years to target Chinese nationals at large companies. Earlier the same month, Trustwave warned about Cherry Picker, a malware tool targeting point of sale systems that remained largely undetected by AV tools for some four years.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: http://www.darkreading.com/vulnerabilities---threats/latentbot-a-ghost-in-the-internet-/d/d-id/1323537?_mc=RSS_DR_EDT

Teen cooks a turkey with flame-shooting drone

Austin Haughwout has a thing for drones and, it seems, weapons.

So maybe it’s only natural that Haughwout, an 18-year-old from Connecticut, would combine his two interests to build a home-made drone mounted with a flamethrower and shoot things with it.

On Monday (7 December), Haughwout posted a video on his YouTube channel showing his invention as it shot flames at what the video title describes as a holiday turkey, which was mounted to a spit in a wooded area with a house nearby.

The quadrotor drone’s flamethrower appears to be a kit built with a propane torch, fuel pump and a car battery.

The drone video bears the logo of a company called HobbyKing that sells drone parts, radio controlled planes and other gear, and links to pages on the HobbyKing website showing the components used to build the drone (it’s not clear if HobbyKing is involved in sponsorship).

Haughwout explained in the video description that his creation also required a “significant number of 3D printed parts, wiring, soldering, and miscellaneous parts.”

Haughwout has tried this kind of stunt before, and he has a lengthy record of run-ins with law enforcement.

In July, Haughwout posted a video showing a flying drone firing a handgun in a wooded area, which drew the attention of international media and the US Federal Aviation Administration (FAA).

Haughwout’s hijinks with drone, flamethrower and turkey likely won’t be investigated by local police, according to the Hartford Courant.

Connecticut doesn’t have any laws prohibiting what Haughwout was doing, and “laws have not caught up with technology,” one police detective said.

In May 2014, Haughwout got into a physical altercation with a woman who accused him of being a “pervert” for flying a camera-equipped drone nearby as she lounged on a beach – the woman was arrested for assault, but Haughwout never faced any charges.

The emergence of recreational drones as a popular hobby in recent years has raised some difficult questions about how they should be regulated, and numerous incidents point to drones as a potential threat to privacy and physical safety.

Drones have interfered with firefighters and other aircraft, and crashed in crowded public places.

On the other hand, drones are useful for rescue operations and going places humans can’t. They’re also handy for law enforcement and military applications, and companies like Amazon and Wal-Mart plan to use drones for deliveries and other commercial uses.

The FAA could soon issue rules requiring hobbyists to sign up for a drone registry.

Image of flamethrower drone via YouTube.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EkD6agbddyQ/

My Talking Tom offers up naked selfie ads to kids

My Talking Tom, heralded as the “world’s most popular cat” by the maker of the Android and iOS children’s app, is a fully animated, interactive 3D character that users can tickle, poke, play with, spend parents’ money to customize, get to repeat what they say, force to sing a pimple-themed version of Lady Gaga’s My Poker Face, and induce to dance Gangnam style.

What he is not designed to do is to serve as a delivery vehicle for ads that invite children to “f**k.”

But that’s exactly what the cartoon cat was used for in two in-game pop-up ads that were shown over the course of four days in August.

The ads were for Affairalert.com: a site that advertises the grammatically garbled “Meet Secret Sex Affairs” and which warns that “This site likely contains sexual pics of local hotties you may recognize!”

The UK’s Advertising Standards Authority (ASA) on Wednesday upheld complaints made by two parents, who said that their 7-year-old and 3-year-old children saw the ads while playing the game.

This is the second time the ASA has made a ruling on the content of ads featured in the app.

In June, the watchdog had confirmed that ads of three naked women, engaged in sexual activities with four other women, were shown, with a “play” symbol on top of the image.

According to the ASA, this time around, this is what the ads showed (profanity rendered work-safe):

  1. The first ad included a selfie of a naked woman sitting in front of a mirror. The photo had been cropped to just show her torso. Her breasts were exposed but her crotch was concealed by her hand. The words “Wanna f**k?” were written in lipstick on the mirror. Text above the image stated “Want to f**k her?” and the options “YES”, “MAYBE” and “NO” were stated below.
  2. The second ad was a slight variation on the first.

My Talking Tom is made by Outfit7 Ltd, but the ASA ruling was against the advertiser in question, Plymouth Associates Ltd.

From the ruling:

We considered that the sexually explicit content of the ads and the product they promoted meant that they should not appear in media which might be seen by children. We considered that the “My Talking Tom” app, in which the ads had appeared, would be of particular appeal to children.

Plymouth Associates, for its part, denied placing the ads in the app and said that it suspected that they’d been produced and placed “by a malicious third party,” as opposed to an affiliate, but the company couldn’t identify who was responsible.

Given that Plymouth Associates couldn’t present any evidence to confirm that somebody else was responsible, the buck stopped there, the advertising watchdog said:

Given that the ads promoted Affairalert.com and they were the sole beneficiaries, we considered that Plymouth Associates were responsible for the material and for ensuring that it was compliant with the Code.

The code referenced by the ASA is concerned with social responsibility.

The ASA said that Plymouth Associates had procedures in place intended to prevent their ads appearing in apps or websites that could appeal to, or were targeted at, users under the age of 18, but they sure didn’t work in this case.

From the ASA’s ruling:

We were concerned… that their procedures had not been adequate to ensure their ads only appeared in appropriate mediums. Therefore, we concluded that the ads had been irresponsibly placed and breached the Code.

The upshot: the ASA told Plymouth Associates Ltd to ensure that its ads were targeted appropriately and didn’t pop up again in apps played by children.

In the meantime, poor Talking Angela.

When she’s not putting up with internet freak-outs about scary guys looking out of her cartoon eyeballs, she has to put up with users making Talking Tom pressure her into cat-inappropriate behavior.

And now this? My Talking Tom delivering lewd ads?

Forgive me for what I am about to do, but it must be said: it’s Cat-astrophic.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JbOjpQ-Ssd0/

Advent tip #11: Ask permission to post photos, not forgiveness!

There’s a famous saying: “It’s easier to ask for forgiveness than for permission.”

The idea is that the best ideas and the coolest stuff often emerge when you have the guts to back yourself and just Go For It, sidestepping any fuddy-duddy reasons why you shouldn’t.

But if you’ve ever tried this excuse, you’ll know that it only works if your unauthorised efforts were a staggering success.

If you do something you’re not supposed to, and it doesn’t come off, you’ll find yourself wishing you had asked first.

Please remember that over the holiday season.

Thanks to office parties, Christmas get-togethers, holiday outings and meetups with friends you haven’t seen for ages, you’ll probably end up with more snapshots on your mobile phone than usual.

We’re not lawyers, but if you took the photo, and the people in it posed for you happily, then you probably don’t need to ask for permission before posting it on your favourite social media site.

Nevertheless, we’re urging you not to publish snaps of other people without asking them first.

Even if it’s a selfie with your BFF in front of the {Eiffel Tower, Sydney Opera House, Table Mountain, London Eye, Statue of Liberty, Christ the Redeemer, Great Wall of China}, get in the habit of asking, “Do you mind if I Facebook this one?” or “Is it OK if I upload this to Instagram?”

It’s a small courtesy, but it shows you care about other people’s privacy – and we think that’s a great example to set.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aR2bpa3e2fk/

How ‘Digital Forensic Readiness’ Reduces Business Risk

These six real-world scenarios show how to turn reactive investigative capabilities into proactive, problem-solving successes.

Digital forensic investigations are, for the most part, still predominantly conducted in response to an incident. With this reactive approach, there is extreme pressure on the investigation team to gather and process digital evidence before it is no longer available or has been modified. Being reactive to incidents is also a sign of weakness: it suggests that organizations are not taking the initiative to identify problem areas and develop strategies to address them.

For investigations to truly become proactive, organizations must closely examine the time, money, and resources invested into their overall investigative capabilities. Digital forensic readiness is a process used by organizations to maximize their electronically stored information (ESI) to reduce the cost of digital forensic investigations. At the starting point, there needs to be a breakdown of risks including both internal events — those that can be controlled and take place within the boundaries of control (e.g. outages, human error) — and external events — those that cannot be controlled and take place outside the boundaries of control (e.g. floods, regulations). 

Here are six practical and realistic scenarios that can be used to demonstrate a proactive initiative to manage business risk.

Scenario #1: Reducing the impact of cybercrime

With Information Technology (IT) playing an integral part of practically every business operation, the evolving threat landscape continues to increase risks associated with organizational assets. Using a threat modeling methodology, organizations can create a structured representation of the different ways a threat actor can go about executing attacks and how their tactics, techniques, and procedures can be used to create an impact. The output of this exercise can be put to practical use by implementing appropriate countermeasures that create potential digital evidence.

Scenario #2: Validating the impact of cybercrime or disputes

When a security incident occurs, organizations must be prepared to quantify its impact. To obtain a complete and accurate view of the entire cost of an incident, both direct and indirect contributors must be included in the impact assessment. This means incorporating logs generated from different types of controls (e.g. preventive, detective, corrective) as well as the overhead cost of managing the incident (e.g. people and technology expenses).

Scenario #3: Producing evidence to support organizational disciplinary issues

A Business Code of Conduct document promotes a positive work environment that, when signed, strengthens the confidence of employees and stakeholders by establishing an accepted level of professional and ethical workplace behavior. When the guidelines set out in this document have been violated, employees can be subject to disciplinary actions. Where disciplinary actions escalate into a legal problem, organizations must approach the situation fairly and reasonably by gathering and processing credible digital evidence.

Scenario #4: Demonstrating compliance with regulatory or legal requirements

Compliance is not a one-size-fits-all process. It is driven by factors such as an organization’s industry (e.g. financial services) or the countries where business is conducted (e.g. Canada). Evidence documenting that compliance standards are met must be specific to the requirements of both the regulation or law and the jurisdiction.

Scenario #5: Effectively managing the release of court-ordered data

Regardless of how diligent an organization is, there will always be a time when a dispute ends up before a court of law. With adequate preparation, routine follow-ups, and a thorough understanding of what is considered reasonable in a court of law, organizations can effectively manage this risk by maintaining the admissibility of electronically stored information (ESI), such as the requirements described within the U.S. Federal Rules of Evidence. Ensuring compliance with these requirements demands that organizations implement safeguards, precautions, and controls to ensure their ESI is admissible in court and that it is authenticated to its original source.

Scenario #6: Supporting contractual and/or commercial agreements

From time to time, organizations are faced with disagreements that extend beyond disputes involving employees. With the majority of today’s business interactions conducted electronically, organizations must ensure they capture and electronically preserve critical metadata about their third-party agreements. This would include details about the terms and conditions or the date the agreement was co-signed. A contract management system can be used to standardize and preserve the metadata needed to provide sufficient grounds for supporting a dispute.

By following a reactive approach to digital forensic investigations, organizations foster a perception that they lack the initiative to manage risk. Conversely, when organizations implement strategies to proactively gather potential sources of digital evidence in support of these business risk scenarios, they showcase their ability to manage risk effectively.

This article was sourced from the forthcoming book by Jason Sachowski, “Implementing Digital Forensic Readiness: From Reactive To Proactive Process,” available now at the Elsevier Store and other online retailers.

Jason is an Information Security professional with over 10 years of experience. He is currently the Director of Security Forensics Civil Investigations within the Scotiabank group.

Article source: http://www.darkreading.com/attacks-breaches/how-digital-forensic-readiness-reduces-business-risk/a/d-id/1323508?_mc=RSS_DR_EDT

Underwear thief used Instagram location data to find victims’ homes

Asked why he did it, suspected burglar Arturo Galvan reportedly told police:

“I wish I knew.”

Computers, iPads, TVs: those valuable items make burglary sense.

But bras? Panties?

There was, apparently, a sexual component to the burglaries, police said.

In fact, the targets were college-aged female victims, and police believe that Galvan hunted them down by using the location data from photos posted on Instagram and other social media sites to pinpoint where they lived.

You know, the same location data used in a project entitled I Know Where Your Cat Lives, made possible by all those location-revealing cat pictures we love to post.

Galvan, a 44-year-old Los Angeles man, was arrested last week, the Fullerton Police Department (FPD) said on Monday.

Police suspect that he’s responsible for six burglaries at four Los Angeles locations dating back to October. They also think that he is responsible for a similar number of burglaries near Chapman University in Orange, California, earlier this year.

Victims were home in some of the break-ins.

The FPD got a search warrant and searched Galvan’s home on Monday, finding what they said was “a garage-full” of stolen items belonging to 24 victims.

The police told the LA Times that the panties were in the garage, while the electronics were piled up in the house.

Beyond panties and bras, police allege that Galvan stole framed photos of women and jewelry from the homes and apartments he’s suspected of hitting.

Clean and snatched from drawers, dirty ones fished out of laundry baskets, the occasional male roommate’s undergarments mixed in, it didn’t matter: his alleged panty raids did not discriminate.

Galvan was released from jail Saturday after posting bail of $200,000.

He faces charges of burglary, receiving stolen property, and peeping and prowling.

When an artist creates a map of where to find all the posted cats in the world, all thanks to their owners not turning off location services, it’s cute and funny, although slightly alarming.

But when women are stalked and victimized with the assistance of location data, it’s a frightening wake-up call about the real-life dangers of geolocation information we post publicly for any stalker, burglar or other criminal to see.

Don’t put your location into cybercreeps’ hands.

For more details about managing geolocation on your phone, read our article on smartphone privacy and security.
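
If you want to see whether a particular photo would give your location away, a quick, hypothetical Android check using the platform’s ExifInterface looks like this (the file path is a placeholder, and note that apps can also attach location via their own geotagging features, which this check won’t see):

    import android.media.ExifInterface;
    import android.util.Log;
    import java.io.IOException;

    class GeoCheck {
        static void checkPhoto(String path) throws IOException {
            ExifInterface exif = new ExifInterface(path);
            float[] latLong = new float[2];
            if (exif.getLatLong(latLong)) {
                // The JPEG carries GPS coordinates and will reveal where it was taken
                // if shared somewhere that preserves or displays EXIF data.
                Log.w("GeoCheck", "Photo is geotagged: " + latLong[0] + ", " + latLong[1]);
            } else {
                Log.i("GeoCheck", "No GPS coordinates embedded in this photo.");
            }
        }
    }

Better still, switch off location tagging in your camera app before you shoot.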

And, please, set your social media so your photos can be viewed only by your connections! Whether it’s Instagram or Facebook, you wouldn’t show a stranger in your street your photos, so why do it online?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/d4Aw0yIC5jM/

£50K insurance policy offers to cover victims of cyberbullying

Short of driving people to suicide, online trolls can have devastating effects on victims, be it a smeared reputation, or being forced to flee their homes or even move permanently to a new address in the face of death threats.

A UK insurance company, Chubb Insurance, plans to offer what’s believed to be the first ever insurance policy to cover victims against such costs incurred during online harassment.

According to The Telegraph, the coverage will be available to policy holders in the UK who buy Chubb’s personal insurance.

The new policy, which will reportedly be made available on 1 January 2016, will cover policy holders up to £50,000 (about $76,000).

Chubb’s claiming that the money can be used to pay for professional help, lost income if a policy holder is forced to miss work for a week or more, and even the costs of relocation.

As well, it will cover help from online experts for victims and their families, be it hiring a reputation management team to clean up online smears or the costs of digital forensics to track down the abusers.

Chubb says the policy comes out of “extensive research” about the type of protection its wealthy customers wanted.

Tara Parchment, UK and Ireland private clients manager, told The Telegraph that it’s about helping clients “get back to how they were” before the harassment, be it recovering material or psychological well-being:

We still help to restore homes, cars and belongings that have suffered physical harm or damage, but increasingly it’s about the person and how they cope.

The Telegraph notes that insurance to cover online harassment exists in the US, but it’s only for alleged bullies: say, if a person under the policy gets sued for the harassment, as opposed to covering cyberbullying victims.

It’s easy to imagine that parents who have the income might line up for this type of policy.

As it is, a recent survey found that nearly one in five teens (18%) report that they’ve been cyberbullied.

More than half said cyberbullying is worse than face-to-face bullying.

Of the teens surveyed, 41% said cyberbullying made them feel depressed, 41% said it made them feel helpless, 26% felt “completely alone”, 21% stayed away from school, 25% closed down their social media accounts and thus shut themselves off from friends, 38% said they didn’t tell their parents or guardians, 32% felt ashamed, 40% were scared their parents would get involved, and 36% simply worried about what their parents might do.

Women are also widely targeted. In fact, a Pew Research Center study from October 2014 found that age and gender are most closely associated with online harassment.

The insurance could be a godsend, if it’s a decent product.

Unfortunately, trolls’ preferred targets aren’t demographics you’d typically associate with high income.

Let’s hope that legislation manages to catch up to where we need it to be: protecting people at all income levels from harassment, no matter where they live or what kind of insurance they can shell out for.

In the meantime, for those of us in the US who don’t currently have an option for such insurance, The Atlantic last year put together a good article about what the law could or couldn’t do about online harassment as of November 2014.

Bear in mind that the past year has also seen a slew of new legislation and convictions, particularly for revenge porn.

For example, California in September passed into law SB 676: a law that allows prosecutors to seek forfeiture of unauthorized images as well as the storage devices they’re on.

The ripples continue to widen: in October, Attorney General Kamala Harris launched a state website to help victims of revenge porn get the images deleted from online sites.

Image of hand pointing through computer screen courtesy of Shutterstock.com

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eHbjiOxB9f8/