STE WILLIAMS

It’s baaaack: Locky ransomware is on the rise again

Aug 17

Thanks to Dorka Palotay of SophosLabs for her behind-the-scenes work on this article.

Locky was once among the most dominant strains of ransomware. Over time, it receded from view, replaced by ransomware such as Cerber and Spora. But in the last couple of weeks, Locky has returned.

Last week it sported a new extension: .diablo6. This week researchers are seeing more new variants, now with a .lukitus extension. SophosLabs researcher Dorka Palotay said the new variants perform the usual Locky behavior:

It is spread by spam email and comes with a .zip attachment with a .js file inside (e.g. 20170816436073213.js). It downloads the actual payload, which then encrypts the files. 

Email characteristics, payloads

The .lukitus variant arrives in emails with subject lines like “PAYMENT”.
The Diablo variant used the body content “Files attached. Thanks” and the sender’s email address had the same domain as the recipient’s. The emails came with the .zip attachment “E 2017-08-09 (957).zip”, which contained a VBScript downloader called “E 2017-08-09 (972).vbs”.  The script would then download the Locky payload from an address ending with /y872ff2f. 

The .lukitus version connected to its command-and-control server via these addresses:

  • hxxp://185[.]80[.]148[.]137/imageload.cgi
  • hxxp://91[.]228[.]239[.]216/imageload.cgi
  • hxxp://31[.]202[.]128[.]249/imageload.cgi

The diablo6 version connected to its command-and-control server via these addresses:

  • 83.217.8.61/checkupdate
  • 31.202.130.9/checkupdate
  • 91.234.35.106/checkupdate

Defensive measures: malicious attachments

Sophos is protecting customers from the latest Locky campaigns. But it helps to keep the following advice top of mind:

  • If you receive an attachment of any kind by email and don’t know the person who sent it, DON’T OPEN IT.
  • Configure Windows to show file extensions. This gives you a better chance of spotting files that aren’t what they seem.
  • Use an anti-virus with an on-access scanner (also known as real-time protection). This can help you block malware of this type in a multi-layered defense, for example, by stopping an initial booby-trapped PDF or HTA file.
  • Consider stricter email gateway settings. Some staff are more exposed to malware-sending crooks than others (such as the order processing department), and may benefit from more stringent precautions, rather than being inconvenienced by them.

Defensive measures: ransomware

The best defense against ransomware is not to get infected in the first place, so we’ve published a guide entitled How to stay protected against ransomware that we think you’ll find useful.


You can also listen to our Techknow podcast Dealing with Ransomware.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3nRP8MCLW00/

Got an iPhone? Here’s what we think about the security of iOS11

Aug 17

We’re due for an update to Apple’s iOS pretty soon: the current stable release, iOS 10, is nearly a year old, and iOS 11’s beta is rumored to be near completion.

Exactly when, we don’t quite know – Apple isn’t forthcoming about details of its roadmaps, and simply says it’ll be “this fall”. (For reference, iOS 10 came out in early September 2016, in case we’re looking at a yearly schedule.) As we count down the last days of summer in the northern hemisphere, the iOS 11 official launch is likely not far away.

So it seemed a good time to take the beta for a whirl on my old iPhone 6 to see which of the coming changes might be of interest to the security-minded. (You can read the very shiny list of major updates on the official iOS preview page from Apple; not everything I cover below actually appears on the preview.)

A lot of the changes touted by the official pronouncements are about usability, design, and accessibility — all well and good, of course — but I want to kick the tires a bit on the security and privacy settings.

The lock screen: more talk-y, less lock-y

Setting up a passcode on your mobile devices is one of the most basic privacy measures you can, and should, take. We’ve covered before that you also should disable Siri access on your lockscreen, as Siri has been an attack vector in the past to bypass basic security measures and gain access to your private phone data (like stored photos) even when the phone is locked.

And yet, even with Siri disabled and a passcode enabled, the iOS 11 update negates a lot of the purpose of the lockscreen altogether. Even with iOS 10, Apple lets us know that more and more of our phone app notifications can be shown on the lockscreen without needing a passcode to see them — so you can act on them quickly, of course — and it seems with iOS 11 that trend continues.

iOS 11 adds viewing the Control Center (the menu you can pull up from the bottom of the screen) and returning missed calls to the options that work despite the lockscreen, in addition to features already available in iOS 10. All of these options are turned on by default.

Is this necessarily a problem? Of course not. However, it could be problematic if your phone is in the wrong hands. A passcode should mitigate the risk to you if your phone is stolen or misplaced; ideally the passcode should help render your phone all but useless to the person who now has it.

But by default now you can still access several features while the phone is still technically locked; personally, if my phone were stolen I wouldn’t want anyone to be able to access my Wallet credit cards (especially since many transactions don’t ask for a PIN), read my app notifications, or see what was on my day’s agenda. While I can see the utility in being able to respond to phone calls or messages from a lock screen — assuming the person who now has my phone is a good Samaritan — in general, if my phone is in the wrong hands, I want my phone to be completely useless to them.

Ultimately this is a matter of your comfort and risk tolerance — if the convenience of these features is worth it for you, then you can leave them all enabled.

But if you’d rather keep your lockscreen, well, lock-y, you can disable whichever lockscreen features you prefer: go to Settings > Touch ID & Passcode and scroll down to the “Allow Access When Locked” area.

More of iCloud keychain

For those who already use Apple products and Keychain, you may be happy to find that iCloud Keychain is even more integrated into iOS 11 than in previous iterations, with greater management and visibility within iOS. In the Settings area there’s a new section called “Accounts & Passwords” where you can either manually add credentials (which I imagine might be quite tedious) or, when iOS detects a credential set, accept its prompt to save them.

Credentials I entered and saved on my iPhone 6 running the iOS 11 beta also appeared in my Macbook’s Keychain under “iCloud” (hence iCloud Keychain), but the credentials already saved in my Macbook’s iCloud Keychain didn’t sync back to my iPhone’s “App & Website Passwords” area.

Right now – or at least with things configured as I have them – credential sharing appears to be one-way, from iDevices to the greater Keychain account only, but it’s entirely possible I didn’t set things up correctly.

Nonetheless, this makes password management more streamlined and accessible for people who might not want to use a standalone password manager. I already use a password manager across my devices that I don’t intend on abandoning, but if I didn’t have that option I might consider going with this instead.

A bit more granularity over location sharing

This one’s a minor change, but a nice one: all apps that use any kind of location services are now required to offer three options for location access: Always, While Using, and Never. While most apps in iOS 10 already offered all three, “While Using” was not previously required, so an app that needed any kind of location access could ask for it in perpetuity rather than just when it needed it. (Uber was a rather notorious example of this.)

Of course, the master switch for location services is still right up at the top of the Location Services settings page, and you can simply turn the whole thing off.

If you want to play along at home and give the iOS 11 beta a shot, it’s pretty simple to do. Keep in mind that beta means things could be potentially wonky, and ultimately there is some, albeit minute, risk; so back up all your files before trying the beta and, better yet, try it out on a device that isn’t one you rely on day-to-day.

Ready to take the plunge? Follow Apple’s instructions here (it will prompt you to log in with Apple credentials) and NB you’ll have a much easier time if you’re installing via Safari.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8bm66itkJhM/

Uber faces privacy audits every two years until 2037, rules FTC

Aug 17

Surely someone inside Uber had doubts about the riskiness of the company’s internal software program today infamously known to the world as “God View”.

If the name “God View” doesn’t sound dystopian enough, the description of what it was for – monitoring the location of customers taking rides in real time – should have made management think hard about the potential for it to be misused.

Including by them, it turns out: in 2014, it emerged that a senior vice-president had allegedly used the system to track a journalist said to be hostile to the company as she moved around New York.

Last year, a former employee claimed that this was no one-off, with God View allegedly used to track:

High-profile politicians, celebrities and even personal acquaintances of Uber employees, including ex-boyfriends/girlfriends, and ex-spouses.

That’s a lot of intrusive God Viewing for one company, although it’s fair to say that the concept of big internet companies having access to the intimate details of their users’ lives doesn’t only apply to Uber.

In the event, in November 2014 the company responded by re-stating its privacy policy, including that it had deployed an automated tool to monitor employee access to God View as a way of deterring abuse.

The US FTC later discovered that the tool was in use for less than a year, abandoned for reasons that still aren’t clear. Separately, around the same time, the New York Times discovered that Uber had started using a tool called Greyball to track officials investigating the company’s operations in a number of cities.

Compounding all this, the company had failed to encrypt driver data stolen in a 2014 data breach said at the time to affect 50,000 drivers, a figure since revised up to 100,000.

This week the FTC ruled on this catalogue of data privacy problems and bad behaviour. Summarised FTC acting chairman Maureen K Ohlhausen:

Uber failed consumers in two key ways: first by misrepresenting the extent to which it monitored its employees’ access to personal information about users and drivers, and second by misrepresenting that it took reasonable steps to secure that data.

Among a series of undertakings, Uber has six months to obtain an independent audit of its privacy controls, which must then be repeated every two years until 2037.

That sounds like a big deal until you realise that in 2011 the FTC handed the same 20-year privacy undertaking to Facebook and Google, and in 2014 to Snapchat. In the EU, this kind of privacy case could perhaps have resulted in a fine large enough to, at the very least, seriously annoy investors. In the US, companies end up with extra admin.

But damage has still been done, not only to Uber’s image but also the fast-sinking notion that Silicon Valley shows how technology and society can work together in a mutually beneficial way.

To a growing band of sceptics, Uber’s God View is just the latest example of the tech industry’s irresistible temptation to become unhinged by its own importance in pursuit of objectives it refuses to be honest about.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4DVXdFfnvGs/

Woman targeted with 120 images on public transport via AirDrop

Aug 17

On pretty much any given day, you’d rather spend your morning on the subway reading the newspaper, drinking your coffee, or catching up on Instagram than have photos of a stranger’s genitals up on your iPhone.

Unfortunately, the return of an ancient fad known as bluejacking has meant that the air in subway cars has increasingly been polluted by inflicted “dick pics”.

As you may or may not recall, bluejacking first popped up in 2003. It allowed pranksters to exploit mobile phones’ Bluetooth technology, which lets devices communicate with each other up to a range of about 30 feet. When Bluetooth is activated, it automatically seeks out other Bluetooth devices in the vicinity, and that lets people send anonymous messages – or, say, pictures of their junk, as goes the modern rendition – to each other.

As Sophos technical support reported many moons ago, getting anonymous messages panicked some users into thinking they might be under attack from a mobile phone virus.

That’s exactly what bluejackers were after: that shocked look on a recipient’s face as they blasted out unexpected junk.

Ironically enough, the idea for bluejacking was originally that of a woman, and the first victim was a man, though there are other origin stories about it having been first carried out by a Malaysian IT consultant who used his phone to advertise Sony Ericsson. At any rate, as the BBC tells it, a woman going by the name of Ellie had said that the “priceless” expression on the face of her first victim as he tried to work out what was going on had turned her into a regular bluejacker.

She reportedly put up a tutorial on a message board that, back then, was a favorite among owners of Sony Ericsson phones, explaining that …

[The victim’s expression], mixed with not knowing whether the victim will react in an amused/confused or negative way gives me an adrenaline rush.

Fourteen years after adrenaline junkies were getting high on bluejacking, we now have AirDrop: an iPhone file-sharing feature that enables users to send photos, videos and documents instantly over a wireless connection.

Nowadays, many people leave AirDrop switched on and set to receive from everyone. That means there are plenty of phones beaming out come-hither signs over the airwaves, and plenty of perverts ready to freely spew their pixels on to them.

And that’s exactly what’s happening. The reference to 120 penis portraits wasn’t an exaggeration: Sophie Gallagher, a writer for Huffington Post UK, on Tuesday posted a story about having been cyber-flashed with a flock of more than 100 down-the-pants images via AirDrop while traveling on the London Underground.

That’s 120 images, to be exact, she later reported in a post that took people to task for blaming the victim.

“Stop telling me to turn my AirDrop off,” she said, in spite of the fact that, well, turning AirDrop off would in fact stop the wiener parade:

Yes, turning it off stops me from receiving the pictures, it makes it harder for the perverts to contact you when you have the nerve to leave the house in the morning.

But it doesn’t stop the offender from sending them to someone else, from believing that they can hide behind their phone screen and cause harm and distress to unsuspecting people around them.

And quite honestly it is insulting to men to suggest that the only way they can resist making sex offenders of themselves is to block their methods of communication.

Insulting? Well, it might be more like “pragmatic”. Dr Justin Lehmiller, a Harvard University psychology professor, has suggested (in the absence of much research on the topic) that the (extremely) common phenomenon of sending unwanted penis pictures to women could be attributed to cognitive biases that have evolved to help with reproduction.

I suspect that the most likely explanation is that men are simply misperceiving women’s interest in receiving photos of their junk. There’s a large body of research indicating that men aren’t very good at determining how interested women are in sex.

In fact, research has shown that men often mistake friendliness for flirting. Basically, women have to club them with eggplants – did you know that the eggplant emoji is a stand-in for “penis”? – to get across the idea that they don’t want a closeup of their zucchinis.

How should one react when one receives images of a stranger’s floppy flesh? Some suggest responding aggressively, while others point out that this could escalate the situation to the point of stalking or other threatening behavior.

It’s well worth reporting incidents to the police, both to get them up to speed with the frequency of unsolicited dick pics and to get the senders caught. As the HuffPo has reported, few women do so, and London police, at least, seem to think there’s no epidemic going on. (New York police seem to know better.)

Because yes, it’s a crime. In England, sending indecent images is classified under section 66 of the Sexual Offences Act (2003), given that it’s the same as exposing genitals and intending that the recipient “see them and be caused alarm or distress”. The penalty for breaking the law is a prison term of up to two years.

Detective chief inspector Kate Forsyth from the British Transport Police told HuffPost UK:

My message to offenders is clear, while you might think you can hide behind modern technology in order to carry out abuse, you leave a digital footprint and stand a very good chance of being caught, arrested and ending up on the sex offenders’ register.

And that might be a lot of offenders finding their way on to the register: a survey of more than 5,500 American singles found last year that 53% of the women they asked had been on the unwilling receiving end of an unsolicited dick pic. People, just don’t send photos of your junk to someone else unless you know it will be welcome – and by “know it will be welcome”, we mean “that you’ve got explicit consent to send”.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rhxnurlngGw/

News in brief: new Bitcoin fork; HBO hacked; China cracks down

Aug 17

Your daily round-up of some of the other stories in the news

Bitcoin fork to become a trident

Just when you think you’ve got your head around the recent fork in Bitcoin, which produced another variant of the cryptocurrency called Bitcoin Cash, the news comes along that it will fork again, in November.

This latest move has its roots in the ongoing 1MB-per-block issue, which – broadly – means that Bitcoin transactions take a long time to process. Transactions are batched into blocks on the blockchain, and each block can, under the original Bitcoin protocol, be only 1MB in size, capping how many transactions can be confirmed at a time.

The new version of the blockchain software underpinning Bitcoin, which created Bitcoin Cash, accepts blocks of up to 8MB, which should speed up processing substantially. However, that version of the protocol excludes the segregated witness (SegWit) process.

Now there’s a third version of the software in the works, which adopts the standards set out in May’s New York Agreement. This version will set the block size at 2MB and will include segregated witness – and this new version of Bitcoin will be known as Segwit2x.
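To put those block sizes in perspective, here’s a back-of-envelope sketch of the throughput ceiling each limit implies. The ~250-byte average transaction size and the ~10-minute block interval are assumptions for illustration; real figures vary with the transaction mix.

```python
# Back-of-envelope Bitcoin throughput for different block-size limits.
# Assumes an average transaction of ~250 bytes and the protocol's
# ~10-minute target block interval; both are rough illustrative figures.

AVG_TX_BYTES = 250          # assumed average transaction size
BLOCK_INTERVAL_S = 600      # ~10 minutes between blocks

def tx_per_second(block_size_mb: float) -> float:
    """Rough transactions-per-second ceiling for a given block size."""
    txs_per_block = (block_size_mb * 1_000_000) / AVG_TX_BYTES
    return txs_per_block / BLOCK_INTERVAL_S

for size in (1, 2, 8):      # original protocol, Segwit2x, Bitcoin Cash
    print(f"{size} MB blocks: ~{tx_per_second(size):.1f} tx/s")
```

Under these assumptions a 1MB block holds about 4,000 transactions, which is why the original protocol tops out at only a handful of transactions per second.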

If all this seems arcane, in many ways it is. But the arcane stuff translates into the real world, where there’s a lot of both hype and concern about the wild west, unregulated nature of Bitcoin, with some flagging up similarities between the cryptocurrency and the unregulated explosion of shadow banking that eventually led to the financial crash.

HBO social media accounts attacked

HBO is in the wars again, with a hacker group calling itself OurMine attacking and taking over several of its social media accounts, apparently to “raise awareness” about lax security at the media giant, Rolling Stone reported on Thursday.

OurMine posted messages on HBO’s accounts on Twitter and Facebook, including corporate accounts and those for hit shows such as Game of Thrones and Silicon Valley, saying “Hi, OurMine are here, we are just testing your security, HBO team please contact us to upgrade the security.”

This is just the latest in a string of attacks on the company, with others led by “Mr Smith” asking for huge sums of money, which thus far HBO has declined to pay. Back in May, HBO told GoT cast members and crew to implement 2FA on their email and other accounts, and meanwhile stolen episodes have been leaked online.

China orders stores to cease selling VPN tools

China has further stepped up its efforts to restrict its citizens’ access to the internet beyond the Great Wall by warning e-commerce platforms over the sale of illegal VPNs.

Shopping giant Alibaba is one of five platforms told to carry out “immediate self-examination and correction”, Reuters reported on Thursday. The instruction came in a notice posted by the Zhejiang provincial branch of China’s cyberspace regulator, the Cyberspace Administration of China.

Tools that allow Chinese residents to bypass what’s known as the Great Firewall of China are now banned, with the CAC saying it has “ordered these five sites to immediately carry out a comprehensive clean-up of harmful information [and] close corresponding illegal accounts”.

As well as Alibaba’s Taobao site, social shopping site Mogujie and entertainment platforms Xiami and Peiyinxiu were ordered to remove VPN tools.

Catch up with all of today’s stories on Naked Security

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qlZ4aE2G8kQ/

London council ‘failed to test’ parking ticket app, exposed personal info

Aug 17

A London council has been fined £70,000 after design faults in its TicketViewer app allowed unauthorised access to 119 documents containing sensitive personal information.

The parking ticket application, set up in 2012, was developed by Islington council’s internal application team for the authority’s parking services.

It allowed people issued with a parking ticket in the north London borough to log on using their car registration number and see CCTV images or videos of their alleged offence.

They could then appeal this ticket by sending supporting evidence – which might include details of health issues, disabilities or finances – to the council by email or post. The back office would scan and upload this information into the system as a ticket attachment folder.

That brought together a person’s car reg, name, address and potentially medical and financial details – so you’d hope the council had properly tested the system.

But it seems this was not to be, and in October 2015 a concerned citizen alerted the council to the fact that these ticket attachment folders could be accessed if a user tweaked the URL.
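The flaw described – access by tweaking a URL – is a textbook insecure direct object reference. Here’s a minimal sketch of the kind of ownership check that was evidently missing; all names and the toy data store are hypothetical, not Islington’s actual code.

```python
# Minimal sketch of the missing authorisation check: never serve an
# attachment folder just because its ID appears in the URL; verify that
# the requesting user actually owns the ticket it belongs to.
# Hypothetical names throughout, for illustration only.

# Toy data store mapping attachment folders to the car reg that owns them.
FOLDER_OWNERS = {
    "folder-1001": "AB12CDE",
    "folder-1002": "XY99ZZZ",
}

def fetch_attachment_folder(folder_id: str, logged_in_reg: str) -> str:
    """Serve a folder only to the registration that owns it."""
    owner = FOLDER_OWNERS.get(folder_id)
    if owner is None:
        raise KeyError("no such folder")
    if owner != logged_in_reg:
        # A tweaked URL lands here instead of leaking someone else's data
        raise PermissionError("not your ticket")
    return f"contents of {folder_id}"
```

With a check like this, changing the folder ID in the URL yields a permission error rather than another user’s medical and financial documents.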

Between the launch of the site in 2012 and the date the issue was reported, 825,000 parking tickets had been issued and 270,000 appeals received.

On October 25, 2015, the ticket attachment folders contained personal data relating to 89,000 users while internal testing showed that 119 documents had been accessed a total of 235 times from 36 unique IP addresses.

That information was related to 71 users – although an investigation by the UK’s data protection watchdog found that there was no evidence that anyone had actually been harmed by the breach.

However, the Information Commissioner’s Office found that the council had failed to take proper technical measures to stop unauthorised access to the information, and handed it a £70,000 fine (PDF) for breaching the Data Protection Act.

The ICO said that the folder browsing functionality in the web server was misconfigured and that the application had design faults.

“The council should have tested the system both prior to going live and regularly after that,” it said.

“For no good reason, Islington appears to have overlooked the need to ensure that it had robust measures in place despite having the financial and staffing resources available.”

The council notified both the ICO and the people whose data was exposed and, in a statement sent to The Register today, once again apologised for the breach.

“We remain very sorry about the previous TicketViewer problem and agree with the ICO that we failed to meet the required data protection standards back in 2015,” a spokesperson said.

“As soon as we were aware of the problem we took every possible action to prevent a recurrence and instructed auditors to carry out a thorough review so we could learn from our mistake.”

The council added that it had taken advantage of the reduced fine offered by the ICO for early payment, which cut the costs to £56,000. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/17/london_council_fined_over_leaky_parking_ticket_app/

Rowhammer RAM attack adapted to hit flash storage

Aug 17

It’s Rowhammer, Jim, but not as we know it: IBM boffins have taken the DRAM-bit-flipping-as-attack-vector trick found by Google and applied it to MLC NAND Flash.

Google’s Project Zero demonstrated the power of Rowhammer in 2015, showing that careful RAM bit-flipping in page table entries could let an attacker pwn Linux systems.

Since the bug subverts the operating system by attacking the memory itself rather than exploiting developers’ errors, it’s potent, though it’s limited to types of memory that aren’t protected by error checks (such as ECC in newer RAM).

Ever since Project Zero’s initial result, boffins have looked for other vectors or other victims (for example, it was turned into an Android root attack in 2016).

Enter a group of boffins from IBM Research Zurich, who plan to demo a Rowhammer-style attack on MLC NAND flash after explaining it at this week’s Usenix-organised WOOT '17 conference in Vancouver.

Scary? Yes, but there are a couple of slivers of good news: it’s a local rather than a remote attack, and the researchers constrained themselves to a filesystem-level attack rather than a full-system one.

The bad news is that Rowhammer-for-NAND can work at lower precision than its ancestor: while the original Google research worked by flipping single bits, “the attack primitive an attacker can obtain from MLC NAND flash weaknesses is a coarse granularity corruption”.

In other words, their “weaker attack primitive … is nevertheless sufficient to mount a local privilege escalation attack”.

To get that far, the researchers explain in this paper [PDF], an attack has to beat protections at all layers from the Flash chip up to the operating system. Only then does the attacker get to present their payload.

The researchers have published a video demonstrating a successful privilege escalation.

In a Linux ext3 filesystem formatted with 4 KB blocks, an attacker who can create a 100 GB file has a 99.7 per cent chance of a successful exploit, the researchers note.

The particular characteristic of Linux that makes the attack possible is the inode, which stores the attributes and (importantly for this attack) disk block locations of an object’s data: “An indirect block is written (by the kernel filesystem driver) as soon as a file becomes larger than 12 blocks in size: this write is therefore very easy to time and trigger for the attacker.”
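To illustrate why that indirect-block write is so easy to trigger, here’s a quick sketch of the arithmetic: an ext2/ext3 inode holds 12 direct block pointers, so with 4 KB blocks they cover only 48 KB, and appending a 13th block forces the kernel to write an indirect block. This is a simplified model of the on-disk layout, not the researchers’ code.

```python
# Why the indirect-block write is easy for an attacker to time: an
# ext2/ext3 inode has 12 direct block pointers, so with 4 KB blocks the
# file's 13th block forces the filesystem driver to write an indirect block.

BLOCK_SIZE = 4096        # ext3 formatted with 4 KB blocks, as in the paper
DIRECT_POINTERS = 12     # direct block pointers in an ext2/ext3 inode

def needs_indirect_block(file_size_bytes: int) -> bool:
    """True once a file has outgrown the inode's direct pointers."""
    blocks = -(-file_size_bytes // BLOCK_SIZE)   # ceiling division
    return blocks > DIRECT_POINTERS

threshold = BLOCK_SIZE * DIRECT_POINTERS        # 48 KB
print(f"indirect block allocated once a file exceeds {threshold // 1024} KB")
```

An attacker simply grows a file past the 48 KB mark at a moment of their choosing, making the targeted metadata write trivially predictable.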

While they only went as far as writing a local exploit, the authors note that remote attack scenarios are feasible, through something like Javascript in a browser:

“Because browsers do allow writes and reads to the filesystem (albeit indirectly), through web content local caching, cookies, or use of the HTML5 storage API, it may be feasible to extend the attack vector presented here to remote attacks”.

The best defence, they note, is disk encryption with something like dm-crypt. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/17/rowhammer_for_nand_flash/

Bank IT fella accused of masterminding multimillion-dollar insider-trading scam

Aug 17

A banking IT expert orchestrated an insider-trading caper that raked in millions of dollars for him and his pals, it was claimed on Wednesday.

Between August 2013 and April 2017, Daniel Rivas, 32, worked for an unnamed New York bank in its capital markets technology division. He was hired as a consultant for a new banking application and, according to US authorities, he allegedly used his privileged access to servers and email systems to run an insider-trading ring.

It’s claimed Rivas had access to, and exploited, highly confidential details on hundreds of pending business deals, giving him an edge on announcements yet to be made public.

Rivas, of Hasbrouck Heights, New Jersey, is accused of passing insider knowledge to his girlfriend’s father, one James Moodhe, 60, of New York City, who then used it for trading stocks on 25 companies. US financial watchdog the SEC estimates Moodhe amassed about $2m in profit over the three years he was getting information from Rivas.

Being a techie, Rivas understood the nature of digital evidence and did not communicate with Moodhe online. Instead, the SEC claims, he wrote out share tips by hand and gave notes for his girlfriend to pass on to her dad.

Initially, it is alleged, Moodhe just invested in the stock of firms that were going to be taken over but, as the profits mounted, he moved into options trading to maximize his returns, the SEC said. He also shared some of the tips with his friend Michael Siva, 55, of West Orange, New Jersey, who was a financial adviser.

Siva is accused of using inside information in at least 13 stock trades, both from his own accounts and on behalf of his clients. These netted him around $8,000 in profits personally, but his clients made over $1.18m across two years of trading, the SEC claims.

Knowledge wants to be free

When not currying favor with his girlfriend’s dad, Rivas is also accused of passing tips to two friends in Florida: Roberto Rodriguez, 32, of Miami Gardens, and Rodolfo Sablon, 37, of Miami. Notes passed hand to hand wouldn’t work across the distance from New Jersey to Florida, so the trio are accused of collaborating via an unnamed app that encrypts messages and destroys them after they are read.

In December 2015, Rivas met the two and planned to use his insider knowledge to allow the duo to set up their own brokerage business, it is claimed. Rivas would then join as a customer and benefit from their success, it is alleged.

Neither Rodriguez nor Sablon were experienced traders, but they began making larger and larger bets on stock movements, all of which hit the jackpot, the authorities claim. That’s the kind of situation that sets off alarm bells in the minds of regulators – for instance, the newbies realized $2m in profit from a $100,000 investment, it is claimed.

The two did attempt to cover their tracks by writing phony research reports to make it look like they were relying on public info, and set up shell companies for trading, it is claimed. Sablon established The Odin International Group while Rodriguez created Blackbourne Financial Group, and these companies opened accounts to trade on the tips the duo were allegedly getting from Rivas.

The SEC says that by 2016 Rodriguez was also sharing the tips with another, unnamed friend, who realized some good profits of his own – $67,000 according to the SEC.

The third and final alleged conduit for insider information was Rivas’ childhood friend Jhonatan Zoquier, 33, of Englewood, New Jersey. The SEC claims that Rivas passed on stock tips to his buddy – as well as buying him an engagement ring for a marriage proposal to his girlfriend – and this information netted over $30,000 in illicit profits.

Following a pattern, Zoquier is accused of passing the same stock information on to his friend, Jeffrey Rogiers, 33, of Oakland, California, who used it to make over $50,000 in profits from trading, it is alleged. The SEC claims Rogiers also passed on the information to two other friends who made over $400,000 from trading.

The SEC’s formal charges against the above were filed in the Southern District of New York on Wednesday. The watchdog accuses Rivas, Moodhe, Rodriguez, Sablon, Zoquier, Siva, and Rogiers of violating Sections 10(b) and 14(e) of the Securities Exchange Act of 1934 and Rules 10b-5 and 14e-3. It wants the money from the illicit trades returned, with interest, and prison time may be on the cards if they are found guilty. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/17/it_bank_worker_insider_trading_charges/

UK govt steams ahead with £5m facial recog system amid furore over innocents’ mugshots

Aug 17

The UK Home Office has put out to tender a £4.6m ($5.9m) contract for facial recognition software – despite the fact its biometrics strategy and retention systems remain embroiled in controversy.

According to the tender announcement, a company is sought to provide “a combination of biometric algorithm software and associated components that provide the specialized capability to match a biometric facial image to a known identity held as an encoded facial image.”

It runs for an initial term of 60 months and its main job will be to integrate the Home Office’s Biometric Matcher Platform Service (BMPS) into a centralized biometric Matching Engine Software (MES).

Of the 48 regional and special police forces in the UK, seven use the Athena system for storage of custody images, and the remaining forces use a variety of different approaches and legacy IT systems, which makes introducing universal and consistent policies very difficult. The contract’s main goal will be to start creating a standard approach.

The decision to fork out for such a solution comes amid significant controversy over the Home Office’s retention of millions of individuals’ faces. That approach was declared illegal by the High Court back in 2012, and Lord Justice Richards told the police to revise their policies, giving them a period of “months not years” to do so.

Despite that admonition, it took the Home Office until February this year – five years later – to produce a new set of policies. And the new policy requires an individual who believes they may be among the database of 19 million mugshots to specifically ask to be removed from it. That request can be turned down if retention is judged to serve “a policing purpose” – a highly vague standard – and the police themselves get to decide if that is the case.

Flaws

The new policy has met significant criticism. Both the former and current Biometrics Commissioners have published reports on the Home Office’s approach to facial recognition and biometrics more generally and have pointed to significant problems and flaws.

In March 2016, in his annual report, Alastair MacGregor QC warned that “hundreds of thousands” of facial images held by the police belong to “individuals who have never been charged with, let alone convicted of, an offence.”

He also noted that “the considerable benefits that could be derived from the searching of custody images on the Police National Database (PND) may be counterbalanced by a lack of public confidence in the way in which the process is operated, by challenges to its lawfulness and by fears of ‘function creep’.”

MacGregor also wrote a follow-up report [PDF] in April 2016, shortly before being replaced as Biometrics Commissioner, in which he continued warning about the “wholly unsatisfactory situation” where “very significant quantities” of biometric information on individuals that should have been deleted had been retained.

“It seems clear that insufficient priority was given to the need to comply with that regime and to ensure that expired material was quickly deleted,” he concluded. “It also seems clear that IT and resource difficulties, a lack of adequate management information and oversight, the absence of any proper system to check the lawfulness of retention and/or to generate appropriate deletions, and a breakdown in communications/understanding between JFIT and SOFS all played important parts in this ‘deletion deficit’.”

In other words, the complete lack of rules, operational procedures and consistent systems meant that hundreds of thousands of people’s photos were being illegally held on police computers.

Five years later…

And that comment points to another, broader failure on the part of the Home Office: the lack of an official biometrics strategy, despite having promised to produce one back in 2013.

In 2015, Parliament’s science and technology select committee complained that the UK government’s joint forensics and biometrics strategy had missed deadlines three times in 2013, 2014 and 2015. As a result, the panel warned, “there remains a worrying lack of clarity regarding if, and how, the government intends to employ biometrics for the purposes of verification and identification and whether it has considered any associated ethical and legal implications.”

The committee called on the government to publish a comprehensive strategy “no later than December 2015” – a deadline that the Home Office agreed to. And then failed to meet yet again. More than 20 months later, there is still no official biometrics strategy.

Despite the lack of strategy, the police have continued to roll out face-spotting systems backed by their biometrics database – deploying one at last year’s Notting Hill Carnival, and failing to spot a single person.

The Home Office meanwhile has continued to spend millions on bodycams for the police, whose images are uploaded to servers, despite there not being any official legal or operational rules in place – a situation that the Scottish police flagged as a significant problem earlier this year.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/17/home_office_pushes_ahead_with_facial_recognition_system_despite_furore_over_innocent_mugshots/

Judge orders LinkedIn to stop blocking third-party use of your data

Aug 16

A San Francisco judge has rebuffed LinkedIn’s attempts to stop a third-party data-analytics startup from using the publicly available data of its users. According to legal experts, the case could wind up in the Supreme Court, given the important constitutional and economic issues it raises.

As we reported in July, HiQ, a San Francisco startup, has been marketing two products, both of which depend on whatever data LinkedIn’s 500m members have made public: Keeper, which identifies employees who might be ripe for being recruited away, and Skills Mapper, which summarizes an employee’s skills.

To reiterate: HiQ isn’t hacking anything – it’s just grabbing the kind of stuff you or I could get on LinkedIn without having to log in. All you need is a browser and a search engine to find the data HiQ’s sucking up, digesting and selling.
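To see how little machinery that takes, here is a minimal sketch of extracting fields from a public profile page using nothing but Python’s standard library. The markup and class names below are invented for illustration – they are not LinkedIn’s actual HTML, and real scraping would start with a plain HTTP GET of the public page:

```python
from html.parser import HTMLParser

# Hypothetical markup standing in for a public profile page.
# The class names are invented for illustration, not LinkedIn's real ones.
PUBLIC_PROFILE_HTML = """
<html><body>
  <h1 class="name">Jane Doe</h1>
  <ul class="skills">
    <li>Python</li>
    <li>Data Analysis</li>
  </ul>
</body></html>
"""

class ProfileScraper(HTMLParser):
    """Collects the name and skill list from the markup above."""
    def __init__(self):
        super().__init__()
        self._capture = None      # which field the next text chunk belongs to
        self._in_skills = False
        self.name = None
        self.skills = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "h1" and attrs.get("class") == "name":
            self._capture = "name"
        elif tag == "ul" and attrs.get("class") == "skills":
            self._in_skills = True
        elif tag == "li" and self._in_skills:
            self._capture = "skill"

    def handle_endtag(self, tag):
        if tag == "ul":
            self._in_skills = False
        self._capture = None

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._capture == "name":
            self.name = text
        elif self._capture == "skill":
            self.skills.append(text)

scraper = ProfileScraper()
scraper.feed(PUBLIC_PROFILE_HTML)
print(scraper.name, scraper.skills)  # prints: Jane Doe ['Python', 'Data Analysis']
```

No credentials, no API keys – which is why LinkedIn’s countermeasure was technical blocking of the requests themselves rather than anything at the login layer.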

LinkedIn tolerated this for years. Then, for whatever reason, it told HiQ to stop – bad news for the startup, which cannot function without a steady stream of data from LinkedIn.

HiQ CEO Mark Weidick was a bit surprised. It’s not as if LinkedIn suddenly discovered what the company was up to. Its employees had attended a conference HiQ put on, he told the San Francisco Chronicle:

I thought we were on good terms. They knew perfectly well what we are doing. We were doing it in the broad light of day.

Nonetheless, in May, LinkedIn sent a cease-and-desist order to HiQ, alleging that the startup was violating the Computer Fraud and Abuse Act (CFAA), the Digital Millennium Copyright Act (DMCA), and unfair business practices under California state law. In the letter to HiQ, LinkedIn noted that it had used technology to block HiQ from accessing its data.

HiQ filed for relief in early June, asking for a temporary injunction and recommending that the parties take 30 days to discuss the matter and, hopefully, to come up with an amicable solution.

On Monday, the San Francisco judge sided with HiQ, saying that the “balance of hardships tips sharply in HiQ’s favor” and that LinkedIn’s argument about HiQ having violated the CFAA is pretty dubious. The law wasn’t put in place to gum up access to publicly available data, the judge said in a court order granting HiQ’s motion for a preliminary injunction.

Indeed. The CFAA, which prohibits accessing a computer without authorization, has been used in many criminal cases, such as to prosecute ex-employees who hack their former employers. It was also used, infamously, to prosecute internet activist Aaron Swartz. Rights groups have called the act “infamously problematic”.

But to use the CFAA to prosecute a company for scraping publicly available data? Um, no, that’s not a thing, the judge said on Monday:

The broad interpretation of the CFAA advocated by LinkedIn, if adopted, could profoundly impact open access to the internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.

The order requires LinkedIn to dismantle any technical roadblocks it put in place to fend off bots that scrape its members’ data. The BBC reports that LinkedIn is considering an appeal.

HiQ is far from the first company to spin a business model out of whatever it can siphon off another service. You can think of social media platforms – say, LinkedIn, Twitter, and Facebook – as trees. They’ve got an ecosystem of epiphytes, sucking up their data to package and sell in some form.

Sometimes, that parasitic relationship can carry on for years. Take, for example, Geofeedia’s use of the APIs of Twitter, Facebook and Instagram.

For five years, Geofeedia used their data streams to create real-time maps of social media activity in protest areas. As was made clear in a report from the American Civil Liberties Union (ACLU) about police monitoring of activists and protesters via social media data, police have used the maps to identify, and in some cases arrest, protesters shortly after their posts became public.

Following that report, the three social media giants cut off the data streams they were feeding Geofeedia.

The metadata – including images, geolocation data, and screen names available on Instagram’s public feed – on Geofeedia’s map of Ferguson protests was all publicly available. But the scale at which police were identifying and retaining data on protesters was beyond what any individual could achieve without special access to social media platforms’ APIs.

LinkedIn is rationalizing its opposition to HiQ not in terms of scale but rather in terms of user privacy and HiQ’s ability to retain user data. It’s pointing to what it says are more than 50m LinkedIn members who’ve used a “Do Not Broadcast” feature that prevents the site from notifying other users when a member makes profile changes, even when a profile is set to Public.

LinkedIn says it’s also received user complaints about the use of data by third parties. In particular, two users complained that information that they had previously featured on their profiles, but subsequently removed, remained visible to third parties (other than HiQ).

LinkedIn maintains that even though HiQ wants to collect data that’s publicly viewable, it could use profile tweaks – even those listed as Do Not Broadcast – to label an employee as being at high risk of leaving under its Keeper product. It could also retain and make available data that LinkedIn users have deleted – including entire profiles.
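The “flight risk” signal LinkedIn describes boils down to diffing successive snapshots of a public profile and flagging job-search-flavored changes. A toy sketch of that idea – the field names and the flagging rule here are invented for illustration, not HiQ’s proprietary Keeper logic:

```python
# Toy change-detection between two snapshots of a public profile.
# Field names and the "risk" rule are invented; Keeper's actual
# scoring is proprietary and not described in the court filings.

def changed_fields(old: dict, new: dict) -> set:
    """Return the set of keys whose values differ between snapshots."""
    return {k for k in old.keys() | new.keys() if old.get(k) != new.get(k)}

def flight_risk(old: dict, new: dict) -> bool:
    """Flag a profile when job-search-flavored fields have changed."""
    signals = {"headline", "skills", "open_to_work"}
    return bool(changed_fields(old, new) & signals)

last_week = {"headline": "Engineer at Acme", "skills": ["Go"], "city": "Oakland"}
today     = {"headline": "Engineer, open to new roles", "skills": ["Go"], "city": "Oakland"}

print(flight_risk(last_week, today))  # headline changed -> True
```

The privacy rub is exactly the one LinkedIn raises: the diff works the same whether or not the member ticked Do Not Broadcast, because that setting only suppresses LinkedIn’s own notifications, not the public visibility of the change.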

OK, those arguments have some merit, the judge wrote. But is data privacy seriously at risk? Out of 50m users who turned on “Do Not Broadcast”, LinkedIn only managed to scare up a measly three complaints about data privacy related to third-party data collection. And none of those three mentioned HiQ or the Do Not Broadcast option.

LinkedIn is even willing to sell profile change data to third parties, if they subscribe to its Recruiter product, according to marketing materials HiQ presented to the court. What’s sauce for the goose is definitely not good for the gander in LinkedIn’s opinion: for years, it’s charged recruiters, salespeople and job hunters for higher levels of access to profile data, but now it’s telling HiQ to keep its hands off.

Where does this leave LinkedIn and its users? It’s a question with obvious relevance to anybody who’s looking for a new job but would like to keep the search on the QT, not served up on a platter to their current boss. Sure, we want our professional information to be public. How else would potential employers find us? But does that leave third parties free to romp, able to do whatever they like with our data, without our say-so?

We’ve seen multiple social platforms fight against the data-sucking epiphytes, for good reason: the bots have scraped publicly available data for a host of privacy-challenging and/or unsavory purposes. For example, last year, without users’ permission, Danish researchers publicly released data scraped from 70,000 OkCupid profiles, including their usernames, age, gender, location, what kind of relationship (or sex) they’re interested in, personality traits, and answers to thousands of profiling questions used by the site.

But the LinkedIn/HiQ case could have far wider implications than just that of a scuffle between two companies. The constitutional scholar and Harvard law professor Laurence Tribe is weighing in to advise HiQ in the case, due to what he told the San Francisco Chronicle are its important constitutional and economic issues.

For a long time, this has been a central concern for me. Today, social media is the new equivalent of the public square. [LinkedIn’s actions present] a serious challenge to free expression in the modern world.

Freedom of speech is not just about flag-burning. It’s about how you use information in the digital economy. Data is the new form of capital in creating products and services.

If it does reach the Supreme Court, we’ll be sure to keep following the case.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zZlvtUPNuKE/