STE WILLIAMS

City planners use mobile data to track congestion in tourist hot spots

Fancy a tourist trip to Barcelona’s iconic Sagrada Família basilica? You might get around it more easily in future, as the city of Barcelona is trialling a scheme to track how people move around the church while they are inside it.

The objective is to reduce congestion, but there will inevitably be concerns about security, privacy and just what can be tracked. Fortunately, at least one project in the UK, run by Transport for London, suggests that it’s possible to be part of this kind of project without compromising your privacy. And of course, you can only be tracked if you have a device with you – and don’t forget that you can always switch WiFi or Bluetooth off.

The Spanish scheme was developed by d-Lab with data from Orange, and used WiFi, GSM and 3D sensors to establish how people were moving around the attraction and the effect they had on the building and the surrounding environment. d-Lab established exactly how many people entered the church after approaching it and how long they stayed, and was therefore able to advise the authorities on how to manage entries so that congestion was cut.
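d-Lab hasn’t published its pipeline, but the basic arithmetic – counting devices that entered and how long they lingered – can be sketched from hypothetical sensor sighting logs. The data model here (device ID, zone, timestamp tuples) is an illustrative assumption, not the project’s actual schema:

```python
def dwell_times(sightings, entry_zone="nave"):
    """Estimate per-device dwell time in seconds from (device_id, zone, timestamp)
    sighting tuples, keeping only devices that were actually seen inside the
    entry zone. Hypothetical data model -- d-Lab's real schema is not public.
    """
    first_seen, last_seen, entered = {}, {}, set()
    for device, zone, ts in sightings:
        if zone == entry_zone:
            entered.add(device)  # device crossed into the church itself
        first_seen[device] = min(first_seen.get(device, ts), ts)
        last_seen[device] = max(last_seen.get(device, ts), ts)
    # visitors who entered, mapped to total observed dwell time
    return {d: last_seen[d] - first_seen[d] for d in entered}
```

Entry count is then simply the size of the returned mapping; devices that only passed through the plaza outside are excluded.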

Similar schemes have been tried before; Disney ran a trial with its “magic band” a couple of years ago to see how people moved around its parks.

The scheme in London trialled tracking people on the underground through their mobile phones that were logged into the free WiFi provided on the Tube. The aim was to get data on how people travel around the network to combat congestion and work out the routes people were taking rather than just the points at which they entered and exited the tube network. TfL assured users that all data would be anonymised when it announced the pilot.
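TfL has not published exactly how it anonymises the data, but a common approach for device identifiers of this kind is salted one-way hashing: journeys remain linkable within the study, while the original MAC address cannot be recovered. The sketch below illustrates that general technique only, not TfL’s actual method:

```python
import hashlib
import secrets

# One random salt per deployment; discarding it at the end of the study
# makes the pseudonyms irreversible. (Illustrative only -- TfL has not
# published the method it used.)
SALT = secrets.token_bytes(16)

def pseudonymise(mac: str) -> str:
    """Map a MAC address to a stable pseudonym usable for journey analysis."""
    return hashlib.sha256(SALT + mac.lower().encode()).hexdigest()[:16]
```

The same device always yields the same pseudonym within one deployment, so routes can still be reconstructed, but the pseudonyms are useless outside it.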

Jonn Elledge, editor of CityMetric, the New Statesman’s site about cities, agreed that privacy and security were fundamental but stressed the benefits, assuming this could be made to work. He said:

What we haven’t been able to have before is data about how people move through a city – we’ve just had where they use their cash cards and travel cards. We can now see routes people are taking as well as where they pay.

Ultimately that tells you where the congestion is going to be and therefore where you need to invest, and that’s quite a benefit.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YUA13u48FOA/

Wow, did you see what happened to Veracode? Oh no, no, it’s not dead. It’s been bought by CA

Investors in the cloudy app security biz Veracode are going to be celebrating after CA Technologies agreed to buy it up for $614m in cash.

CA announced the buy on Monday, saying it wanted to add Veracode’s application security testing to its security lineup and devops business, as well as to keep its own cloud apps more secure. CA thinks Veracode, headquartered in Burlington, Massachusetts, will help it win larger enterprise customers, not least by snaffling the security firm’s bigger punters.

“We provide over 1,400 small and large enterprise customers the security they need to confidently innovate with the web and mobile applications they build, buy and assemble, as well as the components they integrate into their environments,” said Bob Brennan, CEO of Veracode.

“By joining forces with CA Technologies, we will continue to better address growing security concerns, and enable them to accelerate delivery of secure software applications that can create new business value.”

The Veracode acquisition is the latest in a long line of purchases by New York City-based CA. The giant snapped up Israeli testing outfit BlazeMeter last year, and in 2015 snaffled identity management outfit IdMlogic, cloudy devops supplier Rally Software, and automated testers Grid-Tools. It is a corporate sponge, in other words. A bottomless devourer of technology. A black hole of software.

“Software is at the heart of every company’s digital transformation. Therefore, it’s increasingly important for them to integrate security at the start of their development processes, so they can respond to market opportunities in a secure manner,” said Ayman Sayed, president and chief product officer, CA Technologies.

“Looking holistically at our portfolio, now with Veracode and Automic, we have accelerated the growth profile of our broad set of solutions. We now expect that the size of our growing solutions within our Enterprise Solutions portfolio will eclipse the more mature part of the Enterprise Solutions portfolio in FY19.”

The deal is expected to be concluded by April. Veracode has received roughly $114m in funding since its founding in 2006. CA predicted the biz gobble would add a couple of percentage points to its global revenues and have a “modestly adverse impact” on earnings per share and cash flow from operations over the next two fiscal years. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/ca_technologies_slurps_up_veracode/

Scammers hired hundreds of ‘staff’ to defraud TalkTalk customers

Hundreds of staff were hired by scammers in Indian call centres to defraud TalkTalk customers, according to a BBC report revealing the extent of the scam.

According to the report, employees worked in shifts and earned £120 per month phoning TalkTalk customers. The whistleblowers say they were given a script in which they were told to claim they were calling from TalkTalk.

The Register has documented the scam since February last year; it has included customers being talked into installing a remote-control software package, through which the fraudsters then deployed a trojan.

Fraudsters had obtained breached data about engineers’ maintenance visits, which they used to convince customers to allow them remote access to their computers.

One customer told us he had been asked to download TeamViewer software, which was used to try to make a number of money transfers using third-parties’ credit card information.

The ongoing problems emerged again in June, when another customer got in touch to say scammers had called her, claiming to be contacting all TalkTalk customers about the previous year’s hack.

Another customer, separately contacted by scammers in December, got in touch with The Register to share the telephone number from which they had rung in an attempt to defraud him of £257.

The Register phoned the number, but the respondent purporting to be a TalkTalk representative hung up when we put it to them that the number was being used by fraudsters.

Last month, the chief exec of TalkTalk, Dido Harding, stood down from the company after seven years in the role. Harding presided over TalkTalk during its catastrophic cyber-attack in 2015, in which nearly 157,000 users’ details were divulged and which cost it £42m.

A TalkTalk spokeswoman said of the latest revelations: “We are aware that there are criminals targeting a number of UK and international companies, and we take our responsibility to protect our customers very seriously.

“This is why we launched our ‘Beat the Scammers’ campaign, helping all our customers to keep themselves safe from scammers no matter who they claim to be, while our network also proactively blocks over 90 million scam and nuisance calls a month.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/talktalk_scammers_hired_hundreds_of_staff_to_defraud_customers/

Facebook shopped BBC hacks to National Crime Agency over child abuse images probe

+ Comment Facebook reported BBC journalists to the police after the reporters accidentally emailed them images of child sexual abuse, the social network’s PR has alleged.

The BBC was investigating private Facebook groups used to share both legal and illegal images, some of the latter of which featured children being abused. Its journalists reported 100 images to Facebook’s moderators but only 20 per cent of them were taken down.

As part of the investigation, the BBC wanted to interview Facebook’s director of policy, Simon Milner, to ask why the offending images were not removed by moderators.

BBC reporter Angus Crawford said in his story that Milner’s condition of agreeing to the interview was for the BBC to send him examples of the images that had not been removed.

Having received them, Facebook then immediately contacted the Child Exploitation and Online Protection Centre and the National Crime Agency to report that it had been sent images of child abuse.

Facebook’s UK PR tentacle claimed to The Register that the BBC forwarded images it believed were legal but otherwise broke Facebook’s terms and conditions. Upon looking at them, Facebook said it found images of child abuse and therefore reported the matter to the police. The BBC would not comment on this specific allegation.

In a statement sent to the BBC, Milner said:

We have carefully reviewed the content referred to us and have now removed all items that were illegal or against our standards. This content is no longer on our platform. We take this matter extremely seriously and we continue to improve our reporting and take-down measures. Facebook has been recognized as one of the best platforms on the internet for child safety.

It is against the law for anyone to distribute images of child exploitation. When the BBC sent us such images we followed our industry’s standard practice and reported them to CEOP. We also reported the child exploitation images that had been shared on our own platform. This matter is now in the hands of the authorities.

The BBC was at pains to emphasise that all the images it sent to Facebook were pictures it had found on Facebook’s own platform and which had not been taken down despite reports to site moderators, and that Facebook had specifically asked for the material to be sent across.

+ Comment: Yes, you had to call the cops – but you were dickheads about it

There is a formal memorandum of understanding between police forces, the Crown Prosecution Service, ISPs and social media platforms. It sets out how authorised people handling reports of child abuse images on the internet should act to avoid being prosecuted themselves. Companies can also sign “specific agreements” with the authorities that give quasi-legal effect to internal processes for moderators handling reports of child abuse imagery.

The memorandum is very clear: anything not covered by it, or any agreement’s strictly drawn terms, is likely to lead to prosecution.

Crucially, neither the memorandum nor its underlying statute, the Protection of Children Act 1978, provides any public interest defence for reporters investigating paedophile rings. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/facebook_bbc_child_abuse_images_police_report/

Cybercrooks charging more than the price of a new car for undetectable Mac malware

Cybercriminals are attempting to flog a supposedly undetectable Mac malware strain on the dark web for 40BTC ($50,000) a pop.

The Proton malware boasts capabilities including taking full control of macOS devices by evading antivirus detection, its sellers claim. Hackers offered to add an Apple-approved developer signature to the attacker’s custom RAT software in order to bypass Apple’s Gatekeeper protection on targeted Macs, according to Mac security firm Intego.

Offers touting the malware first appeared on a Russian cybercrime message board last month and were first reported by Israeli threat intelligence firm Sixgill.

Security experts are sceptical as to whether the nasty will find many buyers.

Chris Doman, security researcher at security dashboard firm AlienVault, commented: “At 40 Bitcoin for unlimited installs, and far more for access to the source code, this is still an expensive RAT. Particularly considering RATs for macOS are now available for free. It’s likely this pricing is intended to limit the distribution – and so detection by security vendors.

“Whilst Proton is marketed on dark web forums, it also has promotional YouTube videos and a (now down) public website. It may have attracted more attention than the malware author was hoping.”

Kyle Wilhoit, senior security researcher at DomainTools, added that would-be buyers would likely be able to haggle over the price. “Typically, just like negotiating the price for a car, adversaries will negotiate the price lower than what’s being asked, or the malware authors themselves will lower the price,” he said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/undetectable_mac_malware_underground_sales/

Is Mentorship the Key to Recruiting Women to Cybersecurity?

New ISACA survey identifies the biggest barriers faced by women in tech, chief among them a lack of mentors and female role models.

The cybersecurity industry has a lot of work to do to shift the gender balance of its talent pool. Industry figures show that, in terms of recruiting women, cybersecurity remains stagnant, with some of the worst male-to-female ratios in the technology workforce. Experts believe the imbalance is hurting the field’s ability to fill open positions and to take on today’s threats creatively.


The question is, how can the industry effectively improve its recruitment of women? A new survey out Monday suggests that the secret to amping up the female participation rate will depend on fostering better connections within the community.

As a way to bring attention to International Women’s Day later this week, ISACA commissioned a global survey among more than 500 of its female members across the general IT workforce. It found that nearly nine out of 10 respondents are somewhat or very concerned about the lack of women in the technology space, and it examined the top barriers faced by women who work in IT.

Topping the list is a lack of mentors, cited by 48% of participants. A lack of female role models came second, cited by 42% of respondents, and gender bias in the workplace third, at 39%. Rounding out the top five were unequal growth opportunities compared to men, and unequal pay for the same skills.

Though the survey did not focus on cybersecurity specifically, its results remain relevant to the security subspecialty.

“A lot of the same issues apply in security specifically. I think the mentorship thing and leadership tracks are especially challenging for security because in other areas of tech there are a little bit more defined roles and a more linear path in terms of career progress,” says Lysa Myers, security researcher at ESET. “Whereas in security, there’s so many facets that are forever changing.”

This career path flexibility may be a curse for mentorship, but it can also be a blessing for security’s recruitment of women – so long as organizations are willing to recruit creatively and to train women who have the right mindset in the technical skills needed. For example, Myers says that many years ago she was working as a florist before she was hired as a receptionist at a small security company.

“There was too much work and not enough people to do it and so they started just throwing things over the fence to see what I could do,” she says. “Once they felt I could do one level of something, then they’d send something a bit more challenging and I would ask them for more. And eventually they took me on full time in the security department and by the time I left I was someone who was training other people.”

As things stand, there aren’t many women like Myers in the field. According to (ISC)², cybersecurity employment of women compared to men has plateaued at about one in ten for at least the last four years, give or take a percentage point from year to year. That’s drastically lower than just about any other IT specialty: the most recent Department of Labor statistics show women make up 34% of computer systems analysts, 35% of web developers and 27% of information systems managers.

Such a low participation rate not only saddles security with a monoculture of male-centric perspectives; it also severely limits organizations that are hurting for recruits to fill what experts expect to be a growing labor shortage. As Todd Thibodeaux, president of CompTIA, put it in a recent column for Dark Reading, even if the security world aimed low and merely matched other IT specialties at attracting and retaining women, it might well fill the security shortage that has been nagging the industry.

“When nearly half the population represents an untapped source of expertise, employers need to reassess how they attract and train cybersecurity professionals,” he wrote.

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: http://www.darkreading.com/careers-and-people/is-mentorship-the-key-to-recruiting-women-to-cybersecurity/d/d-id/1328331?_mc=RSS_DR_EDT

Consumer Reports to Grade Products on Cybersecurity

The ratings group will begin to consider products’ cybersecurity following a rise in attacks on IoT devices.

The non-profit consumer ratings group Consumer Reports plans to evaluate cybersecurity and privacy when ranking products, Reuters says. It is currently working with organizations to create methodologies for doing this. An early draft of standards is available here.

This decision was made following a recent increase in cyberattacks on IoT devices, many of which contain vulnerabilities easily exploited by hackers. Researchers believe these attacks are unlikely to cease because manufacturers do not want to spend on securing connected products.

The draft prepared by Consumer Reports includes an analysis of built-in software security, amount of customer details collected, and whether all user data is deleted on account termination.

Jeff Joseph of the Consumer Technology Association describes this decision as positive but believes Consumer Reports “must be very clear about how they score products and the limitations of what consumers can expect.”

The new grading methodology will gradually be introduced, says Consumer Reports.

Read Reuters for details.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: http://www.darkreading.com/iot/consumer-reports-to-grade-products-on-cybersecurity/d/d-id/1328332?_mc=RSS_DR_EDT

France Abandons Electronic Voting for Citizens Abroad, Cites Security

The French government made its decision after the national cybersecurity agency warned of a high risk of cyberattacks.

France will not allow its 1.3 million citizens abroad to vote electronically in the June 11 and 18 legislative elections due to cybersecurity concerns, reports Reuters, quoting the French Foreign Ministry. This decision was made after the National Cybersecurity Agency issued an alert for an “extremely high risk” of cyberattacks.

“In that light, it was decided that it would be better to take no risk that might jeopardize the legislative vote for French citizens residing abroad,” the ministry said.

French citizens abroad have been allowed to take part in legislative elections via electronic voting since 2012. They were not allowed to vote in the presidential polls, which are slated for April-May this year.

State-sponsored hackers were seen trying to disrupt the 2016 US presidential elections. French presidential candidate Emmanuel Macron has also alleged that Russian hackers are targeting him in favor of his pro-Russia rivals.

Read details here.


Article source: http://www.darkreading.com/attacks-breaches/france-abandons-electronic-voting-for-citizens-abroad-cites-security-/d/d-id/1328333?_mc=RSS_DR_EDT

A Real-Life Look into Responsible Disclosure for Security Vulnerabilities

A researcher gives us a glimpse into what happened when he found a problem with an IoT device.

As an information security researcher, a major part of my job is to help software and hardware manufacturers fix security issues before they’re exploited by bad guys. When white hat hackers like me find a new zero-day vulnerability in devices or software, we report it directly to the vendor using a series of steps called “responsible disclosure.”

I reviewed this process in an earlier post here on Dark Reading. Unfortunately, the disclosure process is sometimes criticized. Some accuse manufacturers of not fixing the problems with their devices, while others accuse researchers of releasing vulnerability information recklessly. But based on my personal experience reporting vulnerabilities, I strongly believe disclosure is beneficial for all parties involved.


To prove this point, I’d like to walk you through a recent research project I completed so you can see the steps we take to find and responsibly disclose a new vulnerability. Some quick background: I’m part of WatchGuard’s Threat Lab, and we recently launched an ongoing research project that evaluates Internet of Things devices in response to the growing threats associated with the Mirai botnet. It’s worth noting that the vendor highlighted in this example was exemplary in responding to our disclosure and worked to immediately patch the vulnerability.

The product I’ll cover in this article is the Amcrest IPM-721S Wireless IP camera. This webcam allows users to view footage at Amcrest’s website (called Amcrest View). The first thing I attempted was the obvious goal: viewing from a camera that was not associated with my account. Let’s jump in.

I performed most of my investigation using Burp Suite’s proxy. My attempts to retrieve connection information for a specific camera with an unauthenticated session failed, confirming that Amcrest verifies ownership of each camera’s serial number by the authenticated user before providing connection details. No vulnerability here.

If I couldn’t view a camera that I didn’t own, how about simply taking ownership of the account that owned the camera?

Amcrest View, like most Web applications, lets users modify account settings such as the associated email address. To change the email address associated with an account, the browser submits a POST request. The POST request contains several parameters; the important ones are “user.userName” and “user.email.” A successful request tells Amcrest View to set the email address for the username in the user.userName parameter to the value of the user.email parameter. As it turned out, my request succeeded even when the user.userName parameter didn’t match the username of the currently authenticated session. Houston, we have a problem.
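The missing authorization check is easy to express in code. The handler below is a hypothetical sketch (Amcrest’s server code is not public); only the parameter names come from the report:

```python
def change_email(session_user: str, form: dict, accounts: dict) -> bool:
    """Handle a settings POST. The reported flaw was that the server trusted
    form["user.userName"] rather than the authenticated session user.
    Hypothetical handler -- only the parameter names are from the report.
    """
    target = form.get("user.userName")
    if target != session_user:   # the check the vulnerable endpoint lacked
        return False             # reject cross-account modification
    accounts[target]["email"] = form["user.email"]
    return True
```

With the check in place, a request naming another user’s account is simply refused, regardless of what the client puts in the form.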

If I wanted to exploit this vulnerability, I could have modified the email address associated with any account and then issued a password reset to take over the account and obtain live access to their cameras (creepy, right?). While confirming the unauthorized account modification vulnerability, I also found the input I passed in the user.email parameter was not validated or fully sanitized. This means the software did not have code to check that the user.email parameter was, in fact, an email address. This could be exploited to inject arbitrary JavaScript into a victim’s session — a perfect example of a stored cross-site scripting vulnerability. Attackers use cross-site scripting vulnerabilities to siphon off authentication credentials and load malicious websites full of malware without the victim’s knowledge.
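The second flaw – no validation of the user.email value – would have been blocked by a server-side check that the parameter actually looks like an email address. The regex below is illustrative only, not Amcrest’s actual fix; real systems typically also HTML-escape anything they echo back:

```python
import re

# Deliberately strict, illustrative pattern; production code usually
# delegates to a vetted email-validation library instead.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def validate_email(value: str) -> bool:
    """Server-side check that rejects script payloads smuggled in user.email."""
    return bool(EMAIL_RE.fullmatch(value))
```

Anything containing angle brackets or other markup characters fails the match, so a stored-XSS payload never reaches the database in the first place.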

After discovering this vulnerability, I turned my notes, screenshots, and code samples into a vulnerability disclosure report, which you can read in full here. I submitted this report to Amcrest on November 4, 2016. Many large vendors have a process in place for reporting vulnerabilities, and if not, a researcher usually sends an email encrypted with the vendor’s public PGP key. In this case, I contacted Amcrest’s support team to inquire how they would like me to report the vulnerability. Ultimately, I submitted my vulnerability report through a support case.

Now the important question – what is the appropriate amount of time to allow a vendor to respond before publicly disclosing a vulnerability? A researcher should give vendors a reasonable amount of time to investigate and patch, but there’s no industry standard for how long that is. I opt for 60 days, which is common.

Once the vendor has issued a patch, or if the vendor has not responded in a reasonable amount of time, a researcher will usually publish their vulnerability report publicly. This allows end users to protect themselves if the vendor can’t or won’t fix the problem (the implied threat of public disclosure can put pressure on vendors to fix security issues quickly).

Fortunately, that was not an issue in this case. Amcrest got back to me in just four days to confirm my report. By early December, it had patched the vulnerabilities and issued a security notice that urged customers to update their camera’s firmware. I published my security report, satisfied I had helped at least one company and its users become more secure. This was a win-win for all parties involved.

Contrary to what you might read in the news, most vulnerabilities reported to manufacturers turn out like this one. Both parties benefit; the vendor makes their products and customers more secure, and the researcher increases public awareness of vulnerabilities and builds their reputation by publishing their findings. It’s not a perfect system, but I strongly believe it’s beneficial for everyone involved.


Marc Laliberte is an information security threat analyst at WatchGuard Technologies. Specializing in network security technologies, Marc’s industry experience allows him to conduct meaningful information security research and educate audiences on the latest cybersecurity …

Article source: http://www.darkreading.com/threat-intelligence/a-real-life-look-into-responsible-disclosure-for-security-vulnerabilities/a/d-id/1328315?_mc=RSS_DR_EDT

Boffins show Intel’s SGX can leak crypto keys

A researcher who in January helped highlight possible flaws in Intel’s Software Guard Extensions’ input-output protection is back, this time with malware running inside a protected SGX enclave.

Instead of protecting the system, Samuel Weiser and four collaborators at Austria’s Graz University of Technology write that the proof-of-concept uses SGX to conceal the malware – and that within five minutes, they can grab RSA keys from SGX enclaves running on the same system.

It’s the kind of thing SGX is explicitly designed to prevent. SGX is an isolation mechanism that’s supposed to keep both code and data from prying eyes, even if a privileged user is malicious.

Weiser and his team mounted a cache side-channel attack using the technique known as “Prime+Probe”, and say it works in a native Intel environment and across Docker containers.

The PoC is specifically designed to recover RSA keys from someone else’s enclave in a three-step process: first, discover the location of the victim’s cache sets; second, watch those cache sets while the victim triggers an RSA signature computation; and finally, extract the key.

As the paper puts it:

We developed the most accurate timing measurement technique currently known for Intel CPUs, perfectly tailored to the hardware. We combined DRAM and cache side channels, to build a novel approach that recovers physical address bits without assumptions on the page size. We attack the RSA implementation of mbedTLS that is used for instance in OpenVPN. The attack succeeds despite protection against side-channel attacks using a constant-time multiplication primitive. We extract 96% of a 4096-bit RSA private key from a single Prime+Probe trace and achieve full key recovery from only 11 traces within 5 minutes.

The attack even works across different Docker containers, because the Docker engine calls to the same SGX driver for both containers.

Getting keys out of Docker on Intel SGX: Docker containers share the same SGX driver

Timing: A cryptography side-channel attack needs a high resolution timer, something forbidden in SGX. Weiser and his collaborators combed Intel’s specs, and settled on the inc and add instructions, because these have “a latency of 1 cycle and a throughput of 0.25 cycles/instruction when executed with a register as an operand”.

To emulate the forbidden timer, the researchers run a counter thread built from these x86 instructions:

mov counter, %rcx    # load the address of the shared counter variable
1: inc %rax          # increment in a register (1-cycle latency)
mov %rax, (%rcx)     # publish the value to memory for the measuring thread
jmp 1b               # loop forever

“Eviction set” generation: this step discovers virtual addresses that map to the same cache set. As the paper explains: “we scan memory sequentially for an address pair in physical proximity that causes a row conflict. As SGX enclave memory is allocated in a contiguous way we can perform this scan on virtual addresses.”

With those two steps completed, Weiser et al worked out how to monitor vulnerable cache sets, looking for the characteristic signature of RSA key calculation.

This part of the attack has to happen offline – that is, separately from the cache monitoring that collects the data – because the captured traces are large and noisy (from timing errors, context switching, non-RSA-key activity in the victim’s enclave, CPU timing changes due to power management, and so on).

Key recovery comes in three steps. First, traces are preprocessed. Second, a partial key is extracted from each trace. Third, the partial keys are merged to recover the private key.
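The merging step can be illustrated with a much-simplified per-position majority vote across traces. This is a stand-in sketch, not the paper’s actual recovery algorithm; `None` marks a bit position a given trace could not read:

```python
from collections import Counter

def merge_partial_keys(partials):
    """Merge noisy partial key bit-strings by per-position majority vote.
    Each partial is a list of bits (0/1), with None where the trace was
    unreadable. A simplified stand-in for the paper's recovery step.
    """
    length = len(partials[0])
    merged = []
    for i in range(length):
        # Count votes from every trace that actually observed this bit
        votes = Counter(p[i] for p in partials if p[i] is not None)
        merged.append(votes.most_common(1)[0][0] if votes else None)
    return merged
```

Because read errors rarely strike the same position in every trace, a handful of noisy traces suffices to reconstruct the full key – consistent with the paper’s 11-trace figure.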

The researchers demonstrated the attack on an SGX-capable Lenovo ThinkPad T460s running Ubuntu 16.10.

The researchers say their attack can be blocked, but the fix will have to come from Intel, because modifications to operating systems risk weakening the SGX model. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/eggheads_slip_a_note_under_intels_door_sgx_can_leak_crypto_keys/