
Chrome drops ‘secure’ label for HTTPS websites

When it comes to browser security, how important are the address bar icons and labels that tell users about a site’s security status?

For Google at least, they matter a lot. In 2017 the Chrome browser started marking transactional sites not using HTTPS as ‘Not Secure’. In July 2018, all sites not offering HTTPS will get this label.

This always risked making the Chrome address bar look a bit crowded. In addition to ‘Not Secure’ with a red warning triangle, there was ‘Secure’ (for sites using HTTPS), as well as the famous green padlock symbol dating back more than a decade.

But which signal matters most – virtue or deficiency?

Given that HTTPS security is rapidly becoming the norm – thanks largely to arm-twisting by Google itself – the company has announced that, in future, it will only inform users when a site is insecure.

Consequently, from Chrome version 69, due in September, the ‘Secure’ label will disappear from HTTPS sites and the green padlock will turn grey.

At some point beyond that, the padlock will vanish completely, leaving the address bar empty save for the URL.

It’s a move that turns the address bar from something that tells people that something is good (using HTTPS) into something that only tells users when something is bad (using insecure HTTP).
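The flipped logic is simple enough to sketch. Here is a minimal Python illustration of the Chrome 69 treatment described above (the function name and example URLs are invented for the sketch, and it only models the scheme, not certificate validity):

```python
from urllib.parse import urlparse

def chrome69_label(url: str) -> str:
    """Address-bar text under the new scheme: flag only what's bad."""
    scheme = urlparse(url).scheme
    if scheme == "http":
        return "Not secure"   # insecure sites get called out
    return ""                 # HTTPS earns no badge; the padlock greys out, then goes

print(chrome69_label("http://example.com"))   # -> Not secure
print(chrome69_label("https://example.com"))  # -> (empty string: no label at all)
```

The point of the design is visible in the second call: security becomes the unremarked default, and only its absence draws attention.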

Well done to Google for announcing the death knell of HTTP, which faces certain extinction for every site this side of the long tail.

It’s a far cry from the perplexing wordiness of the past. Take Internet Explorer 8 (2009-11), which used to throw up the following dialogue when visiting a site using HTTPS:

You are about to view pages over a secure connection. Any information you exchange with this site cannot be viewed by anyone else on the Web.

And where HTTPS was absent:

You are about to leave a secure internet connection. It will be possible for others to view information you send.

Most people just turned them off by ticking the “do not show this warning” box, which perfectly sums up why this signalling design turned into an irrelevance.

Google’s tweak doesn’t mean that confusion about address bar signalling is gone for good – rival browsers Firefox, Edge, Safari and Opera still have their own slightly different systems for signifying the presence or absence of HTTPS.

Then there is the vexed issue of whether sites should be assumed to be good simply because they are using HTTPS.

This is a risky assumption, given that there’s nothing to stop a phishing site from deploying HTTPS as a calculated attempt to spoof its virtues. Even legitimate sites using HTTPS can sometimes fail to secure the data of their users.

Longer term, the answer is not more icons or labels but for companies such as Google to find a way to filter out sites that fall short of acceptable levels on a balance of indicators. Right now, we’re still a long way from a world built on that sort of transparent web security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tYsySahOF6s/

Real-time cellphone location data leaked for all major US carriers

LocationSmart – a US company that aggregates real-time cellphone location data – has leaked the real-time locations of customers of all major US mobile carriers, without their consent, via its buggy website, security journalist Brian Krebs reported on Thursday.

Krebs says the data could be had without a password or any other form of authentication or authorization.

Krebs was tipped off about an unsecured service on the site by Robert Xiao, a security researcher at Carnegie Mellon University who was tinkering with a free demo of a find-your-phone service from LocationSmart. Xiao’s interest had been piqued after he read about the company supplying real-time phone location data to one of its customers – 3Cinteractive – which then reportedly supplied the data to Securus Technologies.

Securus, which provides and monitors calls to inmates, was the subject of a 10 May article from the New York Times, about how its location service – typically used by marketers who offer deals to people based on their location – can easily be used to find the real-time location of nearly any US phone to as close as a few hundred yards.

The issue came to light when a former Missouri sheriff was charged with using a private service to track people’s cellphones without court orders: Cory Hutcheson has been charged with using Securus at least 11 times to look up people’s information, including that of a judge and members of the State Highway Patrol.

On 15 May, ZDNet reported that Securus was actually getting its data from the carriers by going through an intermediary: 3Cinteractive, which was getting it from LocationSmart.

As an archived version of its website shows, LocationSmart has claimed to have “direct connections to all major wireless carriers providing near-complete coverage for US subscribers.” That includes any AT&T, Sprint, T-Mobile, US Cellular or Verizon phone in the US, coming as close as a few hundred yards. It’s bragged about having access to 95% of the country’s carriers, including smaller ones such as Virgin, Boost, and MetroPCS, as well as Canadian carriers, like Bell, Rogers, and Telus, according to ZDNet.

Kevin Bankston, director of New America’s Open Technology Institute, told ZDNet that the carriers were selling the phone location data to LocationSmart as a workaround, since the Electronic Communications Privacy Act forbids telecom companies from disclosing the data to the government but doesn’t restrict them from disclosure to other companies that may then give it to the government.

ZDNet quoted Bankston:

[The loophole is] one of the biggest gaps in US privacy law.

The issue doesn’t appear to have been directly litigated before, but because of the way that the law only restricts disclosures by these types of companies to government, my fear is that they would argue that they can do a pass-through arrangement like this.

Besides exploitation of the legal loophole, the past few weeks have brought news of the data being exposed in multiple ways: first, there was Securus’s admitted failure to properly verify authentication/authorization for access to the data – a failure that Senator Ron Wyden has demanded be investigated by the Federal Communications Commission (FCC) and several major telecommunications companies.

Then there was the flaw in LocationSmart’s website. Krebs reports that Xiao, the Carnegie Mellon University researcher, found that LocationSmart’s demo page required users to consent to having their phone located by the service, but the application programming interface (API) used to answer those queries didn’t require any consent or authentication when accessed directly.
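As a sketch of that class of flaw – consent enforced in the web front end but not at the API boundary – consider this hypothetical Python fragment. The phone numbers, data stores, and function names are all invented for illustration; this is not LocationSmart's actual code:

```python
# Hypothetical sketch of the flaw class: the demo page enforces consent,
# but the backend API answers any well-formed query directly.

CONSENTED = {"+15551230001"}                      # subscribers who opted in (invented)
LOCATIONS = {"+15551230001": (37.77, -122.42),   # invented lat/long records
             "+15551239999": (40.71, -74.00)}

def vulnerable_lookup(phone):
    """Trusts that the caller came through the consent page -- it may not have."""
    return LOCATIONS.get(phone)

def fixed_lookup(phone):
    """Consent checked at the API boundary, not just in the web front end."""
    if phone not in CONSENTED:
        raise PermissionError("no consent recorded for this subscriber")
    return LOCATIONS.get(phone)
```

The vulnerable version happily returns a location for a subscriber who never consented; the fixed version refuses, because authorization lives on the server side where a direct API caller cannot skip it.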

Then too, on Wednesday there was another shocker: Motherboard brought us news of a hacker who broke into Securus’s servers to steal 2,800 usernames, email addresses, phone numbers and hashed passwords of authorized Securus users. The hacker reportedly gave Motherboard some of the stolen data, including usernames and poorly secured passwords – secured with the notoriously weak MD5 algorithm – for thousands of Securus’s law enforcement customers.

From Motherboard:

A spreadsheet allegedly from a database marked ‘police’ includes over 2,800 usernames, email addresses, phone numbers, and hashed passwords and security questions of Securus users, stretching from 2011 up to this year.

This isn’t the first time that Securus has shown itself to be careless with sensitive information. In 2015, The Intercept investigated a hack of Securus’s system that involved 70 million prisoner phone calls.
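To see why unsalted MD5 counts as “poorly secured”, note that it can be computed fast enough to test candidate passwords against a leaked hash at enormous rates. A toy Python dictionary attack, with an invented hash and wordlist standing in for the stolen data:

```python
import hashlib

def md5_hex(password: str) -> str:
    """Unsalted MD5 -- the weak scheme reportedly protecting the leaked passwords."""
    return hashlib.md5(password.encode()).hexdigest()

def dictionary_attack(target_hash: str, wordlist):
    """With a fast, unsalted hash, guessing from a wordlist is trivial."""
    for candidate in wordlist:
        if md5_hex(candidate) == target_hash:
            return candidate
    return None

leaked = md5_hex("password123")   # stand-in for one leaked hash
print(dictionary_attack(leaked, ["letmein", "qwerty", "password123"]))  # -> password123
```

A deliberately slow, salted scheme such as bcrypt or Argon2 defeats this by making each guess expensive and each user's hash unique.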

What, exactly, are the carriers going to do about their part in this multi-part failure to protect their customers’ data? According to LocationSmart’s privacy policy, that data may include, but is not limited to, GPS data such as device latitude/longitude, accuracy, heading, speed, and altitude; cell tower, Wi-Fi access point, or IP address information. Xiao used the latitude and longitude to plot volunteers’ locations via Google Maps.

How hard is it to exploit the API to get at all that sensitive information? Not hard at all, Xiao said:

I stumbled upon this almost by accident, and it wasn’t terribly hard to do. This is something anyone could discover with minimal effort. And the gist of it is I can track most people’s cell phones without their consent.

This is really creepy stuff.

LocationSmart took the leaky service offline after Krebs informed the company of Xiao’s findings on Thursday. LocationSmart Founder and CEO Mario Proietti told Krebs that the company is investigating the issue but claimed that the company doesn’t give away data.

We make it available for legitimate and authorized purposes. It’s based on legitimate and authorized use of location data that only takes place on consent. We take privacy seriously and we’ll review all facts and look into them.

As far as the carriers go, they’re not confirming or denying their connections to LocationSmart, though the company’s site lists their corporate logos. When Krebs contacted the four major carriers, their spokespeople mostly referred him to privacy policies. One – T-Mobile – said that it shut down the funneling of customers’ location data to Securus after it received Wyden’s letter.

Xiao suggested that without more scrupulous attention to who gets at our phones’ location data under what authority, we should likely gird ourselves for more news of privacy violations when it comes to the tracking devices that ride in our pockets:

We’re going to continue to see breaches like this happen until access to this data can be much more tightly controlled.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rbHd0bD2vJk/

Greenwich uni fined £120k: Hole in computing school site leaked 20k people’s data

The Information Commissioner has slapped a £120,000 fine on the University of Greenwich in the UK after a security cockup by its computing and maths school compromised the data of almost 20,000 individuals.

The incident occurred after an academic and a student from the then devolved department developed a microsite to facilitate a training conference in 2004.

The microsite, which was not closed down or secured post event, was first compromised in 2013 and then hit by multiple attackers in 2016 who exploited the vulnerability to access other areas of the web server.

The personal data included the contact information of 19,500 people such as students, staff and alumni – comprising names, addresses and telephone numbers. Around 3,500 records involved sensitive data such as details of learning difficulties and staff sickness records, which were subsequently posted online.

Steve Eckersley, ICO head of enforcement, said:

“Whilst the microsite was developed in one of the University’s departments without its knowledge, as a data controller it is responsible for the security of data throughout the institution.

“Students and members of staff had a right to expect that their personal information would be held securely and this serious breach would have caused significant distress. The nature of the data and the number of people affected have informed our decision to impose this level of fine.”

The commissioner found the university did not have in place appropriate technical and organisational measures for ensuring, so far as possible, that such a security breach would not occur.

Greenwich University secretary Peter Garrod said:

We acknowledge the ICO’s findings and apologise again to all those who may have been affected. Since 2016 when the unauthorised access to some of the university’s data was discovered, we have carried out a major review of our data protection procedures and made a number of key changes.

Specifically, we have invested significantly in new technology and staff; overhauled the information technology governance structure to improve internal accountability; and implemented new monitoring systems and a rapid response team to anticipate and act on threats.

No organisation can say it will be immune to unauthorised access in the future, but we can say with confidence to our students, staff, alumni and other stakeholders, that our systems are far more robust than they were two years ago as a result of the changes we have made. We take these matters extremely seriously and keep our procedures under constant review to ensure they reflect best practice.

®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/21/university_of_greenwich_slapped_with_120k_fine_for_breaching_details_of_20k_people/

What Israel’s Elite Defense Force Unit 8200 Can Teach Security about Diversity

Unit 8200 doesn’t follow a conventional recruiting model. Technical knowledge isn’t a requirement. The unit values traits that emphasize problem-solving and interpersonal skills, and it uses hiring processes that build female leaders.

Over the past few years, alumni of the elite Israeli Defense Forces’ Unit 8200 have become known for founding cybersecurity startups, including the two co-founders of Cybereason, where I work, along with many of my colleagues — both men and women. At 18, I was picked to serve in Unit 8200. The experience sparked my interest in both military intelligence and my current career as a security researcher. 

Surprisingly, despite its renown and success, Unit 8200 doesn’t follow a conventional recruiting model. Technical skills aren’t a requirement; the unit values traits that indicate leadership and problem-solving skills, using hiring processes that build women leaders and serve as an example for the private sector to address the current security talent shortage. Here are five best hiring practices from Unit 8200.

Practice 1: When recruiting, look beyond the traditional candidate profile.
Traditionally, when filling entry-level security positions, organizations look for candidates with either IT or computer science backgrounds. Hiring managers want script kiddies and computer gamers to fill those jobs. But hiring people with only these profiles limits the workforce’s diversity. In reality, men outnumber women in computer science degree programs (although not everywhere). In 2015, for instance, women earned 18% of all computer science degrees awarded in the US. That percentage is even lower for women of color, according to the National Center for Education Statistics. Meanwhile, females who like to geek out in their free time and play video games often face harassment and abuse. Ultimately, these situations lead to low numbers of women entering the IT and security fields.

Unit 8200 knows it’s not going to find female candidates by looking for recruits with conventional backgrounds. Instead, candidates with general traits that indicate success in tech fields are sought. These include critical thinking, the ability to learn skills on their own, leadership, problem-solving skills, and good interpersonal skills. As for technical skills, Unit 8200’s leadership assumes those can be acquired later.

As part of its recruiting program, Unit 8200 runs tests designed to identify individuals who can handle stressful events, are team players, can find innovative solutions to various problems, and, most importantly, are coachable. Recruits who pass these tests undergo extensive training that teaches them any technical information they need to know. This training is followed by on-the-job training. This approach has helped Unit 8200 have an equal number of female and male soldiers.

Practice 2: Manage the high employee turnover rate.
Losing cybersecurity talent is a chief concern at many organizations. Unit 8200 is a great place to learn how to deal with high employee turnover since approximately 90% of its workforce only serves in the unit for five years or less. To handle this situation, Unit 8200 has a system to deal with high workforce churn. All the unit’s soldiers serve in small squads. After finishing their training, new recruits are assigned a mentor who usually has served for about a year. That gives them enough time to acquire knowledge that’s transferable while still being aware of all the challenges a newbie faces. There’s a set cadence for promotion and succession. After about two years, the more accomplished soldiers receive officer’s training to become squad commanders. They’re replaced by the soldiers who were recruited the year before. In an environment of constant but planned turnover, capturing and sharing knowledge is key and important information is kept in secure systems.

Practice 3: Provide a seat at the table.
Women in the workforce often feel excluded from discussions on topics about which they have extensive knowledge. But in Unit 8200, subject matter experts discuss critical military matters with top commanders, regardless of gender or how junior they are. For example, a 19-year-old female soldier briefing a chief of general staff in the IDF is not uncommon. This provides them with an opportunity to participate in the decision-making process, a chance that’s rarely given to females early in their careers.

Practice 4: Fight “impostor syndrome.”
Some women harbor the false belief that they’re not qualified for their jobs, despite their professional accomplishments. This is known as “impostor syndrome,” the psychological perception that stymies women from advancing in their careers. At Unit 8200, which recruits young, untrained individuals, leaders emphasize recruits’ abilities to learn and improve. During training, all individuals receive daily updates on how they’re progressing with skill development. They’re routinely praised for achievements and constantly reminded about how far they’ve advanced in a short period of time. Recruits are always reminded that their unique abilities led to them being selected for their roles in the unit, increasing their confidence. The unit’s reward and promotion program, while helping motivate all the unit’s soldiers, particularly boosts the self-worth of female soldiers.

Practice 5: Consider security industry takeaways.
In Unit 8200, diversity is welcomed. Having soldiers with different backgrounds leads to new approaches to problems. The security industry is filled with tons of complex problems that need solutions, problems that can’t be solved if organizations only look to hire men with IT and computer science backgrounds because there aren’t enough of them. There are simply too many security jobs to fill.

People whose experiences are unconventional shouldn’t be passed over for security jobs. The security industry (and the greater technology community) needs to realize that technical skills alone do not make a person qualified. Many women who lack a technical background but possess keen problem-solving skills, great communication abilities and strong leadership qualities would be eager to pursue a security career. They just need someone to give them the opportunity.

Unit 8200 has demonstrated that its approach to finding security talent works. The private sector should take note. In Israel, just 26% of the tech positions are held by women. And the situation isn’t much better in the U.S., where women hold 25% of those jobs. Gender diversity is even worse in cybersecurity; women comprise only 11% of the global workforce. If the methodologies used by Unit 8200 to recruit and promote women are adopted by the private sector, not only would security teams become more diverse, the security talent shortage wouldn’t be so acute.

Related Content:

Lital Asher-Dotan, senior director of research and content at Cybereason, has 15 years of experience working with tech companies. Asher-Dotan is a veteran of Unit 8200 of the Israeli Defense Force. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/what-israels-elite-defense-force-unit-8200-can-teach-security-about-diversity/a/d-id/1331827?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Get Smart About Network Segmentation & Traffic Routing

What’s This?

Through a combination of intelligent segmentation and traffic routing to tools, you can gain much better visibility into your network. Here’s how.

The first step to solving the security tool selection problem is asking the right questions:

•   Do you handle credit cards? Are you subject to Payment Card Industry (PCI) regulations?
•   Do you handle Personal Identifiable Information (PII) in Europe that is in scope with the General Data Protection Regulation (GDPR)?
•   Do you build computer things? Do your engineers do stuff that could “look suspicious”?
•   Do you have a small number of things you really need to secure, but the rest can burn?

Step two is taking inventory of exactly what your company does. For example, in my world, I know that:

•   We build network security tools.
•   Our engineers do a lot of DevTest work that “looks suspicious.”
•   We run traffic generators that create fake network behavior.
•   We run a lot of testing side by side with production traffic.
•   We have multiple clouds.

While our jobs entail much more, these tasks play directly into our tool selection process. Fortunately for me, I’ve been given carte blanche at Gigamon to test whatever tools I want on my live network. In fact, seeing how tools have failed in my network is what led me down the bespoke train of thought. To give some concrete examples:

•   User Behavior Analysis (UBA) tools firing dozens of alerts per day.
•   Network Intrusion Detection System (NIDS) tools failing to detect files fast enough due to excessive, irrelevant traffic.
•   Metadata alarms triggering at all hours.
•   Security Information and Event Management (SIEM) systems failing on unusual Transmission Control Protocol (TCP) parameters.

As a research and development shop that builds security tools, we do a lot of stuff that other security tools would consider bad. For example, if I were to deploy a UBA and one of its criteria is “Secure Shell (SSH) is suspicious,” I’m going to get several alerts or need to do a ton of whitelisting that may even increase my risk, or a combination of both. Likewise, if you happen to be downloading malware samples that pass through your NIDS … well, as they say, “Hang on tight.”

By contrast, if you’re a retail organization, where SSH on your network would be suspicious since it’s probably something only a few of your administrators are doing, perhaps, for you, a UBA might drop in right out of the box and be an effective tool in your security posture.

Or what if your business is somewhere in between – for example, you are developing tools in a cloud environment, but also have normal business operations like financial planning and analysis (FP&A) or human resources (HR)? In that case, tool selection gets more complicated. Each of your supported groups will likely have different threat models that need to be monitored differently – call it bespoke monitoring to go with our bespoke network. How do you secure those FP&A and HR people while not burning through Security Operations Center (SOC) analysts because of the pile of false positives coming out of your development team? How do you secure that team?

The answer: network segmentation and tool routing.

More Nuanced Network Segmentation

Yes, I hear you, “I’ve got a DMZ and a management zone, I’m segmented already.” I challenge that you can do better and grow your security posture with even more nuanced segmentation.  Segmentation is generally viewed as a method to contain lateral movement, but I claim that we can expand this definition to encompass a strategy to contain lateral movement and provide situationally targeted security monitoring. 

By recognizing the different behavioral scopes previously discussed, you can start segmenting based on security requirements. This isn’t a new concept. In James Rome’s paper “Enclaves and Collaborative Domains,” he notes that segmentation “is required when the confidentiality, integrity or availability of a set of resources differs from those of the general computational environment.” Does that sound familiar? You can solve this problem when you move into a micro-segmented environment – part one of the solution.

Part two is getting the data from each segment to the correct tools.

Figure 1: Micro-segments with tool-specific routing.

Figure 1 shows how micro-segmentation looks at Gigamon – well, a bit. I’ve purposefully not added much detail so you don’t attack me. But the graphic does show that we’re considering tools based on how to properly monitor each group. For example, with my HR and FP&A teams, a UBA could be very effective at finding unusual behavior such as SSH on the segment or unusual client-client interactions. On the lab network, however, the UBA would likely gurgle blood while a NIDS looking for file hashes could be useful since, even in testing, I don’t expect engineers to be shipping malware around. Additionally, for tools that are of uniform use, like my SIEM, I can route all traffic to it. In my real use case, I don’t do this because I find that targeted metadata is easier to work with and so instead, I generate that off of all my traffic, and route the details that interest me into my SIEM for intel bumping and alerting.

Through a combination of smart segmentation and selective traffic routing to tools, you can gain much better visibility into your network and, at the same time, create high-fidelity data to work with. This approach helps you better pair tools with workloads, which in the long run can lower your tool expenditure, since you may need fewer boxes to cover the various targeted segments. What’s more, with intelligent tool routing, you can also stop traffic you have whitelisted – for example, Netflix – from even hitting the tools, thus lowering the overall tool spend as your network scales up.
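The segment-to-tool routing idea above can be sketched as a simple policy table. This hypothetical Python fragment (segment names, tool names, and the whitelist are invented for illustration, not Gigamon's actual configuration) captures the logic: quiet segments feed behaviour analytics, noisy lab traffic skips them, and whitelisted destinations bypass the tools entirely:

```python
# Hypothetical routing policy -- segments, tools, and whitelist are illustrative.
SEGMENT_TOOLS = {
    "hr":  ["uba", "siem"],    # quiet segments: behaviour analytics work well here
    "fpa": ["uba", "siem"],
    "lab": ["nids", "siem"],   # noisy DevTest traffic would drown a UBA in alerts
}
WHITELIST = {"netflix.example.com"}  # known-good destinations skip the tools

def route(segment: str, dest_host: str):
    """Return the list of tools that should see a flow from this segment."""
    if dest_host in WHITELIST:
        return []                                # don't burn tool capacity on it
    return SEGMENT_TOOLS.get(segment, ["siem"])  # default: metadata to the SIEM
```

In a real deployment this policy would live in the visibility fabric rather than application code, but the shape of the decision – per-segment tool selection plus a bypass list – is the same.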

 

Jack is principal information security engineer at Gigamon, responsible for managing the company’s internal security team – conducting security operations, security architecture and incident response. A hands-on, seasoned operations manager with a focus on quality and … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/gigamon/get-smart-about-network-segmentation-and-traffic-routing/a/d-id/1331845?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Signal bugs, car hack antics, the Adobe flaw you may have missed, and much more

Roundup Here’s your guide to this week’s infosec news beyond what we’ve already covered.

ICE’s extreme vetting plan melts away

US Customs won’t be getting its massive terror-predicting system after all. It’s reported that America’s immigration cops – ICE – have abandoned their call for the development of an artificially intelligent tool that would be able to predict whether a person entering the country was secretly a terrorist, based on social networking activity.

We’re told it wasn’t outcry over human rights or privacy concerns that killed the plan, but rather the onset of cold reality. Developers who looked into what ICE wanted concluded there was no way such a system could be viable due to the limitations of today’s AI and data collection technologies. Or at least, that’s what they told the agency.

Instead, ICE will continue to rely on good old fashioned meatware to do the job, as its agents will have to pore over online posts by hand to weed out the bad guys.

The Adobe zero-day you didn’t hear about

Early this week, Adobe put out a set of patches for Reader, Acrobat, and Photoshop. As it turns out, at least one of the flaws had exploit code floating around in the wild for miscreants to potentially use.

That particular flaw, CVE-2018-4990, is a double-free() programming blunder that can be exploited to pull off remote code execution via a booby-trapped document. More specifically, says Malwarebytes, it can be paired up with a Windows flaw, CVE-2018-8120, to create a particularly nasty exploit package.

“Those two combined zero-days were necessary to escape the Acrobat Reader sandbox protection, which to its credit has been improving the security of the software drastically, so much so that malicious PDFs that were once common as part of drive-by download attacks have all but vanished,” Malwarebytes explains.

It goes without saying that you’ll want to install those Reader and Acrobat updates, if you haven’t already.

White House kills cyber czar role

Last week, we told you about fears within the security community that the White House was going to do away with its cyber security advisor role. Just days later, those fears were confirmed.

Rob Joyce will be the last person to serve in the White House advisor role. Instead, John Bolton will delegate the duties of the job to others within the National Security Council. With mid-term elections months away, opponents of the move are worried the cuts could make the US government and its electorate more vulnerable to online attacks from both foreign governments and private hackers.

¡Qué malo! Mexican bank hit by hackers

Out of Mexico City comes this story of a cyber-heist targeting one of the Mexican capital’s largest banks.

An attack that used phony transfer orders was able to suck around $15m out of Banorte. The crooks were able to get into the bank’s payments system to order the illicit withdrawals, possibly with help from tellers working within the bank locations themselves.

Banorte said no individual accounts were affected by the attack, and the bank has switched to a different system.

EFF prevails in border privacy battle

The Electronic Frontier Foundation says it has won a key decision in the Alasaad v Nielsen case as a federal court ruled the complaint can proceed to hearing.

The case concerns arguments from the EFF and ACLU that border patrol agents violated the First and Fourth Amendment rights of citizens when they performed warrantless searches on 11 travelers’ devices.

If successful, the suit could force border agents to obtain a warrant before they can search a device they encounter at a crossing.

“It is the latest and greatest of a growing wave of judicial opinions challenging the government’s claim that it can ransack and confiscate our electronic devices—just because we travel internationally,” the EFF said of the decision.

“By allowing the EFF and ACLU case to proceed, the district court signaled that the government’s invasion of people’s digital privacy and free speech rights at the border raises significant constitutional concerns.”

Smoked Signal

It has been a rough month for the Signal messaging platform, and things got a bit worse this week when researchers uncovered a pair of vulnerabilities in the desktop version of the client.

According to researcher Ivan Barrera Oro, the desktop software fails to properly sanitize HTML components and is vulnerable to tag injection attacks. Two variants on the technique were assigned CVE-2018-10994 and CVE-2018-11101.

In that attack, the bad guy would be able to slip malicious HTML code into a tag that would then be able to automatically execute on the machine of the target. In practice, this would most likely be used to conduct cross-site scripting attacks.
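The underlying fix for this class of bug is to escape untrusted input before it is embedded in markup, so injected tags render as inert text instead of executing. A minimal Python sketch using the standard library (the wrapper function is invented for illustration; Signal's actual desktop client is JavaScript, but the principle is identical):

```python
import html

def render_message(untrusted: str) -> str:
    """Escape before embedding in markup so injected tags become inert text."""
    return "<div class='msg'>" + html.escape(untrusted) + "</div>"

print(render_message("<script>alert(1)</script>"))
# -> <div class='msg'>&lt;script&gt;alert(1)&lt;/script&gt;</div>
```

Failing to apply this step anywhere attacker-controlled text meets the DOM is exactly the kind of gap the two Signal CVEs describe.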

You will want to update Signal desktop to version 1.11, where both of the vulnerabilities are now patched.

Calamp makes smart cars do dumb stuff

Researcher Vangelis Stykas says bugs in a number of popular smart car alarms could leave them vulnerable to remote unlocking and activation.

Stykas says vendor Calamp was using an insecure server configuration to handle reports from devices running its smart alarm service. Because of this, an attacker who was able to forge a request to the server would be able to access the service’s production database and, from there, be able to take over the accounts of users.

Having a compromised account would then let an attacker use the mobile app to interact with the car’s smart alarm. From there, it’s game over, as that app can control things like unlocking the car and starting the engine.

Fortunately, Stykas did the right thing and privately disclosed the issue to the vendor. The issue was patched well before the researcher went public with his findings.

Photographer hacks his way to landscape photo album

Globetrotting hacker-slash-photographer Marcus Desieno has found a novel way to conduct landscape photography, showcased in his new photobook “No Man’s Land”.

The album consists entirely of photos Desieno shot through hijacked CCTV cameras. Because many cameras in the field are so poorly secured, he was able to log in with default credentials and take some admittedly beautiful shots from compromised surveillance units around the world.

“Focusing on landscapes shows how far-reaching our surveillance state is,” Desieno told the photo journal.

“The camera could be high on a mountain, where it takes someone hours to climb to – you would think no one can watch you there.” ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/19/security_roundup/

Actor Advertises Japanese PII on Chinese Underground

The dataset contains 200 million rows of information stolen from websites across industries, likely via opportunistic access.

A dataset containing more than 200 million lines of Japanese personally identifiable information (PII) has been found on the Chinese underground market, researchers report. It’s believed the data is authentic and was exfiltrated from multiple Japanese website databases.

Experts at FireEye iSIGHT Intelligence first noticed the actor advertising the dataset in December 2017. This actor has sold site databases on Chinese underground forums since at least 2013 and is likely connected to someone living in China’s Zhejiang province.

The team identified the actor and data as part of regular monitoring of the cyber threat landscape, explains Oleg Bondarenko, senior manager for international research at FireEye. The Chinese underground primarily consists of instant messenger groups such as QQ, he says. This dataset was not discovered on a forum but rather a group for sharing and offering data.

“Yes, we’ve observed actors who were selling Japanese PII data or interested in purchase,” Bondarenko continues. “However [we] have never observed at such scale.”

Given the number of sources and different types of data included, it’s likely the data was taken via opportunistic compromise and not targeted attacks. The means of obtaining this data have not been confirmed, but Bondarenko says one possible way would be collecting data from previous public leaks and taking over victims’ accounts. Motivation was likely financial gain.

The dataset includes names, credentials, email addresses, birthdates, phone numbers, and home addresses. The data seemingly comes from somewhere between 11 and 50 Japanese websites across industries including financial, retail, food and beverage, transportation, and entertainment. One folder indicated its data was collected between May and June 2016; another showed data acquired in May and July 2013.

The actor claims all credential sets are unique and priced the full dataset at ¥1,000 (about $150).

In a random sample of 200,000 leaked email addresses, most were previously leaked in major data breaches, a sign the addresses included in this dataset were not specifically created for it. Since most of the leaked data didn’t come from one specific leak or public website, researchers don’t think the actor scraped the info from other data leaks and resold it as a new product.

“The data was extremely varied and not available through publicly available data sources; therefore, we believe that the advertised data is genuine,” researchers explain in a report.

That said, they do believe the number of real and unique credentials is lower than the actor claims. In a sample of 190,000 credentials, researchers found more than 36% contained duplicate values, and a significant number of email addresses appeared to be fake. Several actors commented on the ad to express interest in buying the data; the same actors later posted negative feedback, claiming they didn’t receive the product advertised.

Most of the information advertised is commonly stored on websites alongside customer login and profile information. Researchers didn’t observe the actor selling sensitive email or business data that would indicate he or she had access beyond servers connected to a site or web portal.

Bondarenko says the team hasn’t noticed any similar activity from a specific group in China. The actor behind this listing had been active for some time and was selling the data throughout that period.

“However, there are no other insights available for the actor because he became inactive recently, so we’ve been closely monitoring to understand the reason behind that and potentially getting additional insights,” he adds.

Since much of the data advertised had been exposed in large leaks, researchers don’t think this specific dataset will enable large-scale cyberattacks toward the people whose credentials are included. It is worth noting the leaked PII could be used to target other entities if those people reused credentials between the compromised sites and other personal or business accounts.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/actor-advertises-japanese-pii-on-chinese-underground/d/d-id/1331847?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ex-CIA man fingered as prime suspect in Vault 7 spy tool manuals leak

A former CIA employee has been named as the prime suspect behind last year’s leak of thousands of top-secret documents on the agency’s hacking practices.

According to the Washington Post, court documents name Joshua Adam Schulte as the person authorities believe to be behind the massive Vault 7 online dump of CIA internal documents and manuals.

Transcripts [PDF] from an investigation contain multiple references to search warrants related to the Vault 7 case.


“In March of 2016, there was a significant disclosure of classified material from the Central Intelligence Agency. The material that was taken was taken during a time when the defendant was working at the agency,” prosecuting attorney Matthew Laroche is quoted as saying.

“The government immediately had enough evidence to establish that he was a target of that investigation. They conducted a number of search warrants on the defendant’s residence.”

Another January transcript [PDF] made public also notes that attorneys were discussing “national security evidence that might be present in the case.”

Here’s where things get tricky: the government says it does not have enough evidence to charge Schulte with the leak. However, he is facing unrelated charges in the New York Southern District court for possession and distribution of child abuse images.

He has pleaded not guilty to the charges.

The report says that, while the government thinks Schulte was the one who handed the cache of documents over to WikiLeaks, they do not currently have enough evidence to bring charges. Rather, he is being charged with operating a server that contained a 54GB container of child abuse content (we’re not going to label it as ‘pornography’ out of respect for adult entertainment performers).

Schulte’s lawyers have argued that he simply ran a public server and had no idea as to the contents of the encrypted container. Interestingly, court transcripts show that Schulte’s team has offered his work with the CIA, and the rigorous screenings that come with it, as arguments in his defense.

According to the report, Schulte worked for the CIA’s engineering development group until 2016, a position that would have given him access to the thousands of agency documents that were handed over to WikiLeaks in 2017.

That cache would eventually be disclosed as the “Vault 7” data dump. While it was embarrassing for the CIA to lose so many documents, the dump itself provided little in the way of juicy intel: mostly it just showed that, yes, the CIA engages in covert intelligence operations.

Most notably, the dump included details on hacking tools the agency used to compromise Windows, MacOS and iOS devices. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/15/vault_7_leak/

LocationDumb: Phone tracker foul-up exposes world+dog to tracking

Updated The parade of bad privacy news this week has managed to get even worse: one of the companies caught up in the sale-of-phone-locations-for-cash scandal was found to have a publicly exploitable bug on its website.

Researcher Robert Xiao says LocationSmart was running a site riddled with vulnerabilities that could allow anyone to look up the location of virtually any mobile phone in the US. Xiao says he reported the bug to the company, which has since patched it on its site.

Xiao, currently at Carnegie Mellon University (he’s set to become an assistant professor at the University of British Columbia this Fall), found that a demo feature the company offers on its site could be abused to look up the location of anyone without their knowledge.

LocationSmart was among the companies dragged into the public eye this week when it was named among the location-tracking sources used by Securus, a US telco accused of illegally giving tracking data to police. LocationSmart pitches its services for areas like opt-in marketing, company device management, and Internet of Things services.

To help sell its tracking services (for legitimate uses), LocationSmart allows users to perform a “demo” search by entering their own phone number, replying to an opt-in test, then seeing their own location.

Normally, the opt-in feature would protect user privacy by only letting a user track a phone they owned. Unfortunately, as Xiao found, simply editing one line of the POST request sent to the site – asking for the location as JSON instead of an XML snippet – bypasses this check.

“Essentially, this requests the location data in JSON format, instead of the default XML format,” Xiao explains.

“For some reason, this also suppresses the consent (‘subscription’) check.”
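The likely root cause is a consent check that lives in only one response-format code path. The following is a hypothetical reconstruction of that class of server-side bug – the endpoint, numbers, and logic are invented for illustration, not LocationSmart’s actual code:

```python
# Hypothetical sketch: the consent ("subscription") check is enforced
# only on the default XML path, so requesting a different output
# format skips it entirely.

CONSENTED = {"+15551230001"}                       # numbers that completed opt-in
LOCATIONS = {"+15551230001": (45.52, -122.68),
             "+15551239999": (40.71, -74.01)}

def lookup(number: str, fmt: str = "xml"):
    if fmt == "xml":
        # Consent is checked here...
        if number not in CONSENTED:
            return "<error>no consent</error>"
        lat, lon = LOCATIONS[number]
        return f"<loc lat='{lat}' lon='{lon}'/>"
    elif fmt == "json":
        # ...but the JSON path returns a location with no check at all.
        lat, lon = LOCATIONS[number]
        return {"lat": lat, "lon": lon}

# A phone that never opted in is refused via XML but exposed via JSON:
print(lookup("+15551239999", "xml"))   # <error>no consent</error>
print(lookup("+15551239999", "json"))  # {'lat': 40.71, 'lon': -74.01}
```

The fix is equally simple in outline: perform authorization once, before any format-specific serialization runs.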

Xiao also provided a proof of concept script to show how the (since patched) vulnerability could be exploited in the wild.

LocationSmart did not respond to a request for comment on the matter. ®

Updated to add

LocationSmart has confirmed it had learned of the issue through Xiao and had remedied it prior to the public disclosure. The company said that it did not believe anyone else had exploited the flaw to view user details.

“LocationSmart is continuing its efforts to verify that not a single subscriber’s location was accessed without their consent and that no other vulnerabilities exist,” the company told The Register.

“LocationSmart is committed to continuous improvement of its information privacy and security measures and is incorporating what it has learned from this incident into that process.”


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/18/phone_tracker_foulup/

How to Hang Up on Fraud

Three reasons why the phone channel is uniquely vulnerable to spoofing and what call centers are doing about it.

With cybercrime skyrocketing over the past two decades, companies that do business online — whether retailers, banks, or insurance companies — have devoted increasing resources to improving security and combatting Internet fraud. But sophisticated fraudsters do not limit themselves to the online channel, and many organizations have been slow to adopt effective measures to mitigate the risk of fraud carried out through other channels, such as customer contact centers. In many ways, the phone channel has become the weak link.

Most contact centers have continued to rely heavily on knowledge-based authentication to grant callers access to their accounts. However, the ready availability of personal information, either stolen in data breaches or gleaned from social media, makes it increasingly easy for criminals to impersonate customers. Add in some Caller ID or automatic number identification (ANI) spoofing, and fraudsters are well on their way to deceiving call center staff and taking over customer accounts.

The phone channel is uniquely vulnerable to spoofing for a number of reasons:

1. Easy creation and manipulation of call signaling data. An incoming phone call carries signaling data with billing and routing information. Spoofing – changing or falsifying that call-signaling data – has become an epidemic because it is so simple to do, using any of dozens of software tools such as FreeSWITCH or Asterisk. When a criminal originates a call, he or she can create all the signaling data needed to mimic a legitimate calling number.

2. Lack of encryption. Most websites today encrypt communications end to end, from the browser to the web server. In contrast, very few telephone calls are encrypted end to end, because the security infrastructure that is mature on the web is still in its infancy within the telephone network. Getting encryption in place is a slow process: it requires compatible and reliable networking capabilities among all the carriers in the world. In the meantime, the vast majority of telephone calls and their call-signaling data are sent in plain text and can be manipulated.

3. Many opportunities to launch attacks. There are thousands of telephone carriers in North America alone, and tens of thousands of global partners that offer access to place calls. Fraudsters exploit carriers and partners with lax security practices or no enrollment requirements. This enables criminals to remain anonymous and leaves them free to launch attacks without any repercussions.

By iterating attacks with multiple signatures on different access points, criminals will find a combination that can perfectly mimic calls from major carriers. Once successful, they can mimic calls from all the customers of that carrier.

Knowledge-based authentication, or KBA, is just as vulnerable because of fraudsters’ relatively easy access to the personal information needed to correctly respond to call center agents’ identity interrogation. This information can be purchased on the Dark Web or unearthed by simply scouring the social media sites of an account takeover target.  

So, how do we forge a more secure path forward? The basic answer, no matter the particular security solutions involved, is multifactor authentication. Multifactor authentication is a strategy in which inherence and/or ownership factors (that is, something a person is – such as their voice – and/or something in their physical possession) are used to verify a caller’s identity, thus reducing or eliminating reliance on KBA and spoof detection. A bank would never allow ATM use without knowledge of a PIN and ownership of a physical debit card, and it’s time for companies to adopt the same approach to secure their phone channels.
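The knowledge-plus-ownership combination can be sketched in a few lines. This is a minimal illustration, not any vendor’s product: the secret, challenge, and KBA answer are all invented, and the ownership factor is modeled as an HMAC token that only an enrolled device could compute.

```python
import hmac, hashlib

# Hypothetical sketch of pairing KBA (knowledge) with a device-bound
# token (ownership). A fraudster holding stolen PII can answer the KBA
# question but cannot compute the token without the enrolled device.

DEVICE_SECRET = b"provisioned-at-enrolment"   # stored on the caller's device

def device_token(challenge: str) -> str:
    # Ownership factor: a keyed hash over a per-call challenge.
    return hmac.new(DEVICE_SECRET, challenge.encode(), hashlib.sha256).hexdigest()

def authenticate(kba_answer: str, challenge: str, token: str) -> bool:
    knows = kba_answer == "mother's maiden name"                 # knowledge factor (weak on its own)
    owns = hmac.compare_digest(token, device_token(challenge))   # ownership factor
    return knows and owns                                        # both required

# Stolen PII alone fails the ownership check:
print(authenticate("mother's maiden name", "c1", "deadbeef"))          # False
# The legitimate caller's device can compute the token:
print(authenticate("mother's maiden name", "c1", device_token("c1")))  # True
```

The point of the sketch is the final conjunction: neither factor alone grants access, which is exactly what defeats an attacker armed only with breached personal data.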

Multifactor authentication has long been recognized as more secure, and the available tools are becoming increasingly sophisticated. For example, to incorporate an inherence factor, which uses a physical attribute of a caller, contact centers can deploy voice recognition systems. These systems obtain a voice print from a caller and compare it to a reference voice print to make an authentication determination. (Note: TRUSTID offers pre-answer caller authentication as an alternative to KBA and as part of a multifactor authentication solution. Many companies in the industry provide additional caller authentication solutions for multifactor authentication.)  

A complementary technology uses a caller’s phone as an ownership-based authentication token. This approach audits all phone calls, devices, and line types from within the global telephone network to ensure that the phone call and device are real and unique and can thus provide a deterministic authentication outcome in the form of an ownership authentication token.

In transitioning to an optimally secure authentication solution, voice biometrics or phone ownership authentication can be paired with KBA to create a quick-fix two-factor authentication approach. Our recent survey of contact center professionals showed that the majority of organizations planning to move to multifactor authentication expect to do so by adding a new factor to their existing KBA process.

Companies that want to gain their customers’ trust must show that they take information security seriously. This means not only employing robust measures to prevent data breaches but also implementing multifactor authentication systems to ensure that personally identifying information stolen from other companies or gleaned from social media profiles will no longer empower fraudsters to access customer accounts.


Patrick Cox is chairman and CEO of TRUSTID, which enables companies to increase the efficiency of their fraud-fighting efforts through pre-answer caller authentication and the creation of trusted caller flows that avoid identity interrogation, allowing resources to be focused … View Full Bio

Article source: https://www.darkreading.com/endpoint/authentication/how-to-hang-up-on-fraud/a/d-id/1331829?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple