
On-Premise Security Tools Struggle to Survive in the Cloud

Businesses say their current security tools aren’t effective in the cloud but hesitate to adopt cloud-based security systems.

Cloud usage is growing faster than businesses’ ability to secure it. While IT pros are quick to point out the benefits of SaaS applications, they are hesitant to adopt cloud-specific security tools. At the same time, their existing security systems are putting cloud-based data at risk.

Most (64%) large organizations say SaaS adoption is outpacing security, reports iboss in its new 2018 Enterprise Cloud Trends report. On average, about one-fifth of enterprise applications are SaaS, and that share is expected to reach 36% per business within the next two to three years.

All of iboss’ respondents say there is at least one benefit to using SaaS applications over physical software. Their reasons include speed (71%), user-friendliness (58%), data storage capacity (49%), heightened productivity (43%), and data accessibility (40%). They are most commonly using SaaS for email (63%), data loss prevention (59%), and file sharing (59%).

Employees expect to use SaaS in the workplace and they’ll continue to do so. However, 91% of respondents say their organizations’ security policies need to improve if they’re going to operate in a cloud environment. One in ten says a “complete overhaul” is needed.

Current Tools Aren’t Cutting It in the Cloud

Security in the cloud was a challenge for 97% of respondents in a new global survey by Sumo Logic, entitled 2018 Global Security Trends in the Cloud. Most report a lack of tools, cross-functional collaboration, and resources to gain insight into enterprise security.

Nearly all (93%) respondents have issues using security tools in the cloud. About half (49%) say existing tools aren’t effective in their cloud environments, saying that too many tools make it hard to know what to prioritize. Forty-five percent say they can’t investigate threats in a timely manner because of poor integration. Respondents also say different tools give conflicting information, and cloud-specific tools are both expensive and hard to learn.

“Legacy, on-prem security tools simply aren’t designed for the borderless networks most large organizations use today,” says iboss cofounder and CEO Paul Martini. “On-prem solutions require all network traffic to be routed through physical security appliances at headquarters, an incredibly expensive and inefficient process.”

Sumo Logic found 87% of businesses struggle to use on-prem SIEM in the cloud for several reasons. More than half (51%) say they can’t effectively assimilate cloud data and threats, 34% say using on-prem tools in the cloud is too expensive, and 33% say deployment and usage are difficult. Only 17% say they don’t struggle to use on-prem SIEM in the cloud.

When the SIEM was originally built, it was intended for security data, says Sumo Logic CSO George Gerchow. It was primarily used by security teams. Now, these systems need to be more transparent so developers and operations employees can access the data. As businesses rely on cloud services like Office 365, Salesforce, and Workday, they’re realizing they need to change.

“They’re finally starting to learn they need something that’s going to be scalable, elastic, and give visibility across modern-day applications,” Gerchow explains.

Using on-prem tools in the cloud is expensive, he adds. Collecting data from a cloud-based environment, importing it for analysis, then pushing it back to the cloud is inefficient and costly.

The demands of cloud security are also putting pressure on the structure of security teams. More than 60% of Sumo Logic respondents say cloud security demands broader technical expertise, 54% say they need greater cross-team coordination, and 51% say their staff is overloaded. Overall, 97% of respondents face organizational challenges with cloud security.

Switching to SaaS Security: Why Wait?

Despite the enthusiasm around SaaS applications, around half (49%) of iboss’ respondents report they’re hesitant to adopt SaaS-based security tools.

“Because they believe every SaaS solution requires them to leverage multi-tenant shared cloud infrastructure, companies are typically hesitant to adopt SaaS security tools due to data privacy concerns,” says Martini. Those in industries like financial services and healthcare are also worried about regulatory control, he adds.

However, companies that don’t switch to cloud-based security will forgo many of the benefits SaaS applications provide. More employees demand the flexibility to use cloud applications to work remotely, and on-prem security tools prevent them from doing so securely.

“A risk in using cloud-based security tools is around knowledge and education,” says Gerchow. “We just don’t have enough of it out there. Moving to the cloud, [businesses] just don’t have the skill sets to understand how these tools work.”

Adopting cloud-based security tools may require a learning curve, but Gerchow warns companies that sticking with on-prem tools amid the move to cloud can be dangerous.

“In my mind, the biggest risk is, you’ll only be looking at part of the environment,” he explains. “You’re not going to get a holistic, 360-degree view of what’s taking place.”

The pressure to embrace SaaS security will increase as companies collect larger amounts of data, Gerchow continues. Cloud-based solutions can scale to handle larger data stores. If you’re managing workloads in AWS, for example, and scale from 10 terabytes of data, to 40, to 100, you won’t be able to secure it all with an on-prem security system.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/on-premise-security-tools-struggle-to-survive-in-the-cloud/d/d-id/1331501?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘SirenJack’ Vulnerability Lets Hackers Hijack Emergency Warning System

Unencrypted radio protocol that controls sirens left alert system at risk.

The sound of an emergency alert siren can be a nightmare soundtrack to the millions who live in areas subject to hurricanes, tornados, earthquakes, or other natural disasters. A recently disclosed vulnerability in the emergency warning system used by San Francisco and other municipalities could allow a threat actor to take control of the system, sound false alarms, or block legitimate warnings.

While the vendor – ATI – says it has now patched the so-called SirenJack vulnerability, which stemmed from an unencrypted radio protocol, the process of its discovery could have implications for other locations.

Balint Seeber, a researcher with Bastille, began researching San Francisco’s warning siren system shortly after moving to the city in 2016. Noticing poles with sirens attached scattered throughout the city, and noting that the hardware for the sirens included radio antennae, Seeber was curious about the system’s security.

After realizing that there was a system test every Tuesday, Seeber first began looking for the system’s radio frequency. “I started every week, capturing and analyzing large chunks of the radio spectrum with a view to trying to find this one unknown signal amongst hundreds, maybe thousands, of signals across the spectrum and that took some time,” he says.

Seeber was surprised to find that the frequency used by the system is not one normally associated with public service or public infrastructure control. It is, instead, one that is close to those used by radio amateurs.

“I’ve demonstrated that even a $30 to $35 handheld radio you can buy from Amazon that is used by radio hobbyists — like a more enhanced walkie-talkie — is perfectly capable of perpetrating an attack when combined with a laptop,” he says.

Once the frequency was known he began looking at the transmission itself and he soon found that the control signals were being sent with no encryption at all. That meant that anyone willing to put in the sort of effort he had made could analyze and hijack control of the system. Seeber then traveled to Sedgwick County, Kansas, where a similar system was in use, to see if the vulnerability also existed there. “The findings were consistent there and I did see the same pattern. And so I was able to confirm that their system was also vulnerable,” he says.

While each system is customized to a great extent, Seeber says that an attacker could use their knowledge of the protocol to turn pre-programmed alerts on or off. In addition, he says that the system has a direct public-address mode, so it is possible that an attacker could use the infrastructure to broadcast an illicit message to the public over these public speakers.

At that point, Seeber and Bastille notified ATI, the system’s vendor, of the SirenJack vuln. Seeber is eager to point out that the notification was in line with ethical analyst behavior. “We conducted this process with responsible disclosure,” he says, adding, “That means that we write our findings up and disclose them privately to the vendor, which we did in early January. Then we provide 90 days during which they’re able to take those findings and prepare any remediation steps.”

In a statement, ATI’s CEO, Dr. Ray Bassiouni, said, “ATI is fully supportive of all of our clients and will be on standby if anyone is concerned about hacking or vulnerabilities in their system.”

Seeber says that while Bastille was not asked to test the patch ATI provided to San Francisco, he has seen work on the pole-based components and has noticed random traffic within the signals, traffic that indicates at least some level of encryption is now in place.

“We don’t want the public to lose confidence in the system and the government’s ability to handle emergencies,” Seeber says. He encourages more government agencies to test their emergency notification systems to avoid surprises in the future.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/iot/sirenjack-vulnerability-lets-hackers-hijack-emergency-warning-system/d/d-id/1331502?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Another company’s been harvesting Facebook user data

Déjà data-analytics vu: Facebook’s suspended yet another firm for dressing up its personal-data snarfing as “nonprofit academic research,” in the form of personality quizzes, and handing over the data to marketers.

The company, Cubeyou, a la Cambridge Analytica (CA), pasted the label “for non-profit academic research” onto its personality quizzes, CNBC reported on Sunday.

One of Cubeyou’s quizzes, “You Are What You Like,” was created in conjunction with the University of Cambridge, as was the psychographic data collected by the Facebook quiz thisisyourdigitallife.

Another version of Cubeyou’s quiz, named “Apply Magic Sauce,” states that it’s only for “non-profit academic research that has no connection whatsoever to any commercial or profit-making purpose or entity.” That sounds an awful lot like thisisyourdigitallife, which billed itself as “a research app used by psychologists.”

Cambridge University professor Aleksandr Kogan’s Facebook license was only to collect data for research purposes, not to pass on to a commercial outfit like CA. In violation of Facebook’s terms, he passed users’ data on to CA for targeted political ad marketing in the 2016 US presidential election. Similarly, Cubeyou sells data to ad agencies that want to target certain Facebook user demographics. It’s not what you’d call cloak and dagger: the data analytics firm’s site advertises its wares as “All the best consumer data sources in one place.”

Our platform brings together the most robust consumer data sources available, both online and offline. Leverage social media statistics, syndicated studies, government surveys, and more – even your own data.

One of many examples:

DEEP: Go deeper than you’ve ever thought possible, mixing demographics, psychographics, lifestyles, interests and consumption traits to pinpoint the exact audience you’re looking for. Get hyper-local with over 10 Million panelists distributed across 950 US metro areas. ex. Millennial Gamers in San Francisco that purchase electronics at BestBuy

The site says that the company has access to personally identifiable information (PII) such as first names, last names, emails, phone numbers, IP addresses, mobile IDs and browser fingerprints. CNBC also dug into cached versions of the site from 19 March that said that Cubeyou also keeps age, gender, location, work and education, and family and relationship information.

It keeps Facebook users’ activity, as well: likes, follows, shares, posts, comments, check-ins and mentions of brands/celebrities. CNBC found on the cached site that such company interactions are tracked back to 2012 and are updated weekly. From the site:

This PII information of our panelists is used to verify eligibility (we do not knowingly accept panelists under the age of 18 in our panel), then match and/or fuse other online and offline data sources to enhance their profiles.

CNBC said it gave Facebook a heads-up about Cubeyou after it found that the company was using these CA-like tactics. After CNBC showed Facebook the Cubeyou quizzes and terms, Facebook said that it would suspend the company from the platform and investigate.

CNBC wrote that the case of Cubeyou “suggests that collecting data from quizzes and using it for marketing purposes was far from an isolated incident,” and that Cubeyou could get away with it, just as Kogan and former CA employee Christopher Wylie did, because Facebook didn’t lift a finger to stop the harvesting of users’ data without permission until CNBC pointed it out:

[It] suggests the platform has little control over this activity.

That’s exactly what whistleblower Sandy Parakilas described. Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, calls Facebook’s lack of oversight over external developers “utterly horrifying.”

Facebook thanked CNBC for bringing Cubeyou to its attention. Ime Archibong, Facebook vice president of product partnerships, sent a statement that said that Cubeyou has been suspended pending an audit:

These are serious claims and we have suspended Cubeyou from Facebook while we investigate them. If they refuse or fail our audit, their apps will be banned from Facebook. In addition, we will work with the UK ICO [Information Commissioner’s Office] to ask the University of Cambridge about the development of apps in general by its Psychometrics Centre given this case and the misuse by [Aleksander] Kogan.

CNBC says that in the years before Facebook put a stop to it in 2015, Cubeyou, like the quiz that fed user data to CA, could scrape not just the data of those who agreed to take it, but also their friends’ PII.

That greatly expands the reach of these quizzes. Facebook said last week that independent estimates of how many users got dragged into CA’s nets were too low: the real figure may be as high as 87 million. In simpler terms: potentially, the public information of all US users.

As of Sunday morning, both of Cubeyou’s quizzes – a version of You Are What You Like and Apply Magic Sauce – could be found on Facebook, CNBC reports. Cubeyou’s response to CNBC’s findings: the company only worked with Cambridge University from December 2013 to May 2015, only collected data from that time, and hasn’t had access to new people who’ve taken the quiz since June 2015.

CEO Federico Treu said that the terms of usage on YouAreWhatYouLike.com are now more upfront about how collected information would be used. The terms now include the stipulation that data may be used “for academic and business purposes” (emphasis added) and shared with third parties, including research institutions. Plus, it would be disclosed only anonymously.

The University of Cambridge Psychometrics Center said in a statement that it wasn’t aware what Cubeyou was up to and said that it would contact the company to ask that it clarify its terms. The Center gets a lot of wannabe collaborating companies that name-drop, it said:

We have not collaborated with them to build a psychological prediction model – we keep our prediction model secret and it was already built before we started working with them. Our relationship was not commercial in nature and no fees or client projects were exchanged. They just designed the interface for a website that used our models to give users insight on [the users’] data. Unfortunately collaborators with the University of Cambridge sometimes exaggerate their connection to Cambridge in order to gain prestige from its academics’ work.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PyLK7fhXjDA/

YouTube illegally collects data from kids, group claims

YouTube is illegally making “substantial profits” from children’s personal data, according to a group of 23 child advocacy, consumer and privacy groups that have filed a complaint asking the Federal Trade Commission (FTC) to make it stop.

Kids are on the platform en masse, the group said, citing a study that found that 96% of children aged 6-12 are aware of YouTube and that 83% of children that know the brand use it daily. In fact, last year, YouTube topped the list of favorite online kid brands, according to the study:

For the second year in a row, YouTube leads all 347 cross-category brands evaluated in the BRAND LOVE® study, solidifying its position as the most powerful brand in kids’ lives. The platform’s ascent to the top is impressive, moving from a KIDFINITY score of 749 (and #86 ranking) in 2010 to the #1 brand that is disseminating trends, changing play patterns, and transforming the ways kids come of age.

No wonder kids have come to adore YouTube: the Google-owned company has been working hard to get their love and their little eyeballs on advertisements, the coalition says.

A case in point is YouTube Kids: launched in February 2015, it was designed to be a sanitized place where youngsters would be spared the hair-raising comments and content found on the rest of YouTube.

But YouTube recently found itself hiring thousands of moderators to review content on the broader site after nasty children’s content and child abuse videos got through both on YouTube and even on YouTube Kids.

Such moderation is not enough, say critics. The Guardian quoted Josh Golin, executive director of the Campaign for a Commercial-Free Childhood (CCFC), which is one of the groups that filed the complaint. Golin says YouTube is being disingenuous when it talks about children’s use of the platform:

For years, Google has abdicated its responsibility to kids and families by disingenuously claiming YouTube – a site rife with popular cartoons, nursery rhymes, and toy ads – is not for children under 13. Google profits immensely by delivering ads to kids and must comply with COPPA. It’s time for the FTC to hold Google accountable for its illegal data collection and advertising practices.

The complaint continues…

YouTube also has actual knowledge that many children are on YouTube, as evidenced by disclosures from content providers, public statements by YouTube executives, and the creation of the YouTube Kids app.

At the time that YouTube Kids launched, Product Manager Shimrit Ben-Yair said that YouTube developed the app because “Parents were constantly asking us, Can you make YouTube a better place for our kids?”

Another of many examples of YouTube’s “actual knowledge” that kids are on the platform is a keynote by Malik Ducard, YouTube’s Global Head of Family and Learning, who explained that YouTube rolled out the kids version “as a mobile experience because of this reality – that we’re all familiar with – 75% of kids between birth and the age of 8 have access to a mobile device and more than half of kids prefer to watch content videos on a mobile device or a tablet.”

The group is urging the FTC to investigate the matter, as the Children’s Online Privacy Protection Act (COPPA) makes it illegal to collect data from kids younger than 13 without parental consent.

However, this is exactly what is happening to under-13s who use YouTube – the group’s complaint says that Google collects personal information including location, device identifiers and phone numbers, and tracks them across different websites and services without first gaining parental consent, as is required by COPPA.

The coalition that filed a complaint with the FTC includes The Center for Digital Democracy, CCFC, Berkeley Media Studies Group, Center for Media Justice, Common Sense, Consumer Action, Consumer Federation of America, Consumer Federation of California, Consumers Union (the advocacy division of Consumer Reports), Corporate Accountability, Consumer Watchdog, Defending the Early Years, Electronic Privacy Information Center (EPIC), New Dream, Obligation, Inc., Parent Coalition for Student Privacy, Parents Across America, Parents Television Council, Privacy Rights Clearinghouse, Public Citizen, The Story of Stuff Project, TRUCE (Teachers Resisting Unhealthy Childhood Entertainment), and USPIRG.

In response, a YouTube spokesperson had this to say in a statement sent to the Guardian:

While we haven’t received the complaint, protecting kids and families has always been a top priority for us. We will read the complaint thoroughly and evaluate if there are things we can do to improve. Because YouTube is not for children, we’ve invested significantly in the creation of the YouTube Kids app to offer an alternative specifically designed for children.


 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/w4jIZdTfl8o/

How to check if your Facebook data was shared with Cambridge Analytica

We’re sure you’ve heard of Cambridge Analytica (CA), the controversial company that harvested data from Facebook and then used it in ways that you almost certainly wouldn’t have wanted.

About a month ago, we reported how a CA whistleblower named Christopher Wylie claimed that the company had:

…exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.

Were you affected?

The thing is that CA didn’t crack passwords, break into accounts, rely on zillions of fake profiles, exploit programming vulnerabilities, or do anything that was technically out of order.

Instead, CA persuaded enough people to trust and approve its Facebook app, called “This is Your Digital Life”, that it was able to access, accumulate and, allegedly, abuse personal data from millions of users.

That’s because the app grabbed permission to access data not only about you, but about your Facebook friends.

In other words, if one of your friends installed the app, then they might have shared with CA various information that you’d shared with them.

But how to find out which of your friends (some of whom may be ex-friends by now) installed the app, and how to be sure that they remember correctly whether they used the app or not?

Facebook has now come up with a way, given that it has logs that show who used the app, and who was friends with them.

We used this link:

https://www.facebook.com/help/1873665312923476

After we’d logged into Facebook, we got the result we hoped for:

Based on our available records, neither you nor your friends logged into “This Is Your Digital Life.”

As a result, it doesn’t appear your Facebook information was shared with Cambridge Analytica by “This Is Your Digital Life.”

Phew – we’re OK.

Unfortunately, you might not be, but if you don’t yet know, it’s worth finding out, even if only to help you decide how to approach social networks, friending and sharing from now on.

What to do?

If some of your personal data has fallen into Cambridge Analytica’s hands, there’s nothing much you can do about that now – the horse has already bolted.

But it’s still worth locking the stable door, to tighten things up for next time.

As Facebook recommends, review and update the information you share with apps and websites via the Facebook settings page.

https://www.facebook.com/app_settings_list

Also, consider how much personal data you want to share with your Facebook friends – and how many friends you want to share it with.

Remember: if in doubt, don’t give it out.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kVjCio8ARZs/

How ODNS keeps your browsing habits secret

In computing, popular ideas have a way of becoming part of the bedrock and, once petrified, they’re extremely difficult to dislodge.

It doesn’t matter how good or bad an idea is, how well or how poorly something is coded, or how insecure it is: if something is widely adopted, it’s not going anywhere fast.

For example, despite its inherent insecurity, email remains central to our lives, and Flash, despite a ready replacement and countless should-have-been-fatal wounds, is dying as if there’s an Oscar on the line.

Finding new ideas is easy but replacing or retooling old ideas is hard.

That puts a premium on solutions that make things better, faster or more secure by working with, or adding to, what’s already there with minimal disruption.

And that’s why ODNS (Oblivious DNS) is such an interesting idea.

ODNS is the latest entrant to an increasingly crowded field of solutions looking to address the privacy problems of the global DNS (Domain Name System).

The trouble with DNS

DNS maps human-readable names for computers and services, like nakedsecurity.sophos.com, into the numeric IP addresses that computers need in order to communicate with each other, like 192.0.79.33.

Unfortunately DNS has a privacy problem – an adversary who can see DNS queries can tell who is browsing where, even if those people are taking care to encrypt the precise details of their browsing with HTTPS.

DNS traffic can be read in two ways: on-the-wire, as it passes over the internet, or when it arrives at its destination.

How DNS works

Let’s say you want to visit www.example.org with your web browser. In order to reach that site your computer has to know its IP address, information it can get via DNS.

It does this by asking the question “what’s the IP address for www.example.org?” of a recursive resolver, which might be operated by your ISP or perhaps a third party service, like CloudFlare’s 1.1.1.1 or Google’s 8.8.8.8.

In turn, the recursive resolver consults the server that knows about .org addresses, which refers it to the server that knows about .example.org addresses, which refers it to the authoritative server that knows about www.example.org.

The authoritative server answers the original question, and sends the IP address 93.184.216.34 to the recursive resolver, which sends it back to your computer.
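For readers who want to see that exchange from the client side, the lookup can be reproduced in a few lines of Python. This is a minimal sketch using only the standard library; it simply asks whatever recursive resolver your operating system is configured to use and prints the addresses it returns for www.example.org.

import socket

def resolve(hostname):
    # Ask the system's configured recursive resolver for the host's addresses.
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # the first element of sockaddr is the IP address.
    results = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    return sorted({sockaddr[0] for (_, _, _, _, sockaddr) in results})

if __name__ == "__main__":
    for address in resolve("www.example.org"):
        print(address)  # e.g. 93.184.216.34

Note that this only exercises the hop from your computer to the recursive resolver; the walk through the .org and example.org servers happens on the resolver’s side.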

All this traffic is visible on-the-wire to anyone on the same network as you and to your ISP (or your VPN provider) as it passes through their network.

It’s also visible at a number of destinations. The most useful vantage point is the DNS resolver, but traffic is also visible at the authoritative server and, often, at the other servers the recursive resolver consults:

This information can be visible to a 3rd party eavesdropping on the communication between a client and a recursive resolver, or even between a recursive resolver and an authoritative server. As this information is sent to each DNS server, DNS operators can also see clients’ information.

Securing DNS

There are a lot of schemes afoot to deal with DNS’s privacy issues but most solutions only tackle a part of the problem and some require the kind of retooling that could make adoption slow.

  • DNS Query Name Minimisation reduces the amount of information that recursive resolvers share with some DNS servers. Snooping at or between the resolver, ISP, or authoritative server is still possible, though.
  • DNS-over-TLS and DNS-over-HTTPS require retooling of existing systems to encrypt DNS traffic and prevent snooping on-the-wire (a minimal DNS-over-HTTPS lookup is sketched after this list). They solve that problem at a cost but do nothing to prevent traffic being monitored at the resolver or other destinations.
  • Recursive resolvers built for privacy, like 1.1.1.1, tackle the resolver problem by promising not to monitor you or keep logs of your activity hanging around. It’s nice, but privacy and security require stronger underpinnings than assurances of “you can trust us”.
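To make the DNS-over-HTTPS bullet concrete, here is the sketch promised above: a minimal DoH lookup in Python using only the standard library. It assumes Cloudflare’s public JSON endpoint at cloudflare-dns.com/dns-query (as documented by Cloudflare at the time of writing; the endpoint and response format could change), and it illustrates the trade-off just described: the query is hidden from on-path snoopers inside TLS, but the resolver itself still sees it.

import json
import urllib.request

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"  # Cloudflare's public DoH resolver

def doh_query(name, record_type="A"):
    # The DNS question travels inside an ordinary HTTPS request, so anyone
    # watching the wire sees only TLS traffic to the resolver, not the name.
    url = f"{DOH_ENDPOINT}?name={name}&type={record_type}"
    request = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(request) as response:
        answer = json.loads(response.read().decode())
    # The resolver, of course, still had to see the question in order to answer it.
    return [record["data"] for record in answer.get("Answer", [])]

if __name__ == "__main__":
    print(doh_query("www.example.org"))  # e.g. ['93.184.216.34']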

Enter ODNS.

Oblivious DNS

Oblivious DNS attempts to tackle spying on-the-wire and snooping at the resolver, or other destinations, without significant retooling.

Your computer still asks the question “what’s the IP address for www.example.org?” but this time it’s sent to a local ODNS resolver on your computer.

That local resolver creates a session key, encrypts the domain with it and then adds .odns to the end, giving you a completely unrecognisable domain name, like 9fab9405429045fe5.odns.

The session key is then itself encrypted using a public key provided by the authoritative server for the .odns TLD (Top-Level Domain). Anyone can encrypt something with the public key, but only the authoritative server can decrypt it (you’ll see why in a few paragraphs).

The encrypted session key is added to the DNS query and it’s sent on to a normal recursive resolver, such as the one operated by your ISP.

Snooping between you and the resolver, or at the resolver itself, is foiled because a voyeur can identify who’s making a request but not what the request is for, since the domain name is encrypted before leaving your computer.

Just as it would with any other domain, the resolver then identifies the authoritative server for 9fab9405429045fe5.odns and asks it for the corresponding IP address.

On receiving that request the specially equipped authoritative server uses its private key to decrypt the session key, and then uses the session key to decrypt 9fab9405429045fe5.odns, revealing www.example.org.

The authoritative server then acts like a recursive resolver: consulting the server that knows about .org addresses, which refers it to the server that knows about .example.org addresses, which refers it to the authoritative server for www.example.org, which provides the IP address.

The IP address is then passed back down the line to your computer.

Spying on-the-wire during this phase, or at any of those destinations, is foiled because, although a voyeur can now see what domain the requests are for, they can’t see who made them since all the requests appear to start at the .odns server rather than your computer.
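To make the two encryption steps concrete, here is a rough sketch of what the stub (local ODNS) resolver’s side might look like. This is not the researchers’ implementation, just an illustration under stated assumptions: it uses the third-party cryptography package (AES-GCM for the session key, RSA-OAEP to wrap that key for the .odns authoritative server) and ignores real-world details such as DNS label length limits and exactly how the wrapped key is packed into the query.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_odns_query(domain, odns_server_public_key):
    # 1. Generate a fresh session key and encrypt the requested domain with it.
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    encrypted_domain = AESGCM(session_key).encrypt(nonce, domain.encode(), None)

    # 2. The encrypted name (hex-encoded here for readability) becomes the query
    #    name under .odns, so the recursive resolver never sees the real domain.
    obfuscated_name = (nonce + encrypted_domain).hex() + ".odns"

    # 3. Wrap the session key with the .odns authoritative server's public key;
    #    only that server can unwrap it and recover the real domain.
    wrapped_key = odns_server_public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # The stub sends obfuscated_name as the query and attaches wrapped_key;
    # the recursive resolver forwards both without being able to read either.
    return obfuscated_name, wrapped_key

On the other end, the .odns authoritative server would reverse the steps: unwrap the session key with its private key, decrypt the name, resolve the real domain as described above, and send the answer back down the same path.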

Right now ODNS is only a prototype and according to the research team there’s work to be done:

…we have some future work to continue in this direction. We have implemented a prototype of ODNS to evaluate its feasibility and to measure its performance overhead in comparison to current DNS performance.

This puts it some way behind other DNS privacy solutions but, as I said at the beginning, solutions that require existing systems to change have a way of rolling out really, really slowly.

ODNS’s ability to work with DNS as it is, rather than as we wish it to be, could give it a head start, even though it’s starting from behind.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aTga6TkVs6k/

89% of Android Users Didn’t Consent to Facebook Data Collection

A new survey shows most Android users did not give Facebook permission to collect their call and text data.

Facebook is in hot water as more users find out how much of their personal data the social media giant has collected. In a new study by anonymous social app Blind, 89% of 1,300 Android users claim they did not give Facebook permission to gather their call and text history.

Following news of the Cambridge Analytica scandal, Android users began investigating the extent of Facebook’s data collection. They learned the company had been recording their call history records and SMS data, which the majority of them did not consent to. More than 30% of 2,600 users surveyed in March say they plan to delete their Facebook account, Blind reports.

Last week, Facebook shared several steps it’s taking to cut back on the amount of data it pulls from Android. CTO Mike Schroepfer says call and text history is part of an opt-in feature for Messenger and Facebook Lite on Android devices. The purpose is so Facebook can surface frequent contacts, he says, and the content of messages is not collected.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/mobile/89--of-android-users-didnt-consent-to-facebook-data-collection/d/d-id/1331493?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

20 Ways to Increase the Efficiency of the Incident Response Workflow

Despite all the good intentions of some great security teams, we are still living in a “cut-and-paste” incident management world.

I am a big fan of efficiency. Why do I love efficiency? Mainly because introducing efficiencies into processes saves time and money. There are other benefits as well, such as decreased chance for human error, improved accuracy, and increased productivity.

Unfortunately, in the incident response world, the overall state of inefficiency still reigns supreme. Despite the good intentions of some great security teams, we are still largely living in a “cut-and-paste” incident management world. I know quite a few talented teams in good organizations that struggle to introduce efficiencies into their incident response process.

That’s not to say that there aren’t a lot of people talking about introducing efficiencies into incident response. But somehow all that talk hasn’t resulted in a lot of change on the ground. There are likely a number of different reasons. Part of me wonders if there is a gap in understanding where organizations end up sinking time on manual incident management tasks. Perhaps it would help to enumerate areas where organizations are likely begging for efficiencies in their incident response workflow. Here are 20:


  1. Intelligent mapping between alerts/events and tickets/work queue. A security organization may work with billions of events, tens or hundreds of thousands of alerts, and hundreds of tickets on a given day. Alerts are typically generated automatically based on logic covering one or more events. One can debate the quality and fidelity of the alerts, but the process is relatively automated. But which of the alerts makes it into a ticket that needs to be worked? Unfortunately, that process is far less well-defined and for the most part, intensely manual.
  2. Pre-emptive prioritization. It has always amazed me that we wait until our alerts get into our work queue before thinking about prioritization. That means that our teams need to comb through tens of thousands of data points that add little to no value to our security postures in order to get to the data points that do add value. Why not think about the risks and threats we face and look to prioritize at the beginning of the content development process, before a single alert finds its way to the work queue?
  3. Front-loading analytics. Why run analytics over a mass of data whose context and meaning we know very little about? Why not run analytics strategically at the beginning of the content development process to produce higher-quality alerts and more contextually aware, meaningful data to send to the work queue?
  4. User identification. We all need to identify the user when looking into an alert. So, why do we continue to do this step manually?
  5. Asset identification. See No. 4.
  6. Vet the alert. Chances are that we check the same five or 10 things when vetting most alerts. We might even follow a written procedure instructing us how to vet a given family of alerts. So why do we not automate much of this work?
  7. Understand the alert. Once we vet an alert, we need to gain at least a basic understanding of what is going on. That typically involves reviewing the alert, along with additional supporting evidence. Why not pull in that supporting evidence automatically?
  8. Extract IOCs. Chances are that if you’re investigating something involving malicious code or a malicious link, you will want to extract the indicators of compromise for a variety of reasons. In 2018, don’t you wish you didn’t have to perform this step manually? (A simple automation sketch appears after this list.)
  9. Build the narrative. Decisions require context and understanding. So, as you build the narrative around the alert you’re investigating, wouldn’t you prefer to have much of the manual work done for you automatically so that you can focus on analysis and incident response?
  10. Analyze. Ah, analysis. Quite possibly your favorite part of the entire workflow. So, why are you still cutting and pasting into and out of Excel? Can’t we do better than that?
  11. Identify the infection/intrusion vector. As one result of all of our analysis, we’ll want to identify gaps in our security posture and close them. Chances are that once we identify any gaps, we will need to log in to one or more entirely separate systems to take any action toward closing those gaps.
  12. Pivot. Once we have isolated one or more hosts that are behaving oddly, we will likely want to pivot to study what those very hosts have been up to recently. Yup, you guessed it. That likely involves cutting and pasting, along with setting up additional queries.
  13. Look for related activity. Once we have a decent understanding of what we’re working with, another type of pivot that needs to happen is one that will enable us to look for similar types of activity elsewhere. I know you won’t be surprised when I tell you that we are once again looking at a lot of cutting and pasting, alongside additional queries.
  14. Identify/fill gaps in alerting. In the event that we missed something important, we will need to understand why and address the gap in alerting. Of course, we will need to drive this process ourselves. Wouldn’t it be nice if our tooling could suggest how we might identify something we missed more proactively in the future?
  15. Identify root cause. After any incident, it’s important to understand what the root cause was. But that is a very manual process. Wouldn’t it be nice to have some assistance here?
  16. Improve security posture. Say I discover a new set of malicious domains or something analogous. I might want to block it, sinkhole it, or do something else. Manually, of course.
  17. Include everything in the ticket. I once worked for a boss who said, “If it isn’t written down, it didn’t happen.” He was absolutely right. But why does recording everything in the incident ticket have to involve so much cutting and pasting?
  18. Report. Large or serious incidents typically involve a post-incident report. If I already recorded all of the important details in my incident ticket, why do I need to redo all that work to put together a respectable report that I can be proud to share with management, executives, and other stakeholders?
  19. Communicate. Clear, concise, and timely communication can make all the difference in handling an incident. So, why do I find myself cutting and pasting into emails instead of pulling automatically from the system out of which I’m running the incident?
  20. Extract lessons learned. No security program is perfect. We can always take lessons learned from anything we work with. Wouldn’t it be great to have a little help from our tooling?
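As one small illustration of the kind of step that lends itself to automation (the sketch promised in No. 8), the Python snippet below pulls a few common indicator types out of free text with regular expressions. It is deliberately simple and entirely hypothetical rather than a recommendation of any particular tool: it only handles IPv4 addresses, MD5/SHA-256 hashes, and “defanged” URLs of the hxxp://evil[.]example variety.

import re

# Very rough patterns for a handful of common IOC types.
IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "url":    re.compile(r"\bhttps?://\S+", re.IGNORECASE),
}

def extract_iocs(text):
    # "Refang" the obfuscations analysts commonly use when sharing indicators,
    # then run each pattern over the normalised text.
    normalised = text.replace("[.]", ".").replace("hxxp", "http")
    found = {}
    for ioc_type, pattern in IOC_PATTERNS.items():
        matches = sorted(set(pattern.findall(normalised)))
        if matches:
            found[ioc_type] = matches
    return found

if __name__ == "__main__":
    alert = "Beacon to hxxp://evil[.]example from 10.20.30.40, payload sha256 " + "a" * 64
    print(extract_iocs(alert))

Feeding each new ticket’s raw notes through something like this, and attaching the output to the ticket automatically, is exactly the sort of small efficiency the list above argues for.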


Josh (Twitter: @ananalytical) is an experienced information security leader with broad experience building and running Security Operations Centers (SOCs). Josh is currently co-founder and chief product officer at IDRRA and also serves as security advisor to ExtraHop. Prior to … View Full Bio

Article source: https://www.darkreading.com/perimeter/-20-ways-to-increase-the-efficiency-of-the-incident-response-workflow/a/d-id/1331430?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Jail for white collar pirates who stole from Oracle

The struggle between software giant Oracle and services company Terix has finally concluded with the latter’s CEO and co-founder Bernd Appleby being handed two years in jail.

A US tech exec being put behind bars is not an everyday occurrence but, then again, what Oracle accused Terix of doing was not a run-of-the-mill crime. According to Oracle’s 2013 accusation, along with a separate company Maintech, Terix had illegally obtained software patches and firmware from Oracle’s Solaris support site, secretly distributing them to their own customers on a commercial basis.

A serious accusation, which led to Maintech settling the case for $14 million in 2014. The following year, Terix was ordered to pay the even larger sum of $57 million. Oracle also won a separate judgment against support company Rimini Street, which earlier this year resulted in a $75 million award to Oracle.

But the payout didn’t end the case against Terix, who allegedly defrauded Oracle using the sort of cloak and dagger tactics that merited extra attention, according to recent court documents.

Terix allegedly set up three bogus shell companies which each bought a single license at low cost from Oracle, hiding their association with Terix. To maintain the deception, they received support from Oracle using “bogus email addresses and addresses, pre-paid telephones and pre-paid credit cards.” 

In total, 2,700 pieces of software IP worth $10 million were downloaded between 2010 and 2014 and used to support 500 Terix customers, who were unaware that the software had been obtained fraudulently.

Essentially, Terix had got hold of Oracle’s software on the cheap, reselling it at a big profit in support contracts that undercut Oracle. The scheme was alleged to have started as far back as 2005, four years before Oracle announced plans to buy Solaris developer Sun. “As the head of Terix’s executive management team and 70 percent co-owner of the company, Appleby was responsible for all aspects of the business,” said Benjamin Glassman, US Attorney for the Southern District of Ohio.

But it gets worse:

He designed the conspiracy and its evolution over almost 10 years and understood and directed all aspects of the criminal activity. As the scheme was uncovered, he instructed other company employees to devise ways to avoid detection.

Last week, after a guilty plea in 2017, a US court sentenced Terix CEO Appleby to pay $100,000 and spend two years in prison. Fellow executives were also sentenced: COO James A. Olding was handed a one-year sentence, sales director Lawrence Quinn Jr. was given a symbolic single day in jail, and technical services director Jason Joyce received two years’ probation. Oracle responded:

Oracle is pleased that the United States District Court for the Southern District of Ohio accepted the guilty pleas of James Olding and Bernd Appleby, the principals of Terix, for their roles in misappropriating Oracle’s intellectual property and sentenced them both to prison for their criminal acts.

Oracle takes violations of its intellectual property rights very seriously and, as demonstrated by Oracle’s lawsuits against Terix, Rimini Street and other IP violators, Oracle will not hesitate to go after those who do so.

If the intended message from six years of litigation was ‘don’t mess with Oracle’, it will have come across loud and clear.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OYAEmzuAjHM/
