Connecting The Dots With Quality Analytics Data

Security analytics practices are only as good as the data they are based on. If data simply isn’t collected, if it is of poor quality or accuracy, if it isn’t in a usable format, or if it isn’t contextualized against complementary data or risk priorities, then the organization that holds it will be hard-pressed to extract value from analytics.

“Your security analytics are only as accurate and useful as the data you put in,” says Gidi Cohen, CEO of Skybox Security. “If the data has gaping holes, misses important network zones, or lacks input from security controls, then you will have gaping holes in your view and miss key dependencies between the myriad security tools and processes you use.”

So what does a data-centric analysis process look like? It starts with recognizing that you have access to more relevant data than you think. Most organizations already hold everything they need to know themselves for analytics purposes, says Kelly White, vice president and information security manager of a top 25 U.S. financial institution, who shared best practices on the condition that his employer not be named.

[Do you see the perimeter half empty or half full? See Is The Perimeter Really Dead?.]

“If you just think about and internalize the amount of information your systems produce — just by the fact that they’re running on your network — if you think about all of the security information that your users produce as they go about their daily work, it’s not something that you have to go out and buy from somebody,” White says. “You don’t need to subscribe to a report. Really, everything you need to know yourself, you’ve got already.”

Organizations that get creative in sourcing data tend to get more value out of analytics than those that simply lump security system log data into a SIEM, or that treat threat intelligence from outside sources as interchangeable with security analytics.

Data sources that could help form more complete data sets include network footprint data, platform configuration information, login and identity management data, database server logs, and NetFlow data. White’s organization even uses a Google appliance to index and search unstructured data stores such as SharePoint servers, finding relevant information such as unstructured repositories of PII and creating a map of data that would otherwise present blind spots when assessing security risks.
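The idea behind White’s approach can be approximated in miniature. The sketch below is purely illustrative: the file paths, the regex patterns, and the build_pii_map helper are all invented, and a real deployment would tune patterns and add context checks to cut false positives. It scans a small corpus of unstructured documents for SSN-like strings and maps where PII lives.

```python
import re

# Hypothetical patterns; a real deployment would tune these and add
# context checks (keywords, checksums) to reduce false positives.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_document(path, text):
    """Return a map of PII types found in one document's text."""
    hits = {name: len(p.findall(text)) for name, p in PII_PATTERNS.items()}
    return {name: n for name, n in hits.items() if n}

def build_pii_map(corpus):
    """corpus: iterable of (path, text). Returns {path: {pii_type: count}}."""
    return {path: found for path, text in corpus
            if (found := scan_document(path, text))}

corpus = [
    ("sharepoint/hr/review.txt", "Employee SSN: 123-45-6789 on file."),
    ("sharepoint/eng/notes.txt", "Sprint planning notes, nothing sensitive."),
]
print(build_pii_map(corpus))  # only the HR document is flagged
```

The resulting map of PII locations is exactly the kind of otherwise-invisible input that can be fed into a risk assessment.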

Identifying potential internal sources of data is only the first step in ensuring that it can provide value to an analytics program. Organizations also must groom and prepare the data to make sure it is of reliable quality and in a useful format. This means doing a bit of quality assurance (a sort of pre-security analytics, as Mike Lloyd, CTO of RedSeal Networks, calls it) to make sure gaps are filled and sources are refined until their feeds are accurate enough to base operational assumptions on.

“If the data quality is bad, you have to do analysis on that first to decide what’s wrong with the data, how bad a problem it is, and what you can do about it to make it usable,” Lloyd says, explaining that the more data sources you combine to get slightly different views of the same environment, the easier this is to do. “When you combine data, you can criticize the data feed itself and not rush headlong into security analytics.”
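Lloyd’s cross-checking idea can be illustrated with a toy comparison of two inventories of the same environment (all addresses here are invented). Hosts seen talking in NetFlow but absent from the vulnerability scanner, or vice versa, are evidence of a gap in one feed or the other.

```python
# Assumed inventories from two independent feeds of the same network;
# set differences flag gaps in either feed.
scanner_hosts = {"10.0.0.1", "10.0.0.2", "10.0.0.3"}  # seen by vuln scanner
netflow_hosts = {"10.0.0.2", "10.0.0.3", "10.0.0.9"}  # seen in NetFlow

unscanned = netflow_hosts - scanner_hosts  # talking on the network, never scanned
silent = scanner_hosts - netflow_hosts     # scanned, but no traffic observed

print(sorted(unscanned))  # ['10.0.0.9']
print(sorted(silent))     # ['10.0.0.1']
```

Each discrepancy is a question about the feed itself, which is exactly the criticism Lloyd recommends doing before trusting the analytics built on top.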

And this kind of criticism of data feeds shouldn’t just happen on the front end of the analytics process; it should be an ongoing routine, because, as Rajesh Goel, CTO of Brainlink International, points out, changes from infrastructure vendors can greatly affect data feeds.

“Vendor updates, patches and changes can change the meaning of the raw data generated and subsequent analytics. Some vendors communicate the changes clearly, others bury them in massive updates, and do NOT take into account that the events being generated have changed,” he says. “It’s important to confirm/validate that we’re still getting the needed data and that the value of threats or events hasn’t changed.”
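Goel’s confirm/validate step can be automated as a lightweight schema check run continuously against each feed. The field names and severity vocabulary below are invented for illustration, not any vendor’s actual event format.

```python
# Hypothetical expected schema for one feed; update this alongside
# vendor patch notes so drift is caught rather than silently absorbed.
EXPECTED_FIELDS = {"timestamp", "src_ip", "dst_ip", "event_id", "severity"}
KNOWN_SEVERITIES = {"low", "medium", "high", "critical"}

def validate_event(event):
    """Return a list of problems with one parsed event dict."""
    problems = []
    missing = EXPECTED_FIELDS - event.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    sev = event.get("severity")
    if sev is not None and sev not in KNOWN_SEVERITIES:
        problems.append(f"unknown severity: {sev!r}")
    return problems

ok = {"timestamp": 1, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
      "event_id": "4625", "severity": "high"}
changed = {"timestamp": 1, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
           "event_id": "4625", "severity": "sev_3"}  # vendor renamed its levels

print(validate_event(ok))       # no problems
print(validate_event(changed))  # flags the unknown severity value
```

Alerting on validation failures turns a silent change in event meaning into a visible operational issue.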

Even if the data itself is good, a particular piece of software or hardware may not output it in any format usable by a security analytics team.

“The data required to perform accurate and thorough security big data analytics exists; the challenge, however, is in having to consume vast amounts of dissimilar and proprietary formats,” says Jim Butterworth, CSO of HBGary.

This is why normalization may also play an important role in getting data ready for analytics prime time.

“In order for the data to be useful, it must be collected and normalized, so that all of the data is speaking the same language,” says Cohen. “Once the data is normalized, your analytical tools can operate on that data in a common way, which reduces the amount of vendor-specific expertise needed.”
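As a toy illustration of what Cohen describes, the sketch below maps two invented vendor log formats onto one common schema; once events speak the same language, a single query works across both sources.

```python
# Two hypothetical vendor formats normalized onto one common schema.
# Field names and action codes are invented for illustration.
def normalize_vendor_a(raw):
    return {"ts": raw["time"], "src": raw["source_address"],
            "action": raw["verdict"].lower()}

def normalize_vendor_b(raw):
    return {"ts": raw["ts"], "src": raw["src_ip"],
            "action": {"1": "allow", "2": "deny"}[raw["action_code"]]}

events = [
    normalize_vendor_a({"time": 100, "source_address": "10.0.0.5",
                        "verdict": "DENY"}),
    normalize_vendor_b({"ts": 101, "src_ip": "10.0.0.5",
                        "action_code": "2"}),
]

# One vendor-agnostic query over both feeds:
denies = [e for e in events if e["action"] == "deny"]
print(len(denies))  # 2
```

The payoff is exactly what Cohen notes: analysts query the common schema and need far less vendor-specific expertise.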

However, organizations shouldn’t worship at the normalization altar to the point where it holds back nimble analysis.

“I would argue that you don’t necessarily have to normalize everything. There’s going to be a lot of unstructured data that doesn’t necessarily have to be structured,” says Michael Roytman, data scientist for Risk I/O. He explains that, for example, an organization might take a piece of external data from a report like the DBIR saying its industry is 12% more likely to experience something like a SQL injection attack, and add a ‘fudge factor’ that increases the weight of those vulnerabilities. “It’s about looking at that data and figuring out a quick, easy and dirty way to apply that to your target asset.”
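Roytman’s ‘fudge factor’ can be sketched in a few lines. Everything here is assumed for illustration: the base scores, the vulnerability classes, and the 1.12 multiplier standing in for a ‘12% more likely’ figure taken from an external report.

```python
# Quick-and-dirty weighting: boost vulnerability classes an external
# report says are more likely for your industry (factor assumed here).
INDUSTRY_FACTORS = {"sql_injection": 1.12}

def weighted_score(vuln):
    return vuln["base_score"] * INDUSTRY_FACTORS.get(vuln["class"], 1.0)

vulns = [
    {"id": "V-1", "class": "sql_injection", "base_score": 7.5},
    {"id": "V-2", "class": "xss", "base_score": 7.5},
]

ranked = sorted(vulns, key=weighted_score, reverse=True)
print([v["id"] for v in ranked])  # the SQL injection finding now ranks first
```

The external data never gets normalized into the event pipeline; it simply reorders remediation priorities, which is the quick-and-dirty application Roytman describes.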

Have a comment on this story? Please click “Add Your Comment” below. If you’d like to contact Dark Reading’s editors directly, send us a message.

Article source: http://www.darkreading.com/connecting-the-dots-with-quality-analyti/240161778

DigiCert Announces Certificate Transparency Support

LEHI, UT (Sept. 24, 2013) – DigiCert, Inc., a leading global authentication and encryption provider, announced today that it is the first Certificate Authority (CA) to implement Certificate Transparency (CT). DigiCert has been working with Google to pilot CT for more than a year and will begin adding SSL Certificates to a public CT log by the end of October.

DigiCert welcomes CT as an important step toward enhancing online trust. For several months, DigiCert has been working with Google engineers to test Google’s code, provide feedback on proposed CT implementations, and build CT support into the company’s systems. This initiative aligns with DigiCert’s focus to improve online trust-including tight internal security controls, development and adoption of the CA/Browser Forum Baseline Requirements and Network Security Guidelines, and participation in various industry bodies that are focused on security and trust standards.

“DigiCert’s business is built on trust, and we are committed to lead the industry toward better practices that enhance online security,” said DigiCert CEO Nicholas Hales. “Certificate Transparency accomplishes this goal by shining a light on certificate issuance practices and building in a scalable early detection system that relies upon trusted, widely used technologies and standards. We applaud Google for its forward-thinking mindset in advancing CT closer to implementation.”

CT provides early detection and mitigation of misissued or rogue SSL Certificates because it requires certificates to be posted to a public log. Google welcomes DigiCert’s efforts to advance CT adoption and awareness.
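The release doesn’t spell out the mechanism, but the CT design (RFC 6962) makes each log an append-only Merkle hash tree, so anyone can verify that a given certificate is included under a published tree head. Below is a minimal, illustrative sketch of that inclusion check using the RFC’s hashing prefixes; it is not DigiCert’s or Google’s implementation, and it assumes a power-of-two-sized tree for simplicity.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def leaf_hash(entry):        # RFC 6962: leaf nodes are prefixed with 0x00
    return h(b"\x00" + entry)

def node_hash(left, right):  # interior nodes are prefixed with 0x01
    return h(b"\x01" + left + right)

def merkle_root(leaves):
    """Root of a power-of-two-sized list of leaf hashes (simplified)."""
    level = leaves
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf, index, path, root):
    """Recompute the root from a leaf and its audit path of sibling hashes."""
    acc = leaf
    for sibling in path:
        acc = node_hash(acc, sibling) if index % 2 == 0 else node_hash(sibling, acc)
        index //= 2
    return acc == root

certs = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
leaves = [leaf_hash(c) for c in certs]
root = merkle_root(leaves)

# Audit path for index 2 (cert-c): its sibling leaf, then the left subtree hash.
path = [leaves[3], node_hash(leaves[0], leaves[1])]
print(verify_inclusion(leaves[2], 2, path, root))  # True
```

Because the audit path is logarithmic in the log size, this check stays cheap even for logs holding millions of certificates, which is what makes the early-detection scheme scalable.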

“We’re pleased to receive DigiCert’s declaration of support for CT and are encouraged by their continued work with us to help improve online trust and security,” said Ben Laurie, creator of CT and software security engineer at Google. “This is an important step that we hope other CAs will follow as we look to provide greater structural integrity to the SSL/TLS Certificate system.”

For more information about DigiCert and Certificate Transparency, please visit http://www.digicert.com/certificate-transparency.htm.

About DigiCert, Inc.

DigiCert is a premier online trust provider of enterprise security solutions with an emphasis on authentication, PKI and high-assurance digital certificates. Headquartered in Lehi, Utah, DigiCert is trusted by a continually growing clientele of more than 70,000 of the world’s leading government, finance, healthcare, education and Fortune 500 organizations. DigiCert has been recognized with dozens of awards for providing enhanced customer value, premium customer support and market growth leadership. For the latest DigiCert news and updates, visit digicert.com, like DigiCert on Facebook or follow Twitter handle @digicert.

Article source: http://www.darkreading.com/privacy/digicert-announces-certificate-transpare/240161779

5 Steps To Stop A Snowden Scenario

No organization wants to believe one of its own could go rogue. But after being blindsided by the Edward Snowden leaks, even the highly secretive National Security Agency has been forced to overhaul its procedures to lock down just what its most privileged users can access and do with sensitive information.

As the Snowden case demonstrated, it’s not easy to detect an insider threat today. Some 54 percent of IT decision-makers say it’s harder to catch insider threats now than it was in 2011, and nearly half acknowledge that their organizations are vulnerable to a rogue insider attack, according to a new report (PDF) published today by Vormetric and co-authored by the Enterprise Strategy Group. The findings attribute this to factors such as more users, including contractors, accessing the network, and the loss of control over data that comes with cloud computing.

Privileged users a la Snowden are their biggest concern, with 63 percent saying their organizations are ripe for abuse by those users; some 45 percent of IT decision-makers say they’ve changed their view on insider threats in the wake of reports on Snowden’s leaks to the press.

“Up until this [Snowden] case, it was all about providing support, getting customers supported and getting data to the right people. It was not about analyzing [the admin’s] access,” says Bob Bigman, former CISO of the CIA. “To provide support, Snowden was given more access than he should have been given … What exacerbated it was that not only did he have access to his systems there, but systems he had privileges on that were trusted to other systems within NSA. That enabled him to jump [among] various systems … It was all done under the banner of customer support.”

NSA officials told National Public Radio that as a sys admin, Snowden had access to an NSA file-sharing location on the agency’s intranet in order to move sensitive documents to secure places on the network. The NSA didn’t catch him copying the files, however, and the agency has since implemented a “two-person rule” for access so that a lone wolf can’t leak sensitive information the way Snowden did.

“It’s human nature to hope for the best. But hope is not a security plan,” Bigman says.

Big-name companies are putting in place new insider threat prevention programs. Dell, for example, which was Snowden’s employer prior to his gig at NSA contractor Booz Allen Hamilton, coincidentally is beginning the rollout of its new insider threat prevention program, which has been in the works for the past two years. Dell calls the initiative its “knowledge assurance program.”

John McClurg, Dell’s CSO, says “insider risk” is a more appropriate term than “insider threat.”

“Not all insiders pose a threat. Many of them carry a vulnerability with them … that a threat vector might exploit, and some might become the threat vector,” says McClurg, who notes that avoiding false positives that misidentify an insider as malicious when instead his or her credentials are stolen, is important.

“You do an analysis of what gave way to the false positive,” says McClurg, who declined to comment on the Snowden case.

And like any advanced cyberattack, there’s no way to stop a determined rogue insider from stealing or leaking information—it’s all about minimizing the damage. “You put in layers that slow them down. Have an active detection capability in place,” says Larry Brock, former CISO at DuPont and president of Brock Cyber Consulting. “You have time to stop them in their tracks before they do damage,” says Brock, who also previously worked for the NSA.

[A determined user or contractor hell-bent on leaking data can’t be stopped, but businesses should revisit their user access policies and protections. See NSA Leak Ushers In New Era Of The Insider Threat.]

Rob Rachwald, senior director of market research at FireEye, who will present at Interop on best practices enterprises are adopting to prevent, detect, and catch insider misbehavior early, says the sys admin problem is really nothing new.

“I remember at one of my first jobs, a sys admin was busted for reading everybody’s email down there in the server room in the late ’90s,” Rachwald says. “It’s been going on forever. The big problem is sys admins are always being ‘defined’ by big companies like Microsoft and Oracle. They’ve put some security in [their software], but the fundamental problem is that they are not security companies.”

Here are some tips culled from Rachwald’s research as well as other security experts on how to trip up or catch a possible rogue insider in the act:

1. Work closely with the business side to ID critical information to protect—and loop in the senior execs.

Start small and think big, Rachwald says. “Quite often, security people come in with a little boil-the-ocean approach,” he says. Work with the various lines of business to pinpoint where the crown jewels reside and lock them down, he says.

“We found that from an alignment standpoint, good security people have made the problem very personal,” Rachwald says of research he conducted. “So they worked with the lines of business to understand the impact of what could go wrong: if this got breached, what would it do to your competitive situation or brand? They’re asking lots of those questions to make it personal.”

Dell’s McClurg notes that the first phase of Dell’s program was identifying where its critical data sits, ensuring it’s categorized or labeled, for instance. “The first phase of most everyone you talk to is: ‘what is the status, the environment?’ And call out those opportunities you need to improve, [such as] how you grapple with historical data points,” some of which could reside in access control systems, for example, he says.

Brock, meanwhile, says he’s seen companies assign a senior, non-IT person as a bridge to work closely with the CEO and security team to review security projects and progress. “Some organizations are reluctant to take this up to the senior leadership in the company. I believe that’s crucial. The CEO and [his or her] team really needs to understand these threats.”

2. Team up with your legal and human resources departments.
Make it as difficult as possible for an insider to go rogue by tying user policies in with the legal department and HR, Rachwald says.

One company in his study created legal processes that would trigger when an employee left. “If the [employee] were off-boarded [from the company’s systems], they’d give a list of things he had access to, apps. If any of this came up with the competition, it would be under scrutiny,” Rachwald says.

Have HR inform employees of the consequences of a competitor getting hold of stolen information, for instance. “A lot of companies are working closely with HR not just to implement policies around insider threat but also training” on the reasoning behind it, he says.

3. Decentralize your security department model.
Some large enterprises have embedded security staffers within the various lines of business so they forge closer ties with them and better understand their data security needs, FireEye’s Rachwald says.

“They could understand the line of business and work very carefully with the owners on what the important data is, what the important processes are, who the data owners are, and put processes in place,” he says. “There’s a big benefit when [security] people understand that business extremely well.”

The catch, of course, is that such a model isn’t realistic for resource-strapped smaller companies, which are stuck with a more centralized approach.

Next page: Schooling and revoking privileges

Article source: http://www.darkreading.com/vulnerability/5-steps-to-stop-a-snowden-scenario/240161758

Apple Touch ID Fingerprint Reader Hack Heightens Biometrics Debate

That didn’t take long.

The biometrics hacking team of the Chaos Computer Club (CCC) has defeated Apple’s Touch ID feature, a fingerprint reader unveiled last week as part of Apple’s announcement of the iPhone 5s. The move by Apple led some security experts to express hope that its adoption could lead to increased interest in biometric technologies among consumers. But CCC researchers say it’s proof that fingerprint readers should be viewed skeptically.

“We hope that this finally puts to rest the illusions people have about fingerprint biometrics,” says Frank Rieger, spokesman for the CCC. “It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token.”

News of the hack came roughly 24 hours after the phone became publicly available Sept. 20. Essentially, CCC researchers demonstrated that an attacker with physical access to the phone could take a picture or scan the fingerprints of the device’s owner and use that to create a mold of the fingerprint to launch an attack.

“First, the residual fingerprint from the phone is either photographed or scanned with a flatbed scanner at 2400 dpi,” the researchers note. “Then the image is converted to black and white, inverted and mirrored. This image is then printed onto transparent sheet at 1200 dpi.”

“To create the mold, the mask is then used to expose the fingerprint structure on photo-sensitive PCB material,” CCC hackers explain. “The PCB material is then developed, etched and cleaned. After this process, the mold is ready. A thin coat of graphite spray is applied to ensure an improved capacitive response. This also makes it easier to remove the fake fingerprint. Finally a thin film of white wood glue is smeared into the mold. After the glue cures the new fake fingerprint is ready for use.”

The researchers also outlined another version of the attack, but said it was less reliable.

Apple did not respond to a request for comment.

Though the CCC criticized the use of fingerprint scanners for authentication and derided them as a technology designed for “oppression and control,” Paul Zimski, Lumension Security’s vice president of solution marketing, says that the hack will probably not deter end users from leveraging the technology on their devices.

“Sure, it’s not highly secure, but the average end user will most likely still use and rely on the scanner,” Zimski says. “Trumping usability for security is somewhat of a universal constant in the consumerized world. If anything, this is also a good case for employing two-factor authentication.”

There’s an illusion of fingerprints as “some science-fiction thing” that is always highly accurate, says Michael Pearce, security consultant for Neohapsis. Unfortunately, he adds, that is not the case.

“They are problematic when used on their own to authenticate,” he says. “Further, because fingerprint measurements are never exactly the same, the manufacturer needs to balance an error rate for both letting people in falsely and locking them out wrongly. When most of your fingerprint measurements are going to be legitimate users every time they pick up their phone, you’re more concerned with the 9,999 times it’s the right user than the one time it’s the wrong one, and, as a result, you will lean on the permissive side if you want your product usable.”
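The tradeoff Pearce describes can be made concrete with back-of-the-envelope arithmetic. The error rates below are assumptions chosen for illustration, not Apple’s published figures.

```python
# Illustrative numbers only: assumed error rates, not any vendor's specs.
false_reject_rate = 0.02    # legitimate owner rejected 2% of the time
false_accept_rate = 0.0001  # impostor accepted 1 in 10,000 attempts

owner_unlocks = 10_000      # roughly a year of heavy phone use
impostor_attempts = 10      # a handful of casual attack attempts

expected_lockouts = owner_unlocks * false_reject_rate
expected_breakins = impostor_attempts * false_accept_rate

print(f"{expected_lockouts:.0f} annoyed-owner rejections per year")
print(f"{expected_breakins:.4f} expected impostor successes")
```

Even with a tiny false-accept rate, tightening it further multiplies owner lockouts, which is why a consumer product leans toward the permissive side.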

Ultimately, noted cryptographer Bruce Schneier argues, Apple is trying to balance security with convenience.

“This is a cell phone, not an ICBM launcher or even a bank account withdrawal device,” he blogs. “Apple is offering an option to replace a four-digit PIN — something that a lot of iPhone users don’t even bother with — with a fingerprint. Despite its drawbacks, I think it’s a good trade-off for a lot of people.”

Still, blogs Errata Security’s Robert Graham, the notion that the hack is too much trouble is “profoundly wrong.”

“Just because it’s too much trouble for you doesn’t mean it’s too much trouble for a private investigator hired by your former husband,” he blogs. “Or the neighbor’s kid. Or an FBI agent. As a kid, I attended science fiction conventions in costume, and had latex around the house to get those Vulcan ears to look just right. As a kid, I etched circuit boards. This sort of stuff is easy, easy, easy — you just need to try.”

Article source: http://www.darkreading.com/attacks-breaches/apple-touch-id-fingerprint-reader-hack-h/240161741

Post-NSA Revelations, Most Users Feel Less Safe

Recent revelations of the National Security Agency’s vast spying program have made users feel less secure, new data finds.

Some 65 percent of consumers, SMBs, large enterprises, and government agencies in the survey say they feel less safe now knowing that the NSA has access to electronic and phone records, while 26 percent are ambivalent, 4.5 percent feel safer, and 4 percent aren’t aware of the NSA program.

They consider government the biggest threat to their online privacy, followed by corporations such as Google, Facebook, and Apple, according to the survey of some 7,900 users conducted by private cloud backup and sharing provider SpiderOak. Nearly 90 percent say Google, Facebook, and private companies should prioritize privacy in their offerings, but 77 percent say they consider their privacy their own responsibility, not that of companies or government.

“People are becoming increasingly aware of how exposed they are online. Whereas historically this was limited in scope to private companies, we have now learned a great deal more about government surveillance and its pervasiveness,” says Ethan Oberman, CEO of SpiderOak. “In the end, both the organizations and government programs remain inert, leaving users with little choice but to take privacy into their own hands.”

Some 23 percent say privacy should be the responsibility of government legislation or private firms, with about 12 percent placing that on private firms and 11 percent on government.

About half of the respondents say they store data on hard drives, 32 percent in the cloud, and 8 percent on flash drives or CDs and DVDs.

Article source: http://www.darkreading.com/government-vertical/post-nsa-revelations-most-users-feel-les/240161785

LinkedIn denies hacking into users’ email

No, LinkedIn most certainly does not sink its marketing fangs into users’ private email accounts and suck out their contact lists – well, at least, not without users’ permission – the company said over the weekend.

Blake Lawit, Senior Director of Litigation for LinkedIn, on Saturday responded to a class action lawsuit brought last week by four users who claimed that the professional networking site accesses their email accounts – “hacks into,” to use the diction of the lawsuit – without permission.

Lawit’s statement denies the plaintiffs’ accusations:

We do not access your email account without your permission. Claims that we “hack” or “break into” members’ accounts are false.
We never deceive you by “pretending to be you” in order to access your email account.
We never send messages or invitations to join LinkedIn on your behalf to anyone unless you have given us permission to do so.

On Tuesday, four LinkedIn users in the US filed the complaint, which alleges that the company “hacks into” users’ email accounts, downloads their address books, and then repeatedly spams out marketing email, ostensibly from the users themselves, to their contacts.

The suit charges LinkedIn with fuzzily worded requests and notifications when it comes to just what, exactly, “growing” a user’s network entails.

On the screen labelled “Grow your network on LinkedIn”, presented when a new user signs up for the free service, LinkedIn works its marketing sneakiness, the suit says, getting into a user’s email account without a password and then snapping up contacts and the email address for anybody with whom he or she has ever swapped email:

LinkedIn is able to download these addresses without requesting the password for the external email accounts or obtaining consent.

If a LinkedIn user has logged out of all their email applications, LinkedIn requests the username and password of an external email account to ostensibly verify the identity of the user.

However, LinkedIn then takes the password and login information provided and, without notice or consent, LinkedIn attempts to access the user’s external email account to download email addresses from the user’s external email account.

If LinkedIn is able to break into the user’s external email account using this information, LinkedIn downloads the email addresses of each and every person emailed by that user.

The suit mentions “hundreds” of user complaints about the practice on LinkedIn’s own site.

It’s not difficult to see why users might well be appalled, given some of the situations they describe on the site’s help center thread on the topic.

One user, Cynthia Hubbard, describes LinkedIn invitations getting sent out “at [her] alleged behest” to a coworker with whom she “had a great deal of trouble”, to five individuals from opposing in-house counsel and corporate defendants in a lawsuit she was involved in, and to a worker’s compensation client she referred to another law firm and whom she would never personally invite to her contact list, among others.

One reader commented on my coverage last week that he or she had read an account on another posting of this story, about a psychologist whose professional email messages to patients had triggered invitations to connect that were actionable malpractice breaches for which he could face disciplinary action.

In his statement, Lawit says that LinkedIn most certainly gives users the choice to share email contacts and that the company “will continue to do everything we can to make our communications about how to do this as clear as possible.”

From what I can suss out, LinkedIn does tell users what it’s up to, but the language is hidden away and is a far cry from “as clear as possible.”

Users have been decrying LinkedIn’s practices for months, at the very least, without any satisfaction.

It’s easy, in a case like this, to blame users for not reading the fine print. That logic holds that free services are only free from a financial standpoint, but you pay, one way or the other, to keep them alive, including letting a service like LinkedIn vacuum up your contacts for marketing purposes.

There’s merit to that argument.

Then again, there’s no excuse for tucking your marketing practices away where they’re not obvious to users.

The hallmark of clear communication is that you don’t wind up with pages full of comments from outraged, surprised users. And that is exactly what LinkedIn is dealing with now, with the added problem that all that user surprise and outrage has festered and is now boiling up into the legal realm.

Image of email access and checking email courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SYJZ6LKmdiM/

Teen privacy “eviscerated” by planned Facebook changes

A coalition of US groups that advocate for teenagers is crying foul over proposed changes to Facebook policy that would rubber-stamp the use of teenagers’ names, images and personal information to endorse products in advertisements.

The coalition, which includes over 20 public health, media, youth, and consumer advocacy groups, sent a letter to the Federal Trade Commission (FTC) on 17 September asking that the government take a closer look at how the proposed changes will expose teenagers to the same “problematic data collection and sophisticated ad-targeted practices that adults currently face.”

The changes to Facebook’s Statement of Rights and Responsibilities will give the site permission to use, for commercial purposes, the name, profile picture, actions, and other information of all of its nearly 1.2 billion users, including teens.

The group also objects to new language, directed at 13- to 17-year-old users, saying that if you’re a teenager on the site, Facebook assumes it has consent from your parents or legal guardians to use your information.

The proposed language:

If you are under the age of eighteen (18), or under any other applicable age of majority, you represent that at least one of your parents or legal guardians has also agreed to the terms of this section (and the use of your name, profile picture, content, and information) on your behalf.

Joy Spencer, who runs the Center for Digital Democracy’s digital marketing and youth project, said parents, for one, should be worried about the proposed privacy policy changes:

These new changes should raise alarms among parents and any groups concerned about the welfare of teens using Facebook. By giving itself permission to use the name, profile picture and other content of teens as it sees fit for commercial purposes, Facebook will bring to bear the full weight of a very powerful marketing apparatus to teen social networks.

The coalition for teens is just the latest to join in the hue and cry over the proposed privacy policy changes.

On 4 September, the top six privacy organisations in the US – the Electronic Privacy Information Center, Center for Digital Democracy, Consumer Watchdog, Patient Privacy Rights, U.S. PIRG, and the Privacy Rights Clearinghouse – sent a joint letter to politicians and regulators asking that some of Facebook’s proposed changes be blocked.

Facebook had issued the proposed changes as part of an agreement that was made in settlement of a class-action lawsuit.

However, the changes would actually weaken the privacy policy’s wording, this earlier letter claims, and would violate a 2011 privacy settlement with the FTC.

Furthermore, the amended language regarding teens “eviscerates” limits on commercial exploitation of the images and names of young Facebook users, the letter states.

It reads:

The amended language involving teens – far from getting affirmative express consent from a responsible adult – attempts to “deem” that teenagers “represent” that a parent, who has been given no notice, have consented to give up teens’ private information. This is contrary to the Order and FTC’s recognition that teens are a sensitive group, owed extra privacy protections.

Facebook was supposed to update its policy two weeks ago but has delayed the decision following the six consumer watchdog groups’ petition of the FTC to block the changes.

In an emailed statement to the LA Times, Facebook said that it put on the brakes in order to get this thing right:

We want to get this right and are taking the time to review feedback, respond to any concerns, and clarify the explanations of our practices. We routinely discuss policy updates with the FTC and are confident that our policies are fully compliant with our agreement.

In my opinion, Facebook won’t get it right until it embraces the radical notion of opt-in as opposed to making users continually jump through hoops to opt out of having their personal information used in ever new ways.

As far as deemed consent goes, it’s ludicrous to presume a) that teens on Facebook are there with their parents’ blessing and b) that this presumed blessing somehow includes letting their child’s likeness be plastered onto every money-generating shill that Facebook advertisers can cook up.

The proposed changes predate last week’s truly awful incident, when a Facebook advertiser got hold of two images of a gang-rape and suicide victim and used them in dating ads.

That dating company has since gone offline, its Facebook account has been shuttered, and Facebook has apologized.

The proposed changes go beyond teens’ images, of course, to encompass all their personal data, including their posted activities. Do we really think that the online history of children should be fair game for Facebook, when even adults leave often breathtakingly embarrassing, not to mention career-threatening, trails?

As far as images in particular go, perhaps the case I mention is only tangentially related to the proposed privacy policy changes. Maybe it just comes to mind because it tastelessly featured images of a teen who met a horrific fate.

Maybe it comes to mind because the images of children, to my mind, should be considered too precious to play games with, or perhaps even to generate profits from.

Image of girl on phone courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ITxCxBnJ9js/

SSCC 117

News, opinion, advice and research: here’s our latest two-weekly quarter-hour security podcast, featuring Chet and Duck (Chester Wisniewski and Paul Ducklin) with their informative and entertaining take on the latest security news.

By the way, you can keep up with all our podcasts via RSS or iTunes, and catch up on previous Chet Chats by browsing our podcast archive.

Listen to this episode

Play now:

(24 September 2013, duration 14’57”, size 9.0MB)

Download for later:

Sophos Security Chet Chat #117 (MP3)

Stories covered in Chet Chat #117

Previous episodes

Don’t forget: for a regular Chet Chat fix, follow us via RSS or on iTunes.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XtCNh02d0xc/

California gives teenagers an ‘eraser button’ to delete their web mistakes

Computer key. Image courtesy of Shutterstock.

Legislators in California are working to give teens more control over content they have posted on the web by giving them the ability to push the reset button on their social media profiles.

California Governor Jerry Brown received a letter from James P. Steyer, CEO of Common Sense Media, which states:

Children and teens often self-reveal before they self-reflect and may post sensitive personal information about themselves – and about others – without realizing the consequences.

Now a unanimously passed Senate Bill will guarantee privacy rights for minors in California as well as an ‘eraser button’ which will allow them to delete their faux pas. This new bill will make the West Coast state the first in the US to require websites to allow under-18s to remove their own content from the site, as well as to make it clear how to do so.

The law does have some limitations though – it only covers content posted by the child making the removal request and so does not cover anything that their friends or family may have uploaded about them. The bill also only requires removal of information from public websites and not from servers.

California’s governor has yet to take a stance on the bill but, as reported in The New York Times, he has until mid-October to sign it, after which it will become law even without his signature. The new law would have an effective start date of January 1, 2015.

The law, designed to protect kids from bullying and embarrassment, also considers the potential harm to future educational or job prospects. This is timely considering how companies are increasingly likely to use the web to run background checks on prospective new employees.

In April this year, a survey by CareerBuilder discovered that 1 in 3 employers reject applicants based on unsavory social media posts. The kind of information that led to their decision included embarrassing photos, evidence of drink or drug use, and lack of good communication skills – i.e. just the type of profile many teens are presenting to the world.

Whether California’s new ‘eraser button’ will help kids bury their indiscretions and avoid having their youthful past determine their adult futures is debatable, and Senate Bill 568 is not universally approved of. There is concern that it could prompt other states to pass their own laws, forcing website operators to navigate a patchwork of legislation in order to serve content that may be consumed by minors.

In a letter to lawmakers, the Center for Democracy and Technology, a non-profit group that lobbies for internet freedoms, said:

We are principally concerned that this legal uncertainty for website operators will discourage them from developing content and services tailored to younger users, and will lead popular sites and services that may appeal to minors to prohibit minors from using their services.

And then there is the question of how a website operator would know it was serving content to a minor, and in which state. Presumably that would involve asking for a site visitor’s age and location – someone better hold onto the privacy advocates’ collars!


Image of smartphone courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lUqgMfzpczk/

Cisco Delivers Safety And Security Solution Innovations/Enhancements

SAN JOSE, Calif., Sept. 23, 2013 – Cisco today announced enhancements to its portfolio of safety and security solutions, with video surveillance architectures and new IoT-enabled solutions that ease management of millions of connected cameras and devices. The Internet of Things is the next technology transition, in which devices will allow us to sense and control the physical world by making objects smarter and connecting them through an intelligent network. Video Surveillance Manager 7 (VSM 7) provides new capabilities that enable centralized or multi-site operations management of intelligent video surveillance, along with an open platform for analytics and metadata.

Organizations are looking to deploy more cameras in more areas to protect citizens, customers, and employees. According to a new report published in the Markets and Markets catalogue, the video surveillance systems and services market is set to reach $36.28 billion by 2018. Organizations need secure and flexible video surveillance architecture to manage the growing number of cameras they are deploying in highly distributed harsh and industrial environments. Cisco VSM 7 adapts to evolving business needs and provides an easy way to manage video, resources, troubleshooting and business intelligence.

Intelligent video surveillance systems are increasingly being used as valuable business tools to mitigate enterprise-wide and global risk and to improve operational processes. By providing a platform where businesses can use their own operational information to improve productivity through analytics and big data, organizations can identify and respond quickly to drive operational efficiencies across different industries, such as healthcare, manufacturing, transportation, oil and gas, and public safety.

KEY HIGHLIGHTS

– Cisco VSM 7 delivers three new scalable video architectures, providing the flexibility to manage millions of endpoints and meet the challenges of increasing business risk.

– Cisco VSM 7 also offers IoT solution capabilities for business intelligence with analytics and metadata.

– Cisco IP cameras are now smart endpoints providing an application development platform. Partners participating in the Cisco Developers Network (CDN) program can develop applications that integrate with Cisco IoT solutions.

– An ecosystem of CDN partners is now providing a range of solutions for video analytics, network monitoring, physical security information management (PSIM) and other applications.

VSM 7 Provides Highly Secure And Scalable Video Surveillance Architectures With The Following Capabilities:

– Federator helps organizations to centrally manage millions of IP cameras from a single user interface across globally distributed video surveillance deployments.

– Dynamic Proxy efficiently uses network bandwidth to deliver quality video to multiple users accessing the same video streams from any location.

– Enhanced medianet capabilities simplify management, operations and troubleshooting with end-to-end visualization of video flows and on-demand Mediatrace from the camera to the video client.

– Zero Data Loss ensures that video can be preserved. In mobile environments, such as transportation, where network connectivity is often interrupted, cameras can store video locally until connectivity is restored, then automatically transmit the video to the media server.

Analytics and Metadata Platform

– Cisco VSM 7 provides an analytics and metadata platform for business intelligence, supported by new IoT solution capabilities. The platform supports secure APIs for integration with third-party ecosystem partner analytics to record and index metadata with video.

– An industry first in Cisco VSM 7, motion metadata search reduces the time needed to search through days of recorded video and improves the search process by using motion detection.

IP Cameras

– Cisco IP Cameras are Smart Endpoints providing an application development platform for IP cameras that will allow new analytics capabilities to be easily added as applications are developed.

– The new Cisco 6050 Camera is a 1080p, ruggedized camera suitable for transportation vehicles such as buses and trains. The combination of high-resolution imaging and protective housing gives the Cisco 6050 the reliability required to maximize passenger safety and optimize mobile surveillance.

– Validated third party applications will allow customers to easily add functionality to the cameras such as video analytics, audio communications through cameras, IoT sensors/aggregators, and audio analytics capabilities.

SUPPORTING QUOTES:

Guido Jouret, VP and general manager, Internet of Things Group at Cisco:

Just a year after we launched Video Surveillance Manager 7 with virtualization and medianet, we are excited to present a major upgrade. We are bringing scalability to VSM 7 and a platform for application development. This is a great example of how businesses and organizations can use the Internet of Things to gain efficiency, harness intelligence, improve operations and increase customer satisfaction.

Charles Byrd, public safety technology manager, Dallas Area Rapid Transit:

“Cisco VSM 7 and Cisco medianet enabled IP cameras give us the ability to easily use video surveillance technologies for public safety across all of our DART locations to help keep passengers safe and achieve efficient operations.”

Ricardo Max, contract coordinator, Brazil Ministry of Justice: “Cisco IPICS enables us to effectively communicate and coordinate across security agencies to provide safety and support large events.”

Dana Matsunaga, president and CEO, ActionPacked Networks: “The fully integrated, scalable LiveAction and Cisco Video Surveillance Manager solution bridges the gap between security personnel and the IT help desk to accelerate deployment, troubleshooting, and resolution of video surveillance performance issues, thus improving incident investigation. Furthermore, LiveAction helps IT teams simplify network readiness for new video services by enabling easy GUI-based IP SLA VO traffic generation.”

Steve Russel, CEO, PrismSkylabs: “Our work with Cisco enables integrators to deliver a powerful visual merchandising, auditing and analytics solution to their customers.”

SUPPORTING RESOURCES:

Visit Cisco Physical Security at: www.cisco.com/go/physec or at booth #2134 at the ASIS show next week: www.cisco.com/go/asis

Please visit the Cisco Video Surveillance Manager website (http://www.cisco.com/en/US/products/ps10818/index.html) or read more in the VSM white paper (http://www.cisco.com/en/US/prod/collateral/ps6712/ps10491/ps9145/ps10818/white_paper_c11_729589_v1.pdf).

About Cisco

Cisco (NASDAQ: CSCO) is the worldwide leader in IT that helps companies seize the opportunities of tomorrow by proving that amazing things can happen when you connect the previously unconnected. For ongoing news, please go to http://thenetwork.cisco.com.

Article source: http://www.darkreading.com/perimeter/cisco-delivers-safety-and-security-solut/240161707