STE WILLIAMS

Choosing, Managing, And Evaluating A Penetration Testing Service

[The following is excerpted from “Choosing, Managing and Evaluating a Penetration Testing Service,” a new report posted this week on Dark Reading’s Vulnerability Management Tech Center.]

Hiring a security consulting company to perform penetration testing can make a company more secure by uncovering vulnerabilities in security products and practices — before the bad guys do. It can also be an extremely confusing and frustrating experience if deliverables don’t meet the needs or requirements of the business units. Understanding and properly managing relationships with outsourced security providers can be the difference between an expensive mistake and a well-executed exercise in security risk management.

To establish and maintain an effective relationship with a security consulting firm, one thing is needed above all: communication. Clear, concise and meaningful communication between your organization and your chosen vendor will absolutely affect the level of service and value on the deliverable side of the engagement.

And clear communication requires a firm understanding of the entire process of working with a consulting firm — from contract to payment and everything in between. Knowing what goes into and influences each of these steps, and having a firm set of reasonable expectations, will go a long way toward ensuring that there are no surprises along the way.

Before any discussion about developing effective relationships with security consultants can take place, it’s important to define what “penetration test” really means. You may think you know what a penetration test is, but the definition of the practice and its variables has changed in recent years.

A decade ago, a penetration test was generally a “black box” test that took place at the network level. Security researchers were given no details about the network they had been hired to attack, and, as the attack targeted the network level, the researchers usually attacked ports, services, operating systems and other components that comprise the lower layers of the OSI model. Indeed, the OSI model, antiquated as it may seem, offers a good way to define the scope of a penetration test.
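To make that lower-layer, network-level focus concrete, here is a minimal sketch of the kind of TCP port probing those early tests relied on. The target host is a placeholder, the example is purely illustrative, and real engagements use far more capable tooling and always require written authorization:

```python
import socket

def probe_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder target: probe a few common service ports. A production
# pen test would be scoped and authorized in writing, and would use
# purpose-built scanners rather than a bare connect() loop.
for port in (22, 80, 443):
    state = "open" if probe_port("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

The same idea, scaled up and combined with service fingerprinting, is what "attacking ports and services" at the lower OSI layers amounts to in practice.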

There are several categories of penetration test, and each requires different levels of management and coordination.

1. Black-box testing
Black-box testing is performed by an attacker who has no knowledge of the victim’s network technology. While pen testers can certainly still provide this type of testing, the model isn’t used as often as it once was because attackers are now sophisticated enough that they will probably know a vast amount about your technology in advance of an attack.

2. White-box testing
White-box testing usually involves close communication and information sharing between your technology group and pen testers. Pen testers are typically supplied with legitimate user accounts, URLs, and even user guides and documentation. This type of penetration test will usually provide the most comprehensive results and is currently the most commonly requested.

3. Gray-box testing
As you might imagine from the name, gray-box testing is a mix of black-box and white-box testing. With gray-box testing, pen test customers don’t hand over the company jewels but do provide testers with some information. This might include credentials or access to a corporate intranet site.

Companies will have other things to consider when determining the scope of the pen test they will undergo. The first is whether to secure services that include social engineering assessment.

When you look at security, one of the biggest risks is people. Indeed, the end user is commonly considered the weakest link in computer security. Most security consultancies will be able to assess the ability of your users to, say, detect phishing schemes, but there are some legal and human resources issues that must be considered before including social engineering as part of your pen testing suite of services.

For detailed recommendations on how to select a pen testing service provider — and for some advice on ways to evaluate the service you receive — download the free report.

Have a comment on this story? Please click “Add a Comment” below. If you’d like to contact Dark Reading’s editors directly, send us a message.

Article source: http://www.darkreading.com/choosing-managing-and-evaluating-a-penet/240161596

Black Hat To Launch First Regional Summit In Brazil

SAN FRANCISCO, Sept. 19, 2013 /PRNewswire/ — Today, Black Hat, the world’s leading family of information security events, announced the Black Hat Regional Summit to take place in Brazil this November. This two-day event will be co-located with IT Forum Expo, providing attendees with two parallel tracks of high intensity content on the latest security research, trends and industry challenges. Both events will take place November 26 – 27, 2013, at the Transamerica Expo Center in Sao Paulo, Brazil. For more information and to register, please visit: https://www.blackhat.com/sp-13/

The Black Hat Regional Summit in Sao Paulo will introduce a mix of local in-region experts and researchers from around the globe for two full days of Briefings and Workshops. The sessions will provide candid insight and education for IT security professionals and enthusiasts of all levels. Some of the highlights of the upcoming Black Hat Regional Summit include:

Carna Botnet: Telnet’s Threat to the World (and Brazil): Parth Shukla of the Australian Computer Emergency Response Team (AusCERT) will offer attendees an exclusive look into how the first-ever internet census was conducted, the devices that were compromised for the Carna Botnet, and analysis of the gathered data, never before shared publicly.

Defeating WhatsApp’s Lack of Privacy: Jaime Sanchez will present a novel way to add a new layer of security and privacy to the popular chat app, WhatsApp. By anonymizing and encrypting conversations, users no longer transmit data to the app directly, keeping their conversations truly off the record.

A Practical Attack Against MDM Solutions: Ohad Bobrov will expose how mobile cyber-espionage attacks are carried out through a novel proof-of-concept attack that bypasses traditional defenses including Mobile Device Management (MDM) features such as encryption.

“I couldn’t be more thrilled to see Black Hat in Brazil – a place unlike anywhere else on Earth,” explained Trey Ford, General Manager, Black Hat. “Their vibrant community and expanding technology market make it an ideal location for a Regional Summit.”

For more information and to review the full list of sessions to be presented at the Black Hat Regional Summit, please visit: http://www.blackhat.com/sp-13/. Delegates registering for the Black Hat Regional Summit will also have full access to the IT Forum Expo. Registration for both events is managed through the IT Forum Expo site and can be completed here.

Future Black Hat Dates and Events

Black Hat Regional Summit, Sao Paulo, Brazil, November 26-27, 2013

Black Hat Trainings, Seattle, Washington, December 9-12, 2013

Black Hat Asia 2014, Singapore, March 25-28, 2014

Black Hat USA 2014, Las Vegas, Nevada, August 2-7, 2014

Black Hat Europe 2014, Amsterdam, The Netherlands, October 14-17, 2014

Connect with Black Hat

Twitter: https://twitter.com/BlackHatEvents – hashtag #BlackHat

Facebook: http://www.facebook.com/blackhat

LinkedIn Group: http://www.linkedin.com/groups?home=gid=37658

Flickr: http://www.flickr.com/photos/blackhatevents/

About Black Hat

For more than 16 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia, and are produced by UBM Tech. More information is available at: http://www.blackhat.com.

About UBM Tech

UBM Tech is a global media business that brings together the world’s technology industry through live events and online properties. Its community-focused media and events provide expertly curated content along with user-generated content and peer-to-peer engagement opportunities through its proprietary, award-winning DeusM community platform. UBM Tech’s brands include EE Times, Interop, Black Hat, InformationWeek, Game Developer Conference, CRN, and DesignCon. The company’s products include research, education, training, and data services that accelerate decision making for technology buyers. UBM Tech also offers a full range of marketing services based on its content and technology market expertise, including custom events, content marketing solutions, community development and demand generation programs. UBM Tech is a part of UBM (UBM.L), a global provider of media and information services with a market capitalization of more than $2.5 billion.

Article source: http://www.darkreading.com/management/black-hat-to-launch-first-regional-summi/240161553

(ISC)2 Congress Addresses Security’s “People” Problems

There are many conferences and get-togethers around cyber security every year, but only a few would be considered “mandatory” by the whole community of security professionals. The RSA Conference, held each year in San Francisco, offers the industry’s biggest exhibit floor and a chance to see security products in action. Black Hat USA, held annually in Las Vegas, is where the smartest and best security researchers come to reveal vulnerabilities and share knowledge on potential threats.

While these events offer a depth of technological insight unmatched in IT security, though, they don’t necessarily focus on the “people” issues faced every day by the average security professional. That’s why I’ll be in Chicago next week for the third annual (ISC)2 Security Congress, the yearly meeting of the world’s biggest cyber security professionals’ organization.

(ISC)2’s Congress — held concurrently with ASIS, the granddaddy of physical security conferences — doesn’t have an overriding technological “theme” because it isn’t focused on technology. Its focus is discussing the day-to-day, non-sexy issues that all security professionals grapple with, such as staffing, hiring, management and administration. Where other events might have more of a “show” of leading-edge technology or new threats, (ISC)2 is more like a water-cooler conversation among colleagues faced with similar security problems and issues.

Meetings of security professional organizations such as (ISC)2, ISSA, and ISACA represent the “everyman” infosec pro, who may not always be up on the most current products or attacks because he or she is fighting the everyday fires of the enterprise. These are people who work in the trenches of security and are limited by time, budgets, and short staffing. They spend a frustrating amount of time in meetings, arguing with top executives or end users who don’t understand the dangers their systems face every day. Their job is not to be on the leading edge, but to get their data secure as best they can with what they’ve got.

This year, many of (ISC)2’s sessions will focus on how to do more with less, how to train staffers and end users to improve enterprise defenses, and how to make tough decisions about security in a rapidly-changing environment where the needs of the business and the growing range of threats often outweigh the security department’s resources.

If the security industry is to progress, it will occasionally have to step away from technological problems and wrestle with some of these types of people problems. How to fund, find, and keep good security people. How to teach end users not to click on suspicious attachments. How to build security policies that are realistic for the business, yet also enforceable by monitoring and security controls.

These issues won’t be solved at the conference next week, but it’s good to see security professionals working on them together. Cyber criminals are famous for sharing (and stealing) each other’s ideas and techniques, and that sharing has helped them to get an edge on enterprise defenders. Anytime security professionals get together to share their knowledge — whether in small groups or at a major conference — it improves the enterprise’s chances of successfully fighting back.

Article source: http://www.darkreading.com/management/isc2-congress-addresses-securitys-people/240161635

Three Steps To Keep Down Security’s False-Positive Workload

Security needs to be better automated. Detecting attackers automatically is great, but all too often automation leaves security teams chasing down a list of security events that turn out to be not attacks, but unexpected system, network or user behavior.

These “false positives” are the bane of most machine-learning systems: Valid e-mail messages blocked by anti-spam systems, unexploitable software defects flagged by software analysis systems, and normal application traffic identified as potentially malicious by an intrusion detection system. First-generation security information and event management (SIEM) systems, for example, would often deliver lists of potential “offenses” to security teams, leading to a lot of work in wild goose chases, says Jay Bretzmann, market segment manager for security intelligence at IBM Security Systems.

“If you cannot manage the list of offenses that come in to the product in a day, then you need to do some tuning, or you need to go out and do some proactive defense, such as eliminating vulnerabilities by patching or looking at your configurations,” Bretzmann says.

Beating the “false positives” problem is key to making the company more secure, and the first step is not to think of such alerts as false positives, says Paul Stamp, director of product marketing at RSA. There is always a reason why a security system flags some behavior as threatening, he says.

“There is no such thing as a ‘false’ positive,” he says. “It is just that some positives are more important than others.”

To raise the bar on security events, experts recommend a few tactics.

1. Better tuning
The initial training of a security system–whether a network anomaly detector or the log analysis component of a SIEM–is a necessary step toward teaching the appliance what should be considered bad and what’s good on the network.

However, the training is a crash course for the device on what is typically normal, or not, in the network for a short period of time. The training set often includes previously detected malicious behavior at other companies, which are quickly turned into rules and exported to the rest of the client base. Most of the time that helps detect true threats, but sometimes a vulnerability scanner, uncommon user behavior, or other event can set off the security system.

“We can help you identify what is malicious traffic based on what our other customer thought was malicious traffic, but it’s only a start,” says IBM’s Bretzmann. “You really have to get in there and investigate what’s causing the false positives.”

Security administrators should work from a list of the most common types of alerts, or “offenses” as IBM calls them. Alerts that occur frequently are likely false positives but should still be remediated to reduce the noise.
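That triage step can be sketched in a few lines of Python. The rule names and feed below are invented for illustration and do not reflect any particular SIEM’s export format:

```python
from collections import Counter

# Hypothetical alert feed as (rule_name, source) pairs, the way a
# SIEM might export a day's worth of "offenses".
alerts = [
    ("port_scan", "10.0.0.5"),
    ("port_scan", "10.0.0.5"),
    ("sql_injection", "203.0.113.9"),
    ("port_scan", "10.0.0.7"),
    ("port_scan", "10.0.0.5"),
]

# Rank rule names by how often they fire; the noisiest rules are the
# first candidates for tuning, since frequently firing alerts are
# often false positives (or hygiene problems worth remediating).
counts = Counter(rule for rule, _ in alerts)
for rule, n in counts.most_common():
    print(f"{rule}: {n}")
```

Working down this ranked list, tuning or remediating the top entries first, is one practical way to shrink the daily offense queue.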

[Companies analyzing the voluminous data produced by information systems should make sure to check user access and configuration changes, among other log events. See 5 Signs Of Trouble In Your Network.]

2. Proactive defense
If tuning fails to remove enough alerts to allow the security team to focus on the most severe events, then the IT security managers should consider proactive work that can reduce vulnerability and allow the rules to be tightened. For example, legacy systems that are not patched regularly can dramatically increase the vulnerable attack surface area of the company, resulting in a higher number of alerts.

“The rate of new vulnerability disclosures ranges between 12 and 15 a day,” IBM’s Bretzmann says. “Trying to keep up with that is a never ending job.”

Patching vulnerable systems, shutting down unnecessary systems, and limiting the types of network traffic that can enter the network are all steps that can reduce the attack surface area of the business, and allow the security systems to be tuned more tightly, reducing the number of alerts.

3. Add more context
Incorporating external threat intelligence can help focus security administrators on the most important threats, so they can prioritize security events, says Erik Giesa, senior vice president of business development at operational-intelligence provider ExtraHop.

“You want the ability to be very surgical and precise,” he says. “You don’t want it to be a firehose.”

False positives can also be eliminated by knowing more about the assets in the network, such as which systems are most important and which can effectively be ignored for the moment.

IT security should cooperate with business executives to collect data on which information-technology components are core to the business, and so should be closely watched, says RSA’s Stamp.

“A lot of the knowledge about the risk associated with a system isn’t held by IT–IT doesn’t know, the business knows,” says RSA’s Stamp. “So the question is how do you involve the business in the process?”
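One simple way to bring that business knowledge into triage is to weight each alert’s detector severity by a business-supplied criticality score for the affected asset. The asset names and scores below are hypothetical, chosen only to show the effect:

```python
# Criticality scores supplied by the business side (1 = low value,
# 10 = crown jewels); assets the business hasn't rated default to 1.
asset_criticality = {
    "payroll-db": 10,
    "public-kiosk": 1,
}

alerts = [
    {"host": "public-kiosk", "severity": 5},
    {"host": "payroll-db", "severity": 3},
]

def risk_score(alert: dict) -> int:
    # Risk-weighted priority: detector severity times asset value.
    return alert["severity"] * asset_criticality.get(alert["host"], 1)

# The lower-severity alert on the payroll database now outranks the
# higher-severity alert on the throwaway kiosk.
for alert in sorted(alerts, key=risk_score, reverse=True):
    print(alert["host"], risk_score(alert))
```

The scoring function is deliberately crude; the point is that even a coarse, business-maintained criticality map changes which alerts float to the top.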

Have a comment on this story? Please click “Add Your Comment” below. If you’d like to contact Dark Reading’s editors directly, send us a message.

Article source: http://www.darkreading.com/monitoring/three-steps-to-keep-down-securitys-false/240161634

LinkedIn users sue over service’s “hacking” of contacts and spammy ways

Brian Guan, a Principal Software Engineer at LinkedIn (currently on sabbatical), said it all when he described his role on the site:

Devising hack schemes to make lots of $$$ with Java, Groovy and cunning at Team Money!

Also, LinkedIn’s 2011 10-K [*] identified its key strategy as being to “Foster Viral Member Growth.”

Mind you, the fact that LinkedIn wants to grow virally and make money isn’t terribly surprising, but the way the professional networking site is doing it has now spawned a class action lawsuit.

Four LinkedIn users in the US are suing the company for allegedly “hacking” users’ email accounts, downloading their address books, and then repeatedly spamming out marketing email, ostensibly from the users themselves, to their presumably beleaguered contacts.

The complaint, filed on Tuesday in US District Court for the Northern District of California, outlines the steps LinkedIn goes through to “hack” into users’ external email accounts and extract email addresses, all without obtaining users’ consent or requesting a password.

First, LinkedIn requires an email address to sign up for the service. Next, it harvests email addresses of anyone with whom the users have ever exchanged email.

The service then sends a total of three emails to a given user’s contacts: an initial pitch, followed by two reminder emails if the contacts don’t sign up for a LinkedIn account.

Each of these reminder emails contains the LinkedIn member’s name and likeness so as to make it appear that the LinkedIn member is endorsing LinkedIn, and none of them entails notice to or consent from the LinkedIn member, the complaint charges:

The hacking of the users’ email accounts and downloading of all email addresses associated with that user’s account is done without clearly notifying the user or obtaining his or her consent. If a LinkedIn user leaves an external email account open, LinkedIn pretends to be that user and downloads the email addresses contained anywhere in that account to LinkedIn servers.

The LinkedIn users who filed the complaint are Paul Perkins, Pennie Sempell, Ann Brandwein, and Erin Eggers.

Perkins, a New York resident, formerly served as manager of international advertising sales for The New York Times, the complaint says.

Brandwein is a statistics professor at Baruch College in New York. Eggers is a film producer and former vice-president of Morgan Creek Productions in Los Angeles, and Sempell is a lawyer and author in San Francisco.

The quartet acknowledge in the complaint that LinkedIn asked for permission to “grow” their networks, but they claim that the service never said it would send a series of email invitations to their contacts.

In fact, it’s only Google that gives Gmail users a heads-up that downloading is going on, the complaint states (all four LinkedIn users on the complaint are also Gmail users):

In cases where the user’s external email account is a Google Gmail account, a Google screen pops up stating, “LinkedIn is asking for some information from your Google Account.” … The Google notification screen, however, does not indicate that LinkedIn will download and store thousands of contacts to LinkedIn servers. Rather, this notification screen misleadingly states that LinkedIn is asking for “some information.” LinkedIn does not provide this notification to its users; it is Google that provides this screen.

The complaint notes that LinkedIn’s site contains hundreds of complaints linked to the practice.

The plaintiffs are accusing LinkedIn of violating the federal wiretap law as well as California privacy laws, and are seeking class-action status.

LinkedIn users, are your friends complaining about LinkedIn’s sending spam under your name and photo?

Would you sign up for the suit, or do you instead consider LinkedIn’s process just the cost of getting a free service?

And furthermore, what do you think of the word “hacking” with regard to LinkedIn’s alleged practices? It sounds more like “marketing” to me, but that all boils down to semantics.

Let us know what you think in the comments below.

[*] US companies submit Form 10-K reports each year to the Securities and Exchange Commission, giving detailed information about corporate performance, finances and so forth.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bGRjlLYEL4Q/

Latest Snowden reveal: It was GCHQ that hacked Belgian telco giant


Leaked documents provide evidence that GCHQ planted malware in the systems of Belgacom, the largest telecommunications company in Belgium.

According to slides obtained by NSA whistleblower Edward Snowden and supplied to German newspaper Der Spiegel, the attack targeted several Belgacom employees and involved planting an attack technology called “Quantum Insert”, which was developed by the NSA. The attack technique surreptitiously directs victims to spook-run websites where they are exposed to secondary malware infection.


The ultimate goal of “Operation Socialist” was to gain access to Belgacom’s Core GRX routers in order to run man-in-the-middle attacks against targets roaming with smartphones.

The documents show that spooks in Cheltenham were particularly interested in BICS – a joint venture between Belgacom, Swisscom and South Africa’s MTN – which provides wholesale carrier services to mobile and fixed-line telcos around the world, including trouble spots such as Yemen and Syria. BICS is among a group of companies that run the TAT-14, SEA-ME-WE3 and SEA-ME-WE4 cables connecting the United States, UK, Europe, North Africa, the Middle East and Singapore to the rest of the world.

Early goals for the spies included mapping Belgacom’s network to understand its infrastructure, as well as investigating VPN links from BICS to other telecoms providers. The leaked slides describe the exercise as already being a success, close to achieving its ultimate goal of compromising enough of Belgacom’s infrastructure to run man-in-the-middle attacks. One slide explains that spooks had successfully compromised “hosts with access” to Belgacom’s Core GRX routers, leaving them just one step away from their objective. The slides themselves aren’t dated, but other leaked documents place the compromise of Belgacom’s systems around 2010, roughly three years ago.

In a statement issued earlier this week, Belgacom admitted its internal systems were compromised but played down the impact of the breach, saying the intrusion did not compromise the “delivery” of communications. It added that the intrusion is under investigation by Belgian law enforcement.

If GCHQ was indeed the agency concerned then this investigation is unlikely to go anywhere and the most that can be expected is some sort of diplomatic complaint from Belgium to the UK, its EU and Nato partner. We’ve asked Belgacom if it has any comment on Der Spiegel‘s revelations.

In response, a spokesman supplied the following short statement which clarifies that Belgacom filed a criminal complaint in July shortly after detecting the hack, and long before going public with the problem on Monday:

We have filed on July 19 a complaint against an unknown third party and have granted since then our full support to the investigation that is being performed by the Federal Prosecutor.

Background on GRX (GPRS Roaming Exchange), a tasty target for signals intelligence types, can be found in a presentation put together by Philippe Langlois, founder and chief exec of P1 Security, for the TROOPERS security conference in Germany back in 2011, and available here in PDF. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/20/gchq_belgacom_hack_link/

FireEye Goes Public

NEW YORK, September 20, 2013 – The NASDAQ OMX Group, Inc. (NASDAQ: NDAQ) announced today that trading of FireEye, Inc. (NASDAQ: FEYE), the leader in stopping today’s new breed of cyber attacks, commenced on The NASDAQ Global Select Market on Friday, September 20, 2013.

FireEye has invented a purpose-built, virtual machine-based security platform that provides real-time threat protection to enterprises and governments worldwide against the next generation of cyber attacks. The FireEye Threat Protection Platform provides real-time, dynamic threat protection without the use of signatures to protect an organization across the primary threat vectors, including Web, email, and files and across all stages of an attack life cycle. FireEye has over 1,100 customers across more than 40 countries, including over 100 of the Fortune 500.

“FireEye has given the world a new security model to help protect data and intellectual property at a time when cyber attacks are at pandemic levels,” said Nelson Griggs, Senior Vice President, NASDAQ OMX Corporate Client Group. “NASDAQ OMX is pleased to welcome FireEye to the NASDAQ Stock Market and we look forward to their continued success in the future.”

Since its founding 42 years ago, NASDAQ has been the exchange of choice for some of the world’s largest and most revolutionary companies. By listing with NASDAQ, FireEye joins leading technology companies including Microsoft, Adobe, Oracle, Cisco and Apple. NASDAQ is home to over 74% of technology companies listed on U.S. exchanges.

About NASDAQ OMX Group:

The inventor of the electronic exchange, The NASDAQ OMX Group, Inc., fuels economies and provides transformative technologies for the entire lifecycle of a trade – from risk management to trade to surveillance to clearing. In the U.S. and Europe, we own and operate 26 markets, 3 clearinghouses and 5 central securities depositories supporting equities, options, fixed income, derivatives, commodities, futures and structured products. Able to process more than 1 million messages per second at sub-40 microsecond speeds with 99.99% uptime, our technology drives more than 70 marketplaces in 50 developed and emerging countries into the future, powering 1 in 10 of the world’s securities transactions. Our award-winning data products and worldwide indexes are the benchmarks in the financial industry. Home to approximately 3,400 listed companies worth $6 trillion in market cap whose innovations shape our world, we give the ideas of tomorrow access to capital today. Welcome to where the world takes a big leap forward, daily. Welcome to the NASDAQ OMX Century. To learn more, visit nasdaqomx.com. Follow us on Facebook (http://www.facebook.com/NASDAQ) and Twitter (http://www.twitter.com/nasdaqomx). (Symbol: NDAQ and member of S&P 500)

Article source: http://www.darkreading.com/management/fireeye-goes-public/240161597

Facebook “Likes” gain constitutional protection for US employees

Happy day, USA: When we click “Like” on Facebook, we are now constitutionally protected from getting fired!

If you’re thinking, “Well, duh, wasn’t I already?”, join the club.

In fact, at least one court had hitherto decreed that the First Amendment to the US Constitution, which (more or less) ensures the right to free speech, didn’t apply to Facebook Likes.

The case came to court after a sheriff from the state of Virginia fired six employees for supporting his opponent in an election.

Mashable’s Lorenzo Franceschi-Bicchierai reports that B.J. Roberts, the sheriff of Hampton, Virginia, had fired the employees who supported Jim Adams, his opponent in the sheriff’s election.

One of the fired employees, former Deputy Sheriff Daniel Ray Carter, had Liked Adams’s Facebook page.

The fired employees, Facebook and the American Civil Liberties Union (ACLU) joined forces to fight the dismissals.

Together, they argued that a Facebook Like must be considered free speech, which would in turn mean that employers couldn’t legally fire employees for expressing their opinions on the network.

In the first ruling on the case, a federal district judge had said that a Like was “insufficient speech to merit constitutional protection”, as Mashable reports.

The judge ruled that a Facebook Like didn’t involve an “actual statement”, unlike Facebook posts, which had previously been granted constitutional protection.

On Wednesday, that decision got its own thumbs-down in a federal appeals court.

Judge William Traxler, who authored the decision, said that clicking Like is much the same as putting up a political sign supporting a candidate in your front yard:

“Liking a political candidate’s campaign page communicates the user’s approval of the candidate and supports the campaign by associating the user with it. … It is the Internet equivalent of displaying a political sign in one’s front yard, which the Supreme Court has held is substantive speech.”

Both the ACLU and Facebook’s legal counsel are applauding the decision.

The decision reinstates the claims of Carter, along with two other fired employees, but they haven’t yet actually won the case. If they do, they might get their jobs back, Franceschi-Bicchierai reports.

As commenters on the Mashable story have noted, Facebook Likes can be convoluted creatures. In order to continue to see posts appear in our news streams, we need to click Like, whether that aligns us with candidates we detest or news we abhor.

But regardless of why we click Like, it shouldn’t come back to haunt us. Facebook is now very much an outlet for speech that deserves protection, whether it’s to support a candidate or to follow news about, for example, cancer research.

We follow things. We Like things. We shouldn’t be punished for it.

That doesn’t mean you shouldn’t clean up your slimy Facebook trail if you post about your drunken binges or how much you hate your boss.

As far as I know, the First Amendment doesn’t cover dumb.

Good luck with the case, Mr. Carter, et al. I hope you get your jobs back.

UPDATE: As commenters on my original post have pointed out, this decision doesn’t necessarily protect us all, but it will hopefully set a precedent for how other courts interpret the First Amendment as it pertains to online activities. Thanks for the input goes to Don Amith and csh.

Image of suited bloke telling you to get your coat courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dbX0mcCulgM/

Win Bitcoins, booze and cash! Be the first to crack the iPhone’s Touch ID fingerprint sensor…

The fingerprint sensor on Apple’s new iPhone 5s could well be the device-within-a-device that brings biometrics into the everyday mainstream.

(There’s good and bad in that. The good news is that if you paid extra for a laptop, years ago, because it had a fingerprint scanner you could never get to work, you’ll no longer be seen as a technology sucker but as an early adopter.

The bad news is that any hope of arguing for the end of fingerprint scanners in US immigration lines will be lost forever. Heck, if you can do it for Apple, you can do it for Uncle Sam!)

For all that I recently wrote – this very morning, in fact – that convenience is “one of security’s mortal enemies,” Apple’s Touch ID might end up as a blessing in disguise entirely on account of its ease of use.

People who are too lazy to bother with proper passwords or even four-digit passcodes on their phones (like Marissa Mayer, CEO of Yahoo!, no less) might be willing to use Touch ID, since it makes it slicker for them to get back into their phone one-handed.

But one burning question still remains, and in common with many Naked Security readers, you’re probably asking it yourself: “How safe is it?”

Could you defeat it with a gelatin mould, for example?

Well, if you’re willing to put Touch ID to the test, you might find yourself in line for some crowdsourced prizes.

Numerous individuals have so far pledged a mixture of cash, booze and patent application payments if you can clone someone’s fingerprint (it can be one of your own, which simplifies the experimentation) and unlock an iPhone 5s.

Actually, the rules are a little stricter than that: you have to “lift” a fingerprint off something else the user has touched, so you’re not allowed to press your finger into a Gummi Bear and then swipe the confectionery over your iPhone.

A Gummi Bear hack would be cool if it worked, but it wouldn’t be enough to walk off with what currently amounts to about US$15,000 in cash, several litres of spirituous liquor, roughly 20 Bitcoins in various fragmentary sizes, “one free patent application covering the hack”, and more.

Here’s what you need to do:

It sounds like an interesting and amusing experiment, and I look forward to seeing if anyone can find a way to defeat the sensor reliably.

The Touch ID sensor isn’t supposed to work with a severed finger, which is a modest comfort, although ironically it implies that a genuinely desperate and violent criminal would need to threaten you with worse than merely cutting off your finger to force you to unlock your phone against your will.

On the other hand, we know Touch ID doesn’t actually need a finger, or even a human being, as Darrell Etherington over at TechCrunch discovered “after commandeering a cat.”

Fancy giving it a try? (Cloning a fingerprint, not commandeering a cat.)

Go for it, although if you succeed, you’ll have another set of problems to solve: actually getting your prizes out of the crowdsourcers.

According to the website, even the terms and conditions are “up to each individual bounty offerer,” which sounds as though things might get labyrinthine.

And the lion’s, or at least the cat’s, share of the prize money so far ($10k of it) has been put up by a venture capital startup that seems to be having trouble paying to keep its website running right now, let alone coming up with ten large ones for left-field experiments into fingerprint trickery:

But you won’t be doing it for the money, I’m sure – you’ll do it for the fame, right? (That’s listed as one of the prizes.)

Image of fingerprint on main page courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RwZ7UZtzQO4/