STE WILLIAMS

How much money should you fork over to that internet cutie pie?

He went by the name of Christian Anderson on the online dating site, and what a sweet fairytale he spun.

He was a well-to-do engineer working in the oil industry, he said. He was divorced, with a daughter, having lost his father and sister to cancer.

Within weeks, they’d met in person.

He confessed that he’d fallen in love and wanted to leave the project he was working on in Benin, Africa, to come home to her.

But first – there’s always a “but,” isn’t there? – he needed “special machinery” to finish the project.

He’d need upwards of £30,000 to pay import duty on the machinery. Could the love of his life find it in her heart to help him?

Fast forward to January 2015 and the end of this romance scam, with its numerous iterations of “plus then too there’s this cost”, and you arrive at a woman in her 40s from Hillingdon, UK, telling ActionFraud that she’d been bilked out of an astonishing £1.6 million (about $2.4 million).

Her case was referred to London’s Metropolitan Police cyber crime and fraud team, FALCON.

The Met last week said that the money got swindled out of her not only by the online dating Lothario but also by a whole gang of crooks posing as his “associates.”

Two of the gang, 31-year-old Ife Ojo, from Peterborough, and 43-year-old Olusegun Agbaje, from Hornchurch, Essex, have pleaded guilty to conspiracy to defraud.


According to news reports, they’ve been remanded in custody, their case adjourned until 8 January 2016.

According to the Met, the victim paid over £30,000 into the business account of somebody who posed as “Christian Anderson’s” personal assistant: a man who was allegedly using the name Brandon Platt.

That £30,000 wasn’t good enough. Anderson wanted more cash.

The requests ranged from £25,000 for a police fine to thousands of pounds to free up inheritance money left by his mother, who lived in Cape Town.

Anderson told his mark that he’d use the inheritance to set up a life with her.

The fees to free up the money included costs for holding it in a vault in Amsterdam and $170,000 to pay for what the Met said was a fictional “anti-terrorist certificate” so that the money could be deposited at a bank.

The woman was, more or less, convinced. She was looking for a house that they could buy.

She met with someone claiming to be Anderson’s lawyer. She even travelled to an office in Amsterdam to meet a man calling himself Dr Spencer, who was supposedly responsible for holding the money in a vault.

The victim paid the £1.6 million into numerous bank accounts between March and December 2014. From there, the crooks transferred the funds into personal accounts, including £35,000 to the bank accounts of Ojo and Agbaje.

Still, the victim had doubts. But every time she asked Anderson for proof, he either sent false documentation or sweet-talked her, coming up with excuses for why he couldn’t give her evidence.

FALCON investigators eventually traced that sweet talking to its source: a financial investigation identified Agbaje as one recipient of the stolen money, so officers went to his home address and found him there with Ojo.

Upon arresting the pair and searching their homes, they found a laptop at Ojo’s home that contained records of conversations with the victim, as well as a memento book that seems to have been sent to Anderson by another victim and a copy of the book The Game.

The Met provided these excerpts from emails that “Christian Anderson” sent to the victim:

I know our relationship is still young, but I am really trying to hang on here and after the contract we have all the time in the universe together.

I called you this morning for us to have a sweet good Friday together and you did a good job in letting me feel down.

But most times when your brain tells you things, it’s all because of the hurts you had in the past and insecurities.

On a related note, most times when your brain tells you things about online cutie pies, we really, really hope it’s saying DO NOT SEND YOUR INTERNET LOVER EVEN ONE SLIM DIME.

Fortunately for a North Wales man whose online friend convinced him to strip in front of a webcam, his brain recently told him not to pay the £6000 she then tried to sextort from him.

As the BBC reports, the fraudsters are believed to be in Africa, though the police admit that it’s extremely difficult to trace a scam like this.

Speaking anonymously in an interview, the man said she looked like a local woman to him:

I could see her clearly. She looked like a woman from Wales – a white woman with dark hair. We never spoke to each other even though I could see her. She always messaged me.

The day after he stripped, the con artist got in touch and said that she had something to show him.

She played the recording of the man stripping and warned that if he didn’t cough up the £6K, she’d post it to Facebook and claim that he’d stripped in front of an 8-year-old girl.

He refused. She posted it. He called police.

The woman had originally approached the man online. Details about where, exactly, weren’t provided, but we know that romance scams – and sextortion attempts – don’t just originate on online dating sites.

Detective Chief Inspector Gary Miles of FALCON had this to say about the case of “Christian Anderson”, but of course it pertains to all sorts of sex and romance scams:

Any stranger who approaches you on a chat site, via email or any other way could potentially be a fraudster. In a recent case, a woman was defrauded of £250,000 after a suspect relentlessly tried striking up a conversation on Skype. She eventually answered and the scam progressed from there.

The Met offered these tips to anybody who’s talking to a potential partner online:

  1. See through the sob stories.
    Con artists will tell you tales to pluck at your heartstrings, with a view to gaining your trust and sympathy. Sometimes they ask for money to help them through a difficult situation. These are lies to get you to send them money.
  2. Don’t be fooled by a photo.
    Justin Bieber probably doesn’t need to chat up strangers to get a date, so is that really the Biebster contacting you? Anyone can send a picture to support whatever story they’re spinning. Scammers often use the same story and send the same photo to multiple victims. You may be able to find evidence of the same scam posted on anti-fraud websites by other victims: here’s how to do a reverse image search on Google, for one.
  3. Keep your money in your bank account.
    Never send money abroad to somebody you’ve never met or don’t know well, no matter how strongly you feel about them. No one who loves you will ask you to hand over your life savings and get into debt for them.
  4. Question their questions.
    Suspects will pay you a lot of compliments and ask you a lot of questions about your life, yet tell you very little about themselves beyond a few select tales. Never disclose personal details, such as bank details; doing so leaves you vulnerable to fraud.
  5. Don’t keep quiet.
    It can be embarrassing to admit that you’ve been taken in, but not reporting this type of fraud plays right into fraudsters’ hands. Sometimes scammers ask you to keep your relationship secret, but that’s just a ruse to keep you from talking to someone who’ll realize you’re being scammed. If you think you’re being scammed, stop communicating with the fraudsters and report it to police immediately.
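A small illustration of the reverse-image-search advice in tip 2: an exact-reuse check can be scripted by fingerprinting image bytes, though this only catches byte-identical copies of a photo – a true reverse image search on Google Images or TinEye is far more robust. Everything below (the photo collection, the function names) is hypothetical:

```python
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 fingerprint of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical local collection of fingerprints of photos already
# reported in earlier scams by other victims.
known_scam_photos = {image_fingerprint(b"bytes of a reported photo")}

def is_known_scam_photo(image_bytes: bytes) -> bool:
    # Only detects byte-identical copies; scammers who re-save or
    # crop a photo would evade this simple check.
    return image_fingerprint(image_bytes) in known_scam_photos
```

Anti-fraud sites that collect scam reports work on the same principle, if with much smarter matching: the same photo and the same story tend to be reused against many victims.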

Image of internet dating scam courtesy of Shutterstock.com. Image of Ojo and Agbaje courtesy of Metropolitan Police.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DHPY4Th3Y6k/

Google accused of spying on students in FTC complaint

Google is still spying on students, the Electronic Frontier Foundation has claimed.

The EFF on Tuesday filed a complaint with the US Federal Trade Commission (FTC) in which it said that Google is breaking a pledge to keep students’ personal information private and is using things like a student’s entire browsing history to build profiles to use for its own purposes.

The EFF said in a release that it uncovered Google’s data mining of schoolchildren’s personal information while researching its Spying on Students campaign, which launched the same day the EFF filed the complaint.

The campaign’s goal is to educate parents and school administrators about the risks to student privacy posed by digital devices.

Those devices collect “far more information on kids than is necessary,” the EFF says, and store it “indefinitely,” sometimes even uploading it to the cloud automatically.

From the campaign’s site:

In short, [digital devices are] spying on students – and school districts, which often provide inadequate privacy policies (or no privacy policy at all), are helping them.

The EFF examined Google’s Chromebook and Google Apps for Education (GAFE), a suite of educational cloud-based software programs used in many US schools by students as young as 7.

This isn’t the first time that Google’s been called out on GAFE spying.

It was dragged into court in March 2014 for scanning millions of students’ email messages and allegedly building “surreptitious” profiles to target advertising at them.

At the time, the company had argued that it had already turned off ads by default in GAFE services, but that still left the option for users to turn them back on.

So in April 2014, it killed that option, permanently removing all ad scanning in GAFE, meaning that it couldn’t collect or use the services’ student data for advertising purposes.

But even after all that, the EFF claims that Google’s still getting at children’s data.

The group says it found that Google’s “Sync” feature for the Chrome browser is enabled by default on Chromebooks sold to schools.

That gives Google all the access it needs to build profiles on students, the EFF said, all without permission from students or their parents.

From the release:

[The default “sync” feature in Chromebooks] allows Google to track, store on its servers, and data mine for non-advertising purposes, records of every internet site students visit, every search term they use, the results they click on, videos they look for and watch on YouTube, and their saved passwords.

While many of us can choose to stay away from Google’s scanning clutches by avoiding use of its email, students don’t have that choice, given that some schools require them to use Chromebooks.

All of this flies in the face of the commitments Google made by signing the Student Privacy Pledge, the EFF said.

That’s a legally enforceable document whereby companies promise to refrain from collecting, using, or sharing students’ personal information except when needed for legitimate educational purposes or if parents provide permission.

Google’s one of 200 companies that have signed the pledge, which holds the companies accountable to:

  • Not sell student information
  • Not behaviorally target advertising
  • Use data for authorized education purposes only
  • Not change privacy policies without notice and choice
  • Enforce strict limits on data retention
  • Support parental access to, and correction of errors in, their children’s information
  • Provide comprehensive security standards
  • Be transparent about collection and use of data

The Guardian quoted a Google spokesperson who said that Google’s hands are clean:

Our services enable students everywhere to learn and keep their information private and secure. While we appreciate EFF’s focus on student privacy, we are confident that these tools comply with both the law and our promises, including the Student Privacy Pledge.

The EFF says that Google told the organization that it plans to disable the setting on school Chromebooks that allows Chrome Sync data, such as browsing history, to be shared with other Google services.

That’s a “small step in the right direction,” the EFF says, but it “doesn’t go nearly far enough” to correct Google’s violations of the Student Privacy Pledge currently inherent in Chromebooks being distributed to schools.

Namely, according to the complaint the EFF filed with the FTC:

  • When students are logged into their Google for Education accounts, student personal information in the form of data about their use of non-educational Google services is collected, maintained, and used by Google for its own benefit, unrelated to authorized educational or school purposes.
  • Google for Education’s Administrative settings, which enable a school administrator to control settings for all program Chromebooks, allow administrators to choose settings that share student personal information with Google and third-party websites in violation of the Student Privacy Pledge.

EFF Staff Attorney Nate Cardozo:

Despite publicly promising not to, Google mines students’ browsing data and other information, and uses it for the company’s own purposes. Making such promises and failing to live up to them is a violation of FTC rules against unfair and deceptive business practices.

Minors shouldn’t be tracked or used as guinea pigs, with their data treated as a profit center. If Google wants to use students’ data to ‘improve Google products,’ then it needs to get express consent from parents.

Image of Google on cork board courtesy of rvlsoft / Shutterstock.com

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/m275uaGq7K8/

Advent tip #3: Set your Facebook posts to ‘Friends only’

You wouldn’t go up to a stranger in a street and tell them what you’ve been up to, so why would you let just anyone see what you’ve posted on Facebook?

Set your posts and photos to be seen by ‘Friends only’. Here’s how:

How to lock down the privacy of your future posts

  1. Click the down arrow at the top right of any Facebook page and choose Settings
  2. Select Privacy from the menu on the left hand side
  3. Under Who can see my stuff?, click Who can see your future posts?.
  4. Here you can choose to limit the posts to Friends only, or a custom list of people you choose.

How to limit the audience of past posts

  1. Click the down arrow at the top right of any Facebook page and choose Settings
  2. Select Privacy from the menu on the left hand side
  3. Under Who can see my stuff?, click Limit the audience for posts you’ve shared with friends of friends or Public?.
  4. Click Limit Old Posts

How to check how others view you on Facebook

  1. Go to your profile and click the three dots on the bottom right of your cover photo.
  2. Click View As…. This will then display your profile as seen by anyone who isn’t your friend.

If you’d like to keep up to date with all our Facebook-related tips and news, please Like our page.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Iy7g1ml-X-0/

Congress Clears Path for Information Sharing But Will It Help?

The key challenge companies will face with the new Cybersecurity Information Sharing Act of 2015 is how quickly they can separate the data they need to share from the data they need to protect.

With the Senate’s recent passing of the Cybersecurity Information Sharing Act of 2015 (CISA), we are now very close to having a law that provides companies liability protection when sharing information around cybersecurity threats. In the coming weeks, Congressional leaders and staff will be working in conference to officially merge CISA with the two complementary House bills passed in April, the Protecting Cyber Networks Act (PCNA) and the National Cybersecurity Protection Advancement Act of 2015 (NCPAA).

All three bills have the following in common: they provide liability protection for companies sharing cyber threat indicators and defensive measures for a cybersecurity purpose both among themselves and with the government. There are some differences in how these three key terms are defined across the bills, and they are not insignificant to the eventual implementation of the law.

The bills also offer differing levels of prescriptive detail around the process by which this information is to be shared and the role of various government entities in ensuring compliance. Given the technical nature of the discussion and the impact these definitions have on the resolution of some of the privacy concerns surrounding the bills (as well as the recent changes in committee leadership), we can expect a challenging conference process that is likely to take at least a few weeks once underway.

The debate surrounding the bills has largely focused on privacy concerns, with far less discussion around how they will actually impact information sharing programs now that they have been passed. The resolution of the differences between the bills during the conference process leaves some open questions on implementation, but we can draw some general conclusions given what we know now.

[For more information on the Cybersecurity Information Sharing Act of 2015, read 5 Things To Know About CISA.]

It appears that we will see a process whereby the Department of Homeland Security, likely through the National Cybersecurity and Communications Integration Center (NCCIC), will play the lead role both in collecting and distributing information shared with the government. It is clear that legislators envision some type of DHS-managed portal to accept and communicate cyber threat indicators and defensive measures from any entity in real time. The final legislation is also likely to include explicit limitations around how government can use the data it receives with the objective of confining usage to cybersecurity defense.
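To make the idea of "cyber threat indicators" submitted in real time more concrete, a shared indicator is essentially a small structured record. The sketch below is hypothetical – real submissions would use a standard format such as STIX rather than these illustrative field names:

```python
import json
from datetime import datetime, timezone

# Hypothetical minimal cyber threat indicator record. The field
# names are illustrative only, not an actual submission schema.
indicator = {
    "type": "indicator",
    "observed": datetime(2015, 12, 1, tzinfo=timezone.utc).isoformat(),
    # 203.0.113.0/24 is a documentation-only address range
    "pattern": "ipv4-addr:value = '203.0.113.45'",
    # usage limitation: shared for cybersecurity defense only
    "purpose": "cybersecurity-defense",
}

payload = json.dumps(indicator)
```

The point of the proposed usage limitations is visible even in a toy record like this: the data shared is narrowly scoped to the threat itself, with an explicit defensive purpose attached.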

Given concerns surrounding government usage of the data and privacy protection, it is frequently overlooked that these bills provide private-sector entities the same liability protections when they exchange information with one another, even with no government involvement in the process at all. In this way, the legislation aims to address concerns about legal liability, antitrust violations, and protection of intellectual property and other proprietary business information that have long been obstacles to rapid information sharing within industry.

In order to be covered by the liability protections, which are fairly narrow, companies will need to ensure that the information they share fits the forthcoming definitions of “cyber threat indicator” and “defensive measure” and that they are sharing the information for no other reason than cybersecurity defense. As an example, information shared amongst companies regarding consumer violation of license agreements is likely to be explicitly excluded from liability protection under the new law. Further, companies are likely to be responsible for scrubbing data of any personally identifiable information before sharing it. This will require companies participating in information sharing initiatives to have some controls in place to ensure that they are sharing the right information for the right purpose and not running afoul of privacy protections.

On its surface, this legal-speak may not sound incredibly game changing, especially for those companies already accepting some of the risk of participation in information sharing initiatives. But consider that even when companies decide to share information, lengthy internal legal reviews frequently prevent companies from sharing it quickly enough to be of value to their own mitigation efforts or a useful early warning for others. New liability protections hold the potential to shorten that legal review significantly if companies can put in place a streamlined process to ensure the data they share meets the criteria for coverage under the law.

The key challenge for companies will be separating the data they need to share (cyber threat indicators and defensive measures) from the data they need to protect (PII) – and doing so quickly enough that the information shared is still relevant. Fortunately, a number of new solutions and standards aim to automate much of this process.
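As a rough sketch of what such automation might look like, here is a minimal, hypothetical scrubber that strips obvious PII (e-mail addresses and long digit runs such as account or phone numbers) from free-text indicator descriptions before sharing. Real deployments would need far more thorough PII detection than two regular expressions:

```python
import re

# Hypothetical PII scrubber for indicator descriptions. The patterns
# below are deliberately simple and illustrative.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
DIGITS_RE = re.compile(r"\b\d{7,}\b")  # account numbers, phone numbers

def scrub(text: str) -> str:
    """Redact e-mail addresses and long digit runs before sharing."""
    text = EMAIL_RE.sub("[email removed]", text)
    return DIGITS_RE.sub("[number removed]", text)

scrubbed = scrub("Phish from alice@example.com, victim acct 12345678")
```

Running a step like this automatically, rather than routing every outgoing report through a legal review, is exactly the kind of streamlined process that could make shared indicators timely enough to matter.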

As an industry, we’ve known for a long time that we need to get better at sharing cyber threat information to reduce uncertainty around cyber incidents and get ahead of our adversaries. While legislation is certainly not a cure-all, the government has done its part to clear at least one of the longstanding hurdles to effective cybersecurity collaboration by addressing many of the industry’s legal concerns. It will be interesting to watch as the guidance around the implementation of the bill progresses and see whether the industry is finally able to use information sharing as a key factor in staying ahead of the bad guys.

Paul Kurtz is the CEO and cofounder of TruSTAR Technology. Prior to TruSTAR, Paul was the CISO and chief strategy officer for CyberPoint International LLC where he built the US government and international business verticals. Prior to CyberPoint, Paul was the managing partner … View Full Bio

Article source: http://www.darkreading.com/attacks-breaches/congress-clears-path-for-information-sharing-but-will-it-help/a/d-id/1323392?_mc=RSS_DR_EDT

Congress Clears Path for Information Sharing But Will It Help?

The key challenge companies will face with the new Cybersecurity Information Sharing Act of 2015 is how quickly they can separate data they need to share with data they need to protect.

With the Senate’s recent passing of the Cybersecurity Information Sharing Act of 2015 (CISA), we are now very close to having a law that provides companies liability protection when sharing information around cybersecurity threats. In the coming weeks, Congressional leaders and staff will be working in conference to officially merge CISA with the two complementary House bills passed in April, the Protecting Cyber Networks Act (PCNA) and the National Cybersecurity Protection Advancement Act of 2015 (NCPAA).

All three bills have the following in common: they provide liability protection for companies sharing cyber threat indicators and defensive measures for a cybersecurity purpose both among themselves and with the government. There are some differences in how these three key terms are defined across the bills, and they are not insignificant to the eventual implementation of the law.

The bills also offer differing levels of prescriptive details around the process by which this information is to be shared and the role of various government entities in ensuring compliance. Given the technical nature of the discussion and the impact these definitions have on the resolution of some of the privacy concerns surrounding the bills, (as well as the recent changes in committee leadership), we can expect a challenging conference process that is likely take at least a few weeks once underway.

The debate surrounding the bills has largely focused on privacy concerns, with far less discussion around how they will actually impact information sharing programs now that they have been passed. The resolution of the differences between the bills during the conference process leaves some open questions on implementation, but we can draw some general conclusions given what we know now.

[For more information on the Cybersecurity Information Sharing Act of 2015, read 5 Things To Know About CISA.]

It appears that we will see a process whereby the Department of Homeland Security, likely through the National Cybersecurity and Communications Integration Center (NCCIC), will play the lead role both in collecting and distributing information shared with the government. It is clear that legislators envision some type of DHS-managed portal to accept and communicate cyber threat indicators and defensive measures from any entity in real time. The final legislation is also likely to include explicit limitations around how government can use the data it receives with the objective of confining usage to cybersecurity defense.

Given concerns surrounding government usage of the data and privacy protection, it is frequently overlooked that these bills provide private-sector entities the same liability protections when they exchange information with one another, even with no government involvement in the process at all. In this way, the legislation aims to address concerns about legal liability, antitrust violations, and protection of intellectual property and other proprietary business information that have long been obstacles to rapid information sharing within industry.

In order to be covered by the liability protections, which are fairly narrow, companies will need to ensure that the information they share fits the forthcoming definitions of “cyber threat indicator” and “defensive measure” and that they are sharing the information for no other reason than cybersecurity defense. As an example, information shared amongst companies regarding consumer violation of license agreements is likely to be explicitly excluded from liability protection under the new law. Further, companies are likely to be responsible for scrubbing data of any personally identifiable information before sharing it. This will require companies participating in information sharing initiatives to have some controls in place to ensure that they are sharing the right information for the right purpose and not running afoul of privacy protections.

On its surface, this legal-speak may not sound incredibly game changing, especially for those companies already accepting some of the risk of participation in information sharing initiatives. But consider that even when companies decide to share information, lengthy internal legal reviews frequently prevent companies from sharing it quickly enough to be of value to their own mitigation efforts or a useful early warning for others. New liability protections hold the potential to shorten that legal review significantly if companies can put in place a streamlined process to ensure the data they share meets the criteria for coverage under the law.

The key challenge for companies will be separating the data they need to share (cyber threat indicators and defensive measures) from the data they need to protect (PII) – and doing so quickly enough that the information shared is still relevant. Fortunately, there are a number of new solutions and standards aimed at automating much of this process.
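One common design for that separation inverts the redaction problem: rather than trying to strip personal data out, extract only the machine-readable indicators and share nothing else. A minimal sketch of that allow-list approach, with invented field names:

```python
import re

# Illustrative allow-list approach (field names are hypothetical): pull only
# the machine-readable indicators out of a record, discarding everything
# else, so personal details never enter the shared feed.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256_RE = re.compile(r"\b[0-9a-f]{64}\b")

def extract_indicators(text: str) -> dict:
    """Return only the IPv4 addresses and SHA-256 hashes found in the text."""
    return {
        "ipv4": IPV4_RE.findall(text),
        "sha256": SHA256_RE.findall(text),
    }

log = ("user jdoe opened attachment with sha256 "
       + "a" * 64 + " after contact from 198.51.100.9")
print(extract_indicators(log))
```

The allow-list design fails safe: anything the extractor does not recognise – including the username in the example log line – simply never leaves the building.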

As an industry, we’ve known for a long time that we need to get better at sharing cyber threat information to reduce uncertainty around cyber incidents and get ahead of our adversaries. While legislation is certainly not a cure-all, the government has done its part to clear at least one of the longstanding hurdles to effective cybersecurity collaboration by addressing many of the industry’s legal concerns. It will be interesting to watch as the guidance around the implementation of the bill progresses and see whether the industry is finally able to use information sharing as a key factor in staying ahead of the bad guys.

Paul Kurtz is the CEO and cofounder of TruSTAR Technology. Prior to TruSTAR, Paul was the CISO and chief strategy officer for CyberPoint International LLC where he built the US government and international business verticals. Prior to CyberPoint, Paul was the managing partner … View Full Bio

Article source: http://www.darkreading.com/attacks-breaches/congress-clears-path-for-information-sharing-but-will-it-help/a/d-id/1323392?_mc=RSS_DR_EDT

The Programming Languages That Spawn The Most Software Vulnerabilities

PHP, ASP Web scripting languages breed more vulnerabilities than Java, .NET programming platforms, Veracode’s new state of software security report says.

The wave of WordPress and Drupal vulnerability warnings and patches over the past couple of years, as well as the never-ending discovery of SQL injection bugs in Web applications, can actually be traced back to their underlying scripting language — PHP.

Some 86% of applications written in PHP contained at least one cross-site scripting (XSS) vulnerability and 56% came with at least one SQL injection bug, according to new research released today from Veracode, which studied applications written in the most pervasive programming languages — PHP, Java, Microsoft Classic ASP, .NET, iOS, Android, C and C++, JavaScript, ColdFusion, Ruby, and COBOL. The data is based on its cloud-based scans and code analysis of more than 50,000 applications in the past 18 months.

Some 64% of applications written in Classic ASP and 62% of those written in ColdFusion had at least one SQL injection bug. Meantime, .NET and Java fare the best, with far fewer instances of security flaws in their applications: 29% of .NET apps and 21% of Java apps were found with at least one SQL injection bug.

Chris Wysopal, founder and CTO of Veracode, says PHP’s problems are one of the reasons SQL injection — one of the most abused yet easiest vulns to fix — just won’t die. “When I see a breach, one of the things that sticks out in my head is ‘I’ll bet that was a PHP site,’” Wysopal says. “What keeps some of these vulnerabilities alive and well is using languages that are harder to program securely.

“I had always suspected that scripting languages are worse. Now we have solid data to show we are getting twice the number of serious issues on those languages,” he says.

It comes down to how these programming languages are designed, and their use. While Java and .NET have built-in functions to reduce the risk of buffer overflows, XSS, and SQL injection, PHP and ASP don’t come as well-equipped and have fewer security APIs. According to Veracode’s report, it traditionally has been difficult to write apps in PHP that “bind parameters in SQL queries,” making it more prone to SQL injection flaws.
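The parameter-binding distinction the report describes is easy to demonstrate. The contrast below is sketched in Python’s sqlite3 module rather than PHP (PHP’s PDO offers the same kind of bound parameters), with an invented table and attacker string:

```python
import sqlite3

# Contrasting string-built SQL with bound parameters. The table and the
# attacker input are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query itself.
unsafe = "SELECT role FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(unsafe).fetchall())   # injection succeeds: [('admin',)]

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # no match: []
```

Languages and frameworks that make the second form the path of least resistance are, in effect, what separates the top of Veracode’s chart from the bottom.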

“It’s harder to program in those languages [scripting languages]. There are not as many built-in functions,” Wysopal says. “And .NET and Java programs are typically used by computer science graduates who learned those languages in school. A lot of the scripting languages like ColdFusion and ASP came out of the Web dev world, where you’re designing websites and starting to learn coding, [and] to make sites more interactive.”

These languages also fail the OWASP Top 10: four out of five apps written in PHP, Classic ASP, and ColdFusion failed at least one of the application security standard’s benchmarks. Veracode points out that this has a big impact on the Net overall, as some 70% of content management systems on the Web are the PHP-based WordPress, Drupal, and Joomla. So “organizations seeking to use these CMSes should carefully plan their deployments,” Veracode said in its report.

“If I put on my attacker hat and want to break into a site, I’m going to find PHP sites,” Wysopal says.

Developers are basically stuck with the language and platform their organization chooses. “It’s not often that a developer gets to select that,” he says. “They are kind of [limited] by the environment and language they need to build their applications on.”

That’s not to say they can’t be better trained to write secure code. Veracode also studied vulnerability remediation rates, which showed a 30% improvement in vuln fixes in organizations that employ secure coding training for their developers.

Mobile

Veracode also found mobile applications in both Android and iOS contain rampant cryptographic weaknesses. There isn’t much daylight between Android and iOS app crypto bugs, either: some 87% of Android apps were found with the bugs, and 81% of iOS apps.

Wysopal says it came down to four issues: insufficient entropy or “randomness;” not checking SSL certificates; not encrypting sensitive information to disk; and using outdated crypto algorithms. “Developers are not understanding how to write crypto properly,” he says.
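Two of the four mistakes Wysopal lists are easy to show side by side. The report covered mobile apps, but the pitfalls are language-agnostic; here they are contrasted in Python:

```python
import hashlib
import random
import secrets

# Insufficient entropy: random.Random is a seedable PRNG, so any "token"
# it produces is reproducible by anyone who knows or guesses the seed.
weak = random.Random(1234).getrandbits(128)
also_weak = random.Random(1234).getrandbits(128)
assert weak == also_weak  # fully predictable from the seed

# The fix: a CSPRNG that draws from the OS entropy pool instead.
token = secrets.token_hex(16)  # 16 random bytes as 32 hex characters

# Outdated algorithm: MD5 is broken for security use; prefer SHA-256.
digest = hashlib.sha256(b"sensitive").hexdigest()
print(len(token), len(digest))
```

The other two mistakes (skipping SSL certificate checks, writing sensitive data to disk unencrypted) are API-usage errors of the same flavour: the secure option exists, but nothing forces the developer onto it.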

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: http://www.darkreading.com/vulnerabilities---threats/the-programming-languages-that-spawn-the-most-software-vulnerabilities/d/d-id/1323397?_mc=RSS_DR_EDT

Robot that was “busted” for buying drugs on the Dark Web is back

Random Darknet Shopper, a bot that was busted earlier this year by prosecutors for buying ecstasy on a Dark Web marketplace, is back at it again.

Random Darknet Shopper is an “automated online shopping bot,” and the basis of an art installation developed by a team of Swiss artists calling themselves !Mediengruppe Bitnik.

In the past two weeks, the bot purchased a couple of items that you could definitely call random – a knock-off Lacoste polo shirt from Thailand for $35, and a pair of Bitcoin USB miners from the US for $25.

With a weekly budget of $100 in bitcoins, Random Darknet Shopper scours the Dark Web for items it can purchase within its budget, and orders the goods to be shipped directly to the art installation, where they are put on display.
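The artists haven’t published their bot here, but the purchase logic described above is simple enough to sketch as a toy. All the item names and prices below are invented for illustration:

```python
import random

# Toy sketch of a budget-constrained random purchase (listings invented;
# the artists' real bot is more involved than this).
WEEKLY_BUDGET_USD = 100

def pick_random_item(listings, budget=WEEKLY_BUDGET_USD, rng=random):
    """Choose one random listing the bot can afford, or None if none fit."""
    affordable = [item for item in listings if item["price"] <= budget]
    return rng.choice(affordable) if affordable else None

listings = [
    {"name": "polo shirt", "price": 35},
    {"name": "USB miner pair", "price": 25},
    {"name": "laptop", "price": 400},
]
rng = random.Random(7)  # seeded only so the example is repeatable
print(pick_random_item(listings, rng=rng))
```

The randomness is the art: within its budget, the bot’s owner genuinely does not know what will arrive at the gallery next.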

The Bitcoin miners and shirt are the first purchases the bot has made since it ran into trouble with the law.

The Random Darknet Shopper gained international attention last January when Swiss authorities seized the installation from a gallery in St. Gallen, Switzerland.

A few months later, the authorities released all of the work – including the laptop running the Random Darknet Shopper program and the objects it purchased – back to the artists. All except the ecstasy, which was destroyed.

The artists wrote on their website that charges had been dropped:

We as well as the Random Darknet Shopper have been cleared of all charges. This is a great day for the bot, for us and for freedom of art!

It seems the prosecutors had a change of heart and decided that the art installation was a good way to spark public debate – about the Dark Web, robots, drugs, and art.

Random Darknet Shopper’s legal troubles do raise some interesting questions, like: can you arrest a robot? And, if a bot commits a crime, is the programmer liable?

The art project helps to answer another question that anyone should ask about buying goods from the Dark Web – can you trust that you’ll get what you paid for?

You sure can buy a lot of strange stuff on the Dark Web.

In addition to the 10 ecstasy pills that Random Darknet Shopper bought for $48, the bot previously purchased a baseball cap fitted with a spycam, a phony Sprite “stash” can for hiding drugs or cash, cheap cigarettes from Moldova, counterfeit Diesel jeans, and “Kanye West” sneakers from China for $75.

For the previous installation in Switzerland, Random Darknet Shopper bought all of its goodies from the Dark Web marketplace known as Agora, which has since been shut down.

This time around, the bot is using AlphaBay, currently the largest marketplace on the Dark Web, according to the artists.

Random Darknet Shopper will be on display at an art gallery in London beginning 11 December.

What will Random Darknet Shopper buy next?

Image of robot courtesy of Shutterstock.com.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OecU9TIIM9o/