STE WILLIAMS

Ads in mobile apps – SophosLabs invites your thoughts on “How far is far enough?”

It’s nearly October, which means several things.

Spring has sprung, for a start: we’ve passed the vernal equinox, meaning it’s as good as summer already. (Your mileage may vary.)

It’s Cybersecurity Awareness Month (CSAM), starting tomorrow.

OK, CSAM is a US thing, but the lessons apply to us all, so we should all listen up.

And the annual Virus Bulletin conference starts on Wednesday in Berlin, Germany.

Numerous Sophos researchers will be giving papers this year, and with two Naked Security regulars in attendance (Chester Wisniewski and John Hawes), we hope to bring you a blow-by-blow account of who says what, and why, as the conference unfolds.

Even though the event hasn’t started, however, I’d like to tell you about a paper that two of my long-term friends and colleagues from SophosLabs will be presenting.

Vanja Svajcer and Sean McDonald will be presenting a mixture of research, analysis and proposal they’ve written up under the headline Classifying Potentially Unwanted Applications in the mobile environment.

At this point, you’re probably wondering:

  • Why a write-up of a talk that hasn’t been given yet?
  • Isn’t every application potentially unwanted to someone?

Taking the second question first, you need to know that Potentially Unwanted Applications, or PUAs, are programs that aren’t unequivocally malicious.

Nevertheless, PUAs sail close enough to the metaphorical wind that well-informed system administrators often want to ban them from their networks, or at least to regulate them tightly.

Often, security products can’t block this sort of application by default, no matter how reasonable that might seem, for legalistic reasons.

For example, it’s easy to argue that a computer virus – a self-replicating program that spreads without authorisation or control – should be blocked outright.

On the other hand, you can argue that software that isn’t intrinsically illegal, but merely happens to be ripe for abuse, ought to be given the benefit of the doubt, and should be classified somewhere between “known good” and “outright bad.”

Indeed, if you are the vendor of such software – spyware that is sold to monitor children, or to investigate an errant spouse, for example – you might even choose to argue such a matter through the courts.

That’s why most security software has a category of possible threats known as PUAs, or perhaps PUPs (potentially unwanted programs), or Potentially Unwanted Software. (That’s Microsoft’s name, and the acronym proves that at least someone in Redmond has a sense of humour.)

PUAs are programs that some people may want to use, that don’t openly break the law, and yet that many people will want to block.

And now to the first question.

I’m writing about Vanja’s and Sean’s yet-to-happen talk in order to offer you a chance, in the comments below, to pose questions (or blurt out opinions) that I can send to them, as part of helping them with their work.

I’ll pass your comments and questions to them to consider in the “question time” at the end of their talk, thus giving you a chance of having your say from a distance!

After all, most of us aren’t going to be attending the VB 2013 conference (though there is still time to register if you’re in the Berlin area), but we probably have some feelings – perhaps even strong feelings – about PUAs in the mobile ecosystem.

That’s down to adware, one of the mobile world’s biggest sub-categories of PUA.

In Sean’s and Vanja’s own words:

Has the world of PUAs changed with the advent of mobile apps? As the revenue model for application developers changes, should the security industry apply different criteria when considering mobile potentially unwanted applications?

In mid 2013, there are over 700,000 apps on Google Play and over 800,000 apps on iTunes, with numerous alternative application markets serving their share of Android apps. The major source of income for most of these apps is advertising revenue, realised by integrating one or more advertising frameworks.

The difference between malware, PUAs and legitimate apps for mobile platforms is often less clear than in the desktop world… This leads application developers as well as developers of individual advertising frameworks into confusion about which features are acceptable.

Indeed, if you think about it, the appearance of banner ads inside mobile apps seems much more tolerable, and tolerated, than the same sort of thing in desktop applications.

Even amongst online ad-haters, there seems to be a general recognition that ads in mobile apps, done gently enough, represent a fair way for developers to earn a crust without needing to charge an up-front fee.

(Or there’s a reasonable and modest fee – typically a dollar or three – that will turn the ads off but still reward the developers.)

Vanja and Sean’s concern, if they will forgive me for oversimplifying what they have argued, is that the computer security industry would like to be proactive in stamping out aggressive – possibly even dangerous and privacy-sapping – mobile adware behaviour.

At the same time, the security industry doesn’t want to spoil the ad-supported mobile app industry for those who are prepared to play fair.

But where do we draw the line?

Sean and Vanja identify several grades of adware aggression in the mobile world:

  • Banner ads. (Appear in ad-sized windows in the app itself, and are visible only in the app.)
  • Interstitial ads. (Typically fill the screen temporarily, for example between levels in gameplay.)
  • Push or notification ads. (Use the operating system notification area to present their message.)
  • Icon ads. (Appear outside the app, even after it exits, typically as home screen icons.)
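One way to make a taxonomy like that operational is to grade observed ad behaviour and block anything above a chosen threshold. The sketch below is my own illustration of the idea, not the classification criteria from Vanja and Sean's paper:

```python
# A toy sketch (not Sophos's actual criteria) ranking the four ad styles
# above by how far they reach beyond the app itself.

from enum import IntEnum

class AdAggression(IntEnum):
    BANNER = 1        # visible only inside the app
    INTERSTITIAL = 2  # fills the screen, but only while the app runs
    NOTIFICATION = 3  # escapes the app into the OS notification area
    ICON = 4          # persists on the home screen after the app exits

def classify(escapes_app: bool, persists_after_exit: bool,
             fills_screen: bool) -> AdAggression:
    """Map observed ad behaviour to a rough aggression grade."""
    if persists_after_exit:
        return AdAggression.ICON
    if escapes_app:
        return AdAggression.NOTIFICATION
    if fills_screen:
        return AdAggression.INTERSTITIAL
    return AdAggression.BANNER

# A policy might then block anything above a chosen threshold:
THRESHOLD = AdAggression.INTERSTITIAL
print(classify(False, False, False) <= THRESHOLD)  # True: banner allowed
print(classify(True, True, False) <= THRESHOLD)    # False: icon ad blocked
```

The interesting policy question, of course, is where THRESHOLD should sit – which is exactly what the talk asks.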

So, what do you think? How far is too far in the ad-funded mobile ecosystem?

Let us know and we’ll pose your questions and comments from the floor at the Virus Bulletin conference…

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7fFH879eiMQ/


Metasploit creator seeks crowd’s help for vuln scanning

Security outfit Rapid7 has decided that there’s just too much security vulnerability information out there for any one group to handle, so its solution is to try and crowd-source the effort.

Announcing Project Sonar, the company is offering tools and datasets for download, with the idea that the community will provide input into the necessary research.


The brainchild of Metasploit creator HD Moore, the aim of Project Sonar is to scan publicly-facing Internet hosts, compile their vulnerabilities into datasets, mine those datasets, and share the results with the security industry.

Even though there’s widespread insecurity across the Internet, Rapid7 says “at the moment there isn’t much collaboration and internet scanning is seen as a fairly niche activity of hardcore security researchers.

“We believe that the only way we can effectively address this is by working together, sharing information, teaching and challenging each other. Not just researchers, but all security professionals.”

None of the tools HD Moore’s blog post lists are brand-new: they’re familiar names like ZMap (led by the University of Michigan), Nmap and MASSCAN. The first three datasets Rapid7 collected for the project cover IPv4 TCP banners and UDP probe replies; reverse DNS PTR records; and SSL certificates.
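The datasets are huge, but exploring a slice of one is straightforward. Here is a minimal sketch that tallies second-level domains in a reverse-DNS capture; the three-column CSV layout (timestamp, IP, PTR name) and the sample records are my own assumptions, not the actual Sonar schema:

```python
import csv
import io
from collections import Counter

# Hypothetical three-column layout: timestamp, IP address, PTR name.
# Real Sonar files are vastly larger and their exact format may differ.
sample = """1380499200,192.0.2.10,host10.example.net
1380499200,192.0.2.11,host11.example.net
1380499201,198.51.100.7,mail.example.org
"""

domains = Counter()
for ts, ip, name in csv.reader(io.StringIO(sample)):
    # Count second-level domains to see which networks dominate a capture.
    domains[".".join(name.split(".")[-2:])] += 1

print(domains.most_common(1))  # [('example.net', 2)]
```

It is exactly this kind of aggregate mining, scaled up across millions of records, that Rapid7 hopes the crowd will take on.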

Moore told SecurityWeek it’s the size of the datasets that demands a crowd approach: “If we try to parse the data sets ourselves, even with a team of 30 people, it would take multiple years just to figure out the vulnerabilities in the data set,” he said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/30/hd_more_seeks_crowd_help_for_vuln_scanning/

Hundreds of hackers sought for new £500m UK cyber-bomber strike force

The UK’s Ministry of Defence wants to recruit an army of computer experts to serve as “cyber reservists” to defend national security.

Defence Secretary Philip Hammond said the MoD will take on “hundreds” of IT wizards to work “at the cutting edge of the nation’s cyber defences” at a cost of up to £500m. The tech talent will work with existing government IT security teams to protect critical infrastructure and data stores were the country to come under electronic attack.


Speaking at the annual Conservative Party conference this week, Hammond said Blighty was investing more and more of its defence budget in “cyber” capabilities.

“Last year our cyber defences blocked around 400,000 advanced malicious cyber threats against the government’s secure internet alone, so the threat is real,” he claimed, according to Reuters.

“But simply building cyber defences is not enough: as in other domains of warfare, we also have to deter. Britain will build a dedicated capability to counterattack in cyberspace and if necessary to strike in cyberspace.”

He also told the Mail on Sunday that these cyber strikes could knock out enemy communications, planes, ships and nuclear and chemical weapons. He told the paper that “cyber weapons” could be used along with regular munitions in future conflicts.

The reserve forces will work with the Joint Cyber Units in Corsham and Cheltenham, as well as other units in the defence network and will be recruited from folks leaving the Armed Forces along with IT workers without military experience. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/30/uk_cyber_reserve_force/

Would you hire a hacker to run your security? ‘Yes’ say Brit IT bosses

More than two in three IT professionals would consider ex-hackers for security roles, providing they have the right skills to do the job, a survey has found.

In addition, 40 per cent of respondents to CWJobs’ survey of 352 IT bods reckoned there aren’t enough skilled security professionals in the UK technology industry.


As if that news wasn’t surprising enough, two thirds of those surveyed stated that they would consider re-training in order to take on a role in IT security. Most respondents also believed that recruiting security professionals should be a priority in IT recruitment programmes.

Richard Nott, website director at CWJobs.co.uk, commented: “These findings present an interesting tactic for those keen to find new ways to meet the demand for security professionals within their organisations – though perhaps one that should be treated with some caution.”

Seven in ten (70 per cent) of those surveyed stated that demand for professionals with security skills is growing, and 95 per cent believe that large organisations are in the greatest need.

The survey was released on Friday, days before the extension of a CESG-backed scheme to certify the competence and skills of cyber-security professionals working for the UK government to individuals working in the private sector.

The CESG Certified Professional (CCP) certifications are valid for three years and provide a CESG-approved benchmark of skills, knowledge and expertise in cyber security. The whole scheme appears to be an attempt to regulate an industry where skills and experience have always counted for much more than qualifications.

If CWJobs’ (admittedly limited) survey is to be believed, the market wants more skilled experts, whatever their background, and government certification isn’t a requirement of industry – providing someone’s around who can do the job. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/30/it_pros_would_hire_exhackers/

Commerce In A World Without Trust

Trust is kind of a squishy concept. If you refer back to the definition from our pals at Merriam-Webster, trust is the “belief that someone or something is reliable, good, honest, effective, etc.” Reliable? Honest? Sounds great, right?

Our world of increasingly frequent online commerce is based on trust. Your merchants need to trust that you are who you say you are. You trust you’re dealing with the legitimate merchant/vendor that you think it is. Ultimately the entire process depends on trust that your transaction will be accepted and that, at some point, you’ll receive goods or a service in exchange for your payment.

Of course, fraud has existed since the beginning of time. Identity theft makes it difficult for merchants to know who is actually buying something. Site scraping and phishing make it difficult for consumers to know whether the site they are using is legitimate. A third party emerged to bridge the gap and provide financial protection to both sides of the online transaction — credit card brands (and their associated issuers) vouch for a consumer to the merchant and protect the consumer from a fraudulent merchant. For their 2-3.5 percent transaction fees, both merchants and consumers are protected from fraud. As long as the card brands don’t suffer more loss than they make in transaction fees, the system works.
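The arithmetic behind "the system works" is simple enough to sketch. The numbers below are purely illustrative; real interchange economics involve chargebacks, fee tiers and liability rules that this one-liner ignores:

```python
def card_brand_margin(volume: float, fee_rate: float, fraud_rate: float) -> float:
    """Fee income minus fraud losses absorbed, per the simple model above.

    Illustrative only: assumes the card brand eats 100% of fraud losses
    and earns a flat fee_rate on all transaction volume.
    """
    return volume * fee_rate - volume * fraud_rate

# At a 2.5% fee the model holds while fraud stays under 2.5% of volume:
print(card_brand_margin(1_000_000, 0.025, 0.010))  # positive: fees cover losses
print(card_brand_margin(1_000_000, 0.025, 0.040))  # negative: the model breaks
```

The tipping point Mike describes is simply the moment fraud_rate crosses fee_rate — at which point fees must rise, or the model collapses.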

But what happens when we hit the tipping point — when we don’t know who is who, and online fraud is so rampant that the models the financial institutions use to make sure they don’t lose money on transactions become obsolete? If those models break down, then transaction fees could skyrocket. Or maybe they would bottom out as aggressive financials look to gain market share (we’ve seen that movie before). No one knows what would happen.

After reading Brian Krebs’ totally awesome investigatory piece, “Data Broker Giants Hacked,” we may be closer to that point than we wanted to believe. I mean, we always knew fraud was rampant, but reading about the SSNDOB service that traded in personal data takes it to another level given the recent trends in authentication technology.

I know, you’re probably thinking, “What’s the big deal?” ChoicePoint got popped nearly a decade ago, and this is the same thing, right? Well, not so much. It turns out that many organizations (especially financial organizations) use adaptive authentication to reduce the risk of their transactions, which involves asking personal questions to validate a consumer’s identity depending on what they are trying to do.

If the attackers have access to many (if not all) of these standard questions, then you can be as adaptive as you want — you still can’t be sure who is on the other end of a connection. Even better, many of the new health-care insurance exchanges rolling out in the U.S. heavily use this kind of adaptive authentication to validate citizens and offer services. Soon enough your dog may be online buying health insurance from one of these exchanges (though I’m not sure if there will be a checkbox for ringworm on the medical history page).
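The weakness is easy to demonstrate: knowledge-based authentication proves possession of the answers, not identity. A toy sketch (the question fields and answers are invented for illustration):

```python
# Toy knowledge-based authentication: if the "secret" answers are sitting
# in a leaked data-broker record, passing the check proves nothing about
# who is actually typing them in.

kba_record = {
    "mother_maiden_name": "smith",  # hypothetical stored answers
    "previous_street": "elm",
}

def kba_check(answers: dict) -> bool:
    """Accept anyone who supplies the stored answers."""
    return all(answers.get(q) == a for q, a in kba_record.items())

legitimate_user = {"mother_maiden_name": "smith", "previous_street": "elm"}
attacker_with_broker_dump = dict(legitimate_user)  # same leaked answers

print(kba_check(legitimate_user))             # True
print(kba_check(attacker_with_broker_dump))   # True: indistinguishable
```

Once the answers leak, the two callers are cryptographically identical — which is precisely why a breach of the data brokers undermines every system built on those questions.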

If we live by the old adage that the Internet is as secure as it needs to be, we need to question whether we’re getting to the point where we have to reset expectations of security. Do we have to fundamentally rethink our dependence on personal information for authentication, knowing full well that this data is easily accessible and not really a secret? Remember the old days when the Social Security number was a primary unique identifier and something you had to protect at all costs? Pete Lindstrom was early to point out the misplaced reliance on the SSN since it’s neither unique nor hard to get for an attacker. It turns out he was right, and now we should be asking the same questions about all of this other personal information. Are your previous addresses and mother’s maiden name becoming as useless as the SSN?

If you think about alternative technologies, we’ve learned that biometrics will be a tough sell, as evidenced by Apple’s TouchID technology, so we’ll need to expect pushback about centrally storing biometric information. Do the financial institutions just jack up their shrinkage estimates and adjust transaction fees accordingly? Do consumers become more aware and go back into brick-and-mortar stores? Although it’s not like personal data captured in the physical world has proved any more secure.

Some days I wish my crystal ball were back from the shop. If I had to bet, I’d bet on Mr. Market gradually adjusting transaction fees until it’s too expensive to do online commerce, and that will result in a wave of new security/authenticity technology to make the Internet once again “as secure as it needs to be” and restore balance to the Force that is online commerce. Until then, monitor the crap out of your financial accounts because you can’t trust anyone or anything nowadays.

Mike Rothman is President of Securosis and author of the Pragmatic CSO

Article source: http://www.darkreading.com/management/commerce-in-a-world-without-trust/240161994

Monday review

Catch up with the last seven days of security stories in our weekly roundup.

Watch the top news in 60 seconds, and then check out the individual links to read in more detail.

Monday 23 September 2013

Tuesday 24 September 2013

Wednesday 25 September 2013

Thursday 26 September 2013

Friday 27 September 2013

Saturday 28 September 2013

Sunday 29 September 2013

Would you like to keep up with all the stories we write? Why not sign up for our daily newsletter to make sure you don’t miss anything? You can easily unsubscribe if you decide you no longer want it.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8XUWtLqPjNc/

LA schoolchildren found having too much fun on their hacked iPads

Girls with iPad. Image courtesy of Shutterstock.

The US school system of Los Angeles, in the state of California, spent $1 billion this year to equip every student with an iPad.

Half of the funds were dedicated to purchasing the tablets, while the other half went to power the WiFi infrastructure that ideally should have fed the students a steady, nourishing diet of bore-your-brains-out curriculum material.

Unfortunately, a virulent plague of fun broke out within the first week of iPad possession after students quickly learned how to kick over the so-called firewall keeping them away from truly interesting things such as Twitter and Facebook.

300 high school ‘hackers’

According to the Los Angeles Times, nearly 300 students at Theodore Roosevelt High School managed to “hack” through security (if you can use that word with a straight face, given how simple it was) so as to surf the Web on their new school-issued iPads.

The LA Times reported on Wednesday that it had gotten a peek at a confidential memo sent by top school brass to senior staff.

In that memo, LA Unified School District Police Chief Steven Zipperman suggested that the district might want to delay distribution of the devices.

It had come to light the day before that students were suffering an outbreak of non-schoolwork-related glee caused by sending tweets, socializing on Facebook, watching videos on YouTube and streaming music through Pandora.

Zipperman wrote:

I’m guessing this is just a sample of what will likely occur on other campuses once this hits Twitter, YouTube or other social media sites explaining to our students how to breach or compromise the security of these devices. … I want to prevent a ‘runaway train’ scenario when we may have the ability to put a hold on the roll-out.

The LA Times reports that the problem of iPad-related fun was also an issue at Westchester High and the Valley Academy of Arts and Sciences in Granada Hills.

When the newspaper asked students to explain what sophisticated hacking technique was used to break the security on the iPads, Roosevelt students explained that the trick was to delete their personal profile information.

Students had begun to tinker with the security lock on the tablets because “they took them home and they can’t do anything with them,” Roosevelt senior Alfredo Garcia told the newspaper.

With their profiles deleted, the students were then free to surf at will.
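The design flaw generalises: any restriction enforced purely by state the user can delete is no restriction at all. Here is a toy model of the reported behaviour (my own simplification, not Apple's actual device-management mechanics):

```python
# Toy model of the LA iPad flaw: the web filter lived in a profile the
# student could delete, so enforcement vanished along with the profile.

class Tablet:
    def __init__(self):
        # School-installed profile carrying the filtering policy.
        self.profile = {"web_filter": True}

    def delete_profile(self):
        self.profile = None  # the one step the students discovered

    def can_browse_freely(self) -> bool:
        # Enforcement only happens while the profile is still present.
        return not (self.profile and self.profile.get("web_filter"))

ipad = Tablet()
print(ipad.can_browse_freely())  # False: filter active
ipad.delete_profile()
print(ipad.can_browse_freely())  # True: restriction gone
```

The fix is equally general: enforcement has to live somewhere the user cannot reach — on the network, or in device-management settings the user cannot remove.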

I’m making fun of the incident only because it boggles my mind.

This school district is on track to spend a total of $1 billion on a technology rollout of expensive gadgets that were secured with a user profile that could be deleted.

Seriously? That was the extent of the security put on these devices?

Was this quote-unquote security vetted by anybody, at any point in the process?

Mind: boggled.

It’s funny, but this incident actually represents a serious problem.

As two senior administrators said in a memo to LA schools Superintendent John Deasy that the LA Times reviewed, the lack of strong security meant that outside of the district’s network, children were free to download content and applications and browse without restriction.

The memo read:

As student safety is of paramount concern, breach of the … system must not occur.

Endangering the school network is one potential danger of unrestricted surfing. The internet can be a slimy place even for grownups, what with the nastyware you can pick up at dodgy sites.

For children, unsupervised, unfettered surfing is dangerous on a deeper, far more disturbing level still.

Those dangers include sextortion, often targeting children, as well as cyberbullying.

From trolls making death threats against children on Facebook to creeps who hack into cell phones to steal and distribute explicit images of children, the internet can be a swamp.

LA has reportedly stopped distributing iPads.

I would sincerely hope that before it starts handing them out again, it finds a way to secure the devices a bit more thoroughly and makes sure it properly configures its firewall.

Kids, it’s not that we don’t want you to have fun online.

We just don’t want you to walk in front of a rattlesnake to do it.

Image of girls with iPad courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DKAypU7eXQM/

Sextortionist who preyed on Miss Teen USA, Cassidy Wolf, turns himself in

Crown and webcam images courtesy of Shutterstock.

The 19-year-old man who reportedly hacked Miss Teen USA’s webcam and threatened to ruin her career by publishing nude photos taken in her bedroom surrendered to FBI agents on Thursday.

Authorities said that the man, Jared James Abrahams, of the US city of Temecula, in California, knew the beauty queen, Cassidy Wolf, in real life.

Abrahams, a college freshman majoring in computer science, will face a charge of extorting nude photos and videos from not only Wolf but more than a dozen other women, including victims in Ireland, Moldova and Canada, the FBI believes.

According to the criminal complaint [PDF], Abrahams allegedly hacked computers and took nude photos or videos of his victims by remotely turning on their webcams.

He would then allegedly email some of his victims, sending copies of the images and threatening to publish them on social media unless the women sent him more nude photos, sent a nude video, or logged onto Skype to do whatever he told them to do for five minutes.

The complaint reports that Abrahams threatened one victim that he’d transform her “dream of being a model … into [the victim] being a porn star” if she didn’t follow his commands.

Upon a search of Abrahams’ house on 4 June 2013, federal agents said they seized a computer, a laptop, a mobile phone, and thumb drives. Those devices contained evidence of hacking software and malware used to take over victims’ computers, the complaint says.

Images and videos of some victims were also found on the devices, according to the criminal complaint.

FBI Special Agent Julie Patton said in the complaint that Abrahams admitted to her and another agent that he had, in fact, infected victims’ computers, watched his victims change their clothes, and used photos against his victims.

Cassidy Wolf. Image: Glenn Francis, www.PacificProDigital.com

The court documents refer to one victim as C.W., and Ms. Wolf has publicly acknowledged that she is that victim.

Abrahams admitted that Wolf was the first hacking victim whom he knew personally.

Abrahams also admitted to getting another victim, M.M., to take off her clothes during a Skype session, after which he pretended to delete the original photos he’d used to sextort her compliance, the complaint says.

On 21 March 2013, Wolf received an email from Facebook saying that somebody was trying to change her account password.

She later learned that somebody had changed her Twitter, Tumblr and Yahoo email passwords and had changed her Twitter profile picture to a half-nude picture.

Within 30 minutes of having received the Facebook message, Wolf received an anonymous email from someone who claimed to have nude photos of her, taken via the webcam on her computer.

Here’s an excerpt from the threatening email, from the court papers:

Here’s what’s going to happen! Either you do one of the things listed below or I upload these pics and a lot more (I have a LOT more and those are better quality) on all your accounts for everybody to see and your dream of being a model will be transformed into a pornstar. Do one of the following and I will give you back all your accounts and delete the pictures. 1) send me good quality pics on snapchat 2) Make me a good quality video 3) Go on skype with me and do what I tell you to do for 5 minutes If you don’t do those or if you simply ignore this then those pics are going up all over the internet. It’s your choice :) Also I’m tracking this email so I’ll know when you open it. If you don’t respond then your pics are going up.

He used a smiley emoticon. Isn’t that adorable? Ugh.

Further analysis turned up forum messages asking about using a fully undetectable (FUD) keylogger. Also, Abrahams allegedly asked for advice on getting victims to download it, given that he “[sucks] at social engineering.”

Other messages reference infecting a schoolmate with “blackshades” and “darkcomet” – both remote-access Trojans (RATs).

Abrahams was freed on $50,000 bail, though a judge confined him to his family’s home, ordered him to wear a GPS monitor, and said he could only use the home computer for schoolwork, with software to be installed that will monitor its use, the Daily Mail reports.

If found guilty, Abrahams could be sentenced to federal prison for up to two years.

The typical advice in webcam hijacking cases is to put a patch – black tape, a sticker or a bandage, for example – over the camera when it’s not in use, or to point external devices at the wall.

That’s still good advice. But this particular case points to other threat vectors, as well.

Namely, Abrahams admitted to somehow tricking his victims into installing malware that allowed him to take over their computers.

RATs are nasty creatures. Beyond allowing attackers to remotely turn on your webcam to spy on and record you, they enable remote viewing and modification of your computer’s files and functions, storage of files and programs on your computer, or even the use of your computer to attack other computers.

How do you prevent and detect RATs?

Here are some tips:

  • Install anti-virus software, and keep it up-to-date. Get into the habit of running a full system scan on a regular basis as well.
  • Run anti-spyware software on your computer; it can catch many Trojans.
  • Anti-keyloggers are another good idea, given that RATs are often bundled with keystroke loggers, which secretly record keystrokes and even capture screenshots.
  • Have your computer guarded with a personal firewall. A firewall will keep hackers out and block any malware already installed on your computer, such as a RAT or a keylogger, from remotely sending your personal data to a cyber crook.
  • Try to avoid dubious corners of the internet, such as file-sharing, freeware and shareware sites. Shady sites are notorious for hosting malicious software.
  • Don’t open email attachments from someone you don’t know. Heck, be careful even if you do know the sender. Such attachments could still be infected, even if coming from somebody you know.
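Beyond the tips above, it can help to know what “normal” looks like on your own machine. As a minimal, Linux-only sketch (not part of the article’s advice, and no substitute for anti-virus software), the script below parses `/proc/net/tcp` to list TCP ports in the LISTEN state; a RAT often opens a listening port, so an unexpected entry here is worth investigating:

```python
# Minimal sketch (Linux-only, illustrative): list TCP ports in LISTEN state
# by parsing /proc/net/tcp. Unexpected listeners merit a closer look.

def listening_ports(proc_file="/proc/net/tcp"):
    LISTEN = "0A"  # kernel's hex state code for LISTEN
    ports = set()
    with open(proc_file) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            if state == LISTEN:
                # the local address field is hex "IPADDR:PORT"
                ports.add(int(local_addr.split(":")[1], 16))
    return sorted(ports)

if __name__ == "__main__":
    for port in listening_ports():
        print(f"Listening on TCP port {port}")
```

Running this periodically and comparing the output against a known-good baseline is a crude but effective habit; tools like `netstat -tlnp` or `ss -tlnp` give the same information with the owning process attached.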

For her part, Wolf has gone on to use her victimization to help others. After she was named Miss California, she traveled to schools to raise awareness about cybercrime among teens.

She’s my hero. She’s no sucker. She and other victims didn’t allow themselves to be bullied. Instead, they immediately reported the extortion attempts.

So after you rig your system up to protect from RATs and other malware, perhaps the most important thing to do is to make sure that the young people in your life know what to do if they get contacted by an extortionist:

Tell somebody. Immediately.

Images of yellow ribbon, rat, tiara and webcam courtesy of Shutterstock.
Image of Cassidy Wolf © Glenn Francis, www.PacificProDigital.com.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bKel03lI2o8/

NSA in new SHOCK ‘can see public data’ SCANDAL!


In the latest round of increasingly hyperbolic leaks about what spy agencies are doing with data, reports are emerging that the NSA has been graphing connections between American individuals. Moreover, it’s using stuff that people publish on their social media timelines to help the case along.

According to this item in the New York Times, the NSA extended its analysis of phone call and e-mail logs in 2010 “to examine Americans’ networks of associations for foreign intelligence purposes”, something that was previously prevented because the agency was only allowed to snoop on foreigners.


While great emphasis is given to the use of software to build “sophisticated graphs” of the connections between individuals, the latest “Snowden revelation” (the leaker handed the paper some documents) seems to be more about whether the NSA persuaded its masters that it should be able to feed vast sets of phone and e-mail records into its analysis software without having to “check foreignness” of the individuals covered by a search.

More spooky but less surprising: the NSA seems to have worked out that if punters are already publishing information about themselves on social networks like Facebook or Twitter, it might be able to scoop that information into its databases (and from there into its analysis) without a warrant.

In the outside world, The Register notes that the mass collection and analysis of Twitter information is used by all sorts of people, nearly always without government oversight or warrant, for everything from detecting rainfall to spotting earthquakes.

Other so-called “enrichment data” cross-matched by the NSA can include “bank codes, insurance information … passenger manifests, voter registration rolls and GPS location information … property records and unspecified tax data”, some of which may be more troubling since each of these carries different privacy expectations.

A “foreign intelligence justification” is needed for the data collection, and the NYT notes NSA spooks weren’t simply allowed to use whatever data they could get their hands on:

“Analysts were warned to follow existing “minimization rules,” which prohibit the N.S.A. from sharing with other agencies names and other details of Americans whose communications are collected, unless they are necessary to understand foreign intelligence reports or there is evidence of a crime.”

The project, called Mainway, receives “vast amounts of data … daily from the agency’s fiber-optic cables”, the article states. Which demonstrates that the NSA hasn’t yet gotten around to implementing either RFC 1149 or its successor, RFC 2549.

While The Register would not try to minimise the legitimate concern that a vast amount of information can be derived from communications metadata alone, beyond the name of the project, it’s hard to see what’s new in the latest leak. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/30/nsa_in_shock_can_see_public_data_scandal/