Four Dangers To Mobile Devices More Threatening Than Malware

From Trojan horses to viruses, botnets to ransomware, malicious software garners a great deal of attention from security vendors and the media.

Yet, mobile users–especially those in North America–should worry more about other threats. While smartphones and tablets could be platforms for a whole new generation of malicious functionality, the ecosystems surrounding the most popular devices work well to limit their exposure to malware. The number of malware variants targeting the Android platform is certainly expanding–surpassing 275,000 as of the first quarter of 2013, according to security firm Juniper Networks–but few of the malicious programs have snuck into the mainstream application marketplaces.

Instead, the top threats to organizations grab fewer headlines. While security experts still rank malware as a significant threat, lost and stolen devices, insecure communications, and insecure application development affect many more users. Juniper, for example, puts insecure communications at the top of its list, says Troy Vennon, director of the Mobile Threat Center at Juniper Networks.

“We see a lot of organizations that have gone to the BYOD model, and they are encouraging their users to connect back into the enterprise for access to data and resources,” he says. “They are trying to figure out how they are going to secure that communication and secure that transfer of data.”

Enterprises also have to be aware of what their users are installing on their phones and how they may be using the devices for handling sensitive corporate data, says Con Mallon, a senior director of Symantec’s mobility business.

“You can only secure what you know about, so knowing what you have walking around your enterprise is important,” he says, adding that the defenses should extend to applications and how those applications deal with data. “I should not be able to take the company data and put it in my own personal Dropbox folder.”

Based on data and interviews with experts, here are the top four threats.

1. Lost and stolen phones
In March 2012, mobile security firm Lookout analyzed its data for U.S. consumers who activated the company’s phone-finding service, estimating that the nation’s mobile users lose a phone once every 3.5 seconds. In another study released around the same time, Symantec researchers left 50 phones behind in different cities, finding that on 83 percent of the devices, corporate applications were accessed by the person who found the phone.

“Mobile phones and tablets are being lost or stolen on an increasing basis,” says Giri Sreenivas, vice president and general manager for mobile at vulnerability management firm Rapid7. “The challenge is that there are relatively easy techniques for evading some of the on-device security controls, such as bypassing a lock screen password.”

[Embedded device dangers don’t just plague consumers or industrial control systems. See Tackling Enterprise Threats From The Internet Of Things.]

While Apple’s Touch ID, announced this week, may help consumers and employees better secure their devices against theft, the majority of users still do not even use a passcode to lock their device against misuse. Companies should train users to lock their smartphones and tablets and use a mobile-device management system to erase the device if necessary, says Juniper’s Vennon. In the company’s latest mobile-security report, Juniper found that 13 percent of users used its MDM solution to locate a phone and 9 percent locked a device. Only 1.5 percent of users – about one in every eight who lost a device – wiped the smartphone, indicating that the device was likely not found, says Vennon.

“Every company should be able to locate, lock and wipe,” he says. “It’s hugely necessary.”

2. Insecure communications
While there is a lot less data on how often mobile users connect to open networks, companies consider insecure connections to wireless networks a top threat, says Rapid7’s Sreenivas. The problem is that wireless devices are often set to connect automatically to any open network whose name matches one to which they have previously connected.

“A lot of people will look for a Wi-Fi hotspot and they won’t look to see if it is secure or insecure,” he says. “And once they are on an open network, it is quite easy to execute a man-in-the-middle attack.”

The solution is to force the user to route traffic through a mobile virtual private network before connecting to any network, he says.

3. Leaving the walled garden
Users who jailbreak their smartphones, or who use a third-party app store that does not have a strong policy of checking applications for malicious behavior, put themselves at greater risk of compromise. For example, while only about 3 percent of users in North America have some sort of suspicious or malicious software on their smartphones, the incidence of such badware is much higher in China, with more than 170 app stores, and Russia, with more than 130 stores, according to Juniper’s Third Annual Mobile Threats Report.

A well-secured app store, which vets each submitted application, is part of the overall ecosystem that secures a mobile device. Any user who buys from a marketplace with little security puts their phone at risk, Juniper’s Vennon says.

“There is no question that if you, as a user, are making the decision to download an app from an unknown source in a third-party app store, you are opening yourself up for the potential of malware,” he says.

4. Vulnerable development frameworks
Even legitimate applications can be a threat to the user, if the developer does not take security into account when developing the application. Vulnerabilities in popular applications and flaws in frequently used programming frameworks can leave a device open to attack, says Rapid7’s Sreenivas.

The WebKit HTML rendering library, for example, is a key component of the browser in most smartphones. However, security researchers often find vulnerabilities in the software, he says. Companies should make sure that employees’ devices are updated – currently the best defense against such vulnerabilities.

“Understanding the corresponding vulnerability risk and making sure that the devices are patched,” says Sreenivas. “It is very interesting that proximity attacks, and techniques for jailbreaks, and other attacks can all be mitigated by bringing the mobile platform for your device up to date.”

Malicious and suspicious software
Malware, adware and other questionable software are a threat, but mainly in China, Russia and other countries. Yet while North American users have less to worry about from malware, suspicious software – including privacy-invasive apps – is quite rampant. Juniper, for example, has blocked infections of malicious and unwanted software on 3.1 percent of its customers’ devices.

Moreover, security researchers continue to analyze mobile devices for vulnerabilities, and cybercriminals are getting better at monetizing mobile-device compromises – two prerequisites for malware to take off on mobile devices, says Symantec’s Mallon.

“We can see malware and monetization happening, toolkits are out there–all of these things parallel the development of malware in the Windows world,” he said.

Article source: http://www.darkreading.com/mobile/four-dangers-to-mobile-devices-more-thre/240161141

Rudest man in Linuxdom rants about randomness

Linus Torvalds is a very clever man – he invented Linux, after all – but he seems to struggle with simple human decency.

(He recently expressed the wish that the designers of some hardware he doesn’t like might “die in some incredibly painful accident“, and invited you to puncture the brake lines on their car as a way to make it so.)

So it’s hardly surprising that when he heard a cryptographic suggestion he thought was silly, he let rip like this:

Where do I start a petition to raise the IQ and kernel knowledge of people? Guys, go read drivers/char/random.c. Then, learn about cryptography. Finally, come back here and admit to the world that you were wrong. Short answer: we actually know what we are doing. You don’t. Long answer: we use RDRAND as _one_ of many inputs into the random pool, and we use it as a way to _improve_ that random pool. So even if RDRAND were to be back-doored by the NSA, our use of RDRAND actually improves the quality of the random numbers you get from /dev/random. Really short answer: you’re ignorant.

That’s right – yet more “NSA cracked my crypto” conspiracy, and this time, the rudest man in Linuxdom is in the thick of it!

Interestingly, there are some useful lessons to be learned here – and they’re more about how to deal with technical issues well than they are about surveillance or digital snooping.

So, at the risk of receiving a Royal Rant from Torvalds himself (me for writing this, and you for reading it), let me explain.

Linux has a special file called /dev/random that doesn’t exist as a real file.

If you open it in a program, and read from it, you get a stream of pseudorandom numbers, generated right inside the kernel.

The idea of doing the work in the kernel is to end up with randomness of a very high quality.

That means minimal bias (the next bit is zero with a probability of 50%, no more and no less), minimal predictability (even if you have a detailed history of recent outputs), and minimal repeatability (you can’t trick the system into giving the same sequence twice).

The way this works, very loosely speaking, is that the kernel continually sucks in pseudorandom data from various hard-to-predict sources – how much did the mouse move last time? how quickly did you type? how much time elapsed between two hardware events? – and stirs it all together into a bucket of digital slurry.

Along the way, the pseudorandom inputs are each shovelled through a non-cryptographic hash function to hasten the slurrification.

An estimate is kept – expressed in bits – of the amount of randomness that has been mixed in so far.

When you need random numbers, some of the slurry is fed into a cryptographic hash function in order to extract a pseudorandom bitstream from it.

The amount of randomness extracted is never allowed to exceed the amount currently swilling around in the sludge-bucket: if necessary, your reads from /dev/random are slowed down until the bucket fills up again.

This stops an attacker diluting the sludginess of /dev/random simply by reading wastefully from it until the metaphorical water runs clear.
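If you’d like to see those moving parts in code, here is a deliberately oversimplified sketch, in C, of that fill/estimate/extract cycle. To be clear, this is an illustration and not the kernel’s code: the mixing and folding routines below are cheap stand-ins for the real ones in random.c (at the time of writing, a twisted-CRC-style input mixer and SHA-1 based extraction), and the per-event entropy estimates are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>

#define POOL_WORDS 32                 /* a 1024-bit toy pool */

static uint32_t pool[POOL_WORDS];     /* the bucket of digital slurry */
static int entropy_bits = 0;          /* running estimate of randomness mixed in */

/* Stir one hard-to-predict sample (a timestamp, a mouse delta, ...)
 * into the pool. Stand-in for the kernel's non-cryptographic mixer. */
static void mix_in(uint32_t sample, int estimated_bits)
{
    static int pos = 0;
    pool[pos] ^= sample * 2654435761u;                /* cheap scrambling */
    pool[pos] = (pool[pos] << 7) | (pool[pos] >> 25); /* rotate to spread bits */
    pos = (pos + 1) % POOL_WORDS;

    entropy_bits += estimated_bits;
    if (entropy_bits > POOL_WORDS * 32)               /* the pool can't hold more */
        entropy_bits = POOL_WORDS * 32;
}

/* Extract 32 pseudorandom bits, or fail if the estimate says the pool is
 * too dilute - the point at which a real /dev/random read would block.
 * Stand-in for the kernel's cryptographic-hash extraction. */
static int extract(uint32_t *out)
{
    uint32_t h = 2166136261u;         /* FNV-1a fold across the whole pool */

    if (entropy_bits < 32)
        return -1;

    for (int i = 0; i < POOL_WORDS; i++) {
        h ^= pool[i];
        h *= 16777619u;
    }

    entropy_bits -= 32;               /* never hand out more than came in */
    *out = h;
    return 0;
}

int main(void)
{
    uint32_t r;

    if (extract(&r) != 0)
        puts("pool too dilute: a real /dev/random read would block here");

    for (uint32_t i = 0; i < 16; i++)          /* pretend 16 events arrive... */
        mix_in(i * 0x9e3779b9u + 12345u, 2);   /* ...worth ~2 bits each */

    if (extract(&r) == 0)
        printf("extracted: %08x\n", r);

    return 0;
}
```

The real implementation is vastly more careful, but the shape is the same: mix, keep score, and never extract faster than you fill.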

The question, in the light of recent implications that the NSA has tainted the cryptographic sanctity of everything it could get its tentacles into or onto, is whether it is acceptable for the Linux kernel to use random numbers generated by the CPU itself as part of its official pseudorandom stream.

After all, modern Intel CPUs have an instruction called RDRAND which is supposed to use thermal noise, generally considered an unpredictable byproduct of the fabric of physics itself, to generate high quality random numbers very swiftly.
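From C, asking the chip directly looks roughly like the sketch below, which uses Intel’s documented _rdrand64_step() intrinsic (x86-64 only; build with -mrdrnd on GCC or Clang). The retry loop is there because the instruction can transiently fail to deliver, signalling this via the carry flag, which the intrinsic surfaces as its return value.

```c
#include <immintrin.h>   /* _rdrand64_step() */
#include <stdio.h>

/* Ask the CPU's hardware RNG for 64 bits, retrying a few times:
 * the intrinsic returns 0 (carry flag clear) if no random data
 * was available on that attempt. */
static int rdrand64_retry(unsigned long long *out)
{
    for (int i = 0; i < 10; i++)
        if (_rdrand64_step(out))
            return 1;    /* success */
    return 0;            /* hardware RNG unavailable or exhausted */
}

int main(void)
{
    unsigned long long r;

    if (rdrand64_retry(&r))
        printf("RDRAND says: %016llx\n", r);
    else
        puts("no hardware random data - fall back to other sources");

    return 0;
}
```

Note that this hands you raw hardware output with no pool in between – which is exactly the trust question at issue here.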

Sounds like just the ticket, but what if Intel tainted RDRAND, by order of the NSA?

Linus’s school of thought, which is entirely understandable, is that mixing a tainted data stream with a pseudorandom one can’t reduce randomness: even if you stir your bucket of sludge in a really careful and ordered fashion, you still end up with sludge.

(I’m not sure I agree with Linus that mixing in a known-tainted RDRAND stream would nevertheless invariably improve randomness, but on the surface, it shouldn’t reduce it.)

Of course, you can counter that claim – and some concerned digerati have done just that – by postulating an actively hostile RDRAND instruction.

This RDRAND might monitor the state of the rest of the CPU in order to produce “random” data that is specially matched to the existing contents of the sludge-bucket so as to cancel out some of its randomness.

But how likely is that, given the cryptographic and non-cryptographic hash-churning that goes on inside the Linux kernel to stir in new pseudorandom input?

Can you “cancel out” randomness under those circumstances?

How much of the state of the CPU, and even the computer as a whole, would the tainted RDRAND instruction have to track in order to produce a real time active cancellation stream that could predictably tweak the overall output of /dev/random?

Well, here’s the thing.

If you take Linus’s advice, and go read drivers/char/random.c, as an interested spectator called Taylor Hornby did, you won’t find quite the clarity that the rant-master seems to suggest.

For example, the core function get_random_bytes() says that it “does not use the hw random number generator” (which would handily render this whole discussion moot), yet calls a function which does just that.

Furthermore, the hardware-generated random data (that the algorithm isn’t supposed to be using at all, remember?) is consumed after both the non-cryptographic and the cryptographic hash-churning described above.

The RDRAND data is merely XORed into the already-hashed output of the random number generator as the last step of the process.

In theory, then, a hostile RDRAND instruction wouldn’t need to keep track of much CPU state at all, since you can cancel out an XOR merely by repeating it. (X XOR X = 0; X XOR 0 = X; and so Y XOR X XOR X = Y.)
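A few lines of C make the cancellation concrete. This is a toy re-creation of that final XOR step, with made-up values standing in for the pool’s hashed output and the hardware’s contribution; it is not the kernel’s code.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t pool_output = 0x0123456789abcdefULL; /* already-hashed RNG output (made up) */
    uint64_t rdrand_val  = 0xfeedfacecafebeefULL; /* "random" value from the CPU (made up) */

    /* The final step described above: XOR the hardware value in. */
    uint64_t published = pool_output ^ rdrand_val;

    /* Anyone who knows rdrand_val can undo that step... */
    uint64_t recovered = published ^ rdrand_val;

    printf("pool:      %016llx\n", (unsigned long long)pool_output);
    printf("published: %016llx\n", (unsigned long long)published);
    printf("recovered: %016llx (matches pool: %s)\n",
           (unsigned long long)recovered,
           recovered == pool_output ? "yes" : "no");

    return 0;
}
```

Undoing the XOR merely recovers the pool’s own hashed output, of course; the nastier hypothetical, as postulated above, is an RDRAND that could predict that output and choose its “random” value so that the published result comes out attacker-chosen.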

Taylor Hornby makes exactly this point, in a mock dialogue amusingly modelled on Galileo’s Dialogue Concerning the Two Chief World Systems.

Ironically, the random.c source code itself suggests that a tainted source of randomness is a problem – even at the stage when the bucket of sludge is still being filled, let alone after it has been drained.

So, if I were King, what would I do to sort this out?

  • I’d order my subjects to stop worrying about a tainted RDRAND, at least for now, and concentrate on all the other problems in my Kingdom, such as IE 6, browser Java, unencrypted USB keys, XP’s forthcoming funeral, and sources of randomness that really are broken.
  • I’d have some King’s Messengers fix the comments in random.c so that they matched the code, like good documentation should, and actually helped prove the Rude Man’s assertion that “we know what we are doing.”
  • I’d fold in the data from RDRAND earlier in the process, along with all the other sources of entropy, so that no-one would need to answer the question, “Who asked you to leave RDRAND until an XOR right at the very end?”
  • I’d sentence Mr Torvalds to 200 hours of community service in a hospital orthopaedic ward, helping those who can’t help themselves because of serious injuries sustained in automobile accidents.

Right now, we could do with a bit more clarity in cryptography.

Sadly, ranting that people should read a bunch of historically inconsistent comments in a source code file in order to conquer their ignorance is not a means to that end.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Y3tdWPuKoxI/

It’s not up to Google to stop child abuse, says expert

When your children go online, who should be their nanny?

Is it the internet big boys who should keep children from being preyed on, perhaps by adopting a blacklist of “abhorrent” search queries that leave no doubt that a searcher’s intent is malevolent?

That’s one piece of what UK Prime Minister David Cameron put forth in a speech in July, when he announced new measures to protect children and challenged outfits such as Google, Yahoo, and Microsoft to do their part.

Now, the former head of Britain’s online child protection agency, Jim Gamble, has deemed the government policy nonsensical, being based on a fundamental misunderstanding of how paedophiles target victims.

According to The Independent, Gamble suggested that Cameron targeted Google because the company didn’t fork over enough tax in the UK.

Cameron was badly briefed, Gamble said: it’s not search terms or filtering that will help protect children – rather, we need to look at stopping predators much earlier.

Gamble resigned as head of the Child Exploitation and Online Protection Centre in 2010.

The Telegraph quoted Gamble as saying that we’re missing the chance to explore the motivation and methodology of child killers such as Mark Bridger, who killed 5-year-old April Jones, and Stuart Hazell, who murdered 12-year-old Tia Sharp.

Gamble said:

It’s nonsensical. The advice to the Prime Minster is bad from people who clearly don’t understand the first thing about the internet and child protection. We are now focusing on Google rather than investing in greater research: why they do it, when they do it. Why? Google [doesn’t] pay enough tax.

At the time he gave his speech, Cameron said that if the search giants don’t implement a blacklist voluntarily, legislation forcing them to do so would follow in short order.

In addition, he announced changes to the law that would make extreme pornography harder to obtain, such as making it illegal to depict rape in porn.

The government also plans to institute pervasive network-level filtering in the UK that will, by default, block even legal adult content.

Mobile phone operators will implement adult content filters that users over 18 can opt out of, while family-friendly filters were due to be applied by the end of August across 90% of public WiFi wherever children are likely to be present.

Other filters are coming for broadband within the next two years.

All this focus on search terms and filtering misses a much bigger opportunity, Gamble said at a conference in Belfast:

Rather than having a debate about predatory paedophiles and how we can stop them earlier, we have had a debate about Google and blocking search terms… Mark Bridger or Stuart Hazell weren’t made paedophiles because they searched for something on Google.

Is Gamble right? Is it a waste of time to go after companies such as Google?

No: the search giants should indeed be forced to intercept search terms suggesting child abuse.

Gamble’s right: a Google search doesn’t turn somebody into a paedophile.

But we should adopt both approaches plus any other means to protect children.

Image of man on computer and access denied courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bALn1C4ItrQ/

BlackBerry goes all ‘patch Tuesday’ with multi vuln fixes

BlackBerry has issued four patches covering vulnerabilities in Flash, WebKit and libexif on its devices.

The Z10, Q10 and PlayBook all need patching for Adobe Flash vulnerabilities. If a user were led to a page containing crafted Flash content, an attacker could execute arbitrary code on an affected device. BSRT-2013-007 notes that an alternative attack would be to trick users into downloading an Adobe AIR application.


BlackBerry also states that since the Flash player isn’t enabled by default on the devices, only phones whose owners have added Flash are vulnerable.

That’s not the case for EXIF, whose tag-parsing library libexif is also vulnerable to maliciously crafted content. If a user viewed an “attack” image, and then tried to open it separately or save it, the attacker could “execute arbitrary code in the context of the application that opens the specially crafted image.”

BlackBerry seems to have been using an old version of the EXIF library, since this notice about the library vulnerabilities is dated July 2012.

Two WebKit vulnerabilities are also patched, one affecting only the Z10, the other also taking in the PlayBook tablet. The company says neither vulnerability is currently being exploited.

In both cases – BSRT-2013-008 and BSRT-2013-010 – malicious JavaScript could offer a remote code execution opportunity.

BlackBerry says its sandboxing should limit the risk to users, should any exploits for these vulnerabilities exist. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/12/blackberry_goes_all_patch_tuesday_with_multi_vuln_fixes/

NORKS fingered for APT on South Korean think tanks

Security researchers have unearthed yet another highly targeted advanced persistent threat (APT) attack, this time launched by suspected North Korean attackers against a small group of South Korean think tanks.

The Kimsuky campaign, which can be traced back to April this year, was analysed by researchers at Kaspersky Lab in a lengthy blog post on its Securelist portal.


Although pegged as an “unsophisticated” spy program communicating with its operator through a Bulgarian public email server, it attracted their attention because some of its code contained Korean script, Kaspersky Lab’s Dmitry Tarakanov wrote.

Nevertheless, the malware was described as relatively basic, containing coding errors and even traces of infection by the Viking virus.

Tarakanov said his team isn’t sure how the attacks spread but that the samples collected are consistent with the “early stage malware” usually delivered by spear phishing emails.

An initial Trojan dropper loaded more malware onto an infected machine, disabling the system firewall and any AhnLab firewall installed – AhnLab being a popular Korean security software company.

It also turned off Windows Security Center to prevent any alerts about the disabled firewall, Tarakanov said.

The package contained several modules, each performing a single function: keylogging, directory list collection, remote control access, remote control download/execution and .HWP file theft. The latter is a file format which supports Hangul script and is used in a popular South Korean word processor.

The campaign also used a modified version of the TeamViewer remote access app, rather than a bespoke backdoor, to nab any interesting-looking files from the victim’s machine.

Kaspersky Lab suspects North Koreans are behind the attack campaign for several reasons, not least because the “drop box” mail accounts were registered under the Korean-sounding names “kimsukyang” and “Kim asdfa”.

The targets are also telling, including the Korean Ministry for Unification, the Korean Institute for Defence Analysis, and non-profit the Sejong Institute.

Tarakanov added the following:

Taking into account the profiles of the targeted organisations – South Korean universities that conduct research on international affairs, produce defence policies for government, [a] national shipping company, supporting groups for Korean unification – one might easily suspect that the attackers might be from North Korea.

The targets almost perfectly fall into their sphere of interest. On the other hand, it is not that hard to enter arbitrary registration information and misdirect investigators to an obvious North Korean origin.

As for originating IP addresses, ten used by the Kimsuky operators were located in Jilin and Liaoning provinces, just over the North Korean border in China.

“No other IP-addresses have been uncovered that would point to the attackers’ activity and belong to other IP-ranges,” Tarakanov added. “Interestingly, the ISPs providing internet access in these provinces are also believed to maintain lines into North Korea.”

If it is a North Korean APT campaign, it won’t be the first online attack launched by Pyongyang.

In March around 30,000 PCs in banks, insurance companies and TV stations were knocked out in the “Dark Seoul” attack which the South has blamed on Norks.

Seoul has even claimed that its feisty neighbour to the north has amassed a 3000-strong cyber army of highly trained hackers ready to steal military secrets and disrupt systems. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/12/norks_apt_kimsuky_campaign/

Are PHP SuperGlobal Parameters Really That Big A Deal?

A new report out this week from Imperva, detailing the potential danger of attacks through vulnerable PHP SuperGlobal parameters, suggests that organizations running PHP servers should ditch the use of these variables in application requests. But while other security experts agree that PHP security must be addressed to prevent serious breaches, they argue that the real problem lies in server patching practices rather than in the use of SuperGlobal variables.

“PHP is definitely a vulnerable language when not implemented correctly and unfortunately most web programmers don’t truly understand the vulnerabilities or attack vectors associated with them,” says Joshua Crumbaugh, lead penetration tester at IT Cyber Security.

Released on Monday, the report chronicled the attack methods that Imperva researchers observed across a sample of 24 applications containing attack vectors related to SuperGlobal variables, noting that they identified 144 related attacks per application within a month, with some attack campaigns lasting more than five months. In particular, the report showed how attackers are commonly able to chain together multiple low-impact vulnerabilities related to SuperGlobals in order to achieve variable manipulation, security filter evasion and arbitrary code execution.

[Is IPS in it for the long haul? See The Future of IPS.]

“One of the key lessons for enterprises is that they should defend themselves even against what seems at first to be a not-so-important vulnerability, because when it is chained with other not-so-important vulnerabilities, together they can create a really powerful exploit,” says Tal Be’ery, leader of Imperva’s web research team.

According to Be’ery, while PHP security has generally improved over the last few years, it’s not getting better fast enough, particularly for a language that, by his firm’s estimates, powers over 80 percent of the web. While most security experts would agree with that sentiment, some are taking issue with Be’ery’s and Imperva’s public push against SuperGlobals.

“Instead of calling to remove SuperGlobals, it might be better to call on people to update their PHP,” says Serge Batchilo, a security researcher for Security Innovation. “The vulnerabilities at the root of this wave of attacks are CVE-2010-3065 and CVE-2011-2505, which means they have been assigned CVE identifiers in 2010 and 2011 respectively and are almost certainly patched in PHP versions for the past couple of years.”

Batchilo accused Imperva of drumming up controversy with what he calls an “essentially trivial finding,” explaining that the best way to improve PHP security is through more timely patching.

“Removing SuperGlobals would break a lot of PHP applications and is not likely to happen in the short term, while installing patches that have been available for years is a simple and effective solution that can be easily implemented in the short term,” Batchilo says. “When a patched vulnerability is being exploited, it is common sense to install the patch. It’s even better just to update servers periodically as a preventative measure.”

Crumbaugh agrees, reiterating that the number one recommendation he has for those administering PHP applications is to keep those applications and the system upgraded.

“Unless there are some serious flaws in the implementation of your software or gigantic configuration errors it’s rare that I can break into a server with fully patched software and services,” he says, explaining that he frequently exploits out-of-date PHP systems in his penetration tests, noting that he frequently runs into companies taking years to update critical vulnerabilities. “Keep everything up to date and you’ll increase your security posture.”

Article source: http://www.darkreading.com/attacks-breaches/are-php-superglobal-parameters-really-th/240161140

Size doesn’t matter – at least, not quite as much as smartphone privacy

Privacy when using potentially data-leaking mobile phone apps is concern Numero Uno for 22% of smartphone users, according to a new study.

Privacy, it seems, trumps screen size, camera resolution, or whether a given handset weighs enough to bend your wrist in half.

The report – the TRUSTe 2013 Consumer Data Privacy Study, Mobile Edition – surveyed 700 US smartphone users from 12-19 June, 2013.

Privacy concern weighs in second only to battery life, which ranks as the primary concern for 46% of users.

Smaller slices of those surveyed are primarily concerned with brand or screen size, each of which is the top concern for 9%.

Nearly 8 out of 10 smartphone users in the US steer clear of downloading apps they don’t trust.

Let us now spend some time nagging the 20% who don’t.

Dear Twenty-Percenter: If you’re not quite sure what a dodgy mobile app looks like, Sophos’ Paul Ducklin draws a pretty picture of one subset here, that being Android scareware. Scareware, also known as fake anti-virus, tricks you into paying money by pretending to find threats such as viruses and Trojans on your computer – or, in this particular case, your Android smartphone.

The study also found that the majority of those surveyed dislike the notion of being tracked, though nearly a third of smartphone users aren’t even aware of when it’s happening.

Security experts who’ve been warning about the risks to privacy from smartphones can take heart in the study’s finding that a sizable number of users – 48% – are now as worried about privacy on their smartphones as they are about privacy on their desktops.

Meanwhile, 63% worry “frequently or always” about privacy when banking online. (Hmmm…. OK…. but, given that we’re talking about our bank accounts, shouldn’t 100% of people worry – or at least consider the risks – all the time?)

Another 43% of smartphone users are choosing not to sell privacy down the river in exchange for a free or lower-cost app.

Interestingly enough, the number of smartphone users willing to share at least some information is creeping up.

More people are also willing to share age, full name and their web-surfing behavior.

On the other hand, people are increasingly cagey about their contacts and photos – more so than their home address, phone number or current location.

That might have something to do with revelations such as those from February 2012, when social media iPhone apps Path and Hipster were found to be uploading user address book information without permission.

The TRUSTe study also found that US smartphone users are actively managing their mobile privacy, with 76% saying that they themselves are ultimately most responsible for managing their privacy.

On top of that, 40% say they check for an app’s privacy policy, 35% say they actually read such privacy policies, and a growing number – 29% – check for a trustmark or seal.

It’s certainly a good idea for us all to take privacy into our own hands, because experience shows that our internet overlords often take a casual approach to letting us know how they handle our oh-so-tasty, revenue-generating data.

An example: at least as recently as the Path and Hipster revelations, Apple’s iOS permission system wasn’t providing notification of what information an app might have been sending to its keepers, aside from location information.

Here’s hoping that the numbers for people who check for an app’s privacy policy and then the smaller number who actually read it continue to grow.

(Want to see what apps are eating into your Android’s privacy? Check out the totally free, 4.5-star rated Sophos Mobile Security app!)

Image of people with smartphones and smartphone privacy courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nuuL5mLVIiw/

September Patch Tuesday is out – one update lost en route, 13 patches left, 8 RCE, 4 critical

The first thing you’ll notice about the September 2013 Patch Tuesday is that there are only 13 patches to apply, even though there were 14 bulletins in last week’s pre-announcement.

One of the patches didn’t make it.

With all the fuss about Big Brother and computer security in the news right now, I don’t doubt that there will be conspiracy theories about the missing patch.

(For example, “What if the intelligence services ordered the patch held back for a while in order to keep a backdoor open?”)

As it happens, I don’t know what didn’t get patched, or why the patch didn’t come out, so I can’t disprove anybody’s fears – but I do think you can put away the tinfoil hats.

All eight of the originally-announced Remote Code Execution holes got patched, so you’re not missing any critical updates, literally or figuratively.

And with two patches having gone haywire for Microsoft last month, you might well expect a touch more conservatism from Redmond this time around.

Here are the fixes that did come out, neatly compressed into a table:

A reminder: RCE is remote code execution; EoP is elevation of privilege; DoS is denial of service; and Leak is incorrect data disclosure.

The big-ticket items this month – if any remote code execution hole can be dismissed as low-ticket, of course – are the fixes for Internet Explorer and Outlook.

These patches may well stop your users getting infected with malware by merely browsing to a web site or reading (even as a preview) an email.

Also of concern is the patch at the very top of the list: according to Microsoft, the hole in SharePoint could allow an attacker to take control of the server simply by sending malformed content to it.

The Office, Excel and Access RCE vulnerabilities are similar, with those applications at risk if you inadvertently open a boobytrapped file.

Note that the IE, Outlook and Office holes only give an attacker the same privileges as the user who is running the vulnerable application.

But any of those holes could be combined with one of the abovementioned EoP vulnerabilities.

This means an attacker could use RCE to get access as a locally logged in user, followed by an EoP to promote himself to an administrator.

Best get patching right away, then!

Image of patch courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nOXfBohTkhA/

Men are twice as likely to spy on their partner’s phone

A study has found that men are almost twice as likely to snoop on their partner’s mobile phone, peeking without permission to read “incriminating messages or activity” that might point to infidelity.

The study, which came out of www.mobilephonechecker.co.uk and was written up by The Telegraph, found that out of 2,081 surveyed UK adults currently in a relationship, 62 percent of men said they’ve peeked at a current or former partner’s mobile phone without permission.

That compares to 34 percent of women who admitted to doing so.

Most of those – 89 percent – who admitted to mobile snooping said they did so to check on conversations that might stray into the romantic or sexual and thereby indicate signs of infidelity.

How did the snoops crack the phone’s passcode or password? Easy enough, no hacking involved: 52 percent said that they already knew the credential.

Out of those who mobile-spied, 48 percent confessed that they did, in fact, find incriminating evidence of unfaithfulness.

And out of that lot, 53 percent said they got tipped off by reading text messages, while 42 percent got wind of it through direct Facebook messaging – the two most common means of ferreting out infidelity.

Many of us, evidently, cherish fidelity over our loved ones’ rights to privacy. So how do the survey participants feel about all that?

The study found that 31 percent of those surveyed said they’d end their relationship if they found that their partner had rifled through their messages.

Another 36 percent said that they wouldn’t wind up in that situation to begin with, given that they’d never find themselves in a position where their partner could conduct a highly personalized, mini-NSA surveillance campaign against them.

But how exactly do you prevent your partner – or random strangers who find your lost phone, or thieves for that matter – from reading your private text or Facebook messages?

(None of this is to condone sneaking around on your partner, mind you. Better off signing up with the polyamorous camp and being honest about it all, if you ask me.)

Privacy is privacy and deserves to be protected, I submit, whether we’re talking about covering philandering tracks, avoiding data abused in recrimination after a bad breakup, or protecting political activists.

How do you keep your partner from prying? Fortunately, it’s safe to say that most of us are not dating the NSA, so we can assume that the goal is achievable.

An obvious step to take, of course, is to avoid sharing your mobile phone passcode or password with your partner.

And regardless of protecting one’s dalliances, it’s important to protect mobile phone data, given what’s at stake if you lose your phone.

As Sophos found in an October 2012 study, 42 percent of devices that were lost or left in insecure locations had no active security measures to protect data.

Dire as the consequences of discovered infidelity might be, lost devices point to a much wider world of jeopardized privacy.

We’re talking here about mobile data that could give access to work email, potentially exposing confidential corporate information; sensitive personal information such as national insurance numbers, addresses and dates of birth; payment information such as credit card numbers and PINs; and access to social networking accounts via apps or web browser-stored cookies.

Sophos offers mobile protection for businesses, as well as protection for personal Androids (free).

It’s your data, and it’s no-one else’s business.

So keep it safe!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/T7vSpDLHi_0/