
Securing Social Media: National Safety, Privacy Concerns

It’s a critical time for social media platforms and for the government agencies, businesses, and individuals that use them.

RSA CONFERENCE 2018 – San Francisco – Governments and businesses around the world are navigating concerns around social media, which is playing an increasingly important role in both national and enterprise security.

Cyberspace is redrawing borders in ways we haven’t seen before, said James Foster, CEO at ZeroFOX, in a session entitled “POTUS is Posting: Social Media and National Security.” Borders between people, once based on geography, are now based on apps. He presented a graphic illustrating their size: Facebook has 2 billion users, YouTube has 1.5 billion, WhatsApp has 1.2 billion, and WeChat has 938 million.

“Social media is unavoidable,” said Dr. Kenneth Geers, senior research scientist at Comodo Group. Platforms like Twitter and Facebook have greater influence on national security as they become a communication tool for global leaders and an attack vector for threat actors.

The presenters turned to the example of President Donald Trump, who is notorious for sharing updates and making national policy decisions on Twitter. Geers pointed out that the former Secretary of State, who didn’t have a good relationship with the President, printed out tweets to learn the White House’s foreign policy of the day. Earlier on April 18, Trump tweeted that CIA director Mike Pompeo had recently met with Kim Jong Un in North Korea.

“I promise you, people are printing out this tweet to figure out what to do today,” said Geers. “The power of social media, to some degree, speaks for itself.”

In this sense, Foster said, modern social media is the technological medium for sharing messages the same way television was decades ago. “Like it or not, regardless of the side of the aisle you’re on, this is the new communication form for government, and it’s not going to go away,” Foster said. “Of course war can be declared on social media, for the first time in history.”

The power and reach of social media extends to threat actors, who are leveraging it as a platform for increasingly large and dangerous attacks. It’s fertile ground for information operations and fake accounts; after all, social media gives attackers enough anonymity and distance to fire their virtual weapons from afar.

We should believe half of what we hear and see on social media, said Geers. When it comes to national security, everything is suspicious. Accounts and activity are easy to fake. As an example of account hijacking, he pointed to a fake Twitter account for the US Central Command. The account had a broad reach of 110,000 followers, giving its owners a great deal of influence.

“Social media and cyberattacks are more important than we think if they have any impact on national security at a high level,” Geers noted.

In the private sector, one of the biggest threats to businesses will be fraudulent and spoofed accounts, Foster pointed out. With social media as their platform, attackers can reach the two most important groups of enterprise targets: employees and customers. It puts businesses in an awkward position: to what extent do employees’ social media accounts pose a threat? How should they govern social media? Are they responsible for protecting employees’ accounts?

Foster and Geers outlined several steps organizations can take to reduce the risk of social media-based threats in the enterprise. Their recommendations: work with communications teams to build a social media policy that dictates what can and cannot be posted. Tell employees how to report abuse and potential threats. Teach best practices for hardening their accounts, and establish a policy for breach notifications and lost credentials.

Data Privacy: An Ongoing Issue

Alongside national security, data privacy is another critical issue facing social platforms and users today. A few days ago, Facebook shed more light on its privacy practices. The social media giant has been in the thick of controversial congressional hearings on how it uses customer data, and its account holders want to know what’s going on.

People are placing a higher value on their privacy and showing greater concern for how companies use their information. In a 10,000-person study conducted by Harris Poll and sponsored by IBM, researchers found that 78% of US respondents say an organization’s ability to keep their data private is “extremely important,” but only 20% “completely trust” organizations to do so.

In one post, Facebook explained its reasoning for collecting data when users aren’t on the platform. Several websites and apps use Facebook services, like its login and analytics tools, to personalize their content. When users visit a site or app that uses its services, Facebook gets info even when the user is logged out – or doesn’t have a Facebook account at all.

“There are three main ways in which Facebook uses the information we get from other websites and apps: providing our services to these sites or apps, improving safety and security on Facebook, and enhancing our own products and services,” wrote product management director David Baser in a blog post discussing its data usage and users’ information control.

In a follow-up post the next day, Erin Egan, vice president and chief privacy officer for policy, and vice president and deputy general counsel Ashlie Beringer explained how Facebook is complying with new privacy laws and adding new protections.

As part of continued privacy efforts, Facebook plans to ask for users’ input on various aspects of their activity on the platform. People will be able to weigh in on ads based on data from Facebook partners, information in their profiles, and facial recognition technology. It’s also rolling out new GDPR-compliant tools to access, delete, and download information.



Article source: https://www.darkreading.com/vulnerabilities---threats/securing-social-media-national-safety-privacy-concerns/d/d-id/1331594?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Successfully Using Deception Against APTs

According to Illusive CEO Ofer Israeli, deception technology can provide a vital layer of protection from advanced persistent threats (APTs) by presenting attackers with seemingly genuine servers that both divert them from high-value digital assets and make it easier to pinpoint malicious network activity.

Article source: https://www.darkreading.com/successfully-using-deception-against-apts-/v/d-id/1331601?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Should CISOs Be Hackers?

Justin Calmus, Chief Security Officer at OneLogin, believes that cybersecurity professionals – including CISOs and other security team leaders – can be much more effective at their jobs if they stay actively engaged with hacking communities that keep them on their toes and give them deep insight into attack trends.

Article source: https://www.darkreading.com/should-cisos-be-hackers/v/d-id/1331602?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Can AI improve your endpoint detection and response?

To intervene with optimum efficiency, response teams need to zero in on the most potentially dangerous endpoint anomalies first. And according to Harish Agastya, VP of Enterprise Solutions at Bitdefender, machine learning-assisted EDR can help you do exactly that.

Article source: https://www.darkreading.com/can-ai-improve-your-endpoint-detection-and-response/v/d-id/1331603?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Free endpoint scanning service powered by Open Threat Exchange

Russ Spitler, AlienVault’s SVP of product strategy, explains how security pros can leverage the community-powered threat intelligence of OTX – which sees more than 19 million IoCs contributed daily by a global community of 80,000 peers – to quickly protect themselves against emerging attacks.

Article source: https://www.darkreading.com/free-endpoint-scanning-service-powered-by-open-threat-exchange/v/d-id/1331604?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Employee from hell busted by VPN logs

We’ve said it before, but an employee from Hell apparently didn’t get the memo: VPN, as in Virtual Private Network, is not shorthand for secure internet connection.

What the “private” means is that your VPN connection can be made to behave as though you had a direct hook-up to your destination network. What it does not mean is that your hacking forays into your ex-employer’s network – using the company’s own VPN – are going to be hidden away when the FBI starts digging around.

Suzette Kugler, who last year left her job after a 29-year career at PenAir, was sentenced on 12 April for repeatedly hacking the airline’s reservation and ticket system. According to the Department of Justice (DOJ), Kugler pleaded guilty in January to one felony offense of fraud in connection with computers.

As part of the plea agreement, Kugler will pay $5,616 in restitution to PenAir, and the DOJ dropped a second count of the same offense. If it seems like a light sentence, bear in mind that this was her first ever crime.

Kugler had left her job with the southwest Alaska regional airline as of February 2017. According to the local TV station KTVA, PenAir filed for Chapter 11 bankruptcy last year, shuttering most of its operations outside Alaska.

Over her 29-year career, Kugler rose to the position of director of system support. According to her LinkedIn profile, that meant she was responsible for “oversight, policy, procedure and development as it relates to software for customer service and flight tracking.” In other words, she was the administrator of PenAir’s Sabre database system, which the airline used for ticketing and reservations.

She didn’t leave empty-handed. A week before she retired, Kugler used her system privileges to create new, fake employee user accounts, plump with high-level privileges and without any authorization whatsoever.

Handy, that, for paying her ex-employer a little visit – or two, or three – post-departure.

On 5 May 2017, PenAir reported network and computer intrusions targeting the Sabre system to the FBI. Between April and May, Kugler wiped out a former colleague’s access permissions and erased station information – necessary for PenAir employees to get into Sabre – for eight airports. Without that access, they couldn’t book, ticket, modify or board any flight at those eight airports, until the stations were rebuilt by staff working through the night.

Then, on 3 May, Kugler wiped out three seat maps – used to assign seating – from the Sabre system. Without those maps, PenAir employees wouldn’t have been able to board or ticket passengers. Fortunately, the deletion of the seat maps was discovered three days before it would have disrupted flights. PenAir was able to restore the mapping by the time the flights were ready to board: a remediation effort that sucked up “considerable time and expense,” according to the plea agreement.

PenAir put its losses at more than $5,000 but less than $6,500.

On 27 July 2017, FBI agents from Anchorage, Alaska and California executed a search warrant on Kugler’s home in Desert Hot Springs, California. They found two laptops with the Sabre VPN software installed.

Oh, those telltale VPN logs!
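The plea agreement doesn’t spell out exactly what those logs showed, but the investigative idea is simple enough: line up VPN session records against the timestamps of the destructive changes. Here’s a minimal, purely illustrative Python sketch – the log format, account names, IPs and times below are hypothetical, not taken from the case:

```python
from datetime import datetime, timedelta

# Hypothetical VPN session records: (account, login time, logout time, source IP).
# Real Sabre VPN logs will look different; this is purely illustrative.
vpn_sessions = [
    ("jdoe",     datetime(2017, 5, 3, 9, 15),  datetime(2017, 5, 3, 9, 40),  "203.0.113.7"),
    ("fakeuser", datetime(2017, 5, 3, 22, 5),  datetime(2017, 5, 3, 23, 12), "198.51.100.23"),
]

# Timestamps of the destructive changes, pulled from the application's audit trail.
incidents = [
    ("seat maps deleted", datetime(2017, 5, 3, 22, 47)),
]

def sessions_overlapping(incident_time, sessions, slack=timedelta(minutes=5)):
    """Return every VPN session that was open when an incident occurred."""
    return [s for s in sessions if s[1] - slack <= incident_time <= s[2] + slack]

for label, when in incidents:
    for account, start, end, ip in sessions_overlapping(when, vpn_sessions):
        print(f"{label} at {when}: account '{account}' connected from {ip} ({start} to {end})")
```

Any account that keeps turning up inside those windows – especially one created a week before a “retirement” – is where investigators start asking questions.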

Kugler isn’t the first crook to mistake VPN use for a way to cover her tracks. In October, a 24-year-old was arrested for allegedly harassing and cyberstalking his former roommate for over a year, in addition to a number of former high school and college classmates, using email, SMS, social media and phone apps to send death threats, rape threats, bomb threats and even child pornography.

According to the affidavit, Ryan Lin, like Kugler, hid behind a VPN – at least, that’s what he thought he was doing – to create accounts from which to send his poisonous messages.

VPNs hide your computer’s IP address. They encrypt traffic between you and your VPN provider, making it incomprehensible to anyone intercepting it. But your VPN provider isn’t “intercepting” anything: it gets to see right into that tunnel, witnessing everything that passes through it.

In other words, to quote from words of VPN wisdom that Lin, ironically enough, retweeted a few months before he was arrested:

There is no such thing as VPN that doesn’t keep logs. If they can limit your connections or track bandwidth usage, they keep logs.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eow3URT8ppk/

NSA reveals how it beats 0-days

In the ongoing cat-and-mouse game between nation states and attackers, anyone with something to protect has less time than ever to shore up their defenses.

At this week’s RSA conference in San Francisco, Dave Hogue, technical director of the US National Security Agency (NSA), reviewed the agency’s defensive best practices – one of which is to “harden to best practices,” since the NSA often sees attacks against its systems within 24 hours of a new vulnerability being disclosed or discovered in the wild.

Within 24 hours I would say now, whenever an exploit or a vulnerability is released, it’s weaponized and used against us.

This gives the NSA’s defensive team a very short patching window, especially compared with typical patching windows in the private sector, which are measured in weeks or months, certainly not days or hours. Hogue said in his RSA panel that phishing attacks and unpatched systems still account for the majority of the attacks the NSA encounters. Understandably, this is why the agency says keeping systems as up to date as possible, as quickly as possible, is “one of the best defense practices.”
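That 24-hour figure turns patching into a simple, measurable test: how many systems are still exposed one day after a disclosure? As a rough sketch – the inventory format, hostnames and timestamps below are invented for illustration, not NSA practice:

```python
from datetime import datetime, timedelta

# Hypothetical inventory: hostname -> when the oldest unpatched vulnerability
# affecting it was publicly disclosed. In practice this would come from a
# vulnerability scanner or asset-management system.
unpatched_since = {
    "web-01":  datetime(2018, 4, 17, 14, 0),
    "mail-02": datetime(2018, 4, 18, 9, 30),
}

WINDOW = timedelta(hours=24)  # the weaponization window Hogue describes

now = datetime(2018, 4, 18, 16, 0)
overdue = {host: now - t for host, t in unpatched_since.items() if now - t > WINDOW}

for host, age in sorted(overdue.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{host}: unpatched for {age} (exceeds the {WINDOW} window)")
```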

This discipline has paid off for the NSA, as Hogue says they have not had any intrusions via a 0-day exploit in the last 24 months. So, while the bad news may be that attackers are moving faster than ever – or at least the ones targeting the NSA are – the good news is that attackers mostly still rely on their old tricks, simply because they’re easier to deploy and usually work.

In fact, the vast majority of the incidents the NSA deals with don’t involve the latest APTs or cutting-edge 0-days – 93% of all security incidents at the agency in the last year were found to be entirely preventable using best practices it already advocates. Attackers count on governments and organizations lapsing on the basics so they can keep relying on tried-and-true methods, and don’t have to burn their best (and often more difficult to use) secrets and techniques.

For all the headlines that the latest named vulnerability might get, the chances of being hit by this kind of threat are still lower than a phishing email or ransomware causing trouble.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5JILu-lGL-E/

Google in hot water over privacy of Android apps for kids

A report accusing large numbers of child-centred Android apps of potentially breaking US law? It’s the sort of finding that even a company of Google’s almost unassailable power can’t ignore.

The trouble started a week ago when International Computer Science Institute researchers published Won’t somebody think of the children? Examining COPPA Compliance at Scale, a reference to the Children’s Online Privacy Protection Act of 1998 which protects under-13s.

After analysing 5,855 Android apps that claim to comply with the Google Play Store’s Designed for Families (DFF) program, researchers found what’s best described as a privacy and surveillance mess.

40% were transmitting personal information “without applying reasonable security measures” (SSL/TLS encryption), while another 18.8% were sharing data with third parties that could be used to identify children and their devices for profiling.

Almost one in twenty were sharing personal data, such as email addresses and social media profiles, with third parties without consent. The long and short of this:

Overall, roughly 57% of the 5,855 child-directed apps that we analyzed are potentially violating COPPA.
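The “reasonable security measures” finding comes down to one observable property of an app’s traffic: whether personal data ever leaves the device outside TLS. Assuming you have already captured an app’s request URLs through a proxy, the core check is no more complicated than this illustrative Python snippet (the URLs are made up):

```python
from urllib.parse import urlparse

# Hypothetical request URLs captured from an app under test (e.g. via a proxy).
observed_requests = [
    "https://analytics.example.com/collect?uid=abc123",
    "http://ads.example.net/track?email=kid%40example.org",   # cleartext!
    "http://partner.example.org/profile?device_id=42",        # cleartext!
]

# Anything not sent over HTTPS goes on the report.
cleartext = [url for url in observed_requests if urlparse(url).scheme != "https"]

for url in cleartext:
    print(f"personal data potentially sent without TLS: {url}")
```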

The underlying problem appears to be the Wild West of third-party software development kits (SDKs), whose privacy-protecting settings are turned off or ignored – even, in some cases, when the SDK’s terms of service prohibit such use in apps designed for children.

It appears Google’s much-vaunted DFF program is big on promises but weak on the kind of enforcement that might hold app developers to account. Making matters worse, as the researchers note:

Google already performs static and dynamic analysis on apps submitted to the Play Store, so it should not be hard for them to augment this analysis to detect non-compliant entities.

Not to forget that it’s just over a year since Google threatened to remove apps that breach its general privacy terms and conditions.

A few months ago, this report might have attracted a few headlines and then been submerged by the tide of new stories and quickly forgotten. Coming only weeks after Facebook was hauled up over its own privacy practices, however, that’s unlikely to be the case.

This isn’t the first batch of apps in which researchers have found privacy and security problems, and yet, unusually, Google felt compelled to issue a holding statement:

Protecting kids and families is a top priority, and our Designed for Families program requires developers to abide by specific requirements above and beyond our standard Google Play policies.

We’re taking the researchers’ report very seriously and looking into their findings. If we determine that an app violates our policies, we will take action.

Google, then, is going to look into the issue of app compliance with DFF and perhaps how this affects COPPA too.

The problem with this response is that it all sounds a bit like Facebook’s way of dealing with years of privacy complaints – kick the problem down the road but leave the model that caused it – self-regulation – untouched.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Bl6WDZ25Btk/

Surprise! Wireless brain implants are not secure, and can be hijacked to kill you or steal thoughts

Scientists in Belgium have tested the security of a wireless brain implant called a neurostimulator – and found that its unprotected signals can be hacked with off-the-shelf equipment.

And because this particular bit of kit resides amid sensitive gray matter – it is used to treat conditions like Parkinson’s – the potential consequences of successful remote exploitation include voltage changes that could result in sensory denial, disability, and death.


In a paper, Securing Wireless Neurostimulators, presented at the Eighth ACM Conference on Data and Application Security and Privacy last month, the researchers described how they reverse engineered an unnamed implantable medical device, and how they believe its security can be improved.

They had help doing so from the device’s programmer, but argue that an adversary could accomplish as much, though not as quickly.

Beyond these rather dire consequences, the brain-busting boffins – Eduard Marin, Dave Singelee, Bohan Yang, Vladimir Volskiy, Guy Vandenbosch, Bart Nuttin and Bart Preneel – suggest private medical information can be pilfered.

That’s hardly surprising given that the transmissions of the implantable medical device in question are not encrypted or authenticated.

What is intriguing is that the researchers expect future neurostimulators to tailor therapies using information gleaned from brain signals such as the P-300 wave. Were an attacker to capture and analyze that signal, they suggest, private thoughts could be exposed.

They point to related research from 2012 indicating that attacks on brain-computer interfaces have shown “that the P-300 wave can leak sensitive personal information such as passwords, PINs, whether a person is known to the subject, or even reveal emotions and thoughts.”

Can the brain be a better defense?

To mitigate this speculative risk, the boffins propose a novel security architecture involving session key initialization, key transport and secure data communication.

Implants of this sort, the researchers say, typically rely on microcontroller-based systems that lack random number generation hardware, which makes encryption keys unnecessarily weak.

The session key enabling symmetric encryption of wireless traffic between the implant and a diagnostic base station could be generated by a developer and inserted into the implant. But the researchers contend there’s a risk of interception, and potentially a need for extra security hardware that would make the implant bulkier.

They believe there’s an alternative: Using the brain as a true random number generator, a critical element for secure key generation.

“We propose to use a physiological signal from the patient’s brain called local field potential (LFP), which refers to the electric potential in the extracellular space around neurons,” the paper explains.

And to transmit the key to the external device, they suggest using an electrical signal carrying the key bits from the neurostimulator, a signal that can be picked up by a device touching the patient’s skin. Other modes of transmission, such as an acoustic signal, they contend could be too easily intercepted by an adversary.
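The paper describes this at the protocol level rather than in code, but the gist of the key-generation idea can be sketched roughly as follows in Python. Everything here – the sample values, the hash-based whitening, the choice of AES-GCM – is an illustrative assumption, not the authors’ implementation:

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import hashlib
import os

def key_from_lfp(samples: list[int]) -> bytes:
    """Condense raw LFP samples into a 128-bit session key.

    Sketch only: the noisy, patient-specific LFP signal acts as the entropy
    source, and a hash whitens it into key material. A real scheme would need
    entropy estimation and randomness testing on the signal.
    """
    raw = b"".join(s.to_bytes(2, "big", signed=True) for s in samples)
    return hashlib.sha256(raw).digest()[:16]

# Hypothetical LFP readings (microvolt samples) captured by the implant.
lfp_samples = [12, -7, 33, -21, 5, 18, -14, 27, 9, -3, 41, -19]
session_key = key_from_lfp(lfp_samples)

# Once both sides share the key (e.g. via the skin-conducted signal described
# above), telemetry can be encrypted and authenticated with AES-GCM.
aesgcm = AESGCM(session_key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"stimulation voltage: 2.5V", None)
print(aesgcm.decrypt(nonce, ciphertext, None))
```

A real design would also have to show that the LFP signal carries enough entropy and cannot be predicted or measured remotely by an attacker.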

The lesson here, the eggheads say, is that security-through-obscurity is a dangerous design choice.

Implantable medical device makers, they argue, should “migrate from weak closed proprietary solutions to open and thoroughly evaluated security solutions and use them according to the guidelines.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/18/boffins_break_into_brain_implant/

48 million personal profiles left exposed by data firm LocalBlox

US social network data aggregator LocalBlox has been caught leaving its AWS bucket of 48 million records – harvested in part from Facebook, LinkedIn and Twitter – available to be viewed by anyone who stopped by.

Security biz Upguard wandered by on February 18 and found the publicly accessible files in an AWS S3 storage bucket located at the subdomain “lbdumps.” There’s no evidence that anyone else stopped by for a peek, but it’s possible.

According to Upguard, the S3 bucket contained a single 151.3GB compressed representation of a 1.2TB ndjson (newline-delimited JSON) file.
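A dump in that shape is at least straightforward to examine: newline-delimited JSON can be streamed record by record out of the compressed file without ever holding 1.2TB in memory. A minimal Python sketch – the gzip compression and the filename are placeholders, since neither is specified here:

```python
import gzip
import json
from collections import Counter

def stream_ndjson(path):
    """Yield one parsed record per line from a gzip-compressed ndjson file."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# Tally which fields actually appear across the first 100,000 records.
field_counts = Counter()
for i, record in enumerate(stream_ndjson("lbdumps_sample.ndjson.gz")):
    field_counts.update(record.keys())
    if i >= 100_000:
        break

print(field_counts.most_common(20))
```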

Upguard, in a blog post on Wednesday, said it informed LocalBlox on February 28 and that the bucket was secured later that day.

Poorly configured AWS S3 buckets have been a source of shame for Amazon Web Services and its users. Last year, the cloud platform giant introduced a tool to warn customers about insecure storage setups, and earlier this year it made the business version of that tool free, to avoid embarrassment by association.

Still, the problem persists and the forecast continues to look bleak. Last year, Gartner research VP Jay Heiser predicted that through 2020, “95 percent of cloud security failures will be the customer’s fault.”
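If most of those failures really will sit on the customer side, the most basic check is at least easy to automate. Here’s an illustrative boto3 sketch that flags ACL grants exposing a bucket to everyone; the bucket name is a placeholder, and this is nowhere near a full audit:

```python
import boto3

# Canned AWS group URIs that make a bucket readable/writable by the world
# (AllUsers) or by any AWS account holder (AuthenticatedUsers).
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(bucket_name: str):
    """Return ACL grants on a bucket that expose it to the public."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return [
        (grant["Grantee"].get("URI"), grant["Permission"])
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]

# "my-data-dump" is a placeholder; run this only against buckets you own.
for uri, permission in public_grants("my-data-dump"):
    print(f"bucket grants {permission} to {uri}")
```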

According to Upguard, the data profiles appear to have been collected from multiple sources. They include names, street addresses, dates of birth, job histories scraped from LinkedIn, public Facebook profiles, Twitter handles, and Zillow real estate data, all linked by IP addresses.

Some of the data, the security company suggests, appears to have come from purchased databases and payday loan operators. Other data points – associated with queries like pictures, skills, lastUpdated, companies, currentJob, familyAdditionalDetails, Favorites, mergedIdentities, and allSentences – appear to have been scraped through searches of Facebook.

LocalBlox has posted samples of its data profiles on its website.

“The presence of scraped data from social media sites like Facebook also highlights an important fact: all too often, data held by widely used websites can be targeted by unknown third parties seeking to monetize this information,” Upguard said.

Facebook CEO Mark Zuckerberg recently acknowledged “we believe most people on Facebook could have had their public profile scraped” by “malicious actors.”

Zuckerberg, testifying before Congress in the wake of the Cambridge Analytica scandal, insisted Facebook users have control over their data. Judging by this case, it looks more like no one has much control over it.

LocalBlox did not immediately respond to a request for comment. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/19/48_million_personal_profiles_left_exposed_by_data_firm_localblox/