
Engineer Arrested for Attempted Theft of Trade Secrets

Software engineer Dmitry Sazonov has been arrested for trying to steal valuable code from his employer, a financial services firm.

The FBI has announced the arrest of software engineer Dmitry Sazonov, who was taken into custody for allegedly trying to steal valuable computer code from the financial services organization where he worked.

Sazonov, a native of New York’s Rockland County, has been charged with one count of attempted theft of trade secrets, which carries a maximum sentence of 10 years in prison and maximum fine of $250,000, or twice the gross gain or loss from his crime.

The DoJ reports he allegedly took several steps to hide his theft of proprietary computer code for a trading platform, which his employer has been developing for five years.

“He researched and ultimately used the technique of steganography to hide the code within other PDF files like personal tax and immigration documents on his work computer,” says FBI assistant director-in-charge William F. Sweeney Jr. “He also uploaded encrypted zip files to a third-party website to complete his heist.”
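The simplest trick in that family is appending a payload after a PDF's end-of-file marker, which readers silently ignore. The sketch below is illustrative only, not a reconstruction of Sazonov's actual method, which real steganography would hide far less visibly (e.g. inside object streams):

```python
# Illustrative sketch: concealing a payload in a PDF by appending it after
# the final %%EOF marker. PDF viewers ignore trailing bytes, so the file
# still opens normally. All data here is made up for demonstration.

MARKER = b"%%EOF"

def embed(pdf_bytes: bytes, payload: bytes) -> bytes:
    """Append a payload after the PDF's end-of-file marker."""
    return pdf_bytes + b"\n" + payload

def extract(stego_bytes: bytes) -> bytes:
    """Recover everything after the last %%EOF marker."""
    idx = stego_bytes.rfind(MARKER)
    return stego_bytes[idx + len(MARKER):].lstrip(b"\n")

fake_pdf = b"%PDF-1.4\n...document objects...\n%%EOF"
secret = b"example payload"
assert extract(embed(fake_pdf, secret)) == secret
```

Detection is equally simple for this naive variant (compare file size against the declared object structure), which is why more sophisticated embedding is used in practice.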

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: http://www.darkreading.com/vulnerabilities---threats/engineer-arrested-for-attempted-theft-of-trade-secrets/d/d-id/1328635?_mc=RSS_DR_EDT

Sysadmin accused of crashing former employer’s Oracle database with logic bomb

A Massachusetts systems administrator is facing charges of breaking the Computer Fraud and Abuse Act, trespassing, and conversion – using other people’s property for a crime – after booby-trapping his former employer’s servers.

For 14 years, Nimesh Patel worked at high-performance computing component manufacturer Allegro MicroSystems as a system administrator, with particular responsibility for programming the shop’s Oracle financial database module. He resigned on January 8, 2016 but is accused of then trying to sabotage the company.

Over the course of his employment Patel was issued two laptops, both of which the company asked him to return. Patel gave back only one of them, substituting a different, never-issued laptop for the second after completely wiping its hard drive.

The prosecution alleges the second work laptop was kept so that Patel could still access the company network and because it still contained a file with all the employees’ login data and passwords.

Court documents [PDF] claim that on January 31 Patel trespassed on company property to get within wireless range of the network, and then used the laptop to log into the network using the account of his subordinate staffer. He then uploaded malware into the Oracle financial module.

The code was to activate on the first day of Allegro’s financial year, April 1. The software was designed to delete key financial data headers and pointers from the Oracle files, rendering the module useless.
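A date-triggered logic bomb of this kind ultimately rests on a trivial guard: compare the current date against a hard-coded activation date and only run the payload once it has passed. The following harmless sketch (names and dates merely mirror the case details reported above) shows the trigger mechanism, without any destructive payload:

```python
from datetime import date

# The reported trigger: the first day of Allegro's financial year.
ACTIVATION_DATE = date(2016, 4, 1)

def payload_should_run(today: date) -> bool:
    """A logic bomb's trigger is typically just a date comparison:
    the hidden routine stays dormant until the chosen day arrives."""
    return today >= ACTIVATION_DATE

# Dormant the day before, armed from the activation date onwards:
assert payload_should_run(date(2016, 3, 31)) is False
assert payload_should_run(date(2016, 4, 1)) is True
```

The simplicity of the trigger is what makes such code hard to spot in review: nothing about a date comparison looks malicious in isolation.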

The software worked as designed, and two weeks into April the accounting department noticed something was wrong. Allegro called in investigators, who found the code on April 25, along with evidence that Patel had used the second laptop to access the network after he had left the job.

The company claims that the only other employee with the skills to write code for the Oracle database had left before Patel’s departure. It also claims he logged into the network using the subordinate’s ID before he quit the job.

Allegro called the police, who investigated and brought charges. The company claims that the software issues cost it over $100,000 and it is seeking to recover these costs from Mr Patel, in addition to any other penalties the court could impose should he be found guilty. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/14/sysadmin_crash_former_employers_oracle_db/

Linux remote root bug menace: Make sure your servers, PCs, gizmos, Android kit are patched

A Linux kernel flaw that potentially allows miscreants to remotely control vulnerable servers, desktops, IoT gear, Android handhelds, and more, has been quietly patched.

The programming blunder – CVE-2016-10229 – opens up machines and gizmos to attacks via UDP network traffic. Any software receiving data using the system call recv() with the MSG_PEEK flag set on a vulnerable kernel opens up the box to potential hijacking. The hacker would have to craft packets to trigger a second checksum operation on the incoming information, which can lead to the execution of malicious code within the kernel, effectively as root, we’re warned.

Exploitation of this security shortcoming appears to be non-trivial, luckily. Programs from the Nginx web server and wget to the Mirai botnet code and various others set the MSG_PEEK flag on some connections, leaving the underlying machine open to attack if the kernel is vulnerable. The bug can also be potentially exploited to kick off a local privilege escalation. Kernel versions below 4.5, all the way down to 2.6, are possibly at risk.
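MSG_PEEK is so widely used because it lets an application inspect queued data without consuming it, so a later recv() sees the same bytes again. A minimal sketch of the call pattern on a local UDP socket (it is this second pass over already-received data that the flawed kernels mishandled):

```python
import socket

# Two UDP sockets on the loopback interface.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
rx.settimeout(5)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())

# MSG_PEEK returns the datagram but leaves it queued...
peeked = rx.recv(1024, socket.MSG_PEEK)
# ...so a normal recv() afterwards still delivers the same datagram.
consumed = rx.recv(1024)

assert peeked == consumed == b"hello"
tx.close()
rx.close()
```

On a vulnerable kernel, the peeked read could trigger the unsafe second checksum calculation over attacker-controlled UDP data; on a patched kernel the sequence above is entirely safe.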

The issue was discovered by Google’s Eric Dumazet, and quietly dealt with at the end of 2015 with a small fix applied to the open-source kernel. Linux distros, such as Ubuntu and Debian, were distributing fixed builds of the kernel by February this year. Red Hat told us its flavors of Linux were never affected.

“The code was never included in the kernel that Red Hat ships,” Red Hat spokesman John Terrill confirmed to The Reg.

Then this month, Google issued a bunch of security fixes for Android, which contained an update addressing CVE-2016-10229 in smartphones, tablets and other gear. NIST also put out an updated advisory this week, and that’s when people started taking notice. The warning explains:

udp.c in the Linux kernel before 4.5 allows remote attackers to execute arbitrary code via UDP traffic that triggers an unsafe second checksum calculation during execution of a recv system call with the MSG_PEEK flag.

Those Google Android updates can be applied to Nexus gadgets. Samsung and LG have also issued patches for their handsets.

So, in short, yes, there is a remote kernel-level code execution vulnerability in Linux, which sounds like the worst of the very worst, but it is pretty much patched by now – and it appears to be tricky to exploit. It was silently addressed in the kernel source over a year ago, and fixed in updates to machines earlier this year, but only now has it come to wider attention.

If you stick to a regular update cycle, you should be OK: your devices are cured once the fix is installed and the machine restarted. Spare a thought for people who don’t want to, or can’t, install kernel-level fixes and reboot their machines – maybe because the machines are neglected home routers, or phones that no longer receive security updates, or for some other reason. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/14/new_critical_linux_kernel_flaw/

95% of Organizations Have Employees Seeking to Bypass Security Controls

Use of TOR, private VPNs on the rise in enterprises, Dtex report shows.

The insider threat issue is well-understood and something that countless surveys have shown poses almost as big a risk to enterprise data security as external attackers.

A report from Dtex this week offers a slightly different look at the problem by highlighting some of the clues that organizations should be looking for to detect and stop insiders engaged in malicious or negligent behaviors.

The Dtex report is based on an analysis of risk assessments conducted by a sample of its customer base. A stunning 95% of the assessments showed employees to be engaged in activities designed to bypass security and web-browsing restrictions at their organizations.

Examples included the use of anonymous web browsers such as TOR, anonymous VPN services, and vulnerability-testing tools such as Metasploit. The use of anonymous VPN services within organizations in fact doubled between 2015 and 2016, according to Dtex.

An overwhelming amount of data from customer assessments has shown that the use of such tools and services by employees is almost always a precursor to data theft or other malicious behavior. “Enterprises usually don’t expect to find such a high volume of employees actively trying to bypass security controls,” says Rajan Koo, senior vice president of customer engineering at Dtex.

Employees using private VPNs and Tor on an enterprise network are typically trying to hide their actions and do something that will not be detected by the organization’s security controls, he says. “Security bypass is the first step towards data theft or other destructive behavior,” Koo says.

For example, if a user threat assessment uncovers an employee using a TOR browser on the network, administrators should treat that as a red flag that the employee is engaging in prohibited or even potentially illegal behavior. Similarly, there’s a high chance that an employee who spends hours researching ways to get around security systems is trying to evade the controls within their own organizations.

“When an employee spends time researching how to bypass security controls, we often find that they are trying to exfiltrate data without being blocked by their DLP or without raising any flags on the network,” Koo says. Or they could be trying to save time by using their favorite tools that are being blocked by corporate security, he says.

Organizations should also not ignore the use of personal email accounts such as Gmail and Yahoo on corporate endpoint devices, Dtex noted in its report. About 87% of the companies whose data Dtex analyzed reported employees using personal web-based email on corporate devices, even though many had explicit measures in place to block such use.

While the use of personal email by itself is not a red flag, organizations should not ignore the fact that personal email can be used to enable data theft, the report noted.

Ordinary emails, file attachments, and calendar entries are some of the more obvious ways that an employee with malicious intent can use to steal data. Users can also simply use email drafts to save and transfer corporate data out of the network without leaving an obvious trail, Dtex said.

More than half of the companies in the Dtex report also encountered potential data theft issues from people who were about to leave the organization. Leavers, for instance, tend to show higher than normal file aggregation activity in the two weeks before their scheduled departure. The kind of data at risk from such activity includes proprietary plans, client lists and even IP.
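A crude version of that "leaver" signal can be expressed as a baseline comparison: flag an employee whose file-access rate over their final weeks is a multiple of their historical norm. The threshold and window below are illustrative assumptions, not Dtex's actual detection logic:

```python
def flags_leaver(daily_counts, recent_days=14, threshold=3.0):
    """Flag when the mean daily file-access count over the last
    `recent_days` exceeds `threshold` times the earlier baseline mean."""
    baseline = daily_counts[:-recent_days]
    recent = daily_counts[-recent_days:]
    if not baseline or len(recent) < recent_days:
        return False  # not enough history to compare against
    baseline_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return baseline_mean > 0 and recent_mean > threshold * baseline_mean

# 90 days of normal activity (~10 files/day), then a two-week spike:
history = [10] * 90 + [60] * 14
assert flags_leaver(history) is True

# Steady activity throughout raises no flag:
assert flags_leaver([10] * 104) is False
```

Real user-behavior analytics products weight many more signals (time of day, file sensitivity, removable media use), but the baseline-versus-recent comparison is the core idea.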

As numerous other surveys have shown, Dtex’s analysis also found that malicious insiders are far from the only insider threat. Fifty-nine percent of the organizations in the report, for instance, said employees put them at risk via inappropriate Internet usage, such as viewing pornography or gambling at work.

“Insider breaches are a growing threat to virtually all organizations including mainframe users,” says John Crossno, product manager of Compuware’s security solutions group, which recently released a tool designed to mitigate the threat.

The increasing number of incidents in which employees fall prey to phishing and other social engineering attacks and hand over authorized user credentials has made even otherwise secure mainframe environments vulnerable, he says. He points to the massive data breach at the U.S. Office of Personnel Management in 2015 as one example of how attackers can gain access to critical mainframe systems by acquiring valid credentials.

In the mainframe environment, “enterprises have traditionally relied on insufficient methods to identify threats including disparate logs and [system-level] data gathered by security products to piece together user behavior,” he says. What is needed is a much more comprehensive approach to monitor and analyze mainframe application user behavior to detect insider breaches.

“The best way to detect threats before they cause damage is by collecting and analyzing data from various sources which provide a baseline for behaviors and stressors most closely linked to insider threats,” says Thomas Read, vice president of security analytics at Haystax Technology, in recent comments to Dark Reading.

Often, organizations focus their insider threat mitigation efforts on the end point but do little to understand the likelihood of an insider going rogue or causing a data breach because of a lack of training.

“Harold Martin – the contractor for the NSA found with stolen classified files – had a history of bad behaviors that were never flagged by insider threat controls,” he says as one example. “He also had access to the information as part of his job, and walked off the NSA site with the files. Network controls never would have detected this.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: http://www.darkreading.com/attacks-breaches/95--of-organizations-have-employees-seeking-to-bypass-security-controls/d/d-id/1328628?_mc=RSS_DR_EDT

Got an Industrial Network? Reduce your Risk of a Cyberattack with Defense in Depth

If an aggressive, all-out cyberdefense strategy isn’t already on your operational technology plan for 2017, it’s time to get busy.

Designing and building the kind of mission-critical cyber protection systems needed in today’s vulnerable industrial environments is, in many ways, similar to designing and building castles in medieval times.

Barriers to entry were placed from the perimeter all the way into the core of the castle to stop invaders and give those inside the castle walls time to protect what needed to be protected. Moats, drawbridges, and iron gates all presented obstacles to anyone trying to breach the walls and entry points with malicious intent.

The modern-day equivalent of a fortress is known as the “defense in depth” model. The model is based on multiple, overlapping layers of protection for critical infrastructure.

Defining policies and procedures based on an integrated view of physical, network, computer, and device security, defense in depth is the best way to manage both external and internal threats. The model draws on three concepts to ensure fast detection, isolation, and control, ultimately limiting the impact of an error or breach, regardless of where or how it happens:

1.  Multiple layers of defense: If one is bypassed, another layer is able to provide defense. 

2.  Differentiated layers of defense: If an attacker finds a way past the first layer, the same technique won’t carry them past the subsequent defenses, since each layer is slightly different from the one before it.

3.  Threat-specific layers of defense: Designed for specific risks and vulnerabilities, these solutions defend against a variety of security threats the control system is exposed to, such as computer malware, angry employees, denial-of-service (DoS) attacks, and information theft.


In light of the escalating frequency of hacking events, it might seem necessary to lock everything down and throw away the keys. But business still has to be done. Before you begin investing in hardware, software, and training, look at your operations and identify the critical assets, vulnerabilities, and risks presented by a cyberattack. Understand how communication flows across the organization, both internally and externally. Identify the functions that are most critical to ensuring that business gets done, and what the tolerance in those areas is for downtime. Set priorities and then move on to executing your plan. And, lastly, understand how improving your cybersecurity posture can make your operation not only more secure but also more reliable and robust.

Firewalls and Defense in Depth
Implementing a defense in depth strategy requires a combination of tools and techniques that support the vision of a layered approach to protection. Five categories of security offer the comprehensive defense needed to significantly reduce the risk of a breach, as well as mitigate the impact of a breach should one occur. These include:

1.  Preventative security: Intended to prevent incidents from occurring and reduce the number and type of risks and vulnerabilities. Examples include strong password policies and disabling unused ports on switches to prevent access from unauthorized devices.

2.  Network design security: Minimizes vulnerabilities and isolates them so an attack doesn’t affect other parts of the network. A “zones and conduits” method can help limit the number of connections between network zones, lowering the risk of an attack spreading across the network.

3.  Active security: Active measures and devices block traffic or operations that aren’t allowed or expected on a network. Examples include encryption, protocol-specific deep packet inspection, Layer 3 firewalls, and antivirus use.

4.  Detective security: Identifies an incident in progress, or after it occurs, by evaluating activity registers and logs, including log file analysis and intrusion detection system monitoring.

5. Corrective security: Aims to limit the extent of any damage caused by an incident, such as configuration parameter backup policy, and firewall and antivirus updates.

Firewalls are an especially important and common tool for ensuring network security in an industrial environment, as they can play various roles in partitioning networks and protecting against outside threats and propagation of internal errors. Firewalls do this by permitting only certain types of communication between devices to protect against malicious attacks and device or operator errors. On a technical level, a firewall’s function is to filter packets. After inspecting each packet to determine whether it corresponds to an approved traffic pattern, firewalls filter or forward packets that match these rules.

Different kinds of firewalls offer different levels of packet filtering. Stateless firewalls determine the individual devices or applications with which they can communicate, while stateful firewalls monitor the communication process and use recorded information, such as the initiation or termination of the connection, as an additional decision metric for packet filtering. Deep packet inspection firewalls, an extension of stateful packet inspection, examine the full packet to find malformed industrial control system (ICS) messages, or highly specialized attack patterns hidden deep within the communication flow.
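Stripped to its essentials, stateless filtering is just matching a packet's header fields against an ordered rule list, first match wins. A toy sketch (the rules here are invented examples, with Modbus/TCP standing in for an ICS protocol):

```python
# A toy stateless packet filter: match header fields against ordered
# rules, first match wins, default deny. Real firewalls match full
# 5-tuples, track connection state, and may inspect payloads; this
# shows only the rule-matching core.

RULES = [
    # (protocol, destination port or None for any, action)
    ("tcp", 443, "allow"),   # HTTPS to the historian server
    ("tcp", 502, "allow"),   # Modbus/TCP, a common ICS protocol
    ("udp", None, "deny"),   # no UDP at all in this zone
]

def filter_packet(proto: str, dst_port: int) -> str:
    for rule_proto, rule_port, action in RULES:
        if proto == rule_proto and rule_port in (None, dst_port):
            return action
    return "deny"  # default-deny posture for anything unmatched

assert filter_packet("tcp", 443) == "allow"
assert filter_packet("tcp", 23) == "deny"   # Telnet is not whitelisted
assert filter_packet("udp", 53) == "deny"
```

A stateful firewall would additionally key decisions on a connection table (has this flow completed a handshake? which side initiated it?), and deep packet inspection extends the match down into the application payload itself.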

It’s also important to categorize and consider firewalls based on network location. Firewalls in a wireless local area network (WLAN) restrict the forwarding of messages between WLAN clients at the WLAN access point to increase the overall security of the network. Those at the field level address threats that may lie within the network, and firewalls in a small cell or external site control the flow of network traffic going in and out of the external site’s local network. This creates a border between the company’s own network and an external network, such as the Internet.

Daily headlines remind us of the intensity of cyberattacks. Ignoring this business reality isn’t an option. For industrial operations, understanding the role firewalls play in a network security strategy and moving quickly to deploy the multi-layered approach afforded by defense in depth can mean the difference between spending millions to recover from a breach’s impact on uptime and maintaining the business continuity needed to serve customers and shareholders.

Editor’s Note: Tobias Heer and Oliver Kleineberg also contributed to this column. Tobias has been with Belden since 2012 and specializes in topics that revolve around security and wireless in industrial control systems. Oliver joined Belden in 2007, and is responsible for advance development within Belden’s Industrial IT platform.

Related Content:

 

Jeff Lund is a senior director of product line management in Belden’s industrial IT group. He is responsible for Belden’s vision and product initiatives related to the industrial Internet of Things, as well as for coordinating and driving cybersecurity and wireless product …

Article source: http://www.darkreading.com/endpoint/got-an-industrial-network-reduce-your-risk-of-a-cyberattack-with-defense-in-depth/a/d-id/1328623?_mc=RSS_DR_EDT

Priorities clash over the call to encrypt the whole internet

Encryption isn’t just a nice-to-have for privacy-conscious web users – it’s essential to the growth of the online economy, and should be used across the entire internet as standard practice. That’s the view of the Internet Society, a global nonprofit focusing on internet policy – and it says that law enforcement agencies worried about tracking the bad guys are just going to have to deal with it.

In a blog post titled Securing Our Digital Economy, ISOC president Kathryn Brown argues for universal encryption on the internet to preserve growth of the online economy. She warns:

This cannot happen without a serious commitment by all parties to security and privacy. The truth is that economies can only function within a secure and trusted environment.

As such, strong encryption is crucial, and ISOC wants it to be the norm for all online transactions. “It should be made stronger and universal, not weaker,” she continues.

The spanner in the works is of course online crime and terror. Law enforcement and politicians consistently call for encryption loopholes that would enable them to read private data, on the basis that they need to know what the bad guys are doing.

Brown acknowledges this, and argues that we need to “deconstruct the issues faced by law enforcement and policymakers and agree together how we can achieve a trusted digital economy underpinned by encryption”. She doesn’t necessarily come up with solutions to the concerns of law enforcement, though.

Brown suggests three measures for G20 countries. First, acknowledge that encryption is an important technical foundation for trust, and get everyone to use it. Second, collaborate on securing the digital economy, and third, put users’ rights at the heart of any decisions made. Everything should be done in their interest.

To this end, ISOC calls for “ubiquitous encryption for the internet”. How would that work, then?

Encrypting the internet

There have already been some movements towards an encrypted internet, in the form of HTTPS. This secure version of the hypertext transfer protocol – the language that lets web servers send you pages – is based on the encryption standard TLS. When you’re surfing using HTTPS, snoopers can still figure out the domain name you’re accessing, but the data that you’re exchanging with the website (including specific URLs, content, and your login credentials) is scrambled.
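In code terms, those HTTPS guarantees come from wrapping the TCP connection in TLS with certificate and hostname verification switched on. Python's ssl module defaults illustrate the client side; this is a sketch of configuration only, no connection is made:

```python
import ssl

# A default client-side TLS context, as used under the hood for HTTPS.
ctx = ssl.create_default_context()

# Certificate validation and hostname checking are on by default --
# this is what stops a snooper from simply impersonating the server.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# The trade-off noted above still applies: the server's name travels in
# the clear (in the TLS SNI extension), so an observer learns *which*
# site you visit, but not the URLs, content, or credentials exchanged.
```

Disabling either default (as some ad-hoc scripts do) reopens exactly the man-in-the-middle attacks that HTTPS exists to prevent.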

Tech companies have been busily ushering everyone into an HTTPS world. Google began putting warnings on non-HTTPS sites last September, and Apple vowed last year to require HTTPS access for its apps. That’s important, given the rise in mobile traffic that we’ve seen on the web. WordPress, one of the web’s linchpins, also added encryption for all customer sites, which goes a long way towards making encryption a de facto standard.

Efforts to encourage end-to-end encryption in this way seem to be working. Mozilla spotted in October that users who share browsing telemetry with the company had, for the first time, loaded more than 50% of their pages via HTTPS.

Still, this doesn’t make encryption an actual standard that must be followed. There are still plenty of sites that don’t offer it. There was some hope for a mandatory standard in the form of HTTP/2, an update to the existing HTTP standard. There was a concerted effort to build encryption into this by default, but it was overruled.

There is now an effort to build what’s known as opportunistic encryption into HTTP/2. This automatically upgrades HTTP connections to use TLS by default, if both sides support it. Otherwise, the connection falls back to plaintext communications. Email servers already have a special command called STARTTLS that does a similar thing for SMTP (simple mail transfer protocol) communications.

While encryption isn’t a compulsory part of the HTTP/2 spec, the browser vendors may well end up enforcing what the standards bodies won’t – as it stands, they are mandating that their implementations of HTTP/2 be used over an encrypted link. No one is enforcing the use of HTTP/2 as a communications protocol, though.

Governments won’t like it

What isn’t clear is what this means at a nation-state level. In the UK, the Investigatory Powers Act – often referred to as the “Snooper’s Charter” – theoretically gives the government the right to require internet providers to build decryption capabilities into their services. In France, presidential candidate Emmanuel Macron is pushing for the same thing. But deliberately forcing providers to build “in-the-middle” surveillance features into what is supposed to be end-to-end encryption is something of a fool’s errand, for reasons we’ve outlined before.

ISOC has been a consistent supporter of encryption. It has written about how deliberately weakening encryption decreases trust in the internet without really deterring bad guys, who will presumably use encryption of their own rather than relying on what’s in their devices. Like many privacy advocates, ISOC points out that law enforcement officials won’t be the only ones to benefit from weakened cryptography. This is the internet, and someone else is going to find and exploit any deliberately-introduced holes.

(Sophos, for that matter, agrees strongly with the position that you can’t advance security by weakening it.)

The bottom line is that there are already standards to encrypt the internet, and it will be difficult for governments to force surveillance points into them if they’re used properly. But that means getting the internet to use them – and that could be the most difficult challenge of all.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CWCwFDOGYzs/

Smartphone sensors offer hackers a way past security PINs

The motion sensors embedded in smartphones could offer attackers a way to infer security PINs, researchers at Newcastle University have discovered.

Today’s smartphones come stuffed with these sensors, ranging from well-known ones such as GPS, camera, microphone and fingerprint readers to accelerometers, gyroscopes, ambient light sensors, magnetometers, proximity sensors, barometers, thermometers and air humidity sensors – to name only a few of the estimated 25 in the best-equipped models.

That’s a lot of data for a rogue app or malicious website to aim at, much of which is not covered by any consistent permissions or notifications system.

The Newcastle University study focused on the sensors that record a device’s orientation, motion and rotation which, the team theorised, could be used to reveal specific touch actions.

The methodology involved 10 smartphone users entering 50 four-digit test PINs five times each on a webpage, which provided data to train the neural network used to guess the PINs.

In the event, the network guessed the correct PIN on its first try an impressive 70% of the time. By the fifth guess, the success rate reached 100%.

For comparison, the team reckons that guessing at random from the study’s pool of 50 test PINs would be right only 2% of the time on the first try and 6% of the time by the third guess.
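
Those baseline figures follow directly from sampling without replacement from the 50 test PINs used in the study (a quick sanity check of the arithmetic, not code from the paper):

```python
from fractions import Fraction

def random_guess_success(pool_size: int, guesses: int) -> Fraction:
    """Probability that at least one of `guesses` distinct random guesses
    from a pool of `pool_size` PINs is correct: k guesses without
    replacement out of N equally likely PINs succeed with probability k/N."""
    return Fraction(min(guesses, pool_size), pool_size)

print(float(random_guess_success(50, 1)))  # 0.02 -> 2% on the first guess
print(float(random_guess_success(50, 3)))  # 0.06 -> 6% by the third guess
```

Against that 2% baseline, the neural network’s 70% first-try success rate is a 35-fold improvement.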

That’s an impressive guessing rate – so should smartphone users be worried?

In the short term, not really. Training the neural network to reach this level of accuracy required a large amount of training data – 250 PINs per targeted user – on which to base its inferences about which keys individuals had touched.

Gathering each of those PINs could only be achieved under specific conditions, such as if an attacker were running a rogue app or had lured the user to a website running malicious JavaScript code in a tab that remained open while they entered a PIN in another site.

Under real-world conditions this would be pretty hard to pull off. In any case, the team points out, up to a quarter of smartphone users choose PINs from a predictable set of 20 common sequences such as 1234, 0000, or 1000, so advanced neural PIN guessing might be overkill.

What the study tells us is that how someone holds, clicks, scrolls and taps on a smartphone generates data that is not as indecipherable or random as people probably think it is.

Said the study’s lead author, Dr Maryam Mehrnezhad:

We all clamour for the latest phone with the latest features and better user experience but because there is no uniform way of managing sensors across the industry they pose a real threat to our personal security.

One solution would be to extend sensor permissions so that users can see when a malicious site or app is accessing them. But there are now so many of them inside smartphones this might lead to notification overload.

The team’s other suggestions – change PINs regularly, check app permissions before installation, close background tabs and apps – are sound but unlikely to make much impression on the average smartphone user if the history of security advice is anything to go by.

Alternatively, people could simply use longer PINs or, better still, the industry could ditch them altogether (as is being done elsewhere) in favour of better security options. Users like PINs, but as the punchline goes, their days are surely numbered.
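
The arithmetic behind “longer is better” is straightforward – every extra digit multiplies the search space by ten (simple illustration, not from the study):

```python
def pin_space(length: int) -> int:
    """Number of possible numeric PINs of a given length (10 digits per position)."""
    return 10 ** length

for n in (4, 6, 8):
    print(n, pin_space(n))  # 4 -> 10,000; 6 -> 1,000,000; 8 -> 100,000,000
```

A six-digit PIN gives an attacker a hundred times more ground to cover than a four-digit one – though, as noted above, that advantage evaporates if users simply pick predictable sequences from the larger space.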



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/TgceaIJ5unk/

Google joins the efforts to halt the spread of fake news

Earlier this week, Google rolled out a new tool to help users around the globe differentiate between real and fake news. Tested on news stories in a handful of countries over recent months, the “Fact Check” tag adds additional information to some of the technology giant’s search and news results. As a Google blog post explains:

This label identifies articles that include information fact checked by news publishers and fact-checking organisations… so people can make more informed judgements

The snippet that’s added reveals the claim and who made it, along with the name of the organisation that checked it and what they concluded.

It’s possible that some news article or search result could include more than one tag. That’s because Google won’t be checking out the facts itself; the checking is down to the 115 organisations that make up the fact-checking community. Google believes that readers will find the different opinions useful:

Even though differing conclusions may be presented, we think it’s still helpful for people to understand the degree of consensus around a particular claim and have clear information on which sources agree.

Conversely, not every news article or search result will include the extra information. Not every article will be checked and Google won’t publish every check completed. The content, publisher and fact check claim must meet the search giant’s standards and adhere to its policies for the fact-check label to appear.

While Google’s announcement was generally well received as a tool to help in the fight back against fake news, some felt it didn’t go far enough. As TechRadar commented:

It doesn’t do anything to affect the ranking of the articles. If the fact check article would normally show up on page four, it will still show up on page four.

Facebook, on the other hand, is actively pushing potentially fake news down its news ranking. In a recent blog post, the social media giant explained that, since a lot of fake news is financially motivated, one of the most effective approaches is “removing the economic incentives for traffickers of misinformation”. In other words, making it less visible. To do that it is actively:

  • Testing for signals that an article has misled people, so it can improve News Feed ranking algorithms and reduce the prevalence of false news content.
  • Testing ways to make it easier for readers to report a false news story, with stories flagged as false by users showing up lower in the News Feed.
  • Working with independent third-party fact-checking organisations to flag potentially fake news stories and reduce their ranking in the News Feed.

It’s early days and we may see some of the other tech companies offer tools to help users spot and flag up fake news. For now, we’ll just have to wait and see whether these two efforts hold back the spread of fake news, or simply provide the reader with some additional information to help them make a more informed decision.

In reality, however, the article from MIT has probably hit the nail on the head by saying “Fact-check alerts and handy tips to help spot misinformation are useful, but they place the onus firmly on users.”

With the onus, it seems, likely to stay with each and every one of us for the time being, we recently shared some advice on how you can help in Fake news: what can we all do to play our part in combating it?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0xksl2Xp4mk/

Android malware creators throw up a roadblock to thwart the good guys

Security practitioners have long considered emulation testbeds a useful tool for conducting operational security exercises and a variety of research. For almost as long, malware writers have sought to thwart such tools.

SophosLabs has come across some fresh examples of this – specifically, anti-emulation Android malware. The findings are in a Sophos Blog write-up by Android specialists Chen Yu, William Lee, Jagadeesh Chandraiah and Ferenc László Nagy.

In it, they explain how Android malware is copying the anti-emulation techniques that have served Windows malware writers so well.

First, let’s look at what an emulator is. Most online definitions describe it as hardware or software that allows one computer (the host) to imitate another computer (the guest). It typically allows the host system to run software or use peripheral devices designed for the guest system. In security, it’s a handy way to test malware behavior or larger security operations readiness.

Anti-emulation techniques are found in many different Android malware families, one being the recent Android Adload adware found in Google Play.

Six techniques

SophosLabs researchers identified many anti-emulator techniques when they looked at such malware. The Sophos Blog post describes six of them in detail. Here’s a summary of each:

  1. Checking telephony services information: Emulator detection is all about spotting the differences between the environment an emulator provides and the one a real device provides. For a start, the deviceID, phone number, IMEI and IMSI will differ between an emulator and a real device. The android.os.TelephonyManager class provides methods to get this information.
  2. Checking the build info: Researchers found multiple malware families checking the build information on a system to determine if it’s running on an emulator.
  3. Checking system properties: Some system properties on an emulator are different from those on real devices. For example, device brand, hardware and model. 
  4. Checking for the presence of emulator-related files: In this case, the malware checks to see if QEMU (Quick Emulator) or other emulator-related files exist.
  5. Checking the debugger and installer: This one isn’t strictly anti-emulation, but it serves the same goal of obstructing dynamic analysis. The Skinner adware, for example, uses Debug.isDebuggerConnected() and Debug.waitingForDebugger() to check whether a debugger is attached.
  6. Time bomb: This is another way many malware/adware families hide themselves from dynamic analysis. After installation, they await a certain time until they start their activities.
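
Techniques 2–4 above all boil down to comparing environment fingerprints against known emulator tell-tales. A minimal sketch of that logic (the indicator strings – “generic”, “goldfish”, the QEMU device files – are well-known artefacts of the stock Android emulator, but the `looks_like_emulator` helper itself is hypothetical; real malware queries android.os.Build and the file system directly):

```python
# Well-known tell-tales of the stock Android emulator (QEMU / "goldfish").
EMULATOR_BUILD_MARKERS = ("generic", "goldfish", "sdk", "emulator")
QEMU_FILES = ("/dev/socket/qemud", "/dev/qemu_pipe")

def looks_like_emulator(build_props: dict, existing_files: set) -> bool:
    """Flag an environment as an emulator if its build properties or
    file system match common emulator signatures."""
    for value in build_props.values():
        if any(marker in value.lower() for marker in EMULATOR_BUILD_MARKERS):
            return True
    return any(path in existing_files for path in QEMU_FILES)

# A stock emulator image trips the checks; a real handset does not.
emulator = {"BRAND": "generic", "HARDWARE": "goldfish", "MODEL": "sdk_phone"}
handset  = {"BRAND": "samsung", "HARDWARE": "exynos",   "MODEL": "SM-G950F"}
print(looks_like_emulator(emulator, {"/dev/socket/qemud"}))  # True
print(looks_like_emulator(handset, set()))                   # False
```

This is also why such checks are an arms race: analysis environments can spoof build properties and hide QEMU files, at which point malware authors move on to subtler signals like the timing and debugger checks in techniques 5 and 6.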

Defensive measures

Malicious code with anti-emulation features is just the latest example in what has been a surge in Android malware activity. The average Android user isn’t going to know what techniques the malware used to reach their device’s doorstep, but they can do much to keep it from getting in – especially when it comes to the apps they choose:

  • Stick to Google Play. It isn’t perfect, but Google does put plenty of effort into preventing malware arriving in the first place, or purging it from the Play Store if it shows up. In contrast, many alternative markets are little more than a free-for-all where app creators can upload anything they want, and frequently do.
  • Consider using an Android anti-virus. By blocking the install of malicious and unwanted apps, even if they come from Google Play, you can spare yourself lots of trouble.
  • Avoid apps with a low reputation. If no one knows anything about a new app yet, don’t install it on a work phone, because your IT department won’t thank you if something goes wrong.
  • Patch early, patch often. When buying a new phone model, check the vendor’s attitude to updates and the speed that patches arrive. Why not put “faster, more effective patching” on your list of desirable features, alongside or ahead of hardware advances such as “cooler camera” and “funkier screen”?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cyy9FSZ1uV4/