
Run this in April: UPDATE Azure SET SQLthreat_detection = ‘generally available’

Microsoft says it will fully power up its Azure SQL Database Threat Detection service this spring.

This technology, which has been in preview mode for the past year or so, monitors for suspicious database activities, and raises the alarm if malicious access is detected. It has been two years in the making, and will enter general availability in April, we’re told.

This feature is supposed to reassure Azure subscribers that their information is protected on remote servers. Microsoft is keen to stress to businesses that it is safe to run their applications on its infrastructure. As everyone knows, running software in the cloud just means running software on someone else’s computers, and biz execs don’t like the idea of being sued if something horrible were to happen to their faraway data, especially if it were outside of their control.

These cloud migrations are going to dominate the enterprise technology world for the next twenty years, but for that time on-premises systems will remain. The highest-value data still sits on many companies’ own servers, and while the threat detection service will only be made generally available to cloud users, on-premises SQL Server customers may see some form of it in the future.

The Azure service will continuously monitor and profile customers’ application behavior to detect suspicious database activities and identify potential mischief, using machine learning algorithms written in R, which is now supported by SQL Server 2016 running on Azure’s back-end.
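
Microsoft has not published the models themselves, but the general approach – baseline each login’s normal activity, then score new activity against that baseline – can be illustrated with a minimal sketch. The feature (daily query volume) and the threshold below are illustrative assumptions, not Azure’s actual detection logic.

from statistics import mean, stdev

def is_anomalous(history, todays_count, threshold=3.0):
    """Score today's query volume against a baseline built from past days."""
    mu = mean(history)
    sigma = stdev(history) or 1.0          # avoid dividing by zero on a flat history
    return abs(todays_count - mu) / sigma > threshold

# A login that normally issues ~200 queries a day suddenly issues 5,000.
history = [210, 195, 220, 205, 198]
print(is_anomalous(history, 5000))   # True  -> raise an alert
print(is_anomalous(history, 207))    # False -> normal behaviour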

When malicious attempts to access, breach or exploit sensitive data are spotted, security officers and designated administrators get immediate notifications and will be able to view the alerts in the Azure Security Center, along with recommendations for how to mitigate the threats.

Speaking to The Register, Rohan Kumar, general manager of database systems, explained how Microsoft’s telemetry helped it inform the machine learning algorithms that have contributed to the database threat detection service.

“We get 300 billion authentication requests each month,” Kumar explained, all of which contribute to Microsoft’s intelligence security graph, which collects the signals based on access patterns “to create the graph and help us leverage hundreds of gigabytes of telemetry every second, including a lot on Azure.”

There are 1.3 billion calls to Azure Active Directory daily, and Microsoft scans more than 200 billion emails for malware and phishing attacks each month. Microsoft collects between 600 and 700TB of telemetry data on a daily basis, not all of which is collected for security purposes, but “a significant proportion” of which is used to create the models that will be used with the threat detection service.

The licensing terms for Azure SQL Database Threat Detection have not yet been disclosed. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/10/microsoft_adding_security_for_sql_databases/

Google set to purge Play store of apps lacking a privacy policy

Google is warning developers that it plans to purge its Play store of apps that don’t have privacy policies.

The move could affect millions of apps that don’t spell out what they do with user data.

Developers worldwide have been receiving notices from Google this week, informing them that the company plans to “limit visibility” or to entirely remove such apps, according to The Next Web.

That could be a whole lot of apps. Google Play store had some 2.6m apps as of December 2016, according to statistics portal Statista.

Studies have found that mobile apps in general – both those in Google Play and in Apple’s app store – have been pretty dismal when it comes to privacy.

In 2014, a coordinated study of apps run by a group of national privacy and data protection bodies from all around the world found that some 85% were failing to provide adequate information on the privacy implications of using the app.

Google Play store app developers have until March 15 to do one of two things: either link to a “valid” privacy policy on their app store listing page and within their app, or stop asking for sensitive user data.

According to Google’s User Data policy, developers must be “transparent” in how they collect, handle and share user data.

That’s the bare minimum requirement, though. Requirements get stiffer still for personal and sensitive information.

That includes personally identifiable information (PII), financial and payment information, authentication information, phonebook or contact data, microphone and camera sensor data, and sensitive device data. If an app collects that kind of user data, it’s required by the User Data policy to…

  • Post a privacy policy both in the designated field in the Play Developer Console and within the Play-distributed app itself.
  • Handle the user data securely, including transmitting it using modern cryptography (for example, over HTTPS).
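
What “transmitting it using modern cryptography” means in practice is simply: send the data over TLS and leave certificate verification switched on. The sketch below uses Python’s requests library for brevity, with a hypothetical endpoint and payload; an Android app would do the equivalent with its own HTTP client, but the principle is identical.

import requests

API_ENDPOINT = "https://example.com/api/v1/profile"   # hypothetical endpoint, not a real API

def upload_profile(session_token, email):
    resp = requests.post(
        API_ENDPOINT,
        json={"email": email},                          # user data in the body, never in the URL
        headers={"Authorization": f"Bearer {session_token}"},
        timeout=10,
        # verify=True is the default: the server's TLS certificate is checked
    )
    resp.raise_for_status()
    return resp.json()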

This is a good move both for the sake of user privacy and for the developers of (privacy-respecting) apps who’d like to get a bit more elbow room in the cluttered Google Play store.

One developer, Jack Cooney, creator of the app Hip Hop Ninja, was singing the praises of the purge. TNW quoted him:

I think it’s fantastic, this will clear the Google Play store of so many junk and zombie apps that our games will find increased visibility on the store as the search terms will become much less cluttered.

This will make it easier for people to be able to find our app’s [sic] like Hop Hop Ninja! with better keyword searches like ninja or Nerd Agency and find much more relevant results. (A previous pain point of developing for Android).


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SVow1fnUVBE/

RSA Conference 2017: your chance to get to grips with ransomware

You know a security threat has reached epic proportions when RSA Conference decides to make it the focus of an all-day seminar. Such is the case with ransomware.

This scourge will be under the microscope Monday from 9am until 5pm in room 2024 at Moscone West, part of the sprawling Moscone Center complex where RSA Conference 2017 takes place next week.

Ransomware is an old topic in information security circles. Attackers have been hijacking computers and holding files hostage for years now, typically demanding that ransom be paid in bitcoins. Some might expect that a majority of people are well aware of the threat by now and that they’re taking the appropriate precautions. It’s therefore reasonable to assume that online thieves have moved on to new tactics.

Unfortunately, that’s hardly the case, said Andrew Hay, CISO of DataGravity and one of the seminar organizers.

Ransomware is one of the most prominent threats facing organizations and their end-users, partners, and customers. RSA brings together many of the best and brightest security minds in the industry, many of whom spend countless hours researching ransomware. In addition to security professionals, numerous organizations send their own security and technology employees to gain a better understanding of ransomware and effective mitigation techniques.

Mitigation strategies will be at the heart of this seminar. Attendees can expect a day of exploring ransomware’s multifaceted implications across technical, policy, compliance and financial response, Hay said.

Sessions will focus on innovative research, case studies on response and recovery efforts, and debate on whether – and when – victims should pay the ransom.

The struggle is real

Ransomware has been a major focus for Naked Security in the last couple of years. Last month, for example, we wrote about how thousands of unsecured MongoDB databases were hit by an attacker demanding a 0.2BTC ransom ($220) to return the data he was holding hostage. The attacker, going by the online handle Harak1r1, hit servers across the globe.

When it comes to the question of whether or not to pay a ransom, we’ve shied away from moralizing about whether it’s always unacceptable to support criminality by paying up, even if you are in a difficult position. But in Ransomware – should you pay? we made two suggestions:

  1. Don’t pay if you can possibly avoid it, even if it means some personal hassle.
  2. Take precautions today (eg backup, proactive anti-virus, web and email filtering) so that you avoid getting into a position where you ever need to pay.

The trick, of course, is to keep from getting put in this no-win situation in the first place. We’ve regularly offered advice on preventing (and recovering from) attacks by ransomware and other malware.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/opDdtKMgh3A/

Scammers slip fake Amazon ad under Google’s nose

Last year, Google says it took down 1.7bn bad ads. Well, it missed a whopper on Wednesday: a bad ad perfectly spoofed to look like a legit Amazon ad. Anybody who clicked on it was whisked to a Windows support scam, according to ZDNet.

ZDNet’s Zack Whittaker reports that this bogus ad – perched at the top of search results, labelled sponsored ad served by Google – didn’t infect visitors with malware.

That’s a thin silver lining, but it doesn’t mean that the scammers didn’t try to swindle visitors.

ZDNet used a tracer tool to examine the fake ad, which was served up through Google’s own ad network. It apparently resolved fully to Amazon.com – probably as a way to trick Google’s systems into accepting it.

Once visitors clicked on the “Amazon” ad, though, they were hijacked, sent to a page that detected what platform their systems were running on. If the page detected a visitor using Windows, it would present a Microsoft-branded blue screen of death. Mac users were told that their systems had been seized by crypto-ransomware.

Visitors who tried to get the heck out of there by exiting the page would get a popup with a script that added random characters to the web address. In some cases, it was freezing both the browser and the computer.

As of Thursday morning, the fake Amazon ad was no longer appearing, but the website hosting the scam was still active. ZDNet chose not to link to that site.

Google declined to comment, while Amazon hadn’t responded to ZDNet’s inquiry by the time the story posted on Thursday.

Would this have happened if that spoofed Amazon ad had appeared on the Bing search engine, given that Bing imposed a blanket ban on online tech support ads in May 2016?

The search engine changed its advertising policy to block all online tech support ads, including both the legitimate tech support companies and all the swindlers. Bing did so because the sheer volume and audacity of the crooks had spoiled it for everyone.

Bing’s blanket ban might not have picked up on the bogus Amazon ad, though. After all, these wolves apparently pulled on a pretty convincing sheepskin, which let them slip through Google’s safeguards.

We’ve written quite a bit about support scams. It used to be that these fake tech support scammers would call us, but nowadays, as more and more people refuse to take calls from unknown numbers, the crooks have been adapting.

Instead of them calling you, it’s increasingly common that they’ll use a web ad or popup that simply runs the scam in reverse: the crook will display a warning and advise you to call them, typically on a toll-free number.

What to do?

  • If you receive a cold call offering tech support, just hang up.
  • If you receive a web popup or ad urging you to call for support, ignore it.
  • If you need help with your computer, ask someone whom you know, and like, and trust.
  • When searching for Amazon, remember that you don’t need to use Google. Simply go straight to Amazon.com.

 

DEALING WITH FAKE SUPPORT CALLS

Here’s a short podcast you can recommend to friends and family. We make it clear that these guys are scammers (and why), and offer some practical advice on how to deal with them.

(Originally recorded 05 Nov 2010, duration 6’15”, download size 4.5MB)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jrfYYI-PXcs/

Scottish court issues damages to couple over distress caused by neighbour’s use of CCTV

A Scottish couple have been awarded damages of more than £17,000 in total for the “extreme stress” they suffered as a result of the “highly intrusive” use of CCTV systems by the owner of a neighbouring property.

Debbie and Tony Woolley were awarded £8,634 each after Sheriff Ross, in a ruling issued at Edinburgh Sheriff Court, said that the processing of personal data gathered from Nahid Akram’s video and audio recording equipment was “intrusive, excessive and unjustified” and “unnecessary in relation to any legitimate purpose”.

He said Akram, in her capacity as data controller for her guest house business, was responsible for a number of breaches of the Data Protection Act.

It is thought to be the first time that a court in the UK has awarded damages purely for the distress caused by a breach of UK data protection laws.

According to the ruling, a dispute broke out between the Woolleys and Akram over the use of Akram’s property as a guest house, which she runs as a business and which her husband manages. The Akram guest house is downstairs from the Woolleys’ flat. Both the Woolleys and Akram subsequently installed CCTV systems outside their respective properties.

While the Woolleys’ equipment “records images of their own external property only”, Akram installed “video and audio recording equipment” which allowed her, and her husband, to monitor comings and goings at the Woolleys’ property and to listen in to conversations in their private garden, according to the ruling. The equipment used by Akram was capable of storing five days’ worth of data at any one time.

The Sheriff described “the regime of surveillance” that the Woolleys were subjected to as “extravagant, unjustified and highly visible” and as “an effort to oppress”. He said that the Woolleys and their family had “suffered considerable distress” since Akram’s equipment had been installed in about October 2013 and that it is “difficult to conceive” a more intrusive case of surveillance.

“They have all been severely restricted in the use and enjoyment of their own home,” Sheriff Ross said. “They voluntarily restrict their external movements. They restrict their conversations, both inside and outside their home, as they are aware that they are being recorded and do not know the extent of the coverage. They require to warn visitors about the coverage. They cannot use their rear garden at all, as they do not want their activities to be recorded. They have suffered extreme stress as a result of [Akram’s] unfair processing of their personal data.”

Sheriff Ross said Akram had breached rules set out in the Data Protection Act that require personal data to be processed fairly and lawfully, as well as those that require personal data to be “adequate, relevant and not excessive in relation to the purpose or purposes for which they are processed”. She also breached the part of the Act that requires personal data to be retained for no longer than is necessary for the purpose or purposes for which it has been processed, according to the judgment. In his ruling, Sheriff Ross referred to a 2015 judgment issued by the Court of Appeal in London, the Google v Vidal-Hall case, which he said gave the Woolleys a right to claim compensation.

Until that 2015 ruling, it was the generally accepted position that people who did not incur a financial loss from a breach of the Data Protection Act were not eligible for compensation by way of remedy for that breach. However, the Court of Appeal in London said that position was not consistent with EU law.

It now means that, under the Data Protection Act, data subjects have a right to claim compensation if they suffer damage or distress as a result of violations of a section of the Act by organisations that hold their personal data. Organisations do have a defence to this right to compensation if they can “prove that [they] had taken such care as in all the circumstances was reasonably required to comply with the requirement [that it is alleged to have breached]”.

Sheriff Ross accepted a method for calculating the damages to be paid in this case which had been suggested by the Woolleys. Compensation was granted on the basis of £10 for each day that the Woolleys’ data had been processed in breach of the Data Protection Act, with a deduction being made for one month’s worth of days per year to account for days where the Woolleys were “likely to be absent from the property, for example on holiday”.

The Sheriff said, though, that it would be beneficial for an “authoritative decision” to be issued on the correct method for calculating damages in similar cases in future.

Dispute resolution specialist Jim Cormack of Pinsent Masons, the law firm behind Out-Law.com, said that there is an “alternative and, arguably, legally better approach” to calculating damages for distress than the one the Woolleys adopted in this case.

“In my view, it would have been open for the pursuers to sue for a lump sum, much as would be done for the relevant element of a personal injury case, and for the Court to make a broader assessment of the figure to be awarded, representing the positive view of the Court as to the appropriate figure for compensation overall,” Cormack said. “Damages for distress arguably do not need to be quantified with the same level of precision as other heads of damage and indeed it may be said that the nature of damages for distress is such that only a broader assessment can be made. It will be interesting to see if the approach of a daily figure adopted in this case is followed in future cases.”

Copyright © 2016, Out-Law.com

Out-Law.com is part of international law firm Pinsent Masons.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/10/scottish_court_issues_damages_to_couple_over_distress_caused_by_neighbours_use_of_cctv/

Trump cybersecurity order morphs into 2,200-plus-word extravaganza

The latest draft of a cybersecurity executive order to be signed by President Trump has become an unusually precise, report-ordering extravaganza.

Executive orders – even those signed by Trump – tend to be relatively short and quite vague, with general policy goals listed and expected to be interpreted by others.

The new cybersecurity order is neither. At over 2,200 words it is very long. It is also very precise, listing individuals and giving them specific tasks. Rather than focusing on a particular goal – the creation of a new taskforce or the development of a single report – the order calls for the production of no fewer than 10 reports, six of which will go direct to the President, on a range of aspects of cybersecurity.

(By comparison, even though President Obama put out a very lengthy executive order on cybersecurity, running to 3,000 words, it only asked for three reports to be created.)

To understand how what was originally a restatement of US policy toward cybersecurity with a call for a single report has evolved into an extensive work plan, you need to look at the unusual events of nine days ago.

Trump was expected to sign the cybersecurity order on January 31. To that end, a series of meetings were held at the White House during the day and it was supposed to end with the signing in the Oval Office in the late afternoon. But at the last minute, without explanation, the decision to sign was pulled.

Ban the bomb

That decision, we now know, was a direct result of the disastrous rollout of the immigration ban that caused chaos at airports nationwide. Such was the fallout that President Trump reportedly ordered that all new executive orders go through an expanded process that sought broader input from more government departments.

It appears as though due to that process, the cybersecurity order was passed around for additional input and resulted in a bloated document that looks set to create a mountain of work with uncertain outcomes.

In order of listing, the reports are:

  1. A risk management report from every agency head to the director of the office of management and budget (OMB) and the secretary of homeland security (DHS) within 90 days describing how they are implementing the NIST cybersecurity guidelines.
  2. A report to the President from Commerce, the DHS, the OMB and the General Services Administration, within 150 days, covering modernization of the federal government’s IT systems.
  3. A report to the President’s counterterrorism advisor from the defense secretary and director of national intelligence (DNI), within 150 days, covering how they will move toward a consolidated network architecture.
  4. A report to the President through the DHS, within 180 days, covering how the federal government can support critical infrastructure companies: so-called section 9 entities.
  5. A report to the President from the DHS and Commerce, within 90 days, looking at the transparency and risk management practices of section 9 entities.
  6. A report to the President from the DHS and Commerce, within 240 days, having spoken to Defense, the Attorney General, the FBI, the FCC and the FTC, on how to deal with denial of service attacks and botnets.
  7. An assessment to the President’s counterterrorism advisor, within 90 days, on the risks of a cyberattack on the nation’s electricity grid.
  8. A report to the President from Defense, the DHS, the DNI and the FBI, within 90 days, on cybersecurity risks facing defense and military systems.
  9. A report to the President from Treasury, Defense, the Attorney General, Commerce, the DHS and the DNI, within 90 days, covering strategic options for “deterring adversaries and better protecting the American people.”
  10. A report to the President from State, Treasury, Defense, Commerce, the DHS and Attorney General, within 180 days, covering how supporting the multi-stakeholder decision-making process can keep the internet free and open.

In short, while well intended, the executive order has become bloatware, as people who obviously do not have experience with executive orders have been given the opportunity to create a wish list of all the reports they would want from all the people they would want to hear from.

Assuming this draft makes it through unedited (which, in itself, would be a little concerning), we can’t see how this long series of reports requiring massive cross-department coordination will ever see the light of day. Even if they did, imagining that the president would deal with no fewer than six reports on cybersecurity is fantasy.

The end result will likely be stasis in place of the obviously intended big leap forward. The Trump Administration still has a lot to learn. ®

PS: The White House’s Chief Information Security Officer Cory Louie, who was installed by President Obama, has been forced to resign by Team Trump with no immediate successor. One of Louie’s duties was the almost impossible task of managing the security of the tweet-happy president’s mobile devices.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/09/trump_cybersecurity_order_becomes_report_extravaganza/

Explained: Apple iCloud kept ‘deleted’ browser histories for over a year

Apple appears to have fixed a flaw in iCloud that retained a copy of deleted Safari browsing history data synced from local devices for more than a year.

On Thursday, Russian computer forensics software biz Elcomsoft said that its forensic software was able to recover Safari browser history records that had been stored in iCloud and erased, including the date the URLs were last visited and when the deletion occurred.

Vladimir Katalov, CEO of Elcomsoft, in a blog post highlighted the forensic value of browsing data. Because iCloud sync works continuously, if enabled by the user, it’s particularly useful for surveillance and investigations, he suggested.

Shortly after the publication of Elcomsoft’s findings, Apple took undisclosed steps to address the issue. According to Katalov, Safari browser history data stored in iCloud now only dates back two weeks, deleted or not.

Apple did not respond to a request for comment.

The speed with which Apple chose to address the issue suggests something of the information’s potential value. Apple presumably wanted to avoid the burden of complying with a surge of demands for data assumed to be lost.

Regardless, the company had to act to meet its data retention commitment in its Legal Process Guidelines: “Apple does not retain deleted content once it is cleared from Apple’s servers.”

Apple says it retains iCloud connection logs for up to 30 days and iCloud mail logs for up to 60 days.

This isn’t the first time Apple’s data retention reality hasn’t matched its policy. In November, Elcomsoft found that iCloud was storing iPhone call histories without notification or consent. Apple has since fixed that issue.

The previous availability of deleted Safari browsing data isn’t all that significant in the range of possible privacy failures. First, someone using Elcomsoft’s software would have to know the user’s account name and password to access iCloud data, or would have to be in a position to demand account information from Apple through the established legal process.

In the context of an investigation, Google and the suspect’s ISP are likely to have a more comprehensive picture of online activities than iCloud backups.

What’s more, the user would have had to opt in to having Safari data synchronized to iCloud.

As to the technical nature of the flaw, we can only speculate, since Apple did not respond to a request for an explanation.

One possible cause might be a client-side bug with the way Safari or the iCloud sync software handles data. Safari stores history data in an SQLite database. A Stack Overflow post from more than a year ago suggests that deleting these files doesn’t always work. So perhaps a local browser storage flaw – using a cached file when the original can’t be found, for example, or a file lock that prevents removal – keeps deleted records around and syncs them.
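
For the curious, the local database is easy to inspect, which also shows why a sync bug is plausible: history is just rows in an ordinary file that get written, deleted and replicated. The path and table names below reflect the usual macOS layout (History.db with history_items and history_visits tables) but are assumptions that may vary between Safari versions; this is a read-only peek, not Elcomsoft’s recovery technique.

import sqlite3
from pathlib import Path

HISTORY_DB = Path.home() / "Library" / "Safari" / "History.db"   # usual macOS location

def recent_history(limit=10):
    # Open read-only so we don't touch Safari's live database.
    conn = sqlite3.connect(f"file:{HISTORY_DB}?mode=ro", uri=True)
    try:
        return conn.execute(
            """SELECT i.url, v.visit_time
                 FROM history_items  AS i
                 JOIN history_visits AS v ON v.history_item = i.id
             ORDER BY v.visit_time DESC
                LIMIT ?""",
            (limit,),
        ).fetchall()
    finally:
        conn.close()

for url, visit_time in recent_history():
    print(visit_time, url)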

It’s equally possible the issue could reflect an iCloud bug.

If privacy is a serious concern, avoid iCloud and cloud services in general. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/10/apple_icloud_kept_dead_browser_histories/

4 Signs You, Your Users, Tech Peers & C-Suite All Have ‘Security Fatigue’

If security fatigue is the disease we’ve all got, the question is how do we get over it?


There’s been a lot of talk about security fatigue lately, in the press and in my office. It’s a term that people get right away, and it feels like one of the classic social phenomena of our era, like multitasking or that phantom buzz in your pocket.

If security fatigue is the disease we’ve all got (even security pros!), the question is how do we get over it? To help answer that question, let’s take a look at four signs that identify the symptoms, along with recommendations that will put you and your users on the road to recovery.

Sign 1: You Reuse Passwords
Symptom: You want that 10% off coupon for creating an account on a new website—so you use your email address and that one password you use for all those “minor” accounts you’ve created.

What’s the Worst That Could Happen? At the least, hacking that single password might give criminals access to your personal card data. That’s a pain, but most credit card purchases are protected. But if the same hacker starts trying that password on other accounts, and those accounts include more personal information or are used for work credentials, you could quickly move from identity theft to a data breach. Ouch!

Cure: The best cure is a password manager, which is the easiest way to create unique, lengthy, and difficult-to-crack passwords for every login. Sadly, most people aren’t ready to take this strong medicine. So they fall back on a variety of other schemes to introduce some level of complexity to their passwords. There are a million of these schemes, and if you combine them with multi-factor authentication you just may be okay. But you can’t be sure. So use a password manager!
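
For anyone not ready to adopt a manager yet, the one habit worth stealing from one is generation: never invent a password yourself. A minimal sketch of that part of the job, using Python’s secrets module:

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length=20):
    """Generate a unique, high-entropy password for a single account."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(new_password())   # use it once, for one site, and let a manager remember it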

Sign 2: You Forget to Connect to VPN
Symptom: You’re doing some work from home, and you just jump right in and go—completely ignoring the step of setting up a VPN connection. You’re just catching up on e-mail after all.

What’s the Worst That Could Happen? If your home WiFi is password-protected, and you’re just sending email, the risk is pretty low. But let’s say you connect to an insecure website and it tries to download malware—you’re exposed. And you’re not always on password-protected networks or just doing email, right? The truth is, if you’re connecting to the Web or sending sensitive documents, you’re exposed without VPN.

Cure: It’s not establishing a VPN connection that’s hard. The hard part is remembering. It’s a matter of making it a habit, like snapping on your seat belt. I’ve put a reminder on my startup screen that I see every time I log in, and it really helps. We all have the capacity to trigger electronic reminders these days, so set one up for VPN usage today.
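
The reminder can also be automated. A minimal sketch of the same idea: at login, probe a host that is only reachable over the VPN and nag if it isn’t there. The hostname below is a placeholder for whatever internal address your organisation actually uses.

import socket

INTERNAL_HOST = ("intranet.example.corp", 443)   # hypothetical VPN-only host

def vpn_probably_up(timeout=3):
    try:
        with socket.create_connection(INTERNAL_HOST, timeout=timeout):
            return True
    except OSError:
        return False

if not vpn_probably_up():
    print("Reminder: connect to the VPN before you start working.")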

Sign 3: You Click on an Email Link — Even Though You’re not Sure
Symptom: It’s been a long day, but you’re determined to churn through a few emails before you bail out. Hmm, you think, you wouldn’t mind winning a new Amazon Alexa offered in one email, so you click the “enter automatically” link.

What’s the Worst That Could Happen? There’s a brief pause as you go to the innocuous-looking site. That pause, unfortunately, indicates the site is downloading a nasty piece of ransomware that will infect your network and bring work to a grinding halt. Cybercriminals have so many different ways to hook you, but they all begin by you visiting a site or downloading a file (or plugging in a USB drive), because you didn’t take that extra second to make sure you were taking the safest action.

Cure: Phishing sucks! It’s the most common form of cybercriminal attacks on employees, and it can be VERY difficult if not impossible to detect. But you can resist phishing with a few simple tricks. First, turn your baloney detector on high and quickly delete anything that sounds too good to be true or comes out of left field. Second, recognize that you should never act on emails when you’re in a hurry (unless it’s to delete them). Third, if you get a lot of commercial email (I sure do), use rules to move it all to a folder, and then take a little time a few times each week to go through and identify the stuff you want to act on—deleting everything else. Remember, you’re in control of your actions when it comes to your email, so make it a personal challenge to never get caught.
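
One more second-long check worth building into the habit, sketched below: does the domain a link shows you match the domain it actually points to? The example links are made up, but real phishing links are often this blatant.

from urllib.parse import urlparse

def looks_suspicious(display_text, href):
    """True if the link's visible domain doesn't match its real destination."""
    shown = urlparse(display_text if "://" in display_text
                     else f"https://{display_text}").hostname or ""
    real = urlparse(href).hostname or ""
    return not (real == shown or real.endswith("." + shown))

print(looks_suspicious("amazon.com", "https://www.amazon.com/deals"))       # False
print(looks_suspicious("amazon.com", "https://amaz0n-prizes.example/win"))  # True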

Sign 4: You Don’t Report Something that Seems Off
Symptom: Stopping in the kitchen for a cup of coffee, you notice a folder on the counter, with a sticky that says “Vendor Contracts, First Quarter.” The person who left it will probably be right back, you reason, so you fill your cup and get off to that meeting.

What’s the Worst That Could Happen? Remember that stranger you let in the door earlier? She could find a gold mine in that folder. Or the disgruntled employee who came in as you headed out. Perhaps he could use what’s in that folder to embarrass the company. The truth is, unreported suspicions can blossom into leaks of proprietary information or malware infections all too easily.

Cure: Reporting suspicious incidents or observations is inconvenient! But so is stopping at red lights, washing your hands before you eat, and a whole lot of other things that we go ahead and do because we care about our fellow man and want to make the world a decent place to live. So report suspicious behavior.

Do you note the similarities in all the cures to security fatigue? They all come back to the need to adopt a new mental model about security and to develop new habits that support that mental model. If you care about protecting yourself and your company from cybercrime, developing those new habits will not be hard. First care, then act. It’s that easy.


Tom Pendergast is the chief architect of MediaPro’s Adaptive Architecture(tm) approach to analyze, plan, train, and reinforce to deliver comprehensive awareness programs in the areas of information security, privacy, and corporate compliance. Tom has a Ph.D. in American … View Full Bio

Article source: http://www.darkreading.com/endpoint/4-signs-you-your-users-tech-peers-and-c-suite-all-have-security-fatigue/a/d-id/1328103?_mc=RSS_DR_EDT

Hacking The Penetration Test

Penetration testers rarely get spotted, according to a Rapid7 report analyzing its real-world engagements.

It’s not a good sign when an organization undergoing a penetration test can’t detect the operation probing and infiltrating its systems and network.

In a new report by Rapid7 that pulls back the covers on penetration test engagements the company has executed, two thirds of these engagements weren’t discovered at all by the organization being tested. That’s especially concerning because pen tests tend to be short-term, rapid-fire – and sometimes loud – operations, unlike the low-and-slow attacks by seasoned cyberattackers.

Tod Beardsley, research director at Rapid7, says pen tests typically run a week to 10 days, so researchers on the case basically throw as much as they can at the target fairly quickly, so it’s more likely they’d be detected by the client’s security tools and team. “It’s kind of like you run in and break everything you can. That’s the nature of the business, you have a week or 10 days,” he says. “But there’s not even detection [of a pen test] a third of the time which is bad.”

“If you can’t detect a penetration test, it seems it would be impossible to detect a real cybercriminal or cyber espionage” attack, Beardsley says.

Part of the problem is that organizations typically can’t and don’t daily track their event logs closely, he says, and don’t necessarily have a handle on what’s normal network activity. “It’s kind of a UI failure. We have security tools that are hard to use in the security industry; I don’t think it’s a matter of instrumentation. It’s more a matter of knowing what’s the norm for your network.”

Rapid7 took the results of 128 penetration tests it launched in the fourth quarter of 2016 in order to “demystify” penetration testing and to gauge just how much pen testers are getting away with due to security woes in organizations.

Penetration testing is gradually evolving. The rise in bug bounty programs in some cases has overshadowed and even shaped the nature of some pen testing, but even bug bounty proponents maintain that pen testing isn’t going anywhere.

Alex Rice, co-founder and CEO of bug bounty firm HackerOne, says many organizations with bug bounty programs end up shifting the focus of their pen tests. “They start doing more penetration tests, with more narrow scope,” Rice said in a recent interview with Dark Reading. “They learn and apply resources to areas lit up by a bug bounty program.”

He says most veteran pen testers prefer the more focused and challenging engagements, anyway. “We find most of the good ones would rather spend the entire engagement focusing on very hard security problems to solve,” Rice says. “It’s a $300-an-hour waste of their talent and ability if” those pen testers aren’t working on specific and tougher security issues, he says.

Almost Too Easy

Surprisingly, Rapid7’s pen testers in most cases didn’t have to look too deeply for holes to exploit: two-thirds of the time, pen testers were able to find and exploit vulnerabilities in the client’s systems. And some 67% of the clients sported network misconfiguration issues. All in all, the pen testers were able to successfully “hack” their clients 80% of the time, either via unfixed vulnerabilities or configuration mistakes. Among the bugs they found were the usual suspects: cross-site request forgery (22.7%), SMB relaying (20.3%), cross-site scripting (18.8%), broadcast name resolution (14.8%), as well as some SQL injection, denial-of-service, and other web-type flaws, the report says.

In one pen test of a healthcare firm, Rapid7’s team was able to chain together unrelated Web application flaws to infiltrate the client’s internal, back-end systems: first a CSRF flaw in a public Web application, which gave them an entrée to create an account on the server. They then found a persistent XSS flaw that they employed to steal the administrator’s session token and impersonate him. That led them to an insufficient validation flaw in the Web app that allowed them to gain access to the Web server’s operating system and ultimately get full shell access on the server and internal network.

“That they were leveraging cross-site scripting, CSRF [and another flaw] to get internal network access: that was shocking to me,” Beardsley says. “I was surprised to see vulnerabilities play such a large part of pen testing.”
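
Since CSRF topped the list of findings and opened the chain above, it is worth recalling how cheap the standard defence is: a random per-session token embedded in every form and checked on every state-changing request. The sketch below is framework-agnostic and illustrative only, not taken from the Rapid7 report.

import hmac
import secrets

def issue_csrf_token(session):
    """Create a token, store it server-side, and embed it in the form."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted_token):
    """Reject any state-changing request whose token doesn't match."""
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted_token or "")

session = {}                                            # stand-in for a server-side session store
form_token = issue_csrf_token(session)
assert verify_csrf_token(session, form_token)           # legitimate submission passes
assert not verify_csrf_token(session, "forged-value")   # cross-site forgery fails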


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: http://www.darkreading.com/vulnerabilities---threats/hacking-the-penetration-test-/d/d-id/1328105?_mc=RSS_DR_EDT

Google’s neural networks turn pixelated faces back into real ones

Researchers at Google Brain have come up with a way to turn heavily pixelated images of human faces into something that bears a usable resemblance to the original subject.

In a new paper, the company’s researchers describe using neural networks put to work at two different ends of what should, on the face of it, be an incredibly difficult problem to solve: how do you resolve blocky 8 x 8 pixel images of faces or indoor scenes containing almost no information?

It’s something scientists in the field of super resolution (SR) have been working on for years, using techniques such as de-blurring and interpolation that are often not successful for this type of image. As the researchers put it:

When some details do not exist in the source image, the challenge lies not only in “deblurring” an image, but also in generating new image details that appear plausible to a human observer.

Their method involves getting the first “conditioning” neural network to resize 32 x 32 pixel images down to 8 x 8 pixels to see if that process can find a point at which they start to match the test image.

Meanwhile, a second “prior” neural network compares the 8 x 8 image to large numbers of what it deems similar images to see if it can work out what detail should be added using something called PixelCNN. In a final stage, the two are combined in a 32 x 32 best guess.
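
To see why this is hard, it helps to look at the baseline the Google Brain approach is competing with: crush an image to 8 x 8 pixels and interpolate it back up, which recovers essentially nothing. A minimal sketch using Pillow; “face.png” is a placeholder for any image you have to hand.

from PIL import Image

original = Image.open("face.png").convert("RGB").resize((32, 32), Image.BICUBIC)  # placeholder input
tiny = original.resize((8, 8), Image.BICUBIC)    # the 8 x 8 input the networks see
naive = tiny.resize((32, 32), Image.BICUBIC)     # plain interpolation: a blurry best effort

original.save("face_32.png")
tiny.save("face_8.png")
naive.save("face_naive_32.png")   # the gap between this and face_32.png is what the networks try to fill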

Being neural nets, the system requires significant datasets and training to be carried out first, but what emerges from the other side of this approach is intriguing given the unpromising starting point.

To test the ability of the method to produce lifelike faces, researchers asked human volunteers to compare a true image of a celebrity with the prediction made by the system from the 8 x 8 pixelated version of the same face.

To the question “which image, would you guess, is from a camera?”, 10% were fooled by the algorithmic image (a 50% score representing maximum confusion).

This is surprising. The published images shown by Google Brain’s neural system bear a resemblance to the real person or scene but not necessarily enough to serve as anything better than an approximation.

The obvious practical application of this would be enhancing blurry CCTV images of suspects. But getting to grips with real faces at awkward angles depends on numerous small details. Emphasise the wrong ones and police could end up looking for the wrong person.

In fairness, the Google team makes no big claims for possible applications of the technology, preferring to remain firmly focused on theoretical advances. Their achievement is to show that neural systems working with large enough datasets can infer useful information seemingly out of junk.

In the end, this kind of approach is probabilistic – another way of saying it’s a prediction. In the real world, predictions sound better than nothing, but a more reliable end for CCTV might be achieved simply by improving the pixel density of security cameras.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EBDy4-cUaJ4/