AI could be better than your doctor at predicting a heart attack

Heart disease is the world’s leading killer. In 2012, between heart attacks, strokes and blocked arteries, cardiovascular disease claimed a total of 17.5m lives.

Doctors know the risk factors – high blood pressure, high LDL cholesterol, smoking, diabetes, and age, among others. But even so, they don’t have crystal balls. The human body is complex. Doctors don’t always get it right when they try to predict who’s going to have a heart attack.

But thanks to researchers at the University of Nottingham in the UK, we might be closer than ever to having that crystal ball.

Using machine learning, they’ve developed an algorithm that outperforms – by 7.6% – medical doctors when it comes to predicting heart attacks. Experts say that the heart attack-predicting AI could save thousands – perhaps even millions – of lives every year.

As the scientists explain in a recently published paper, prediction modeling for heart attacks involves complex, non-linear interactions between variables – something that algorithms can handle better than humans.

Science Magazine quoted Stephen Weng, an epidemiologist at the University of Nottingham:

There’s a lot of interaction in biological systems.

Some of those interactions make more sense than others, he said:

That’s the reality of the human body. What computer science allows us to do is to explore those associations.

For their study, Weng and his team used an array of machine-learning algorithms – logistic regression, random forest, gradient boosting machines, and neural networks – and compared the algorithms’ predictions against a model based on guidelines from the American College of Cardiology/American Heart Association (ACC/AHA).
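
The paper doesn’t publish its code, but the head-to-head it describes is straightforward to picture. Here’s a minimal sketch in Python using scikit-learn, with a hypothetical file name and made-up risk-factor columns standing in for the study’s data:

    # Sketch of the four-model comparison described in the study - not the
    # authors' actual code. Assumes a CSV of patient records with
    # hypothetical risk-factor columns and a binary cvd_event outcome.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    records = pd.read_csv("cohort.csv")  # hypothetical dataset
    X = records[["age", "systolic_bp", "ldl", "smoker", "diabetic"]]
    y = records["cvd_event"]             # 1 = went on to develop CVD

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(n_estimators=500),
        "gradient boosting": GradientBoostingClassifier(),
        "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
    }

    for name, model in models.items():
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")  # compared against an ACC/AHA baseline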

Once the algorithms had been trained on existing data – looking for patterns and deriving their own rules – all four of them outperformed the ACC/AHA guidelines, the scientists reported: “significantly” better, in fact.

Out of a sample of around 83,000 patient records, the machine-learning algorithms correctly identified 355 more patients who went on to develop cardiovascular disease than the ACC/AHA guidelines did. That’s 355 more people whose lives could have been saved.

The algorithms also produced 1.6% fewer false positives, meaning they were better than doctors at sparing patients from unnecessary treatment.

From the paper:

Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.

Elsie Ross, a vascular surgeon at Stanford University in Palo Alto, California, wasn’t involved in the study, but she told Science that the medical community should clearly be employing machine learning:

I can’t stress enough how important it is, and how much I really hope that doctors start to embrace the use of artificial intelligence to assist us in care of patients.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nqcrGk7-Qjw/

Ambient light sensors can steal data, says security researcher

Security researcher Lukasz Olejnik says it is possible to slurp sensitive data with the ambient light sensors installed in many smartphones and laptops.

The sensors are there so that devices can automatically adjust the brightness of their screens, a handy trick that saves users scrambling to change settings.

But Olejnik says such sensors are dangerous because the World Wide Web Consortium (W3C) is considering “whether to allow websites to access the light sensor without requiring the user’s permission.” That discussion is taking place in the context of giving web pages the same access to hardware that native applications enjoy.

If web pages gain that access, the sensor can be made to detect variations in brightness on the device’s own screen – so, for instance, it could “read” a QR code presented inside a web page, Olejnik says. And seeing as QR codes are sometimes used as an authentication tool for chores like password changes, Olejnik thinks that’s a worry.

As readers are doubtless aware, many sites change the colour of links when a user has visited them. Olejnik has used the ambient light sensor to detect that change and therefore infer a user’s browsing history.

There’s some good news in the revelation that the attack is slow: it took Olejnik 48 seconds to detect a 16-character text string, and three minutes and twenty seconds to recognise a QR code. Few users would keep a QR code on screen that long, but it’s still unsettling to know the sensors are an attack vector.

Olejnik proposes a pleasingly simple fix: if the API limited the frequency of sensor readings and quantized their output, the sensors could still do their job of shining a light on users but would lose the accuracy needed to do evil.
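
To make the idea concrete, here’s a minimal Python sketch of that mitigation – a hypothetical wrapper, not a real browser API – which refuses to return fresh readings too often and rounds what it does return into coarse buckets:

    import time

    class SafeLightSensor:
        """Hypothetical wrapper illustrating Olejnik's proposed mitigation."""

        def __init__(self, raw_read, min_interval=1.0, step=50.0):
            self.raw_read = raw_read          # callable returning lux as a float
            self.min_interval = min_interval  # seconds between fresh readings
            self.step = step                  # quantization bucket size, in lux
            self._last_time = float("-inf")
            self._last_value = 0.0

        def read(self):
            now = time.monotonic()
            if now - self._last_time >= self.min_interval:
                # Snap to the nearest bucket, so tiny screen-driven brightness
                # wiggles (a QR module, a visited-link colour) disappear.
                self._last_value = round(self.raw_read() / self.step) * self.step
                self._last_time = now
            return self._last_value

    sensor = SafeLightSensor(raw_read=lambda: 312.7)  # stand-in hardware sensor
    print(sensor.read())  # 300.0 - fine for auto-brightness, useless for spying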

This is not the first time an API has been shown to enable privacy invasions or security worries. Apple and Mozilla recently disabled a battery-charge-snooping API that Olejnik believes Uber used to figure out the state of customers’ phones, so it could charge them more for rides when their batteries were close to dying. Chrome has also adopted a Bluetooth-sniffing API, sparking calls for users to be offered a chance to disable it. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/20/ambient_light_sensors_can_steal_data_says_security_researcher/

Microsoft shrugs off report that Edge can expose user identities from Fetch requests

An independent security researcher claims to have uncovered a security flaw in Microsoft Edge.

The issue enables any website to identify a user of another website by their username, according to Ariel Zelivansky. More specifically, the researcher alleges that Edge exposes the URL of any Fetch response, in contradiction of the specification. This is a problem because it makes it possible to identify users by crafting a Fetch request to a URL that redirects to one containing the user’s username (e.g. https://facebook.com/me to https://facebook.com/username).
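
The redirect mechanics the attack relies on are easy to demonstrate outside the browser. Here’s a minimal sketch using Python’s requests library (the Facebook URLs are the researcher’s own example; the point is that in a browser, a cross-origin script should never get to see the post-redirect URL, which is exactly what Edge allegedly leaks):

    import requests

    # /me redirects to the profile URL bearing the username - but only when a
    # logged-in session cookie is present; anonymously you get a login page.
    resp = requests.get("https://facebook.com/me", allow_redirects=True)
    print(resp.url)      # the final URL after any redirects
    print(resp.history)  # the redirect chain that led there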

Zelivansky approached Microsoft but the software giant dismissed the issue. El Reg requested a comment only to be told that Redmond had nothing to add beyond its response to Zelivansky.

The security researcher went public with his findings and contacted The Reg after Redmond decided the issue didn’t merit patching earlier this month. The issue has spawned a discussion thread on Reddit. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/20/ms_edge_vuln_dispute/

Phishing with ‘punycode’ – when foreign letters spell English words

The curiously named system known as punycode is a way of converting words that can’t be written in ASCII, such as the Ancient Greek phrase ΓΝΩΘΙΣΕΑΥΤΟΝ (know yourself), into an ASCII encoding, like this: xn--mxadglfwep7amk6b.

This makes it possible to encode so-called International Domain Names (IDNs) – ones that include non-ASCII characters – using only the Roman letters A to Z, the digits 0 to 9 and the hyphen (-) character.

That’s handy, because the global Domain Name System (DNS), responsible for turning human-friendly server names into computer-friendly network numbers, is restricted to that limited subset of ASCII characters in domain names.

(Back when DNS was codified, storage and network bandwidth were much more precious resources than today, so that limits on the maximum size of everything from character sets to network packets are typically much more restrictive in older protocols.)
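
Python’s built-in codecs make the two layers easy to see: raw punycode, and the IDNA wrapping that adds the xn-- prefix used in the DNS. A quick sketch, with expected outputs taken from the article’s own example (exact behaviour depends on your Python version’s IDNA support):

    greek = "γνωθισεαυτον"  # the lower-case form of ΓΝΩΘΙΣΕΑΥΤΟΝ

    print(greek.encode("punycode"))  # raw punycode, e.g. b'mxadglfwep7amk6b'
    print(greek.encode("idna"))      # DNS label form: b'xn--mxadglfwep7amk6b'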

Homographs – when two words look alike

If you were to register the domain…

XN--MXADGLFWEP7AMK6B.EXAMPLE.COM.

…some modern apps may recognise the punycoding, and automatically convert the name for display as…

ΓΝΩΘΙΣΕΑΥΤΟΝ.EXAMPLE.COM.

You can see where this is going.

Some letters in the Roman alphabet are the same shape (if not always the same sound) as letters in the Greek, Cyrillic and other alphabets, such as the letters I, E, A, Y, T, O and N in the example above.

So you may be able to register a punycode domain name that looks nothing like a well-known ASCII company name, but nevertheless displays very much like it.

For example, consider the text string consisting of these lower-case Greek letters: alpha, rho, rho, iota, epsilon.

In punycode you get xn--mxail5aa, but when displayed (depending on the fonts you have installed), you get: αρριϵ.

Punycode considered harmful

A security researcher called Xudong Zheng recently wrote an article describing how different browsers take different approaches to the homograph problem.

He registered the domain xn--80ak6aa92e.com, which is a Cyrillic version of the above Greek apple trick – an unlikely Cyrillic domain name that just happens to come out as аррӏе when converted back from punycode to “Russian” text.
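
A quick Python sketch makes the trick tangible: strip the xn-- prefix and punycode-decode what’s left, and you get characters that merely look like “apple” (the expected output comes from Xudong’s write-up):

    label = "xn--80ak6aa92e"
    decoded = label[4:].encode("ascii").decode("punycode")  # drop the 'xn--'

    print(decoded)             # аррӏе - Cyrillic а, р, р, ӏ and е
    print(decoded == "apple")  # False: lookalike glyphs, not Latin letters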

Interestingly, many browsers take an aggressive stance against this sort of jiggery-pokery.

Safari and Edge, for example, just display it as plain old xn--80ak6aa92e.com, at least if your system settings don’t include any Cyrillic languages.

After all, if you can’t read Cyrillic text in the first place, you don’t lose anything by seeing the domain name in its punycode format – in fact, you gain a lot by not seeing it as misleading faux-English text.

Likewise, Chrome and Firefox won’t automatically decode punycode URLs if they mix multiple alphabets or languages, on the grounds that such text strings are highly unlikely in real life and therefore suspicious.

But both Chrome and Firefox will autoconvert punycode URLs whose characters all come from the same language, as in the Cyrillic аррӏе example above.

Preventing “confusables”

Apparently, Chrome will add protection to prevent this autoconversion, starting in the next version (Chrome 58), even though there’s a risk that some genuine non-ASCII domains might subsequently appear in the browser as punycode URLs.

Firefox programmers, on the other hand, argue strongly that this sort of protection is culturally insensitive and technically undesirable, because the Mozilla Foundation’s desire is to avoid favouritism and to treat all languages equally.

They say that the browser isn’t the place for deciding when ASCII should take “first class status” over some other system of writing. (ASCII, by the way, stands for American Standard Code for Information Interchange.)

Some of the Mozilla team suggest, not unreasonably, that the responsibility for preventing “confusable” domains, such as the one used by Xudong in his blog article, lies with the registrars of each top-level domain.

If registrars are, in general, supposed to stop fraudulent or deliberately misleading domain registrations, Mozilla says, then they should be stopping “confusables”, too, in the same way that countries expect their Motor Registries to avoid issuing personalised number plates with potentially offensive or B16OTED combinations of letters and numbers.

Not all of the Mozillans agree, however, pointing out that the risk of appearing “culturally insensitive” in respect of a small number of non-ASCII domain names is a small price to pay for making life harder for phishers and scammers in real life.

After all, deciding whether to allow or disallow a “confusable” domain name in the first place is itself a culturally subjective exercise.

Oh, what a tangled web we weave…

What to do?

Xudong has two good suggestions, to which we’ve added a third of our own:

  • Use a password manager, which helps reduce the risk of pasting passwords into any incorrectly named site. The password manager won’t match your Apple-in-ASCII password with the Apple-in-Cyrillic domain name, no matter what character encoding system is used.
  • Force Firefox always to display punycode names. If you don’t (or can’t) read any non-Roman alphabets or writing systems, you lose nothing by going to the about:config page and setting network.IDN_show_punycode to true.
  • Click on the padlock to display the HTTPS certificate. This shows the domain name for which the certificate was issued in the DNS-friendly, ASCII-only format, so if the name starts with xn-- then you are looking at a punycode domain, whatever it may look like in the address bar. (Note: drill right down to the [View Certificate] option.)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PHCvLVCKdh4/

If you’ve stayed at a Holiday Inn you may have lost more than a good night’s sleep (like maybe your bank card)

In February, Intercontinental Hotels Group alerted customers that some of its US locations had been infected with credit-card-stealing malware. Now it has admitted the cyber-outbreak is much worse than first thought.

IHG, which owns brands like Holiday Inn and Crowne Plaza, has warned that around 1,200 of its hotels across the US and Puerto Rico were hit by the same sales terminal malware – which grabs card data from the computers’ memory as payments are made. This information is then siphoned off to crooks to use online and to create cloned cards. The infections were spotted on September 29, 2016, but weren’t cleared up until March 2017, and some hotels might still have a problem.

“The malware searched for track data (which sometimes has cardholder name in addition to card number, expiration date, and internal verification code) read from the magnetic stripe of a payment card as it was being routed through the affected hotel server,” IHG said today. “There is no indication that other guest information was affected.”
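
For the curious, the track data such malware hunts for has a well-known layout, which is also what defenders look for when scanning memory dumps. A minimal, illustrative Python sketch – not IHG’s or the malware’s actual code – using a fabricated test value:

    import re

    # Track 2 layout: ;PAN=YYMM + service code + discretionary data, ending in ?
    TRACK2 = re.compile(r";(\d{12,19})=(\d{2})(\d{2})(\d{3})(\d*)\?")

    sample = ";4111111111111111=20051011234567890123?"  # made-up test value
    m = TRACK2.search(sample)
    if m:
        pan, yy, mm, service, _ = m.groups()
        print(f"PAN ending {pan[-4:]}, expires {mm}/{yy}, service code {service}")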

The hotelier said that many of its locations were unaffected because they had installed a security mechanism called Secure Payment Solution that blocked the spyware from reading off sensitive card data – however, many hotels hadn’t gotten the system up and running in time.

Since it is a franchise operation it’s up to the hotel owner to install the more secure system, and there are worries that not all of them have the system installed even now.

IHG has set up a web page with a full list of affected hotels, and it’s a very long list. The conglomerate isn’t offering the identity theft protection that is usual in such cases; instead it’s just telling customers to check their credit card statements.

That lack of customer support could turn around and bite IHG in the backside if the expected credit card fraud is widespread. The US is, after all, the land of the lawsuit, and lawyers are no doubt salivating at the chance to launch a class action suit against some of the best-known hotel brands in the country. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/19/intercontinental_hotels_group_malware/

We’re spying on you for your own protection, says NSA, FBI

A new factsheet by the NSA and FBI has laid bare ludicrous contradictions in how US intelligence agencies choose to interpret a law designed to prevent spying on American citizens, but which they use to achieve exactly that end.

  • While noting that the law specifically bans the gathering of information on US citizens, it then defends both the gathering and retention of information on US citizens.
  • While claiming that its procedures severely limit the amount of information that is gathered on individual US citizens, it claims to be unable to provide even an estimate as to how many US citizens’ records are in its database.
  • While noting it is illegal to specifically target US citizens using their personally identifiable information without a warrant, it then argues why it should be allowed to continue searching US citizens’ personally identifiable information without a warrant.
  • And while claiming that it does not use the law to undertake mass surveillance or bulk collection of information, it defends tapping the internet’s backbone and gathering information where the claimed target of surveillance is neither the sender nor the receiver of the information.

The document even claims that it is surveilling US citizens for their own protection while at the same time claiming that it is not doing so.

The obvious and painful contradictions within the 10-page document [PDF] are testament to the very reason why the factsheet had to be prepared in the first place: Congress is threatening not to renew the legislation due to the intelligence agencies’ willful misrepresentation of the law to perform the very activities it was designed to prevent.

Come again?

FISA – the Foreign Intelligence Surveillance Act – was enacted in 1978 and authorizes US intelligence agencies to carry out electronic surveillance of foreign persons outside the US. It specifically prohibited surveillance of US citizens and foreign persons within US borders.

But in 2008, the FISA Amendments Act (FAA) was passed to recognize the modern realities of internet communications: that foreign intelligence targets were using networks based in the United States to communicate. The law gave the intelligence agencies the right to demand that US companies hand over their communications in the search for foreign intelligence.

In an effort to ensure that those searches were restricted to non-US citizens however, the FAA – which was re-authorized in 2012 and now needs to be re-authorized again before the end of 2017 – included various procedures, and checks and balances.

Somewhat inevitably, however, those procedures – which remain almost entirely secret – and the checks and balances – which have been shown to be ineffective at best – have been slowly undermined by the intelligence agencies, to the extent that the FBI now routinely uses personally identifiable information of US citizens, such as an email address or phone number, to search a huge database of gathered information if it suspects them of a crime carried out in the US.

That reality is the diametric opposite of what the law was intended to do – hence the ludicrous contradictions between what the intelligence agencies say the law authorizes and the everyday realities that they argue must be retained.

Walk me through it

The first eight pages of the 10-page document are largely accurate, giving a rundown of the law, its history and intentions, and the procedures and checks introduced. In fact, it is a useful and largely objective rundown of the issue.

On page four, the document gives some examples of where use of Section 702 has proven effective: gathering insights into the minds of high-level Middle Eastern government ministers; checking up on sanctions; and identifying terrorists and terrorist sympathizers and alerting other governments to them.

Of the five examples given (of course it’s impossible to know how many real-world examples there are), only one covers an arrest on US soil: the case of Najibullah Zazi who was tracked after he sent an email to an al-Qaeda operative in Pakistan asking for help in making bombs. Zazi planned to bomb the subway in New York City but was arrested in 2009 before he had the opportunity to do so. He pled guilty in 2010 and was sentenced to life in prison in 2012. (It is worth noting, however, that Zazi was already under surveillance from US intelligence agencies thanks to his visits to Pakistan, so it’s unclear what role the Section 702 data really played.)

The document carefully words some sections covering concern over how the law was being interpreted. As a result of Edward Snowden’s revelations, lawmakers and civil society groups started asking precise questions, and that resulted in the intelligence agencies releasing limited information about the process they go through to obtain the right to spy on people. The document paints the provision of that information as the intelligence agencies’ “commitment to furthering the principles of transparency,” when nothing could be further from the truth.

It also tries to paint a report by the Privacy and Civil Liberties Oversight Board (PCLOB) into US spying in positive terms. The independent board, the document claims, largely exonerated the intelligence agencies and “made a number of recommendations” that have “been implemented in full or in part by the government.”

In reality, the board’s report was a damning indictment of the agencies’ effort to reinterpret the law to be able to spy on just about anyone. The recommendations that have been implemented “in part” cover the most important improvements, in particular the publication of the procedures that the agencies use in reaching determinations. These critical documents remain entirely secret.

The PCLOB also paid a high price for standing up to the NSA and FBI: its authority was cut out from under it, its budget was slashed, and all but one of its five board members have either resigned or not had their terms renewed. It is now a shell of an organization that doesn’t even answer its phone or emails.

The issues

It is on pages nine and 10, however, that the real issues appear – where the document addresses “702 issues that are likely to arise in the re-authorization discussion.”

These are:

  1. Information gathered on US citizens
  2. Searches carried out on that database
  3. Internet backbone tapping

Despite the law specifically noting that US citizens and people within US borders cannot be spied on through Section 702, in reality the intelligence agencies do exactly that.

The explanation is that this information is “incidental” and is hoovered up as the NSA and others are gathering intelligence on others. The intelligence agencies claim that it affects very few US citizens and so Congress has persistently asked what that number is: how many US citizens are included in the 702 database?

The US House Judiciary Committee first asked that question a year ago – April 2016. There is still no answer.

This latest document notes: “The IC (intelligence community) and DoJ (Department of Justice) have met with staff members of both the House and Senate Intelligence and Judiciary Committees, the PCLOB, and advocacy groups to explain the obstacles that hinder the government’s ability to count with any accuracy or to even provide a reliable estimate of the number of incidental US person communications collected through Section 702.”

It says that the agencies are “working to produce a relevant metric” to inform discussions.

This is a transparent attempt to prevent a figure on the number of US citizens in the database from being revealed, because it would almost certainly undermine the core contention of the intelligence agencies: that their procedures prevent the unnecessary gathering of information on US citizens.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/19/nsa_fbi_spy_on_us_for_our_protection/

Snowden Says Mass Surveillance Programs ‘Are About Power’

Edward Snowden shared his views of the implications of mass surveillance programs and the government’s objective in implementing them.

There’s a lot of uncertainty and debate around mass surveillance programs. Why do they exist? Who is interested in all of this data, and what do they want to do with it? These are a few of the questions explored during an event entitled “Democracy Under Surveillance: A Conversation with Edward Snowden,” held yesterday at the College of William & Mary.

The discussion was moderated by Col. Lawrence Wilkerson, former chief of staff to US Secretary of State Colin Powell and distinguished professor of government and public policy at W&M.

“Surveillance technologies have outpaced democratic controls,” said Snowden, who joined the event via satellite. “A generation ago, surveillance was extremely expensive … there was a natural limitation because governments had to spend extraordinary sums to track individual people.”

Today, the dynamic is reversed. One person in front of a monitor can track “an unimaginably large” number of people, he continued. The NSA’s surveillance program, deployed in secret and with “serious constitutional implications,” he said, is an example.

To illustrate the sheer amount of data the NSA has gathered, Snowden – who is in exile in Russia after copying and leaking classified information from the spy agency – showed a photo of the organization’s Mission Data Repository, originally named the Massive Data Repository. The troves of data garnered through surveillance are held “just in case.”

While the US government and others view such surveillance measures as necessary for security, Snowden offered the flip-side argument.

“Perhaps this is true,” Snowden said. “But we should always be aware that we may not get to choose what it is we’re actually being protected from.” He urged healthy skepticism of government efforts. As part of his discussion of mass surveillance programs and their infringement on constitutional rights, he posed the question: do these programs really protect people from harm? His answer: mass surveillance in the US has never made a concrete difference in saving lives.

“These programs are about power,” he argued during the event. For more than a decade, he claimed, mass surveillance has not countered terrorism, despite being justified on that premise.  

When asked by a W&M student whether increased surveillance could ever be justified, Snowden said he is less critical of targeted surveillance, provided those watching use “the minimum amount of surveillance needed to achieve goals.”

Targeted surveillance, he explained, has a “centuries-long track record” of saving lives. If someone has, for example, been associated with a terrorist group and demonstrated efforts to plan attacks, it’s worth gathering information, Snowden said.

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance & Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: http://www.darkreading.com/vulnerabilities---threats/snowden-says-mass-surveillance-programs-are-about-power/a/d-id/1328678?_mc=RSS_DR_EDT

Google Won’t Trust Symantec and Neither Should You

As bad as this controversy is for Symantec, the real damage will befall the companies and individual websites deemed untrustworthy by the Chrome browser on the basis of a rejected Symantec certificate.

News that Google may be imposing a series of restrictions in Chrome against digital certificates issued by Symantec is but the latest and most remarkable salvo in a dispute that stretches back years. Google is leveraging its prominence to force companies to confront their cyber risk – a vital advance in fostering proactive digital resilience. How Symantec responds will have relevance far beyond any one corporate conflict.

Claiming Symantec was far too lax and borderline negligent in issuing its certificates, Google recently announced it would begin gradually rejecting them, as well as any authorities tied to the Symantec root certificates. Any certificate authorities that derive their key chain from Symantec’s root will also face the same restrictions; some major names fall into this category, such as VeriSign and Thawte.
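
If you’re wondering whether a site you depend on is affected, you can check who issued its certificate with nothing more than Python’s standard library. A minimal sketch (www.example.com is a placeholder; the result is the immediate issuer, typically an intermediate CA in the chain):

    import socket
    import ssl

    def issuer_of(hostname, port=443):
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(socket.create_connection((hostname, port)),
                             server_hostname=hostname) as tls:
            cert = tls.getpeercert()
        # getpeercert() returns the issuer as nested tuples of (key, value) pairs
        return {k: v for rdn in cert["issuer"] for (k, v) in rdn}

    print(issuer_of("www.example.com"))
    # e.g. {'countryName': 'US', 'organizationName': '...', 'commonName': '...'}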

Despite the nonchalance of Symantec’s response, this is a potentially monumental step taken by Google. Browsers gatekeep the Internet to some degree: Chrome, Firefox, Safari, Edge/Explorer; Google, Mozilla, Apple, Microsoft. These are the applications and companies that determine how people interface with the Internet. Furthermore, they determine when a security problem takes on enough risk to be unilaterally rejected in the interest of their users. We’ve seen this before with encryption strength, cipher suites, even plugins like Flash and Java. These four companies hold the keys, as it were, for big-picture technological evolution – and the future of technology is resilience.

With this change, Google sets up the first real roadblock for Symantec’s business model. Google Chrome has about 60% of the browser market share. When Google drops support for an application or certificate for security purposes, it affects the browsing habits of the majority of Internet users. Symantec certificates, and those from its subsidiaries, will have demonstrably less value in Chrome once these restrictions start being enforced. This acute depreciation in the usefulness of a Symantec certificate should, and probably will, cause en masse replacement with less problematic certificates. As the sole purpose of a certificate is to provide a trusted source verifying that the site is valid, secure, and private, such a public rebuke of Symantec’s integrity could be a crushing blow.

Insecure practices such as the certificate problems Symantec had are not just academic examples of bad technology, but possibilities for real-world damage. It should be apparent by now that the digital economy is the economy. There is no retrograde motion left to us as a society when it comes to technology and its integration into daily life. As such, only resilient digital services can survive. Those which impose too much risk or cannot weather the attacks, outages, breaches, and problems of the digital landscape will lose the trust of the people, and of the other companies who rely upon those services to function.

Doing the Right Thing

Wouldn’t Symantec want to issue certificates in the most secure way possible – not merely to avoid Google’s wrath, but to do right by the enterprise customers who purchase its certificates? It seems not. Imagine the administrative overhead, cost, and time it would take for Symantec to change a system which, according to Google, has been broken for years. With no apparent pressing need to break from standard operating procedure, Symantec has not made those changes, even after earning nasty press about how shoddy its issuance practices really are.

This raises the question: how does enterprise technology evolve from here, without the prodding of a tech conglomerate? In a digital ecosystem of interdependent parts, large-scale changes are slow and usually painful, as different parts change and grow at different speeds, leaving others behind to catch up or fade away. Google’s decision to restrict certificates issued by Symantec and its subsidiaries acts as a catalyst for this evolution, because Google is directly impacting Symantec’s business as a certificate authority by imposing security measures that its certificate process does not meet.

Businesses that rely on websites to drive revenue will not want to worry about whether their Symantec-issued certificate will cause problems for some 60% of their visitors. There are reasonably priced alternatives readily available, issued by firms able to provide a genuinely trustworthy certificate to their customers. These business decisions will ultimately affect Symantec’s bottom line, perhaps providing an impetus for it to take seriously the security problems it has brushed off for years.

As bad as this is for Symantec, the real problems will affect those companies and individuals relying on the company’s certificates. Google’s rebuke will deter potential Chrome-using visitors from sites bearing Symantec certificates, while also calling the businesses’ reputations into question, as Chrome identifies them as untrustworthy sites. This is an unfortunate result for those trying to avoid this very fate by buying certificates from Symantec in the first place.

Imagine this very plausible scenario: your company uses Chrome and relies on a web application hosted on a server using a Symantec certificate to conduct business.  Chrome stops trusting that server, such that in the blink of an eye, none of your co-workers can access the website without jumping through some serious hoops. Some percentage of Chrome users will likely alter their browsing habits, and some percentage of websites will likely trade out their Symantec certificates for another brand.

In the bigger picture, we should expect to see more private-sector driven enforcement of security practices. The public interest lies in every component of our digital environment being as resilient as possible, so that we can all use services without taking on an undue amount of risk. But when a private company has no profit motive to reduce their cyber risk, and instead chooses to knowingly continue bad practices, how can this public good be achieved? In order for real digital resilience to be kindled and spread, companies will need to become proactive – not wait for the death knell of a war with Google to begin changing their ways.

Mike Baukes is co-founder and co-CEO of UpGuard, a cyber resilience company based in Mountain View, California.

Article source: http://www.darkreading.com/endpoint/google-wont-trust-symantec-and-neither-should-you/a/d-id/1328682?_mc=RSS_DR_EDT

3 Tips for Updating an Endpoint Security Strategy

How to navigate new threats, tools, and features to build an effective endpoint security strategy.

There is no one-size-fits-all approach to endpoint security, a space that has become inundated with products competing to solve a problem that has challenged businesses for years.

The last three to four years have driven the emergence of new options and ways of looking at endpoint security technology, says Mike Spanbauer, vice president of strategy at NSS Labs. All of these tools rely on different features; all are suited to different strategies.

It’s up to businesses to determine which tools are best to meet their needs based on their distinct approach to endpoint security.

“There is no such thing as perfect,” Spanbauer says of choosing a tool. “This is one security control, in your grand security architecture, that must be complemented by a lot of secure technologies.”

Securing the desktops within any organization, whether it’s a large enterprise or SMB, comes with challenges. For businesses working to update their strategies, here are a few tips to keep in mind:

Prioritize your needs

To update an endpoint security strategy and pick the tools to support it, you need to determine your use cases, says Spanbauer. This will fall to the team that manages security tools and is responsible for handling the forensic parts of incident response.

“The teams with products that need to be supported will dictate which features really matter,” he explains. For SMBs without dedicated incident response teams, he recommends developing more resilient backup processes in case of an attack.

Use cases for endpoint tech will also vary depending on your organization’s data center and its data services, ports, protocols, architectures, and applications.

As businesses incorporate devices connected to the IoT, they will need to be increasingly aware of their larger attack surface, prioritize services and assets that need to be protected, and know where they are located.

Determine how to collaborate

On a broader level, it’s important to establish a good working relationship with other data-conscious groups within the organization. Desktop support, for example, is an important collaborator for security teams.

While sometimes there can be contention among groups, Spanbauer acknowledges the importance of recognizing you’re all on the same team. This means regular, dedicated interactions. He also advises building a workflow process so everyone knows how to partner with one another in the event of an emergency.

Have a backup plan

Even businesses taking all the right steps can suffer a breach. When they do, it’s important to have their data backed up.

“Most enterprises have a backup strategy,” says Spanbauer. “I just don’t believe it’s strictly enforced.”

He also emphasizes enforcing endpoint security practices; for example, logging out of administrative accounts for basic productivity work that doesn’t require administrative control. It’s a simple step that could make a big difference: if you click a malicious Office attachment as an admin, you could accidentally give a hacker the access needed to conduct a more sinister attack.

“It’s convenient, it’s easy, but as a best practice you shouldn’t be writing Word documents or emails as an admin on your machine,” he continues. “You have access to those apps without [administrative control].”

[Mike Spanbauer will be speaking about endpoint security strategy as part of his session “Updating Your Endpoint Security Strategy: Is the Endpoint a New Breed, Unicorn, or Endangered Species?” during Interop ITX, May 15-19, at the MGM Grand in Las Vegas. To learn more about his presentation and other Interop security tracks, or to register, click on the live links.]

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance & Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: http://www.darkreading.com/endpoint/3-tips-for-updating-an-endpoint-security-strategy-/d/d-id/1328687?_mc=RSS_DR_EDT

How tech support scammers have made millions of dollars

Ahhh, the sweet smell of revenge! Nothing like unleashing some ransomware on those tech support scammers, eh?

However, fortunately for them, there aren’t hours enough in the day to turn the tables on the swindlers and social-engineer their pants off.

Unless, that is, you’re talking about researchers at Stony Brook University, who recently cooked up a robot to automatically crawl the web finding tech support scammers and figuring out where they lurk, how they monetize the scam, and what software tools they use to pull off their dastardly deeds.

That tool is called RoboVic. It’s short for Robot Victim, and it’s just one aspect of an unprecedented dive into tech support scams undertaken by two Stony Brook U. PhD candidates – Najmeh Miramirkhani and Oleksii Starov – under advisor Nick Nikiforakis.

Over the course of the study, they used RoboVic to discover hundreds of phone numbers and domains used by the scammers. And then, they jumped on the phone themselves, chatting with 60 scammers to determine what social engineering techniques they use to weasel money out of victims.

As they describe in their paper, titled Dial One for Scam (PDF), the researchers conducted this first-ever systematic study of tech support scams, and the call centers they run out of, partly to find out how users get exposed to these scams in the first place.

The answer: malvertising. In order to train RoboVic to find tech support scam pages, the researchers took advantage of the fact that the scams are often found on domain squatting pages.

Those are the pages that take advantage of typos we make when typing popular domain names. For example, a scammer company will register a typosquatting domain such as twwitter.com.

Domain parking companies have registered tens of thousands of similar, misspelled sound-alikes of popular domain names. Studies have shown that visitors who stumble into the typosquatting pages often get redirected to pages laced with malware, while a certain percentage get shuffled over to tech support scam pages.
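
As a flavour of how such crawlers enumerate candidate domains, here’s a minimal Python sketch of just one typo model – doubled letters – which is far simpler than the tooling behind RoboVic:

    def doubled_letter_typos(domain):
        """Generate every doubled-letter misspelling of a domain's first label."""
        name, dot, tld = domain.partition(".")
        return {name[:i + 1] + name[i] + name[i + 1:] + dot + tld
                for i in range(len(name))}

    print(sorted(doubled_letter_typos("twitter.com")))
    # ['ttwitter.com', 'twiitter.com', 'twitteer.com',
    #  'twitterr.com', 'twittter.com', 'twwitter.com']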

Once on such a page, a visitor is bombarded with messages saying their operating system is infected with malware. Typically, the site is festooned with logos, trademarks and user-interface elements lifted from well-known software and security companies.

A popular gambit has been to present users with a page that mimics the Windows blue screen of death. You’re a Mac user, you say? No cause for worry? Unfortunately, that’s flat-out wrong. Crooks have recently trained their sights on you, too, notes fellow Naked Security writer Paul Ducklin of Sophos:

This isn’t just about the keywords “Microsoft” and “Windows” any more. A year or two ago, almost all the reports we received from readers involved the crooks claiming close affiliation with Microsoft, which became a well-known indicator that the call was false.

Recently, however, readers have reported phone scams where the callers align themselves with “Apple” and “iCloud” instead. This not only avoids the red alert word “Microsoft”, but also casts the net of prospective victims even wider, given the range of different platforms where people use their iCloud accounts.

Beyond spooking visitors with their bogus alerts, tech support pages will wrap them up in intrusive JavaScript so they can’t navigate away. For example, they’ll constantly show alert boxes that ask the intended prey to call the tech support number. As the researchers describe, other techniques include messing with a user’s attempt to close the browser tab or navigate away from the site by hooking into the onunload event.

Feeling stuck like a fly in a web, a naive user will call what’s often a toll-free number for “help” with the “malware infection”. The person on the other end of the line will instruct the caller to download remote desktop software that lets the remote “technician” connect to their machine. That gives the crook complete control over the victim’s computer. At that point, perfectly innocent system messages will be interpreted as dire indications of infection.

Sure, we can fix it, they’ll say, once the hook is set. The price typically runs to hundreds of dollars, the researchers found, with the average charge for a “fix” being $290.90.

Some of the many interesting findings from the eight-month study:

  • These scammers register thousands of low-cost domain names, such as .xyz and .space, which play off the trademarks of large software companies.
  • They use content delivery networks in order to get free hosting for their scams.
  • The scammers are abusing 15 telecommunication providers, but four telecoms are responsible for the lion’s share – more than 90% – of the phone numbers the researchers analyzed.
  • The fraudsters are actively evading dynamic-analysis systems located on public clouds.
  • The profits: making use of publicly exposed webserver analytics, the researchers estimated that just for a small fraction of the monitored domains, scammers are likely to have made more than $9m.
  • These guys take their time reeling us in. The average call duration was 17 minutes.
  • They use only a handful of remote administration tools (81% of all scammers used one of two software tools). Their favorites include LogMeIn Rescue, CITRIX GoToAssist and TeamViewer.
  • Scammers use more than 12 techniques to convince users their systems are infected, such as stopped services and drivers.
  • Scammer call centers are estimated to employ, on average, 11 tech support scammers.

By the way, in case you’re wondering, the researchers emphatically did not pay these scammers:

We chose not to pay scammers primarily for ethical reasons. As described [elsewhere in the study], the average amount of money that a scammer requests is almost $300. To get statistically significant numbers, we would have to pay at least 30 scammers and thus put approximately $9,000 in the hands of cybercriminals, a fraction of which would, almost certainly, be used to fund new malvertising campaigns and attract new victims.

The researchers suggest that to keep the public safe from these swindlers, we’re going to need more public education – with broader use of public service announcements, for example – and some help from browser makers.

As it is, desperate users who can’t navigate away from these pages often try rebooting. Browsers that remember open tabs will just deposit the victims right back in that hell hole, though. The researchers suggest that browser makers might want to help them out by adopting a universal panic button: a shortcut for users feeling threatened by a webpage.

That’s good stuff. But our advice is even simpler: if you find yourself trapped by one of these scam pages, don’t call that number. As we’ve said before with regards to unsolicited tech support calls, there’s nothing useful to hear, and nothing useful to say.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mf2UsJZlUBk/