STE WILLIAMS

Are you a student? Your personal data is there for the asking

When parents send their kids (who at that point are legally adults) off to college, they expect not only that they will be educated, but that they will be living in a safe place with a reasonable measure of personal privacy.

Most higher-ed institutions provide at least a measure of that. But when it comes to the privacy of students’ personally identifiable information (PII), not so much. Besides dates of attendance, courses of study, honors and awards and any degrees earned, a lot more is there for the asking – name, home address, school address, email address, telephone number, date and place of birth, possibly even height and weight and some medical records – for just about anybody at a US higher education institution.

This is in spite of, or perhaps because of, the 1974 federal law that sounds like it would protect student PII – it’s titled “The Family Educational Rights and Privacy Act” (FERPA). That law requires “education records” to be kept private, but not “directory information”.

And, says Leah Figueroa, a data analyst who has worked in higher education for 13 years, directory information is being shared without the knowledge or consent of the students – unless they have the savvy to put a “privacy hold” on it. As a practical reality, that is about as likely as people reading all the way through a “privacy policy” and “terms of use” statement before clicking “agree” to use an operating system, a social media site or their favorite mobile app.

At one level, this may sound low risk – the kind of information that, as Bret Cohen, an attorney at Hogan Lovells with a focus on student privacy and data collection, put it, “would be typically published in a student directory provided to other students, or in a program handed out at a school athletic event”.

It’s just that the days are long gone when that information lived only on paper and pretty much stayed on campus, or was sought mainly by researchers or employers looking to verify a candidate’s resume. In a digital world, thousands of those records can go anywhere, instantly.

And they do, said Figueroa, who now works at a community college in Texas (that’s as specific as she wants to get). She estimates that the college provides an average of 90,000 student records per year. If the requests come under the Freedom of Information Act (FOIA), “we’re not even allowed to vet them,” she said.

According to FERPA, directory information “would not generally be considered harmful or an invasion of privacy if disclosed”. But Figueroa said colleges can pretty much decide what to include, which could mean a student photo, student ID and parents’ names, address, and where they were born, in addition to the list above.

She said while many of the requests are legitimate – from researchers or other colleges seeking to recruit students – some are likely coming from predatory loan companies or other kinds of aggressive marketers. A stalker could even find out the dorm address of a student.

That directory information, along with any degrees earned and dates of attendance, is enough to “create fake identities, to dox people, to do just about anything,” she said.

Figueroa presented a talk on the topic in April at Infosec Southwest titled “FERPA: Only Your Grades Are Safe – OSINT in Higher Education,” and more recently in a Skytalk at Defcon in Las Vegas. Paul Roberts, editor-in-chief of the Security Ledger, also featured her in a recent podcast.

In that talk, she described doing her own small survey of colleges to see what they would turn over. In one case, she said for $50 the institution sent her directory information (emailed and unencrypted) on more than 22,000 students that included the names and addresses of the parents of international students.

Another privacy hole can be medical records, which generally fall under the Health Insurance Portability and Accountability Act (HIPAA). But Figueroa said a student’s medical records can lose HIPAA protection if anyone but the designated provider requests them – even the student.

Then they suddenly become education records, and they lose all HIPAA protection.

Yes, students do have the right to put a so-called “privacy hold” on their directory information, but Figueroa says most educational institutions aren’t very proactive or consistent about letting them know about it or how to do it.

There’s no standardized way that you have to let students know. Some places it’s on the website, some in the catalog, the school fact book or some other esoteric place. Some schools let you just sign in with your ID to opt out; at some you have to be physically present.

Cohen said the law does require that schools provide notice of the types of information considered “directory,” and the right to opt out. But he added that “there is probably room for privacy-conscious schools to make this information more readily available”.

Figueroa said she sometimes feels like she’s on a one-woman crusade. But she has company. The Electronic Privacy Information Center proposed a Student Privacy Bill of Rights in March 2014 that called for, among other things, making it easier for students to access and amend their records, limiting the collection and retention of records, and forbidding the use of student data “to serve generalized or targeted advertisements”.

EPIC also sued the US Department of Education in 2012 over what the group said were “unlawful” changes to privacy provisions in FERPA, but that went nowhere – it was dismissed in September 2013, with the court finding that neither EPIC nor co-plaintiffs had standing.

Still, Cohen said there has been progress. He cited the Data Quality Campaign, which reported that from 2013 through September 2016, “36 states passed 73 student data privacy bills into law. Congress has introduced a number of student privacy legislative proposals, and more than 300 education technology providers have taken the Student Privacy Pledge since it was introduced at the end of 2014.” He added that

… while there is room for FERPA to be modernized, I think that the most impactful way to protect student privacy right now is to better inform students and parents of their rights.

Theresa Payton, CEO of Fortalice and a former White House CIO, agreed.

I would love for students to be able to opt-in/opt-out of having their information shared and fully understand the implications of the state of their personal privacy. I’d also like to see more anonymization of student data for research purposes and trend forecasting.

Figueroa said the bottom line is that handing out that information to whoever asks is more of a threat than students, or even their parents, realize. “It’s a treasure trove of PII that you can use to do all kinds of things,” she said.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FKBR7H__Bl8/

Energy firm slapped with a fine after making 1.5m nuisance calls

Don’t want to be auto-dialed by direct-call marketers?

In the UK, it’s simple to opt out: just register your number with the free Telephone Preference Service (TPS). Yep, just register, and there you have it: a nice, quiet phone, free of salespeople calling* at obnoxious hours trying to sell you on lowering your energy bill and… oh, wait, is that your phone ringing?

Why yes, it was your phone ringing, if you were unfortunate enough to be one of the people who opted out but still received calls between April 2015 and March 2016.

The calls came from Home Logic, a UK firm based in Southampton that offers energy-saving solutions. The company is now £50,000 lighter thanks to a penalty issued by the Information Commissioner’s Office (ICO) for making marketing calls to people who had made it clear they didn’t want to be contacted in that way.

It was a tech glitch, Home Logic said. What happened was that it licensed the numbers it used to make marketing calls from third-party providers. It then uploaded that data to an electronic dialer system that screened the numbers against the TPS register. One of the third-party providers left it up to Home Logic UK to ensure that the data supplied was screened against the TPS.

Technical issues knocked the system out for 90 days out of 220 between April 2015 and March 2016. That didn’t slow down Home Logic, though: the unsolicited marketing calls kept right on coming, but with no screening against the TPS register.
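In other words, the screening step failed open: when the TPS check broke, dialling carried on regardless. A minimal sketch of the safer fail-closed alternative (the function and data names here are illustrative, not anything from Home Logic’s actual system):

```python
# Fail-closed screening sketch: if the do-not-call register cannot be
# consulted, refuse to release any numbers for dialling at all.

class ScreeningUnavailable(Exception):
    """Raised instead of silently skipping the TPS check."""

def screen_numbers(numbers, tps_register):
    """Return only the numbers safe to dial; fail closed if the register is missing."""
    if tps_register is None:
        # A fail-open design would `return numbers` here -- which is
        # effectively what happened for 90 of the 220 days.
        raise ScreeningUnavailable("TPS register unavailable; refusing to dial")
    return [n for n in numbers if n not in tps_register]
```

With the register available, `screen_numbers(["0111", "0222"], {"0222"})` yields only `["0111"]`; with it unavailable, nothing gets dialled.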

In fact, during that time, Home Logic made 1,475,969 unsolicited direct marketing calls to people who’ve hopefully now regrown any hair they tore out. Out of those nearly 1.5m calls, 133 went to TPS subscribers, who complained to the ICO.

Does a £50,000 (USD$64,185) fine sound a little light? Historically, it is.

In May, the company behind 99.5m nuisance calls, Keurboom Communications, was hit with a record fine of £400,000 by the ICO. The ICO said that more than 1,000 people complained about the company’s robocalls.

That’s still shy of the maximum fine – £500,000 – the ICO can impose on data controllers for violating regulation 21 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR).

The £50,000 Home Logic fine also pales in comparison to the £750,000 fine Ofcom, the UK’s telecoms regulator, lobbed at TalkTalk in 2013 for abandoned and silent calls to potential customers. That’s the type of nuisance call where automated dialing systems, used in call centers to generate and attempt to connect calls, greet people with silence if there aren’t enough agents to handle all the successfully connected calls.

Still and all, £50,000 is a lot more money than the £2,640 companies are charged for a year-long subscription to the TPS, the ICO noted.

*In truth, you have to take the TPS with a grain of salt: a few years back, in meetings with undercover reporters for the Daily Mail, data bosses admitted to ignoring the no-call list.

The same goes for the US: being on the “Do Not Call” list doesn’t necessarily stop the scammers or fraudsters. In fact, robocalls are on the rise: the Federal Trade Commission (FTC) received nearly 3.5m robocall complaints in fiscal year 2016, up 60% from the year before.

CNBC quotes Alex Quilici, CEO of YouMail:

The ‘Do Not Call’ registry actually works for legitimate businesses. The problem is all the people who don’t respect it, who are the scammers who [couldn’t] care less.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/IXAvooBdA8U/

Malware rains on Google’s Android Oreo parade

Thanks to Chen Yu, Rowland Yu and William Lee of SophosLabs for their behind-the-scenes work on this article.

Google has had an exciting summer, for good and bad reasons.

The good news: Google just officially launched the eighth version of its operating system, Android Oreo, with enhancements for battery life and security. Last month, it also began rolling out a new feature called Google Play Protect, designed to scan apps that could cause harm to your Android device and data.

The bad news: at least five different types of malware were found in Google Play in August alone, including spyware, banking bots and aggressive adware. Thousands of apps contain these malicious payloads and have infected millions of users.

Yesterday, we reported that Google has had to pull some 500 apps from its Play Store, which together had been downloaded more than 100m times. The apps – which weren’t in themselves malicious – all used a software development kit (SDK) called Igexin. Among other things, the Igexin SDK has the ability to spy on victims “through otherwise benign apps by downloading malicious plugins”. Advertising SDKs make it easy for app developers to tap into advertising networks and deliver ads to their users.

As well as that issue, SophosLabs researchers have identified other threats to your device.

Banking bots

SophosLabs discovered some 20 different apps in Google Play – all detected as Andr/Banker-GUB – and recorded Google Play download information for two versions.
This malware family relies on obfuscation and packers to make reverse engineering more difficult.

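SophosLabs doesn’t describe its detection internals here, but one common, crude heuristic for spotting packed or obfuscated payloads is byte entropy: compressed or encrypted blobs look close to random. A sketch of the idea (the 7.2 threshold is an arbitrary illustration, not a Sophos figure):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~0 for uniform data, approaching 8 for random bytes."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    # Heuristic only: legitimately compressed assets (images, zips)
    # also score high, so this flags candidates, not verdicts.
    return shannon_entropy(data) > threshold
```

Plain text scores low (around 3 bits per byte), while a section containing every byte value equally often scores the maximum of 8.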
It uses a powerful HTTP framework called OkHttp to exchange data with a remote website; among other things, the framework can retry connections across a host’s multiple IP addresses.

A second bankbot was discovered just a few days ago. It is able to silently download an APK from a remote website, lure users into installing it with the promise of fake credit points, and display a spoofed “security update payment” screen based on the Google Firebase Messaging Service.

GhostClicker

Researchers also intercepted an adware family called GhostClicker, which disguises itself as part of the Google Play service library or Facebook Ads library. It adds itself as a package named “logs” inside those libraries. Some variants request device administration permission and actively simulate clicks on the advertisements they deliver to earn revenue.

Other variants are more conservative: they register themselves as a BroadcastReceiver and simply pop up advertisements.

Sophos detects the auto-click variant as Andr/Clicker-HO and another variant as Android Adload.

Defensive measures

The continued presence of malicious and compromised Android apps demonstrates the need to use an Android antivirus such as our free Sophos Mobile Security for Android. By blocking the install of malicious and unwanted apps, even if they come from Google Play, you can spare yourself lots of trouble.

In the bigger picture, the average Android user isn’t going to know what techniques the malware used to reach their device’s doorstep, but they can do much to keep it from getting in – especially when it comes to the apps they choose. To that end, here’s some more general advice:

  • Get Android Oreo. A rough summer aside, Google’s new OS is certainly an improvement and should do better at keeping out malware going forward.
  • Stick to Google Play. It isn’t perfect, but Google does put plenty of effort into preventing malware arriving in the first place, or purging it from the Play Store if it shows up. In contrast, many alternative markets are little more than a free-for-all where app creators can upload anything they want, and frequently do.
  • Avoid apps with a low reputation. If no one knows anything about a new app yet, don’t install it on a work phone, because your IT department won’t thank you if something goes wrong.
  • Patch early, patch often. When buying a new phone model, check the vendor’s attitude to updates and the speed that patches arrive. Why not put “faster, more effective patching” on your list of desirable features, alongside or ahead of hardware advances such as “better camera” and “higher-res screen”?


 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/1D1KPRPwlyw/

Verizon: US government requests for phone records on the up

According to Verizon’s latest transparency report for the US, government requests for phone records were up for both individuals and large groups in the first half of 2017.

But requests for data were way, way up for large groups, as in, when the government requests a mass “tower dump” of every caller who connected to an individual phone tower as they passed by.

From the report, which has comparison figures in six-month blocks going back to the second half of 2014:

In order to try to identify a suspect of a crime, the government may apply to a court for a warrant or order compelling us to provide a ‘dump’ of the phone numbers of all devices that connected to a specific cell tower or site during a given period of time.

For the first half of this year, Verizon says that it’s received approximately 8,870 warrants or court orders for cell tower dumps. The number has been surging: there were 3,200 warrants or orders for cell tower dumps in 2013. Three years later, in 2016, that number shot up to 14,630.

The total number of demands for customer data was 138,773 in the first half of this year.

Verizon went along with most of those demands from law enforcement: it rejected about 3% of the demands it received this year, which amounted to rejection of almost 2% of subpoenas and 6% of warrants and orders.

There are a number of reasons why Verizon might reject a given demand as legally invalid. A different type of legal process might be needed for a particular type of information requested, for example. Verizon doesn’t report details of the requests it rejects on such grounds.

Out of the 97% of requests it went along with so far this year, there were 68,237 subpoenas, 722 wiretap demands, and 3,963 “trap and trace” orders that let law enforcement see the numbers of a target’s incoming calls in real time.

Verizon also says it got 27,478 emergency requests, or demands from police for information in matters of “the danger of death or serious physical injury.” The extremely vague number of national security letters Verizon can report was between one and 499.

While demands for tower dumps have spiked, the overall number of requests so far this year isn’t the highest Verizon has seen. That honor would go to the first half of 2015, when it handled 149,810 information requests from law enforcement.

Those numbers include both the data demands for which Verizon handed over information and those it didn’t. For example, it’s not uncommon for the carrier to receive a subpoena that seeks subscriber information, but it also improperly seeks other information, such as stored content, which Verizon says it can’t provide in response to a subpoena.

While we would provide the subscriber information (and thus would not consider this a rejected demand), we would not provide the other information.

In fact, Verizon says that it’s compelled to hand over contents of communications – as in, text messages or email content – to law enforcement “relatively infrequently”. Verizon only releases stored content to law enforcement when they come knocking with a probable cause warrant in hand. Law enforcement can’t get at it with a general order or subpoena. During the first half of 2017, Verizon says it received 4,436 warrants for stored content.

Verizon hands over its subscribers’ location information only in the case of a warrant or order, not in response to a subpoena. US laws differ on what law enforcement needs to get at people’s location data, depending on what area of the country you’re in. In some cases, law enforcement needs a court order, and in some cases they need a warrant – in either case, a judge has to sign off on the demand.

In the first half of this year, Verizon received approximately 20,442 demands for location data: about three-quarters of those were through orders and one-quarter through warrants.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6H5qJ_QBae4/

Probing the online phish market reveals thriving, profitable underworld

A new study has lifted the lid on the booming ecosystems of fake websites that underpin phishing scams, revealing a wide variety of prices and products from cheap knock-ups to bespoke fraud services offering concierge-level customer support.

Infosec firm Clearsky surfed popular Russian and English-speaking underground boards and forums, looking for fake webpage creation services. Researchers then attempted to make direct contact with vendors of fake sites via instant messaging (mostly Jabber) to tease out more intelligence about their skills, offers and pricing.

Clearsky went through this process with 15 different phishing vendors, checking the prices for two main types of fake sites: a fraudulent banking login page designed to harvest credentials, and a counterfeit page of a kind that would not exist on a real banking website, designed to trick marks into entering their credit card number, expiration date and CVV number.

In addition, Clearsky’s team checked whether the vendors are just duplicating the original website, or developing it from scratch. Duplicate websites are easier to produce but are more likely to be discovered and taken down quickly.
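One reason duplicates are easier to catch is that they can be matched almost verbatim against the genuine page. A toy illustration of that intuition using Python’s difflib (not a technique attributed to Clearsky or any takedown service):

```python
import difflib

def page_similarity(real_html: str, suspect_html: str) -> float:
    """Ratio from 0.0 to 1.0 of matching content between two pages."""
    return difflib.SequenceMatcher(None, real_html, suspect_html).ratio()

# Hypothetical pages: a duplicate keeps nearly everything and swaps
# only the form target, so it scores very high against the original.
real = "<html><title>Example Bank Login</title><form action='/login'>...</form></html>"
clone = "<html><title>Example Bank Login</title><form action='http://evil.example/grab'>...</form></html>"
```

A from-scratch fake, by contrast, shares far less markup with the original and sails under this kind of comparison.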

In many cases duplicate websites are blocked by Chrome/Safari, one phishing site vendor told the security researchers. Another vendor offered to add a filter to prolong the pre-exposure lifetime of the fake website.

More qualified vendors discussed how to keep fake websites under the radar for as long as possible, while script-kiddie types fail to grasp the difference between simply duplicating a website and developing a fake from scratch, Clearsky discovered. Some of the vendors duplicate the website and perform basic “cleaning”, i.e. basic changes to the HTML and content, it adds.

Phishing website pricing table [source: Clearsky blog post]

Two different types of workers are required for fake website creation: the developers and the designers. Some developers work with third-party designers when a design or change in the websites is required.

The average price for banking login pages is about $60. Those who just duplicate the original site charge about $20-30 and those who develop the fake website from scratch ask for $50 or more, with some vendors quoting up to $200.

When researchers asked about pricing for additional pages that don’t exist at real websites – those designed to steal credit card data – the fee tended to be significantly higher because it required extra development and design work.

Some vendors also develop tools and control panels (example below) that make it easier for would-be cybercriminals to collect and potentially resell stolen credentials.

Phishing site control panel [source: Clearsky]

Some vendors also publish brash advertisements (example below) although all actively push sales of their illicit services through various incentives, Clearsky reports.

“Most of the vendors work very hard to promote their services, constantly pump up their topics in different forums, and although the basic pricing of most of them is relatively low, in order to gain proper reputation, they offer various kinds of actions and discount,” the researchers said.

For example, one vendor offered free creation of fake websites on .de top-level domains as part of a limited-time offer.

Delivery times varied with some vendors willing to complete their work in anything from ten minutes to an hour, while others asked for several days.

Colourful advertisements promote phishing website creation services [source: Clearsky]

Clearsky’s full report, The Economy behind Phishing Websites Creation, can be found here (PDF). ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/24/phishing_website_economics/

A blast from the past: Mobile trojans abusing WAP-billing services

Crooks slinging mobile trojans have reverted to old techniques by stealing users’ money through WAP-billing services.

The “unusual” rise in mobile trojan clickers that steal money from Android users through Wireless Application Protocol (WAP) billing has been tracked by security researchers at Kaspersky Lab. The unexpected trend had been in evidence for a while, but in Q2 of 2017 it became surprisingly common, with thousands of affected users in different countries across the globe, mainly in India and Russia, according to Kaspersky Lab.

WAP billing has been widely used by mobile network operators for paid services and subscriptions for many years. This form of mobile payment charges costs directly to the user’s mobile phone bill, avoiding the need for bank card registration or a sign-up process.

The technology normally works by redirecting users to a different web page where the user activates a subscription and their mobile account is charged.
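In concrete terms, that subscription flow is just a redirect chain that ends on an operator’s billing endpoint, which is why one way to flag suspect traffic is to look at where a chain terminates. A simplified sketch (the billing domains here are invented for illustration; a real list would come from threat intelligence):

```python
from urllib.parse import urlparse

# Hypothetical operator billing hosts, for illustration only.
BILLING_HOSTS = {
    "wap-pay.example-operator.com",
    "subscribe.example-carrier.net",
}

def ends_on_billing_page(redirect_chain):
    """True if the final hop of an observed redirect chain lands on a billing host."""
    if not redirect_chain:
        return False
    final_host = urlparse(redirect_chain[-1]).hostname
    return final_host in BILLING_HOSTS
```

A chain like ad page → landing page → `wap-pay.example-operator.com` would be flagged, while ordinary browsing would not.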

Cybercrooks are abusing this legitimate technology by developing trojans that covertly subscribe to “services” owned and controlled by fraudsters. A simple registration of domains in a mobile operator’s billing system allows fraudsters to connect their website to a WAP-billing service. As a result, money from a victim’s account is siphoned off to line the pockets of fraudsters.

“We haven’t seen these types of [WAP-billing service] trojans for a while,” said Roman Unuchek, security expert at Kaspersky Lab. “The fact that they have become so popular lately might indicate that cybercriminals have started to use other verified techniques, such as WAP-billing, to exploit users. Moreover, a premium rate SMS trojan is more difficult to create. It is also interesting that malware has targeted mainly Russia and India, which could be connected to the state of their internal, local telecoms markets. However, we have also detected the trojans in South Africa and Egypt.”

The most prevalent trojan strain abusing WAP-billing services, the Trojan-Clicker.AndroidOS.Ubsod malware family, receives URLs from its command-and-control server and opens them. According to Kaspersky Security Network statistics, this trojan infected almost 8,000 victims in 82 countries during July 2017. Another popular mobile malware using the same scam mechanism uses JavaScript files to click buttons with WAP billing. Examples of this variant include the Xafekopy trojan, which is distributed through ads while masquerading as useful apps such as battery optimisers and the like and has a Chinese-speaking origin.

Using JavaScript has allowed some miscreants to bolt on CAPTCHA bypass functionality to the likes of the Podec trojan, a strain of mobile malware particularly active in Russia.

Some trojan families, such as Autosus and Podec, exploit Device Administrator rights, making them harder to delete.

Michael Covington, VP of product strategy at Wandera, said: “While we have certainly seen examples of malware that targets users of WAP-billing services, it is not the most prevalent threat that we see on mobile. In fact, the class of malware that we currently see in broad distribution is adware. It seems that many attackers are simply going after a quick payday and mobile adware, much like spam was on email, provides the easiest way to profit from mass distribution.”

To become active through mobile internet, all WAP-billing mobile trojan versions are able to turn off Wi-Fi and turn on mobile data, as explained in a blog post by Kaspersky Lab here. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/24/wap_scam_mobile_trojans/

Comparing Private and Public Cloud Threat Vectors

Many companies moving from a private cloud to a cloud service are unaware of increased threats.

Because most companies that have followed relatively traditional IT strategies are now considering putting mission-critical applications and data into the public cloud, it’s worth examining the differences in private versus public clouds when it comes to threats that applications and data encounter. When I talk to customers about the differences, I use a metaphor of what’s happening onstage versus backstage in these two deployment scenarios.

Onstage
In private data centers and public clouds, I define onstage as all the virtual machines (VMs) and containers a company runs inside the data center. We tend to protect what’s onstage in two ways: first by examining the inner behavior of each workload and second by watching the traffic entering and exiting the workload. The security industry supplies all manner of agents for the first use and provides physical and virtual firewalls and switches for the second.

Backstage
In private data centers, what I define to be backstage is your hypervisor or container operating system, your storage, your server management (the Intelligent Platform Management Interface and the like), and your firewall, switches, and routers. Many sophisticated attacks involve backstage components because most defenders don’t think about needing to detect attacks on them. The Shadow Brokers and WikiLeaks data dumps bear this out: they revealed that many attacks against switches, firewalls, and routers are in nation-states’ offensive cyber arsenals.

In public clouds, there’s a lot more backstage activity, and even some of the same things that are backstage in private clouds expose a substantially larger attack surface in public clouds.

New Threat Vectors Emerge in the Public Cloud
Take storage as an example. In your private data center, you may have a network-attached storage (NAS) system. It’s your NAS, and no one can get to it without getting past your perimeter firewall first, so threats can be contained. Of course, an attacker could first compromise some end-user system in your network and then pivot to the NAS, copy data from it, and send it to an external data drop, but the exfiltration of data would be seen by your firewall.


Now consider storage in Amazon Web Services. You store data in Amazon’s Simple Storage Service (S3). But if you fail to configure things correctly, you might expose the S3 bucket with your data to anyone on the Internet. That’s what happened in July to some Verizon data.

My point is that in the cloud, your virtual firewall can’t protect you from this type of threat because this traffic is outside your virtual network. S3 is effectively a multitenant NAS from which anyone knowing the right URL (as in the Verizon case) or possessing the right API key (in cases when the S3 bucket is better protected) can copy information.
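This kind of exposure is usually visible in the bucket’s ACL: a grant to the global `AllUsers` group means anyone on the internet can read the data. A sketch of the check, written against the dict shape that boto3’s `get_bucket_acl` returns, and run here on a sample response rather than a live bucket (the user IDs and bucket are hypothetical):

```python
# Grant URIs AWS uses for "everyone" and "any authenticated AWS user".
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the grants in a get_bucket_acl-style response that are world-accessible."""
    return [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
    ]

# With live credentials this would come from:
#   acl = boto3.client("s3").get_bucket_acl(Bucket="my-bucket")
sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
```

On this sample, the audit flags the single world-readable `READ` grant, which is exactly the misconfiguration behind the Verizon exposure.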

Attackers Use New Services to Accomplish Their Goals
Compute services like Lambda are a part of AWS’s serverless compute infrastructure. If an attacker gets inside a workload you have running in AWS, or gets hold of the right API key without compromising your workload, she can install a Lambda function that is run whenever some event occurs.

One use of Lambda functions would have the attacker install a function that is run every time one of your S3 buckets is modified. The function would make a copy of any changed objects to Glacier backup storage, something that would appear relatively normal — except the attacker’s Lambda function would copy data to the attacker’s Glacier storage, not yours.
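Because a hook like that shows up in the bucket’s notification configuration, auditing that configuration against an allowlist of expected function ARNs is one way to spot it. A sketch over the dict shape that boto3’s `get_bucket_notification_configuration` returns, using made-up sample data and hypothetical ARNs:

```python
def unexpected_lambda_hooks(notification_config, allowed_arns):
    """Return S3-to-Lambda triggers whose function ARN is not on the allowlist."""
    hooks = notification_config.get("LambdaFunctionConfigurations", [])
    return [h for h in hooks if h.get("LambdaFunctionArn") not in allowed_arns]

# With live credentials this would come from:
#   cfg = boto3.client("s3").get_bucket_notification_configuration(Bucket=...)
sample_cfg = {
    "LambdaFunctionConfigurations": [
        {"LambdaFunctionArn":
             "arn:aws:lambda:us-east-1:111111111111:function:thumbnailer",
         "Events": ["s3:ObjectCreated:*"]},
        {"LambdaFunctionArn":
             "arn:aws:lambda:us-east-1:999999999999:function:exfil",
         "Events": ["s3:ObjectCreated:*"]},
    ]
}
allowed = {"arn:aws:lambda:us-east-1:111111111111:function:thumbnailer"}
```

Run against the sample, the audit surfaces the one trigger pointing at an account you don’t recognize.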

Note that you may never choose to utilize Lambda, so you might not even consider its security implications, but the fact that you didn’t utilize this service doesn’t mean an attacker is prevented from exploiting its existence.

Attack Surfaces inside New Services You Utilize
In the examples above, your only intent was to utilize compute and storage functions — basically what most people in the industry would refer to as infrastructure as a service.

But what if you want to make use of some more exotic services to speed up your time-to-market? Let’s say you’re deploying on Azure and you’re intrigued by the promise of the Azure Bot Service, which enables you to “reach customers on multiple channels.” When you utilize such services, you’re effectively in the land of platform as a service. You’re using services that are part of the Azure platform, which promises to make your organization more efficient but also could make it harder for you to migrate to another cloud platform.

The question you must ask yourself is, how secure is the Azure Bot Service? There is no guarantee that your functions built on this service will run in a VM or container dedicated to only your use. For scalability, your function might run in the same workload as many other subscribers’ functions utilizing the same service. While escaping out of a VM into a hypervisor to get into another VM on the same physical server is pretty difficult, will it be as difficult for an attacker to pull out of the Azure Bot Service? Given the imbalance of scrutiny that hypervisor code and the Azure Bot Service code are likely to undergo, I’d guess the answer is no.

In your own private cloud, the ratio of onstage to backstage attack vectors on which you may need to focus is about 90:10. In public clouds, it’s more like 60:40, because elements (for example, storage) that exist in both places are multitenant in public clouds, because public clouds provide services (such as AWS Lambda) that can be exploited to attack you, even if you’re not using them, and because any platform-specific serverless services (like Azure Bot Service) you utilize will potentially expose you to difficult-to-quantify multitenant threats.

The lesson: don’t assume that the same tools you use in your private cloud will adequately protect you in the public cloud.

Related Content:

Oliver Tavakoli is the chief technology officer at Vectra Networks, Inc. View Full Bio

Article source: https://www.darkreading.com/cloud/comparing-private-and-public-cloud-threat-vectors/a/d-id/1329676?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Yahoo Hack Suspect to be Extradited to US

Karim Baratov, accused of working with Russian intelligence for the 2014 Yahoo breach, has waived an extradition hearing.

Karim Baratov, who is in custody in Canada for his alleged role in the 2014 Yahoo data breach, will soon be handed over to US marshals after deciding not to fight extradition to the United States.

Baratov was among four individuals indicted in March 2017 for their roles in hacking Yahoo systems and stealing information from some 500 million Yahoo accounts. The Department of Justice’s indictments charged that Dmitry Aleksandrovich Dokuchaev, 33, and Igor Anatolyevich Sushchin, 43, Russian nationals and agents of the Russian intelligence agency FSB, hired one of the FBI’s most-wanted cybercriminals, Alexsey Alexseyevich Belan, aka “Magg,” 29, a Russian national and resident, along with Baratov, aka “Kay,” 22, a Canadian and Kazakh national, to carry out the attack on and breach of Yahoo.

Amadeo DiCarlo, Baratov’s attorney, said waiving the hearing is the “quickest route into the U.S.” and that the defense is “anxious” to get him to the US to face the charges there, CBC reports.

“The court order is already in place to have the marshals come up to pick up Karim,” DiCarlo said. The move is expected to happen within two weeks.

Baratov’s legal team says he did not know what he was doing, or for whom, in relation to the Yahoo case.

Read more at CBC.
Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/yahoo-hack-suspect-to-be-extradited-to-us/d/d-id/1329689?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Battle of the AIs: Don’t Build a Better Box, Put Your Box in a Better Loop

Powered by big data and machine learning, next-gen attacks will include perpetual waves of malware, phishes, and false websites nearly indistinguishable from the real things. Here’s how to prepare.

We’ve heard a lot about the fallout from last year’s Shadow Brokers bombshells, particularly when it comes to previously unreleased exploits put into the wild. Those exploits include EternalBlue, which was weaponized into the WannaCry ransomware that wreaked havoc on the Internet.

However, the most damaging element wasn’t necessarily the actual exploits. Yes, there has been an incredible amount of harm done, but it only foreshadows the real damage control the industry will be doing in the coming months and years.

Included in the information leaked by the Shadow Brokers, but not as widely understood, was a trove of information on the intelligence community’s methodology for finding vulnerabilities and building exploits. Now that this has been exposed, security pros had better be prepared to weather continuous attacks by zero day exploits against any and all applications and platforms.

In the next 12 to 36 months, we’re going to see hackers using these techniques to build the next generation of attacks. There will be perpetual storms of malware, click-free attacks, perfect lures — and much of it will be untraceable, with exploits becoming unique and essentially disposable.

Powered by big data, machine learning, and natural language processing engines, we’ll see phishes and false websites that will be nearly indistinguishable from the real things. There won’t just be new worms, but the equivalent of Web drive-by attacks extended to major services and even mobile platforms.

These attacks will be launched from command-and-control (C&C) networks that have never been seen before and will never be seen again. Expect malware that is constantly evolving and never reused, or that spawns so many new variants that traditional antivirus software can’t handle them all.

As such, domain analysis for malware C&C networks will become an obsolete art. IP reputation filters will be useless. Information from intelligence providers will become less valuable.

Get Your Head out of the Gear and into the Analytics
To defend against this dizzying new reality, we need to produce new types of logs that are more behavior-based, and to do a better job of using automation to analyze and act on outputs across the entire stack. Security professionals need to be thinking about how they’re building their fully automated analytical loop instead of what a specific device is detecting.

 


Essentially, today’s gear represents a collection and input mechanism. It’s a collector and an actuator. It’s both an earphone and a switch. To defend against the attacks of tomorrow, we need to extend the power of that collection effort, and close the loop with machine learning logic that can correlate, corroborate, and take action appropriate to the server or device in question.

To do this, companies must first establish methods for collecting information from all layers. The way to defend goes back to TCP/IP and the Open Systems Interconnection (OSI) model, from the physical layer to the application layer and everything in between.
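As a toy illustration of auditing that layer-by-layer coverage, the sketch below (the layer names and telemetry sources are hypothetical examples, not drawn from the article) flags stack layers that have no configured feed:

```python
# Hypothetical mapping of stack layers to example telemetry sources;
# both keys and values are illustrative, not a canonical list.
LAYER_SOURCES = {
    "physical":    ["switch port counters"],
    "network":     ["netflow", "firewall logs"],
    "transport":   ["tcp session metadata"],
    "session":     ["endpoint session telemetry"],
    "application": ["web server logs", "waf events"],
}

def coverage_gaps(collected):
    """Return the layers for which no telemetry feed is configured."""
    return [layer for layer, sources in LAYER_SOURCES.items()
            if not any(s in collected for s in sources)]

print(coverage_gaps({"netflow", "web server logs"}))
# → ['physical', 'transport', 'session']
```

A blind-spot check like this is the precondition for everything that follows: you cannot corroborate anomalies across layers you never collect from.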

Improve Your Ability to Detect Anomalies and Close the Loop
There are multiple places to detect anomalies — end-user machines, network gear, firewalls and application-aware firewalls, servers. Output feeds about the behavior of the full stack on these devices must be collected from all these physical locations.

After establishing a collection methodology, it’s time to get better at identifying anomalies, with the idea of creating an engine that will know the markers across all layers and devices.

Once those anomalies are understood, they must be fed into some kind of analytical system to be correlated. This allows for corroboration of what’s happening at the different layers, enabling more assured detection.

Then there should be some logic that looks at what the anomaly is and where the activity can best be halted with the least impact on operational processing. At this point, the system can determine where to stop the traffic, and the answer may be multiple places.

In other words, we’re going to get to a point where the system doesn’t just automatically detect something and corroborate it but goes beyond that to determine the best place to stop it and take action to close the loop.
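As a rough sketch of that detect-correlate-act loop (this is not the author’s implementation; the event fields, the crude z-score detector, and the enforcement choices are all assumptions for illustration):

```python
from collections import defaultdict

def detect(events, threshold=2.0):
    """Flag per-layer anomalies using a crude z-score on event volume."""
    by_layer = defaultdict(list)
    for e in events:
        by_layer[e["layer"]].append(e)
    anomalies = []
    for evs in by_layer.values():
        vals = [e["volume"] for e in evs]
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 or 1.0
        anomalies += [e for e in evs if (e["volume"] - mean) / std > threshold]
    return anomalies

def correlate(anomalies):
    """Group anomalies by host; agreement across layers raises confidence."""
    by_host = defaultdict(set)
    for a in anomalies:
        by_host[a["host"]].add(a["layer"])
    return {h: layers for h, layers in by_host.items() if len(layers) > 1}

def choose_action(corroborated):
    """Pick an enforcement point with the least operational impact."""
    return {host: ("block at firewall" if "network" in layers
                   else "isolate endpoint")
            for host, layers in corroborated.items()}
```

A real system would replace the z-score with proper behavioral models, but the shape is the point: detection output feeds correlation, and correlation output drives the choice of where to act.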

Don’t Be Afraid to Get Crafty
Yes, there’s machine learning embedded in this process. Is it something you can build? Yes. Something you can buy? Not always. Notice that I called for the feeds to go to an analytical system and not a SIEM. The concept is not fully fleshed out in the industry, though a lot of players are working on it, especially in the SIEM market. The next generation of security analysis capabilities will clean all the disparate inputs, normalize them so that they can be used for analysis, and allow us to compare information from all sources.

In the end, this process will combine and automate network and application security to an extent most organizations haven’t experienced before. But to do so, companies will have to get really knowledgeable about what is happening in their network and what the blind spots are. We have to ask questions such as: Do I have anything that gives me visibility into what is happening at the session layer on a PC? To what extent do I have the stack in my PCs, servers, and network covered?

All this is imperative today. In this new era, companies that rely on block lists, human analysis, or end users being able to spot phishing emails are going to be completely exposed.

And the pace is about to change because of our adversaries’ ability to generate new attacks automatically. How do we fight off more automation on the bad guys’ part? With a massive push into automation, and clearer and speedier corroboration of data, on ours.

Related Content:

Mike Convertino has nearly 30 years of experience in providing enterprise-level information security, cloud-grade information systems solutions, and advanced cyber capability development. His professional experience spans security leadership and product development at a wide … View Full Bio

Article source: https://www.darkreading.com/endpoint/battle-of-the-ais-dont-build-a-better-box-put-your-box-in-a-better-loop/a/d-id/1329682?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple