
Facebook Offers $1 Million for New Security Defenses

The social media giant has increased the size of its Internet Defense Prize program to spur more research into ways to defend users against the most prevalent methods of attack.

BLACK HAT USA – Las Vegas – Facebook is dramatically upping its efforts to entice security researchers to come up with new ways to secure and defend the Internet.

The social media titan is increasing the size of its Internet Defense Prize to $1 million, to be doled out in a series of prizes throughout 2018, said Alex Stamos, Facebook’s chief security officer, who will deliver the keynote address here today at Black Hat. Facebook awarded $100,000 in Internet Defense Prizes last year, and a total of $250,000 since launching the recognition program with USENIX in 2014.

Facebook’s goal is to encourage researchers to develop new ways to defend Internet users against vulnerabilities and minimize the success rate of attacks, especially those that exploit the reuse of the same password across multiple accounts, or that dupe a newcomer to the Internet into sharing personal and financial information while creating an account.

It’s simpler, day-to-day attacks like these, rather than ultra-complex and rare 0-day attacks, where at least half of security research should be focused, Stamos said in an interview with Dark Reading. By his estimate, offensive research accounts for 99% of the work being performed, with only 1% devoted to defensive security research.

As part of the Internet Defense Prize competitions, researchers will be given a variety of topics where Facebook would ideally like to see more research, Stamos said.

While a lot of defense researchers are focusing on authentication or new ways to authenticate oneself, Stamos noted that account lifecycle management is also an area of interest.

“What we see less from the research community is understanding that the entire lifecycle of somebody’s relationship with an online service has actually security issues throughout it,” Stamos said. “There’s the creation of the account, what do you do when someone loses their phone, loses their password. These are issues that the bad guys are actually exploiting … so research into the real world would be a great thing to happen.”

Facebook is also interested in seeing more research surrounding the worldwide mobile device ecosystem, he said.

“There is a lot of research into the new flaws or ways to exploit fully patched or very expensive devices. But that is not reflective of a huge percentage of the world population,” Stamos said.

A large portion of the global population cannot afford smartphones that cost upwards of $600 or $700; instead, they use less expensive Android devices that may cost $50 to $100 and ship with an older version of the operating system, he noted.

“There is a huge focus on finding 0-days on iPhones, and while that is a great thing to do, there is almost no research into the real mobile phone ecosystem and what it looks like and how we can keep people safe if we are shipping hundreds of millions of these phones,” observed Stamos.

Empathy in Security

Twenty years ago the security industry was fighting for respect, and for companies to understand that vulnerabilities needed to be patched, Stamos recalled. Now the industry has won that fight, but the question of “what do we do now?” looms, he said.

Security researchers can improve their defense tactics by developing more empathy for users in lower socioeconomic brackets. For example, a youth living in an underserved community may buy an older smartphone running an operating system without the latest updates. “What would their security experience be like?” Stamos said.

By walking in those users’ shoes and developing an empathy for how they may behave when it comes to security, a defensive researcher can catch more things that could potentially go wrong, he noted.

Greater empathy may also come by way of a more diverse workforce. Facebook also announced today that it hopes to expand diversity in the security workforce. The company is teaming up with CodePath to develop online and in-classroom cybersecurity courses for Virginia Tech, California State University San Bernardino, Mississippi State University, Merritt College, Hofstra University, and The City College of New York. The classes will be offered starting this academic year, with students potentially landing an internship at Facebook, Stamos said.

In addition to developing empathy for users, security researchers can benefit by extending empathy to software developers and other colleagues inside and outside their tech teams at a micro level, Stamos said. A dismissive attitude about finding vulnerabilities in another person’s code may make a researcher feel smarter, for example, but it does little to effect real change in the security community, he noted.

Security researchers with an empathic nature are also needed at a macro level, which includes working with politicians and law enforcement when events throw them together, he said, as happened after the San Bernardino terrorist attack, when government officials were trying to unlock a terrorist’s iPhone. Another, more recent example relates to the questions that have emerged about Russia’s involvement in the US elections and elections in Europe.

Facebook also announced today it will be a founding sponsor of the Defending Digital Democracy Project, an initiative focused on improving the security of elections and the democratic process. Facebook will provide financial and technical support to Harvard University’s Belfer Center, Stamos said.

Stamos said he has already seen some signs of a movement toward more empathy in security: “We have started to see some security people in our community start to think this way,” he said. “I figure we’ll do better this time than it taking the next 20 years.”


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/vulnerabilities---threats/facebook-offers-$1-million-for-new-security-defenses/d/d-id/1329468?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Wild West of Security Post-Secondary Education

Black Hat researchers will show how inconsistent security schooling is at the university level.

Although an increasing number of universities and post-secondary institutions offer some level of cybersecurity education, the discipline suffers from a lack of consistent accreditation or measurement of educational efficacy. As things stand, educators aren’t carefully considering their curriculum standards, and recruiters have a hard time using scholarly credentials as a measure of prospective employees’ skills.

This is the premise of a Black Hat talk by two Rochester Institute of Technology (RIT) professors who today plan to expose one of the fundamental problems behind the shortage in security talent across the industry.

They took a deep dive into security programs across the US for their presentation. Foremost among their findings was that while most schools today use their computer science degrees as the main vehicle for disseminating cybersecurity knowledge, the actual security content of these compsci degrees is minuscule.

The Association for Computing Machinery (ACM) curriculum guidelines that govern compsci degree accreditation require only three to nine lecture hours of security in a four-year computer science degree, says Rob Olson, a professor of programming, mobile security, and Web app security at RIT. As he emphasizes, those aren’t credit hours — those are actual hours in the classroom.

“That’s not just application-level security or coding-level security. That includes, in the computing science curriculum, where networking security and strong security principles would fit in,” chimes in his co-presenter, Chaim Sanders, also a professor at RIT.

The breakdown typically looks something like this:

  • one lecture hour dedicated to fundamental security,
  • one to two lecture hours on secure design,
  • one to two hours on defensive security,
  • one hour on threats and attacks, and
  • two optional hours on network security.

“And then — this is one of my favorites — one lecture hour on all of cryptography,” Olson says. “And that’s optional. That’s optional.”

Meanwhile, a number of schools are recognizing that they need to step up their game on cybersecurity and are making program changes accordingly. According to Olson and Sanders, for about 25% of schools that means specialized cybersecurity degrees. This is good in theory, but it presents problems at the execution level. First, some worry about whether a standalone degree is even an effective way to teach security today: as more real-world organizations move toward DevSecOps, where security is a shared discipline across developer and operations teams, breaking security out into its own program heads in the opposite direction from the one most IT departments are taking.

“So that seems to be an interesting, although maybe not necessarily very effective, maneuver, because it separates out who will essentially become the developers from the people who are going to be doing security in organizations,” says Sanders.

Meanwhile, at a more fundamental level, there’s no true accreditation available as a backstop for these specialized cybersecurity programs. At best, the National Security Agency (NSA) has its own set of designations, which have been serving as a pseudo-accreditation and which govern government grants to these schools for cybersecurity improvements.

“The closest thing to accreditation we have is NSA designations and in those cases there’s been a lot of open-endedness historically, which has fueled a lot of fly-by-night schools that are doing it as a draw but which don’t necessarily have the technical expertise to maintain the computing security program,” Sanders says.

This has created a marked stratification between the haves and have-nots, with only the tech schools able to offer a curriculum that keeps pace with today’s rapidly changing attack and defense trends. The trouble is that it’s difficult to convey even that to employers, because there’s no consistent measurement of cybersecurity educational efficacy either.

“There is very little assessment within higher education of things like learning outcomes for cybersecurity,” Olson says. “The curriculum guidelines that are there say these programs are supposed to teach security, but they’re not actually assessing the security knowledge that students are getting all that much.”


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/careers-and-people/the-wild-west-of-security-post-secondary-education/d/d-id/1329471?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hacking the Wind

A security researcher at Black Hat USA shows how wind turbine systems are susceptible to potentially damaging cyberattacks.

BLACK HAT USA – Las Vegas – Gaping security holes in wind energy control networks make them vulnerable to cyberattacks for extortion and physical destruction purposes, a researcher showed here today.

Jason Staggs, a security researcher at the University of Tulsa, has spent the past couple of years crisscrossing the US and hacking away at the systems that run the wind turbines that convert wind energy into electrical power. He did so with the blessing of operators of the wind farms, who allowed him to test the security of a single turbine at their sites and with the stipulation he would not disclose the names, locations, or products involved for security reasons.

What he found was a disturbing trend among these renewable power systems: “We were seeing the same vulnerabilities over and over again” in each wind farm and across multiple vendors’ equipment and models, Staggs said in an interview with Dark Reading last week.

If the vulnerabilities he found sound familiar, it’s probably because they are typical of traditional ICS/SCADA systems: easy-to-guess or default passwords, weak and insecure remote management interfaces, and no authentication or encryption of control messages.

Staggs says an attacker would need control over just one turbine at a wind farm to take over the entire operation. He physically plugged a homegrown Raspberry Pi-based tool into the control system network and found that compromising that single turbine was enough to control everything else.

“I had to have physical access to [just] one turbine to rule them all,” he says. Staggs, who presented his research here today at Black Hat, says he was able to pull off the hack at multiple wind farms around the country.

He admits that the security weaknesses in the wind farms echo those of so many other ICS/SCADA systems built with high-availability operations as the priority. But he was most interested in what an attacker could do once he or she had hacked the wind turbine system.

“No one is looking at the implications from an attacker’s motive: how could they leverage this access to the control system to control the wind turbine, to damage it or hold it for ransom?” he says.

Extortion-type hacks could be lucrative, he says, with downtime for a system costing $10,000 to $30,000 per hour. “If you can hold a 250 megawatt [system] at ransom for one hour,” the wind farm operator just might conclude that paying the ransom is cheaper than the downtime, he says.

Wind today represents 5.6% of electricity generated in the US, according to the Department of Energy, and by 2030, wind could provide 20% of the nation’s electricity.

“The more devious thing to do would be to gain access [to the wind turbine controller system] and wait for years until we’re more dependent on wind and then do bad things” with the systems, he says.

Wind farm vendors typically set up the systems for the wind farmers, which are usually power companies or their subcontractors, and school them on how to use and monitor the turbine system. After that, the wind farmer is on its own for the actual operations, he says. So “we’re helping them ask the right [security] questions” of the vendors, he says. “We’re trying to raise awareness of wind farm companies who operate these farms.”

Hack the Wind

The wind-turbine automation controllers Staggs tested were stationed at the base of the turbine – some 300 feet off the ground – with only a padlock as physical security. Staggs says there were no security cameras in place, so his only obstacle besides the harrowing height was to crack the hardware lock. “You can pick [the lock] or cut it with bolt cutters, open the door, and have all the access to the ICS network switch,” he says.

Once he plugged his Raspberry Pi tool into the flat CAN bus network, he was on a network that broadcasts unencrypted communications among all the other pieces of equipment, including the turbines themselves.
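To make the risk concrete, here is a minimal, hypothetical sketch of that kind of passive eavesdropping in Python with the scapy library. It assumes the control traffic rides over IP on the flat network; the interface name and protocol are placeholders, since Staggs did not disclose the actual products or protocols involved.

    # Passive sniffing sketch (requires root): on a flat, unencrypted network,
    # any attached host can read every broadcast control message.
    from scapy.all import sniff, UDP  # pip install scapy

    def show_control_msg(pkt):
        # Print a one-line summary plus the first bytes of the raw payload.
        if pkt.haslayer(UDP):
            print(pkt.summary(), bytes(pkt[UDP].payload)[:32])

    sniff(iface="eth0", prn=show_control_msg, store=False)

No credentials, exploits, or protocol knowledge are needed at this stage; the absence of encryption and authentication does all the work.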

That would allow an attacker to alter the controls of the turbines, including the motors, gears, and power control. He or she could change the speed values, for instance, which would force the turbines to spin out of control and break, or bring them to a standstill, halting power generation.

The automation controller – basically the brains of the system – communicates with the programmable logic controllers (PLCs) that run the turbine’s controls. It most commonly runs embedded Windows, Linux, or VxWorks, Staggs found.

“If you know what you’re doing, you can actually mess with the braking system to activate or degrade its [the turbine’s] integrity,” he says.

WindTools

Staggs built two network attack tools for his wind turbine research, Windshark and Windpoison. Windshark takes advantage of the unencrypted protocols between the human operator and automation controllers inside the turbines. “It can change the operational state of a turbine; turn it on and off,” he says. “Or change the maximum power output.”

Windpoison basically executes man-in-the-middle attacks on the network. “We can control commands and what the operator sees,” he says. “We could falsify the current RPMs of the rotor positioning,” for example.

He also built a tool for remotely hacking the turbine systems from afar, without physical access. The so-called Windworm proof-of-concept abuses the telnet and FTP interfaces used for network management of the turbine networks, which often employ default or easy-to-crack passwords.
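On the defensive side, exposure like this is easy to audit for. Below is a minimal, hypothetical sketch of such an audit in Python, flagging any turbine that answers on the telnet or FTP management ports; the addresses are placeholders, not real equipment.

    import socket

    # Hypothetical turbine controller addresses on the control network.
    TURBINES = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]
    MGMT_PORTS = {21: "ftp", 23: "telnet"}

    for host in TURBINES:
        for port, name in MGMT_PORTS.items():
            try:
                # A successful connect means the interface is reachable.
                with socket.create_connection((host, port), timeout=2):
                    print(f"{host}: {name} ({port}) exposed; change default "
                          f"credentials and restrict access")
            except OSError:
                pass  # closed or filtered; nothing to flag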

Staggs hopes to release the tools once the turbine equipment vendors have fixed the security flaws he has identified.

To pull off any of these attacks, an attacker would need some knowledge of the systems and vendor equipment, however. “Sometimes the attacks have to be customized for different vendors,” too, he says.

The good news is that there are some things operators of wind farms can do now to protect their networks from ransom, sabotage, or other cyberattacks, including segmenting the network and ensuring that wind turbines are isolated from one another so that a single compromised turbine can’t become a single point of failure. Staggs also recommends adding an inline firewall between turbines, or adding encrypted VPN tunnels between turbines in the field and the substations that run them.

“So if one turbine is compromised, it can’t compromise the others,” Staggs says.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/vulnerabilities---threats/hacking-the-wind/d/d-id/1329453?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FBI Talks Avalanche Botnet Takedown

FBI unit chief Tom Grasso explains the takedown of Avalanche and how the agency approaches botnet infrastructures.

BLACK HAT USA – Las Vegas – Tom Grasso, unit chief of the FBI’s cyber division, took the Black Hat stage to discuss the processes and partnerships leading up to the massive Avalanche takedown in December 2016.

Avalanche “wasn’t a botnet,” he noted at the beginning of his talk. It was an infrastructure for enabling botnets, created by two administrators and active since 2010. The multitiered network of servers was used to spread malware campaigns, facilitate “money mule” laundering schemes, and act as a fast-flux communication infrastructure for other botnets.
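Fast flux works by rotating a domain’s DNS records through a large pool of compromised hosts on very short TTLs, so that taking down any individual host barely dents the service. Here is a minimal sketch of how that churn can be observed from the outside, in Python with the dnspython library and a placeholder domain:

    # Repeatedly resolve a domain and watch its A records rotate.
    # Requires dnspython 2.x (pip install dnspython); the domain is a placeholder.
    import time
    import dns.resolver

    seen = set()
    for _ in range(10):
        answer = dns.resolver.resolve("flux-domain.example", "A")
        ips = {rr.address for rr in answer}
        new = ips - seen
        if new:
            print(f"TTL={answer.rrset.ttl}s, new IPs: {sorted(new)}")
        seen |= ips
        time.sleep(answer.rrset.ttl + 1)  # re-query once the record expires

A legitimate site typically resolves to a small, stable set of addresses with long TTLs; a fast-flux domain accumulates dozens or hundreds of short-lived IPs.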

The network affected more than 500,000 systems and caused hundreds of millions of dollars in losses. Malware powered by Avalanche included Nymaim ransomware and GozNym, a banking Trojan designed to steal credentials and initiate fraudulent wire transfers.

Grasso displayed an ad for Avalanche on criminal forum DirectConnection, where it was described as “ideal for hosting Trojans” with “bulletproof hosting” and high-speed uplinks. More than 800,000 malicious domains were associated with Avalanche; its complexity “demonstrates the great lengths criminals will go to, to make this work,” he explained.

“We’re not talking about some kid in his mom’s basement; … we’re talking about businessmen. This is a business to them,” he said. “This was a strategic move by the criminals running this to add another level of complexity to make it unsusceptible to law enforcement intervention.”

As part of his presentation, Grasso discussed the FBI’s approach to reducing the threat of botnets. Its steps include neutralizing threat actors through arrest, charge, and prosecution; disabling the infrastructure; and mitigating the threat by sharing IOCs and signatures.

Working with the private sector is essential, he added. Private sector businesses identify priority threats and the FBI works with them to brainstorm solutions. Both sides share intel on the problem and determine a way to neutralize the threat.

The FBI worked with private companies, international organizations, and foreign governments to take down Avalanche. Partners included FBI agents, German state and federal police, Ukrainian police, Shadowserver, the nonprofit Registry of the Last Resort, and Fraunhofer, a German research organization that mapped out the technical patterns of Avalanche.

“The criminals are really excellent at collaborating. … It’s one of the reasons they’re great at what they do,” said Grasso. “If we’re going to do something about these problems, it’s gonna be a joint effort.”

In November 2015, it was discovered the administrators behind Avalanche were using a private server in Moldova to communicate with clients and for the domain registration panel. In January 2016, they moved the functions of the Moldovan server to a private server in the US.

A search warrant on the private server revealed email addresses for the administrators and a buddy list with more than 200 clients. Officials discovered that easy-reg.net was an administrative panel listing 3,000 domains running over Avalanche websites. One chat discovered by officials included an explanation of the “fast-flux” decisions driving criminal activity on the network.

The investigation of Avalanche included the arrest of five individuals and searches across four countries, the seizing of servers, and an “unprecedented effort” to sinkhole more than 800,000 malicious domains associated with the infrastructure.

Going forward, Grasso emphasized the importance of working with private and international partners as criminals conduct operations abroad.

“The bad guys are never in your country. … They’re always somewhere else when you’re investigating this stuff,” he said.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/attacks-breaches/fbi-talks-avalanche-botnet-takedown/d/d-id/1329473?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Chips with everything – are you ready to be bio-hacked?

The news that a US self-service shopping vendor plans to implant or “bio-hack” dozens of its employees with tiny chips was always going to grab people’s attention.

The Wisconsin company involved, Three Square Market, is far from the first organisation to do this – Swedish startup hub Epicentre experimented with the idea earlier this year, as have plenty of adventurous individuals – but it is still an early example of how human microchipping could be used in mainstream business.

The idea is for 53 of the company’s workers to have a tiny $300 NFC (Near Field Communication) RFID chip inserted under the skin between their thumb and index finger, on a voluntary basis. This device can then carry credit card data, allowing wearers to buy goods from the company shop without having to carry plastic.

This application serves as an advert for the company’s self-service vending systems, which doubtless explains why it came up with the idea: it’s a clever piece of self-promotion.

It will also be used by employees to enter the workplace and authenticate to desktop PCs, which means they won’t need to log in using conventional credentials.

“It takes about two seconds to put it in and to take it out,” Three Square Market’s Patrick McMullan told the BBC.

It would be easy to throw the word “Orwellian” at bio-hacking but, arguably, that is to misapply the term.

The chip does not track the individual’s location, nor does it allow surveillance beyond the fact that they have entered a building, logged on to a PC or bought something – things any digital credential can record. Three Square Market is not watching its workers.

The chip used – NXP’s high-frequency 13.56MHz NTAG216, with 888 bytes of writable data – has been around since 2012, finding a niche in a range of product and smartphone tags. The underlying NFC technology is also used in a wide variety of technology nobody thinks twice about, including contactless credit and debit chip cards as well as smartphones themselves.
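Because the chip is commodity hardware, reading one takes only a few lines of code. Here is a minimal, hypothetical sketch using the Python nfcpy library and a USB reader; it shows how readily such a tag identifies itself and lists any NDEF records, though Three Square Market’s actual payload format has not been published.

    import nfc  # pip install nfcpy

    def on_connect(tag):
        print(tag)  # e.g. Type2Tag 'NXP NTAG216' plus the tag's unique ID
        if tag.ndef:  # NDEF-formatted tags expose their records directly
            for record in tag.ndef.records:
                print(record)
        return True  # hold the tag until it leaves the field

    clf = nfc.ContactlessFrontend("usb")
    try:
        clf.connect(rdwr={"on-connect": on_connect})
    finally:
        clf.close()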

All the same, putting a chip inside a human being does feel as if it’s crossing a line. Normally people authenticate themselves by carrying a token of sorts, for instance a credit card or two-factor security token. In this concept, the employee becomes the token.

Asking people to turn themselves into a walking authentication system sounds novel today but raises legal and ethical issues that might one day cause problems.

It’s unlikely employees could be compelled to have a chip inserted, but would there be a hidden price for anyone unwilling to agree to what might be pitched as an important security boost? It’s also the case that NFC chips are developing rapidly, acquiring more memory as they add functions. That, or their limited lifespan, could eventually demand upgrades.

Perhaps the biggest unknown is security. The data stored on these NFC chips is encrypted and can’t currently be read remotely, but it’s impossible to rule out remote reading should some kind of vulnerability be uncovered.

The possibility of hacking chips sitting inside humans, whether to steal data or compromise capabilities, sounds far-fetched. The question is how much hard work tightening security needs to be done before people can take this on trust.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kXK_GukL3Lg/

WikiLeaks drops another cache of ‘Vault7’ stolen tools

The WikiLeaks “Vault 7” almost-weekly drip-drip-drip of confidential information on the cybertools and tactics of the CIA continued last week.

The latest document dump is a trove from agency contractor Raytheon Blackbird Technologies for the so-called “UMBRAGE Component Library” (UCL) Project, which includes reports on five types of malware and their attack vectors.

This is the 17th release of specific CIA hacking or surveillance tools since the initial announcement by WikiLeaks on March 7.

According to a statement announcing the latest release:

The documents were submitted to the CIA between November 21st 2014 (just two weeks after Raytheon acquired Blackbird Technologies to build a Cyber Powerhouse) and September 11th 2015. They mostly contain Proof-of-Concept ideas and assessments for malware attack vectors – partly based on public documents from security researchers and private enterprises in the computer security field.

Raytheon Blackbird Technologies acted as a kind of “technology scout” for the Remote Development Branch (RDB) of the CIA by analysing malware attacks in the wild and giving recommendations to the CIA development teams for further investigation and PoC development for their own malware projects.

The component library includes:

  • A new variant of the HTTPBrowser Remote Access Tool (RAT), built in 2015 and used by a threat actor known as “Emissary Panda,” believed to be in China. It is a keylogger and, according to Raytheon, captures keystrokes “using the standard RegisterRawInputDevice() and GetRawInput() APIs and writes the captured keystrokes to a file”.
  • A new variant of the NfLog RAT, also known as IsSpace and used by “Samurai Panda”. It is, according to Raytheon, “a basic RAT that polls C2 servers every 6 seconds awaiting an encoded response”. If it detects that a user has administrative privileges, “it will attempt to reload itself using the elevated permissions”.
  • Regin, described as “a very sophisticated malware sample,” which has been around since 2013 and is used for target surveillance and data collection. Raytheon said it has a six-stage, modular architecture that “affords a high degree of flexibility and tailoring of attack capabilities to specific targets”. It is also stealthy, with an “ability to hide itself from discovery, and portions of the attack are memory resident only”.
  • HammerToss, suspected Russian state-sponsored malware that became operational in 2014 and was discovered in 2015. It uses Twitter accounts, GitHub or compromised websites, and cloud storage to arrange command-and-control operations, and is considered the most sophisticated of the five pieces of malware in the current release.
  • Gamker, an information-stealing Trojan that “uses an interesting process for self-code injection that ensures nothing is written to disk”.

As WikiLeaks noted in its announcement, these were all malware attacks found in the wild, and therefore not secret. But the CIA’s hope clearly was that they would lead to the development of “their own malware projects” – to be used to conduct attacks not just on individual computers or systems, but on social media platforms like Twitter as well.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/n8ZtQ_O6soU/

Privacy dust-up as Roomba maker mulls selling maps of users’ homes

iRobot, maker of the cat chariot-cum-auto-vacuum Roomba robot, is looking into selling maps of our homes to one of the Big Three companies behind artificially intelligent (AI) voice assistants – Google, Amazon and/or Apple.

What does the Roomba know about us? Well, the data points its higher-end models collect include your home’s floor plan, the rough dimensions of everything located on your floor, the areas of your home that require the most cleanup and hence likely see the most activity, how often you clean, and the distances between all your stuff.

iRobot CEO Colin Angle told Reuters that this kind of spatial data is just the ticket when it comes to helping a smart home get over its dumbness about our physical environments:

There’s an entire ecosystem of things and services that the smart home can deliver once you have a rich map of the home that the user has allowed to be shared.

Examples given by Guy Hoffman, a robotics professor at Cornell University: sound systems that could match home acoustics, air conditioners that can schedule airflow by room, or smart lighting that can adjust according to time of day and window position. Plus, of course, the more data that companies like Amazon, Google or Apple have on us, the better they can market home goods at us.

In fact, iRobot made Roomba compatible with Amazon’s Alexa voice assistant in March, as in, “Alexa, ask Roomba to begin cleaning.”

Angle told Reuters that iRobot could reach a deal to sell spatial maps of homes to Google, Amazon, and/or Apple in the next couple of years. It’s already in active discussions with Amazon and Google about its ongoing effort to add Alexa and Google Assistant functionality to the Roomba line, Angle told Tech Crunch. Neither Amazon, Apple nor Google has so far commented on the news.

Privacy advocates’ response: you can wheel that privacy-hoovering notion right off a steep staircase.

Tech Crunch reports that, following a flurry of articles and a load of ears pricking up on the privacy front, iRobot seems a bit taken aback. Angle has, after all, been talking about integrating mapping with Alexa since March.

It’s stressing that, as Angle said in his interview with Reuters (which took place in May), selling off maps of users’ homes to the highest bidder is going to be strictly opt-in. From a statement Angle sent to Tech Crunch:

iRobot takes [the] privacy and security of its customers very seriously. We will always ask your permission to even store map data.

iRobot has not formed any plans to sell the data. We do hope to extract value from the information, but would only do so with the permission of our customers… But to be clear, this is only if you opt in. It is still unclear what – if any – actual ‘partnerships’ would be needed to make that happen.

Roomba owners can opt out of cloud-sharing within the iRobot Home app. But as Gizmodo found when squinting at iRobot’s privacy policy, the legalese states various situations in which it will indeed share user data with third parties.

That includes typical scenarios: iRobot can share your data internally, with subsidiaries, third-party vendors, and the government, upon request. Nothing surprising there. Plus, users have to give consent to share data with third parties for marketing purposes.

But keep reading and you’ll find that there are plenty of cases where it doesn’t need our say-so to share our data:

We may share your personal information with other parties in connection with any company transaction, such as a merger, sale of all or a portion of company assets or shares, reorganization, financing, change of control or acquisition of all or a portion of our business by another company or third party or in the event of bankruptcy or related or similar proceeding.

As Angle stressed, the map sharing is just in the talking stages at this point. Hopefully, the privacy concerns it’s causing will help to shape how the data sharing opt-in will look when it goes live.

Will it be in teensy type, buried in a lengthy legal document? Or will it be a whole lot easier for users to spot?

Let’s hope it’s the latter. After all, this is the infamous Internet of Things (IoT) we’re talking about. This is the place where data too often gets swept up and put into bags that have holes.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PcbK_P6oZNM/

Philadelphia RaaS: our map of how it works (and how to prevent it)

Yesterday at Black Hat USA 2017, Sophos released an in-depth report called “Ransomware as a Service (RaaS): Deconstructing Philadelphia,” written by Dorka Palotay, a threat researcher based in SophosLabs’ Budapest, Hungary, office. It delves into the inner mechanics of a ransomware kit anyone can buy for $400. Once they’ve bought it, the bad guys can hijack computer data and hold it for ransom.

Yesterday, we focused on the marketing around Philadelphia and its leap from the Dark Web to promotion on the open web. Today, we focus on how the RaaS kit itself works.

Ransomware analysis

For a ransomware campaign to succeed, attackers must overcome four main challenges:

  1. Setting up a command-and-control server to communicate with victims,
  2. Creating ransomware samples,
  3. Sending the samples to the victims, and
  4. Managing the attacks (collecting statistical information, checking payment etc.).

When someone buys Philadelphia ransomware, they get an executable: the so-called Philadelphia headquarter. The headquarter helps attackers with the first, second and fourth challenges during their attacks; we have also seen examples where the developers help their customers with the third step.

There are three systems involved in a Philadelphia ransomware attack. Two are under the control of the attacker and the third is the computer of the victim.

The attacker needs a computer on which to run the headquarter, and also needs a web server to communicate with the victims. In the following sections we explain what happens on these three systems, and across the communication channels between them, during the preparation and successful execution of an attack; propagation of the malware is not covered here.

Track victims and (maybe) give mercy

In addition to the marketing, the product itself is advanced, with numerous settings buyers can tailor to better target their victims, including options to “Track victims on a Google map” and “Give Mercy”. Tips on how to build a campaign, set up the command-and-control center and collect money are also explained. It’s all right there.

Ironically, the “Give Mercy” feature is not necessarily to help victims, but is instead there to help cybercriminals get themselves out of a sticky situation, says Palotay. It’s also there in case friends of an attacker accidentally find themselves ensnared or if the crooks want to test their attack.

The option to track victims on a Google map, which sounds creepy, gives a glimpse into how cybercriminals determine the demographics of those they’ve deceived. This information could help them decide to repeat an attack, course-correct the next attack, or bail out with the Mercy option.

Extra features for extra money

The Mercy and Google-tracking options and other features in Philadelphia are not unique to this ransomware – but they aren’t widespread, either. They are examples of what’s becoming more common in kits and, as a result, show how ransomware-as-a-service is becoming more like a real-world software market. Palotay said:

The fact that Philadelphia is $400 and other ransomware kits run from $39 to $200 is notable. The $400 price tag, which is quite good for what Philadelphia buyers are promised, includes constant updates, unlimited access and unlimited builds. It’s just like an actual software service that supports customers with regular updates.

Philadelphia also has what’s called a “bridge” – a PHP script to manage communications between attackers and victims and save information about attacks.

Additional features that Philadelphia buyers can customize include the text of the ransom message that will appear to victims, the color of the text, whether the message appears before a victim’s data is encrypted, and “Russian Roulette,” which deletes some files after a predetermined timeframe. “Russian Roulette” is common in ransomware kits, and is used to panic users into paying faster by randomly deleting files after a number of hours.

Having customization options and bridges brings in more profit and adds a whole new dimension to cybercrime that could increase the speed of ransomware innovation, says Palotay.

In other RaaS cases SophosLabs examined, pricing strategies ranged from splitting a percentage of victims’ ransom payments with kit customers to selling subscriptions to dashboards that track attacks.

Stolen code

The report also reveals that some cybercriminals have “cracked” or pirated Philadelphia and sell their own ripped-off version at a lower cost. While cracking is not new, the scale is interesting. Ready-made threats that don’t require attackers to know what they’re doing and are easily available for purchase are constantly evolving. Sophos expects this trend of upping the ante and committing fraud against fraudsters to continue. Palotay said:

It’s not uncommon for cybercriminals to steal one another’s code or build upon older versions of other ransomware, which is what we saw with the recent NotPetya attack. The NotPetya attack combined GoldenEye, a previous version of Petya, with the EternalBlue exploit to spread and infect computers globally.

Defensive measures

For best practices against all types of ransomware, Sophos recommends:

  • Back up regularly and keep a recent backup copy off-site. There are dozens of ways other than ransomware that files can suddenly vanish, such as fire, flood, theft, a dropped laptop or even an accidental delete. Encrypt your backup and you won’t have to worry about the backup device falling into the wrong hands.
  • Don’t enable macros in document attachments received via email. Microsoft deliberately turned off auto-execution of macros by default many years ago as a security measure. A lot of malware infections rely on persuading you to turn macros back on, so don’t do it!
  • Be cautious about unsolicited attachments. The crooks are relying on the dilemma that you shouldn’t open a document until you are sure it’s one you want, but you can’t tell if it’s one you want until you open it. If in doubt, leave it out.
  • Patch early, patch often. Malware that doesn’t come in via document macros often relies on security bugs in popular applications, including Office, your browser, Flash and more. The sooner you patch, the fewer open holes remain for the crooks to exploit. In the case of this attack, users want to be sure they are using the most up-to-date versions of their PDF reader and Word.
  • Use Sophos Intercept X, which stops ransomware in its tracks by blocking the unauthorized encryption of files.
  • Try Sophos Home for Windows and Mac for free with family and friends.
  • Check out our webcast on RaaS, scheduled for August 23.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MNu4zsgoQvM/

Where are the holes in machine learning – and can we fix them?

Machine learning is taking over the world. It’s at the center of how your mobile phone, Amazon Echo and Google Home devices work and how Google produces search results. Experts predict it’s going to play a role in automating fast food production and elements of the construction and trucking industries in the coming decades – and it’s taking information security efforts to the next level in the form of more effective intrusion detection systems.

But like all good technological advances, the bad guys have the capacity to exploit machine learning algorithms. Sophos chief data scientist Joshua Saxe puts it this way:

Insecurities in machine learning algorithms can allow criminals to cause phishing sites to achieve high-ranking Google search results, can allow malware to slip by computer security defenses and, in the future, could potentially allow attackers to cause self-driving cars to malfunction.

At BSidesLV on Wednesday, Saxe outlined the dangers and ways to minimize exposure while the bigger fixes are being worked on.

The perilous unknown

Machine learning is still new enough that there are many unknowns. One is that security practitioners don’t know how to make machine learning algorithms secure. Saxe said:

While the security community has raised concerns about machine learning, most security professionals aren’t also machine learning experts, and thus can miss ways in which machine learning systems can be manipulated. Additionally, machine learning experts recognize that today’s machine learning systems are vulnerable to attacker deception, and recognize this as an unsolved problem in machine learning.

Computer scientists are working to solve the problem, but as of now, all machine learning algorithms – even those deployed in real-world systems like search engines, robotics systems, and self-driving cars – have serious security issues.

The danger to cars was one of Saxe’s main examples.

For security professionals, the ability to hack such technology raises the question of how the bad guys could potentially wreak havoc on machine learning-based security technology. Another example Saxe used was the potential to corrupt biometric security systems.

Now what?

Having presented the dangers, the question now is what security professionals can do to minimize the threat. Saxe’s advice:

  • Whenever possible, don’t give attackers access to your machine learning model.
  • Whenever possible, don’t give attackers black-box access to your model.
  • In cybersecurity settings, use layered defenses so that attacker deception vis-à-vis a single machine learning system is less damaging.

As scary as the threat might look to the average person, Saxe said there’s cause for hope: those in the machine learning community are well aware of the vulnerabilities and are constantly working on ways to close security holes. He said:

Researchers at Sophos are exploring the ways cybersecurity machine learning algorithms can be attacked and are working to fix these issues before criminals discover and exploit them. We’ve found some interesting mitigations, such as randomizing the training sets we select and deploying diverse machine learning models, that make it harder for criminals to succeed.
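Here is a minimal, hypothetical sketch of those two mitigations in Python with scikit-learn: several structurally different classifiers, each trained on its own random subset of the data, combined by majority vote. The feature and label files are placeholders, not Sophos’s actual models or data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X, y = np.load("features.npy"), np.load("labels.npy")  # placeholder data

    fitted = []
    for model in (RandomForestClassifier(), LogisticRegression(max_iter=1000),
                  GaussianNB()):
        # Each model sees a different random 80% sample, so there is no single
        # fixed decision boundary for an attacker to probe and evade.
        idx = rng.choice(len(X), size=int(0.8 * len(X)), replace=False)
        fitted.append(model.fit(X[idx], y[idx]))

    def classify(sample):
        # Majority vote: deceiving one model is not enough to flip the verdict.
        votes = [int(m.predict(sample.reshape(1, -1))[0]) for m in fitted]
        return max(set(votes), key=votes.count)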

Expect more detail on that in the near future at Naked Security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RT9NrwmY2Vw/

Time-rich netizens marshal ballot-stuffing bots against… Radio Times contest

Internet ballot-stuffing has existed for as long as Rickrolling, if not longer, but it used to be a serious endeavour requiring a certain level of commitment, however misguided.

Yesterday a Reddit community sprang up dedicated to the proposition that it’s worth the trouble to use bots to skew a Radio Times poll. Yes, the TV listings mag beloved of aunties across this United Kingdom, and specifically its website, radiotimes.com, where the poll was run.

The poll, managed by PollDaddy, saw James O’Brien of talk radio station LBC pitted against Brady Haran of the podcast Hello Internet in the final two of the radio and podcast award.

The netizens appeared to want O’Brien to lose and Haran – whose Hello Internet listeners are known as “Tims” – to win the poll, which closed at 2200 BST last night.

Judging by the chat on the thread, PollDaddy’s developers valiantly deployed bans on IP addresses that voted too often as well as slinging out CAPTCHAs to those voters it was not sure about. We’ve asked PollDaddy for comment about the poll glassbowls.

Later yesterday the batch-voting bots for Windows and JavaScript were outdone in complexity by a Linux app that relies on Tor for anonymity. In theory the latter script, put together by one of the unusually motivated Reddit denizens, ought to have defeated CAPTCHAs, although there was no great confidence on this point. Another character put together a web-based client that’ll run on anything, including mobile phones.

O’Brien had already been labelled the loser in the run-off in his Wikipedia entry by Tuesday afternoon, hours before the poll closed.

Why this particular bunch of Redditors dislikes O’Brien is an article of faith, not really explained, and frankly, who cares? It’s not as if Star Trek versus Star Wars is under discussion.

We’ve asked PollDaddy and LBC for comment and will update if we hear from them. Radio Times declined to comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/26/reddit_ballot_poll_radio_times/