
DEA and ICE hiding cameras in streetlights and traffic barrels

Drug and immigration cops in the US are buying surveillance cameras to hide in streetlights and traffic barrels.

Quartz spotted a number of contracts between a company called Cowboy Streetlight Concealments and two government agencies: the Drug Enforcement Administration (DEA) and Immigration and Customs Enforcement (ICE).

As government procurement documents show, since June, the DEA has spent about $22,000 to buy “video recording and reproducing equipment” in Houston, Texas, while the Houston ICE office paid out about $28,000 for the same type of equipment, all of it coming from Cowboy Streetlight Concealments.

It’s unknown where those surveillance cameras will be installed or where they’ve already been plugged in. Quartz reports that ICE offices in the Texas cities of Dallas, Houston, and San Antonio have all ponied up money to buy equipment from Cowboy Streetlight Concealments. The DEA’s most recent purchases were funded by the agency’s Office of Investigative Technology, in Lorton, Virginia.

Cowboy Streetlight Concealments is owned by Christie Crawford and her husband, a Houston police officer. Crawford told Quartz that she wasn’t at liberty to go into detail about federal contracts: all she could say was that the government tells her company what it wants, and the company builds it:

Basically, there’s businesses out there that will build concealments for the government, and that’s what we do. They specify what’s best for them, and we make it. And that’s about all I can probably say.

Does it really matter where the hidden surveillance cameras are being installed? Maybe to me and you, but that could just be because we aren’t aware of how ubiquitous surveillance cameras are. Crawford:

I can tell you this – things are always being watched. It doesn’t matter if you’re driving down the street or visiting a friend, if government or law enforcement has a reason to set up surveillance, there’s great technology out there to do it.

Another company in this space, Obsidian Integration, last week received a DEA contract for “concealments made to house network PTZ [Pan-Tilt-Zoom] camera, cellular modem, cellular compression device”. Obsidian, which sells “covert systems” and “DIY components,” lists among its customers the Department of Homeland Security (DHS), the Secret Service, the FBI, and the Internal Revenue Service (IRS), among other government agencies.

Last week, Obsidian was also granted a $33,500 contract with New Jersey’s Jersey City Police Department to buy a covert pole camera. The city’s resolution noted that the reason it needs a hidden camera is so that police can “target hot spots for criminal and nuisance activity and gather evidence for effective prosecutions.”

Quartz noted that it’s not just streetlights that are spying on us: the DEA is stashing hidden cameras in other places that can just as handily surveil the masses:

In addition to streetlights, the DEA has also placed covert surveillance cameras inside traffic barrels, a purpose-built product offered by a number of manufacturers. And as Quartz reported last month, the DEA operates a network of digital speed-display road signs that contain automated license plate reader technology within them.

Unfortunately, there’s scant oversight regarding where surveillance cameras can be put or how the government can use them, ACLU senior advocacy and policy counsel Chad Marlow told Quartz:

[Local law enforcement] basically has the ability to turn every streetlight into a surveillance device, which is very Orwellian, to say the least. In most jurisdictions, the local police or department of public works are authorized to make these decisions unilaterally and in secret. There’s no public debate or oversight.

What little effort has gone into curtailing local governments’ pervasive surveillance hasn’t met with much success. In January 2018, a California committee passed senate bill SB-712: a piece of legislation that would tweak the law that says you can’t cover your car’s license plate. It basically amounted to “keep your spying, data-collecting, privacy-invading cameras away from our cars.” As it is, there are businesses that send automated license plate readers (ALPRs) up and down streets to document travel patterns and license plates and sell the data to lenders, insurance companies, and debt collectors.

The bill was not passed.

Seven months after it was voted down, the California city of Sacramento admitted to tracking welfare recipients’ license plates: a failure to comply with the city’s own regulations on license plate data collection.

The use of streetlights to surveil citizens should come as no surprise: it’s in keeping with these agencies’ habitual, mass surveillance. In 2015, the ACLU obtained documents revealing that the DEA, over the course of several years prior, had been building a massive national license plate reader (LPR) database that it shares with federal and local authorities, with no clarity on whether the network is subject to court oversight.

Identifying cars and users based on their license plates is nothing new. It’s what they’re for: license plates are public, unique identifiers. And of course we’ve always had the ability to follow a person or a car by spotting an individual, or a license plate, and watching where they go, with or without streetlamps and spying trash barrels.

So in some ways, this is nothing new. It’s just yet another sign of the scale and growing ubiquity of surveillance.

There are properties and capabilities that emerge from large collections of data that don’t exist in the same data at smaller scales (it’s why we had to invent a term – Big Data – to describe it).

Even if you believe that people who don’t have anything to hide shouldn’t have anything to fear from surveillance, bear in mind that these large, valuable collections of data aren’t necessarily being amassed in a secure fashion, properly disposed of, or kept safe from prying eyes… or from just being handed to journalists who ask for them, for that matter.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Q9PrPaM-a6E/

WordPress GDPR compliance plugin hacked

The EU General Data Protection Regulation (GDPR) is supposed to make companies take extra care with their customers’ personal data. That includes gathering explicit consent to use information and keeping it safe from identity thieves.

WP GDPR Compliance is a plugin that allows WordPress website owners to add a checkbox to their websites. The checkbox allows visitors handing over their data to grant permission for the site owners to use it for a defined purpose, such as handling a customer order. It also allows visitors to request copies of the data that the website holds about them.

Users send these requests using admin-ajax.php, a file that lets browsers talk to the WordPress server. It uses Ajax (Asynchronous JavaScript and XML), a technique that lets pages exchange data with the server without reloading, making for smoother user interfaces. This system first appeared in WordPress 3.6 and allows the content management system to offer better auto-saving and revision tracking, among other things.

The GDPR plugin also allows users to configure it via admin-ajax.php, and that’s where the trouble begins. Attackers can send it malicious commands, which it stores and executes. They can use this to trigger WordPress actions of their own.
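The plugin’s actual PHP isn’t reproduced here, but the class of bug is easy to sketch. Below is a minimal, hypothetical Python illustration of the pattern Wordfence described – a handler that writes whatever setting a request names, versus a patched version that checks an allowlist. All names and structures are invented for illustration:

   # Toy model of the flaw (not the plugin's real code): the handler
   # trusts the caller and will overwrite any stored setting it is told to.
   SETTINGS = {"users_can_register": False, "default_role": "subscriber"}

   def vulnerable_handler(request):
       SETTINGS[request["option"]] = request["value"]   # no check at all

   ALLOWED = {"gdpr_consent_text"}   # settings the plugin should manage

   def patched_handler(request):
       if request["option"] not in ALLOWED:             # reject everything else
           raise PermissionError("option not permitted")
       SETTINGS[request["option"]] = request["value"]

   # The attack described below, in miniature: open up registration,
   # then make every new account an administrator.
   vulnerable_handler({"option": "users_can_register", "value": True})
   vulnerable_handler({"option": "default_role", "value": "administrator"})
   print(SETTINGS)   # both security-critical settings now attacker-controlled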

Wordfence, the WordPress security firm that discovered the flaw, said that attackers were exploiting it in two ways.

In the first, attackers created administrative accounts by switching on new-user registration and altering a setting so that new users automatically become administrators. Then they installed a malicious plugin that infected the site with malware. Attackers were using this method to install a PHP web shell – a script that gives them remote admin capabilities on the web server, providing terminal access and a file manager, Wordfence reported.

In the second exploit, attackers uploaded a series of scripted tasks that are scheduled via WP-Cron. Cron is a common task scheduling system that handles jobs on Unix systems, and WP-Cron is the way that WordPress handles scheduled tasks.

This attack, which is more complex than the first, used the e-commerce plugin WooCommerce, which is one of the plugins that WP GDPR Compliance supports. It hijacked a WooCommerce function to install another plugin called 2MB Autocode. This plugin allows administrators to inject their own PHP code into WordPress posts.

The attackers used this attack to inject a PHP backdoor script that downloaded code from another site. The 2MB Autocode plugin then deleted itself.

Wordfence couldn’t find any obvious executable payload in this attack, but said that the attackers may be building a collection of websites and biding their time:

It’s possible that these attackers are stockpiling infected hosts to be packaged and sold wholesale to another actor who has their own intentions. There’s also the chance that these attackers do have their own goals in mind, but haven’t launched that phase of the attack yet.

The plugin developers fixed the flaw after the WordPress security team removed the plugin from the WordPress directory. Since then, the WordPress team has once again made it publicly available.

However, some users were not quick enough to update their systems. One posted on the plugin’s support forum:

I was not quick enough to update and have been hit with the WP GDPR Compliance Plugin hack. Website is now down HTTP error 500.

 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_i5_ncdzXpY/

Google and Cloudflare traffic diverted to China… do we need to panic?

Conspiracy theorists can stand down from puce alert!

A network outage that affected US providers including Google and Cloudflare on Monday, intermittently diverting traffic via China…

…has been chalked up to a blunder.

Here’s why.

Internet traffic depends heavily on a system called BGP, short for Border Gateway Protocol, which ISPs use to tell each other what traffic they can route, and how efficiently they can get that traffic to its destination.

By regularly and automatically communicating with one another about the best way to get from X to Y, from Y to Z, and so on, internet providers not only help each other find the best routes but also adapt quickly to sidestep outages in the network.

Unfortunately, BGP isn’t particularly robust, and the very simplicity that makes it fast and effective can cause problems if an ISP makes a routing mistake – or, for that matter, if an ISP goes rogue and deliberately advertises false routes in order to divert or derail other people’s traffic.

Simply put, good news about reliable routes travels fast via BGP, but bogus news about incorrect or nefarious routes travels just as fast, until someone notices and the competent majority in the community react to correct the blunder.
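To see why one bad announcement can redirect traffic so widely, consider that routers generally prefer the most specific matching prefix they have heard. Here is a small, self-contained Python sketch of that longest-prefix-match rule, using made-up prefixes and labels rather than the real routes involved in this incident:

   # Longest-prefix match: the more specific (longer) prefix wins,
   # which is why a leaked or hijacked /25 beats a legitimate /24.
   import ipaddress

   routes = [
       (ipaddress.ip_network("198.51.100.0/24"), "legitimate origin"),
       (ipaddress.ip_network("198.51.100.0/25"), "leaked route"),
   ]

   def best_route(dest):
       candidates = [(net, label) for net, label in routes if dest in net]
       return max(candidates, key=lambda r: r[0].prefixlen)

   print(best_route(ipaddress.ip_address("198.51.100.1")))
   # -> (IPv4Network('198.51.100.0/25'), 'leaked route')

Once neighbouring networks accept and re-advertise the more specific route, traffic follows it automatically – no one needs to be tricked individually.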

That’s what seems to have happened in this case, where traffic to Google and other networks was intermittently disrupted, though fortunately not for long.

To envisage BGP blunders in driving terms, imagine that you are cruising on the freeway but receive a radio or satnav alert that the road ahead is closed just after the next exit, due to an accident.

You dutifully take the next exit to get off the freeway, only to find that the bulletin was wrong – it’s the next on-ramp that’s closed, not the freeway itself.

In other words: you’ve needlessly left the fast-flowing freeway; you can’t get back on it again without diverting through a nearby town; and everyone else who heard the bulletin did the very same thing, thus making a bad situation worse and clogging up the town centre.

What went wrong?

In this case, it looks as though a Nigerian ISP made a routing mistake that was accepted by a huge Chinese ISP, thereby inadvertently causing the outage – in internet terms, an injury to one can easily become an injury to all.

But was it a mistake, or should we assume some sort of conspiracy?

After all, Nigeria is popularly connected with online fraud; China is frequently accused of internet espionage; and a recently published paper explicitly claimed that China has been systematically using BGP hijacking as a cyberattack technique.

Put all of this together and it’s easy to jump to the conclusion that something deliberate and nefarious happened here, rather than simply blaming a momentary lapse.

But, as experienced network operators have already pointed out, if this were a deliberate hijack, it was a spectacularly clumsy and obvious one that didn’t work, because the ISP community at large quickly noticed and got it fixed.

Nevertheless, blunders of this sort do send network traffic where it wouldn’t usually go, giving even more people than usual the chance to sniff it, capture it, and comb through it later.

So it’s a great reminder of the slogan you’ll see on the T-shirts we like to wear in our live videos:

Dance like no one’s watching
Encrypt like everyone is

(Yes, you can buy those shirts in the Sophos online store :-)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Re-KmbqNEvg/

Sophisticated Campaign Targets Pakistan’s Air Force

Espionage campaign uses a variety of new evasion techniques.

A new campaign of exploits and malware has hit Pakistan’s Air Force, and it shows signs of being the work of a sophisticated state-sponsored actor in the Middle East. It also has implications for governments and organizations far from Pakistan’s borders, according to Cylance researchers.

The espionage campaign has been named “Operation Shaheen” in reference to the Shaheen falcon, the symbol of Pakistan’s Air Force. According to Kevin Livelli, director of threat intelligence at Cylance and one of the authors of the three bundled reports detailing the operation, Shaheen is frequently invoked in the phishing email messages used as launch vectors for the attacks.

After the email messages, though, the campaign quickly becomes highly sophisticated. The threat actor, dubbed the “White Company” by the Cylance researchers, uses an array of evasion and obfuscation techniques to hide the presence and operation of malware.

“The White Company is the first threat actor of any kind that we’ve encountered that targets and effectively evades no fewer than eight different antivirus products,” Livelli says. Those eight products — from Sophos, ESET, Kaspersky, Bitdefender, Avira, Avast, AVG, and Quick Heal — were then turned against their owners when the malware “surrendered” to the antivirus software on a specific date. The surrender, he says, seems intended to distract, delay, and divert the target’s resources after the espionage package had achieved persistence on the victim’s systems.

According to Livelli, the White Company’s campaign is notable not just for the sophistication of its evasion techniques, but for the many layers of obfuscation employed. As Tom Pace, senior director of consulting services at Cylance and another report author, explains, “One of the techniques is packing the malware, which is a common technique. They’re packing it in five different layers, which is pretty significant.” That’s because with each level of packing, there’s a risk of corrupting the exfiltrated data, making it unusable, he says.

“For the White Group to risk packing five times is indicative of a very good familiarity with leveraging this kind of tool, and it’s something we don’t really see very often,” Pace says. Most threat actors might pack their malware once or even twice, but five-level packing is “… both impressive technically, and something we don’t see,” he adds.

Operation Shaheen is not the only White Company campaign under way, either, though Cylance hasn’t yet completed the research to say who the other targets are. Even for those not currently in the group’s crosshairs, though, there are reasons to be concerned by this activity.

“If you apply the traditional techniques of investigating these kinds of incidents, you would have missed most of the key takeaways here and not really understood what was going on in the campaign,” Livelli says. “If [traditional techniques are] applied in another context, and you’re following the tried-and-true methods, you’re not going to learn the right answers.”

As for what to do with that concern, both Livelli and Pace suggest a redoubling of basic efforts. “Even people that are incredibly sophisticated, with no technical limitations to their skills, are still just sending emails,” Pace says.

And users can be trained to avoid those emails, he adds. “If you look at some of the titles of documents there, they are like a perfect example of things that you see in most companies’ security awareness program training,” he explains.


Article source: https://www.darkreading.com/attacks-breaches/sophisticated-campaign-targets-pakistans-air-force/d/d-id/1333253?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

RIP, ‘IT Security’

Information security is vital, of course. But the concept of “IT security” has never made sense.

Information security is vital, of course, and I’m not proposing its elimination. But it’s time to kill off the whole concept of IT security because it never made sense in the first place.

Without doubt, information security is an essential function for any organization that values its data and/or reputation. However, the term “IT security” leads to confusion about what the security roles are, where the responsibilities start and end, and how competing objectives between departments are prioritized. This is especially important because information security has priorities opposed to those of the IT department.

Defining Terms
There are two possible meanings for “IT security.” One focuses on the types of controls under its scope, applying only to technical safeguards and putting aside the administrative and physical ones. The alternative definition specifies the department that security is concerned with — IT, that is — and ignores information maintained by other departments.

Just as there is no marketing-specific security or HR-specific security, an IT-specific security focus makes little logical sense. Placing information security within another department leads to a narrow, short-sighted implementation of information security for the whole organization.

Where IT Security Fails
When primary security expertise is located under the IT department, the perspective is restricted to the realm of the IT department and veers from information security’s traditional holistic, organizational oversight. These two departments have different concerns, risks, and priorities. Among IT’s priorities are adaptability, technical features, and efficiency; infosec’s priorities include confidentiality, integrity, and availability. Some occasional overlap will exist, but it is not significant enough to overcome the glaring differences and frequent conflicts between the two.

An objective for one department introduces risks for the other. One example is the vulnerability scanning of network devices. Scans may cause additional scheduling headaches for IT, misbehaving devices, and user complaints about the efficiency of the systems. The consequence of a seemingly innocuous scan is that IT must temporarily put aside its priorities in order to react to these complications. So, obviously, vulnerability scanning is not on the IT wish list.

These frustrations work in both directions. New features that IT implements introduce more vulnerabilities, more systems to secure, and more risks. Information security staffers have additional headaches for every new system introduced to the environment. Therefore, a separation with distinct executive authorities should be maintained to serve as a check and balance to each other in the same way that a finance department exists to create a budget and prevent one department from spending all of the company’s profits.

CIOs Are Not CISOs
I have the utmost respect for CIOs and the responsibilities that are endlessly heaped upon them. However, one responsibility they should not be tasked with is information security. Infosec has its own skill set and mindset, different from IT’s, because of how people in that role have been trained and conditioned. That difference in expertise leads to a difference in position: CIOs are specialists in IT, and CISOs are specialists in information security.

It’s human nature for us to have a bias toward the things we know best. If security is under IT — as is the case with IT security — this bias will relegate the security objectives to secondary priorities, with IT goals taking precedence. Or, as sometimes happens, security objectives become merely an afterthought to IT’s priorities. 

Infosec Done by IT
It’s true that many organizations aren’t large enough to justify an entire department for information security – especially when that group may be only one or two individuals, perhaps without a specialist. Usually in these situations, the responsibility for security controls is incorporated into the role of whichever IT staff member is working on a function that overlaps both IT and infosec. This keeps the IT department as both the implementor and verifier of its own work, a significant conflict of interest.

A Match Made in Heaven
Almost every organization that has begun to implement security controls already has either a risk management or internal audit department. Either of these departments is a better fit for an emerging information security group. A significant aspect of infosec is the verification of controls, and independence from the operational duties of that which is being evaluated is key. This leaves the IT department as the implementor of the technical security controls, just as before. However, the governance of deciding which controls to implement and the verification of their effectiveness now reside with an impartial entity outside of IT.

Give Infosec Some Respect
Information security deserves an equal footing just like any other foundational department, such as accounting, marketing, or IT. Conflicts between the different priorities should be settled by senior executives from multiple disciplines looking holistically at the costs and benefits instead of the inherently IT-focused CIO handling this intradepartmentally.

Make the Move
If you have an IT security group or security-focused staff that takes direction from and reports to the head of IT (i.e., the CIO, VP of IT, etc.), then take the opportunity to properly segregate these duties and conflicting interests from one another. Move the processes for verifying technical, administrative, and physical information security controls to a department separate from the one implementing them.


Article source: https://www.darkreading.com/vulnerabilities---threats/rip-it-security/a/d-id/1333236?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Netskope Announces Series F Funding Round

The $168.7 million round will go toward R&D and global expansion, says the cloud access security broker provider.

Netskope, a cloud access security broker (CASB) provider, has announced a $168.7 million Series F funding round. The company says it plans to use the funds to increase R&D and global expansion efforts.

The Series F round was led by existing investor Lightspeed Venture Partners. Current investors Accel, Geodesic Capital, Iconiq Capital, Sapphire Ventures, and Social Capital also participated, as well as new investor Base Partners. The round brings Netskope’s total funding raised to $400.1 million.

Read more here.

 


Article source: https://www.darkreading.com/cloud/netskope-announces-series-f-funding-round/d/d-id/1333256?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Does wiping your iPhone count as destroying evidence?

Police are accusing a 24-year-old woman, arrested in connection with a drive-by shooting, of remote-wiping her iPhone and thereby destroying evidence – a felony offense.

Her defense: I don’t even know how to do that!

Daniel Smalls, the lawyer for the accused – 24-year-old Juelle L. Grant, of Schenectady, New York – on Monday told the local news outlet The Daily Gazette that his client wasn’t involved in the shooting, in which no one was injured; that she “didn’t access anything to remotely delete anything”; and that she “wouldn’t have any knowledge how to do that.”

His client is not a computer-savvy person, Smalls said. In fact, his staff is puzzling out this “remote wipe” thing now, he said:

We’re doing research on it ourselves.

Last week, police said that they believe that Grant may have been the driver of a vehicle involved in a drive-by shooting last month, so they seized her iPhone X as evidence at the time.

But then, according to court documents, Grant allegedly remote-wiped the device, in spite of knowing full well that the police intended to inspect it for possible evidence:

The defendant was aware of the intentions of the police department at the conclusion of the interview with her.

Police arrested Grant on 2 November and charged her with three felonies: two counts of tampering with physical evidence and one count of hindering prosecution. According to The Daily Gazette, one of the tampering charges has to do with the remotely wiped phone, while the other tampering charge and the hindering charge are concerned with her alleged actions on the day of the shooting.

She’s accused of driving the shooting suspect from the scene and concealing the shooter’s identity. By allegedly driving the suspect away, police say that she also helped remove another piece of evidence: the gun.

The case raises a few questions, firstly: Why didn’t Schenectady police store Grant’s phone in a Faraday bag? It would have blocked remote access to the device and thus made the remote wiping impossible. Anybody can buy one online.

The Daily Gazette asked, and the answer amounted to head scratching. City police spokesman Sgt. Matthew Dearing told the newspaper that he didn’t know if detectives had such technology, but he’d check. As of late Thursday, he hadn’t heard back.

Also, Grant’s lawyer said that Grant got a new phone in the days after her iPhone was seized… could that have affected the data on her old phone?

Easy wipey presto gone-zo

Ms. Grant’s professed ignorance of how to remote-wipe her iPhone notwithstanding, it is, in fact, easy as pie. There are plenty of useful Android apps (including one from Sophos) that you can use to remote-wipe a device, or you can simply use Apple’s own Find My iPhone service to erase an iPhone.

Doing it on purpose is one thing. But what if your device is set to erase after X number of hours if you haven’t unlocked it? That’s what one Redditor pondered:

Say you set up a dead man switch on your phone. And you have to enter a code say every 24 hours or it wipes your stuff.

If you are arrested and your phone confiscated and you can’t put in your code while it’s in evidence, would that count as destroying evidence?

That’s a great question. Unfortunately, as far as I can tell, it’s a hypothetical one, given that there doesn’t seem to be any such app that will automatically wipe a phone if it hasn’t been touched in X number of hours.

Not that nobody’s ever thought to ask, mind you. This guy did, a year ago. The only thing his Reddit respondents could come up with were suggestions that were variations on the ability to consciously, purposefully remote-wipe, as opposed to set-it-and-forget-it.

Of course, set-it-forget-it would theoretically give a defendant such as Ms. Grant an excuse to avoid that felony charge of willful tampering with evidence …as in, “Oh, dear, did I forget to inform you that I have a dead-man switch on my phone? Silly me. Of course I never would have let my phone remotely, automatically purge itself of any potential evidence on purpose.”

But are there ways that somebody else could have wiped her iPhone?

Yes, if somebody else had her iCloud account credentials and managed to log in from the same IP address. Likely? Well, that’s another question entirely. It’s up to her lawyer and prosecutors to hash out the question of just how likely it is that Ms. Grant had her iCloud account hacked just in time for potential evidence to go up in smoke.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/A5HNoRzV7eU/

Between you, me and that dodgy-looking USB: A little bit of paranoia never hurt anyone

Arriving at a recent conference organised by one of the government’s many regulatory bodies, I received my obligatory lanyard – and something else, credit-card-shaped, emblazoned with the branding for the event. “What’s this?” I asked.

“Oh, that’s a USB key.”

I presume the conference organisers mistook my wild-eyed stare of disbelief as one of benevolent gratitude and admiration for their consideration of my storage needs. Who could have thought this gift a good idea? Someone who had never heard of Stuxnet, or of any of the now-too-numerous-to-count stories of USB keys being used to infiltrate organisations, exfiltrate data, even destroy computers?



Then I wondered if I was becoming paranoid – or not paranoid enough.

Times have changed. Technology has become a multitrillion-dollar business upon which the fate of nations and whole economies now depend. Over the last 30 years, technological risk became a major consideration in national strategic risk, and once that happened it became impossible to view the operations of the technology sector through a purely economic lens.

We spend plenty of time digesting earnings reports while blithely ignoring other considerations – the politics and mechanisms of power – because they don’t fit neatly onto balance sheets, leaving us open to all sorts of attacks – everything from industrial espionage to sabotage to the poisoning of algorithms to support political ends. (Hello, Facebook!)

Maybe that’s because those of us with long careers in technology don’t want to bear the full weight of responsibilities that have grown as computing has become fundamental to the operation of almost every process, everywhere. Where once we obsessed about “keeping things up and running”, there’s a tacit recognition this now means “keeping the world from imploding”, a task that feels as though it becomes more difficult by the day, as others step in and work to steer things to serve their own ends.

All of that awareness landed in my hand with that USB key, at once just an innocent gift and simultaneously inspiring a dark chain of thought about provenance, chain of custody, country of production, and the economic benefits of having access to a very select set of government agencies and commercial firms represented at this conference. A rich surface ripe for attack, and this device a possible vector.


Attacks and penetrations represent costs of doing business in a connected world. They too have been reduced to lines on a balance sheet, carefully hidden away in an accounting of “cyber” costs that banks and other financial institutions bury so that their customers continue to see them as stable, reliable and secure. Yet for as long as these attacks continue – and succeed – hiding that fact may be doing us more harm than good.

Somewhere in the wide range between ignorant and terrified we have to find a new place to have a conversation about security. A little nuance could serve us and our institutions well, giving us some ways to think constructively and proactively about how we want to inform our practice as technologists with a broader awareness of this sector’s importance to both economic and national security.

A few organisations already send out fake phishing emails, offering the employees caught up in these pseudo-scams additional training (and, presumably, unbeknownst to them, additional monitoring). While a good beginning, we need a broader education in “Practical Paranoia”: how to tell the difference between commercial interest and national interest; between marketing hype and political propaganda; between authentic relationship and clever manipulation. Without that training – and the techniques flowing from it – technology will remain the plaything of those who have mastered the arts of control. This industry will continue to be exposed as ignorantly serving the ends of the powerful, rather than our customers.

In an era of pervasive autonomous systems, we need to provide assurance that autonomous devices will perform as designed, will not spy or go rogue or otherwise act to destroy the confidence essential both to commercial success and to the public’s perception of safety. We don’t have that. We don’t even have a strong sense of why we need it, operating as though we’re still in the halcyon world of the mid-1990s, when there were no enemies anywhere. Perhaps a bit more paranoia would serve us well. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/13/security/

2018 On Track to Be One of the Worst Ever for Data Breaches

A total of 3,676 breaches involving over 3.6 billion records were reported in the first nine months of this year alone.

It has been another brutal year for organizations, according to a new report summarizing data breach activity in the first nine months of 2018.

On the one hand, the number of reported data breaches this year between Jan. 1 and Sept. 30 was down 8% compared with the same point last year. In addition, the number of exposed records for the first nine months of this year was lower by a substantial 49%. Yet at the same time, the numbers still translated to 3,676 breaches and a staggering 3.6 billion records compromised.

That puts 2018 on track for the second-highest number of reported breaches in a year and the third-highest number of records exposed overall since 2005, according to Risk Based Security, which analyzed breach data gathered from public sources, through automated and proprietary processes, and other means.

Seven of the breaches this year exposed 100 million or more records, and the 10 largest accounted for more than eight in 10 of all records compromised. Among those suffering major data breaches this year were Facebook, Under Armour, Ticketfly, and Hudson’s Bay Company.

That there were fewer data breaches and records compromised in the first nine months of 2018 compared with the same period last year could be because attackers were more engaged in crypto-currency mining activities in the early part of this year. There were also no catastrophic events like the WannaCry and Petya/NotPetya outbreaks of 2017, at least through the end of September. But that does not mean the threat has become any less.

“Breaches are not going away; the problem is not getting better,” says Inga Goddijn, executive vice president of Risk Based Security. “There is still money to be made by stealing sensitive and confidential data.”

Despite mounting regulatory pressures, this year saw little improvement in the interval between when organizations first discover a breach and when they publicly disclose the event. In 2017, organizations took an average of 47 days to publicly disclose an event; this year the number stood at 47.5 days.

For all the investments that organizations are making in breach detection and response, most discover a breach only after being informed of it by an external party. Just 483 – or 13% – of the 3,676 publicly reported data breaches were discovered internally, according to Risk Based Security. In well over half the reported breaches – 2,171 – the breached entity did not know about the intrusion until being informed by a third party.

“The vast majority of breaches are still uncovered by external sources, such as law enforcement or banks detecting fraudulent activity, then alerting the organization they may have an issue,” Goddijn says. “Until we get better at finding breaches in-house, I’m skeptical we’ll see much improvement [in breach reporting].”

As has been the case for several years, insiders posed the biggest threat to data. Fraud – a term that Risk Based Security uses to describe any sort of malicious insider activity or non-technical method of illegally accessing data – accounted for nearly 36% of the records compromised.

In fact, some of the most damaging incidents this year resulted from insiders selling access to databases containing sensitive data, Goddijn says. More than 30 of 51 data breaches involving intellectual property in the first nine months of 2018 stemmed from inside the organization. In addition to malicious activity, many organizations suffered data compromises because of employees and others with insider access mishandling assets.

Email addresses, passwords, names, and addresses were the most commonly exposed data types. But 18% of the breaches exposed Social Security numbers, 15% involved credit card data, and 11% compromised birth dates.

While insiders were responsible for the largest number of records compromised, hacking by external parties continued to be the primary cause of security incidents at most organizations.

“Typically, hacking is financially motivated, whether it be to steal data that can later be monetized or leverage system access for some other operation that ultimately generates income for the actor,” Goddijn says. But there were other causes for external hacking as well, including political motivations and curiosity, she adds.

Somewhat surprisingly given current regulatory pressures, about 35% of organizations that suffered a breach this year did not or were not able to disclose the number of records impacted in the incident.

Ironically enough, many of these breaches were less significant than the refusal to disclose details might suggest, Goddijn says. More than 48% of all breaches, in fact, exposed between one and 1,000 records. “We’ve become so accustomed to seeing headline-busting breaches – with hundreds of thousands or even millions of records lost – that when the number is ‘undisclosed,’ people have a tendency to assume the worst,” she notes.


Article source: https://www.darkreading.com/vulnerabilities---threats/2018-on-track-to-be-one-of-the-worst-ever-for-data-breaches/d/d-id/1333252?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to fit all of Shakespeare in one tweet (and why not to do it!)

Here at Naked Security, we’ve written about steganography before.

Steganography is a fascinating trick for sending secret messages – and it’s intriguingly different from cryptography, even though the two techniques are often lumped together as if they were the same.

Simply put, cryptography scrambles messages so that only the intended recipient can read them.

Generally speaking, attackers who intercept encrypted messages know that something was said, but can’t figure out what it was.

For example, I confidently predict that you will not be able to unravel the original content of this coded message:

   CBTEM YPQNE TUTMQ WLJFJ FKRBG
   OFYIA DQTLP GNCBD LDOHN AHMOR
   MHUUG EJOWN CSCXA VGTUH SXTLN
   BOTXN ATFMU WLHID RXDWC IJMEA
   KWQEI PUGMF KPHSL HCHUY TMUJE

But those five-letter groups, reminiscent of an enciphered World War Two message, will almost certainly make you suspicious.

You might therefore assume I’ve got something important to hide, a fact that could land me in even hotter water than if I’d blurted out the message openly.

Steganography therefore aims to disguise messages not only to keep the contents secret, but also to hide the existence of a message in the first place.

The process could be as simple as agreeing in advance on a special interpretation of a casual word or gesture.

According to rumours, if Queen Elizabeth II innocently twiddles her wedding ring while at an event, it’s a discreet signal that she’d like one of her entourage to extract her gracefully from her current conversation so she can move on to other guests without causing offence.

Hiding in images and sound

Another approach to steganography involves finding existing files or messages that contain data that’s unimportant and unlikely to be scrutinised, and replacing it with secret data instead, thus effectively hiding in plain sight.

One well-known trick for sneaking data into files that are popularly shared involves adding what’s supposed to look like “noise” into files such as photos, videos or music.

For example, each pixel in an 8-bit greyscale photo can have a value from 0 (totally black) to 255 (brightest white), even though you’d be hard pressed to tell the difference between a grey level of, say, 129 and 130, or between 8 and 9.

In fact, it might be technically impossible for you to discern a difference, given the combination of inaccuracies introduced by your camera when capturing the image, your screen when displaying it, and your eye when viewing it.

You can therefore replace some of the data in your image to encode information from somewhere else, for example by overwriting the bottom few bits of each image pixel with data of your own.

This leaves you with a hybrid file that is part image and part personal information.

But if anyone opens up your image-with-data-stashed-inside, it still looks like an image, albeit a noisy or low-quality one.
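If you want to see the bit-twiddling concretely, here is a minimal sketch of least-significant-bit embedding, written in Python and operating on a bare list of 8-bit pixel values so it needs no image library (a real tool would read and write actual image files):

   # Hide bytes in the lowest bit of successive greyscale pixel values.
   def embed(pixels, data):
       bits = [(byte >> i) & 1 for byte in data for i in range(8)]
       out = list(pixels)
       for i, bit in enumerate(bits):
           out[i] = (out[i] & 0xFE) | bit      # clear the LSB, set our bit
       return out

   def extract(pixels, nbytes):
       bits = [p & 1 for p in pixels[:nbytes * 8]]
       return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                    for i in range(nbytes))

   image = [129, 130, 8, 9] * 16               # 64 arbitrary grey levels
   stego = embed(image, b"hi")
   print(extract(stego, 2))                    # -> b'hi'
   print(max(abs(a - b) for a, b in zip(image, stego)))   # at most 1

No pixel changes by more than one grey level, which is exactly why the alteration is so hard to see.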

You can then use public services such as social media websites and photo-sharing services to distribute your hidden-message pictures in an innocently open sort of way.

Unfortunately, images distributed via online photo-sharing sites often get altered, or transcoded, along the way, on the grounds that online versions don’t have to be perfect copies of the original.

The image someone else sees after you’ve uploaded the original may have been scaled down to a standardized size, converted lossily between different formats, re-compressed to save space, or any combination of these transformations.

The data you’ve hidden in the image pixels is unlikely to survive this sort of modification intact.

Also, the more data you try to hide in the pixels of an image, the less natural the image looks, which limits the data-carrying capacity of steganographic images dramatically.

Data that’s been embedded covertly in files such as images can often be detected because the resulting files don’t quite look right.

Data stashed this way is typically a pseudorandom sequence of bits – what you’d get if you compressed or encrypted your personal data before hiding it in the image. Although this randomness looks like typical digital noise to the human eye, it doesn’t really match the noise produced by camera sensors, which is unpredictable but not uniformly random. Indeed, images that seem to have been doctored in this way may arouse even more suspicion than explicitly encrypted information, and analytical tools exist that aim to detect unnaturally created data of this sort.
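As a toy illustration of the sort of statistical tell those tools look for: the low bits of compressed or encrypted data are split almost exactly 50/50 between ones and zeros, whereas the low bits of structured image data usually aren’t. This is only the crudest possible sketch – real steganalysis uses far more robust statistics:

   # Compare the ones-ratio of pixel LSBs: random payloads sit near 0.5.
   import random

   def lsb_ones_ratio(pixels):
       return sum(p & 1 for p in pixels) / len(pixels)

   natural = [128 + (i % 7 == 0) for i in range(10_000)]           # structured
   stego   = [128 | random.getrandbits(1) for _ in range(10_000)]  # random bits

   print(round(lsb_ones_ratio(natural), 3))   # far from 0.5 (about 0.143)
   print(round(lsb_ones_ratio(stego), 3))     # close to 0.5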

Hiding publicly on Twitter

This gave security researcher David Buchanan food for thought.

Just how much data could you stash reliably in a Twitter image upload, for example?

Twitter usually scales, crops, compresses and transcodes your images before publishing them via its website or in its apps – indeed, there are good cybersecurity reasons for Twitter to stick to a well-defined set of image formats, sizes and content types.

Lots of image formats allow you to add what’s called metadata – information about the information in the image itself – to record details such as where a photo was taken, what camera type was used, what lens and exposure settings were used, and your personal comments about the image.

Twitter, along with many other on-line services, keeps some of the metadata from your images, but dumps the rest, at least in part to stop you sneakily using its servers to store or distribute data that isn’t really supposed to be there.

Buchanan quickly found that Twitter discarded most of the metadata in the images he uploaded, but not all of it.

Twitter always seems to keep added metadata of a type called ICC_PROFILE, short for International Color Consortium cross-platform profile format.

Because the ICC profile is supposed to explain how to display the image, rather than how it was originally captured or what the creator thought of it, retaining the profile data seems like a vital part of rendering it correctly when it’s viewed in a browser or an app.

But the ICC Specification, which runs to 130 pages, offers plenty of little-known and rarely used components for stashing data that Twitter will retain but subsequently never use, provided that each chunk of data you embed is less than 64KB in size.

Data formats with 64KB limits typically have that restriction for exactly the same reason that early microcomputers couldn’t use more than 64KB of RAM – a consequence of how much storage was used to keep a count of how much storage was needed. If you routinely allocate just two bytes to store the sizes of any lumps of data in your file, you can’t keep track of lengths longer than 16 binary digits, for the same reason that the 5-digit mechanical odometers in older cars wrap round after 99,999 kilometres and go back to reading zero. The binary number 1111111111111111 (0xFFFF in hexadecimal, 65,535 in decimal) effectively fills up two bytes of storage – you can’t add to that number without wrapping back to zero and confusing your application.
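You can watch that ceiling in action with a couple of lines of Python – a two-byte unsigned field simply cannot represent 65,536:

   # A two-byte big-endian length field tops out at 0xFFFF = 65,535.
   import struct

   print(struct.unpack(">H", struct.pack(">H", 0xFFFF))[0])   # 65535

   try:
       struct.pack(">H", 0x10000)             # one past the ceiling...
   except struct.error as err:
       print("65,536 does not fit in two bytes:", err)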

Buchanan was able to perform the following data-stashing trick:

  • Take the Complete Works of Shakespeare, published for free in a single HTML file by Project Gutenberg, and weighing in at 7,033,657 bytes.
  • Compress it into a multipart RAR archive. Each archive part was limited to 64,512 bytes, for 31 parts in all totalling 1,938,197 bytes. (English text compresses rather well, because it uses a few characters very often, some characters rarely, and many characters not at all.)
  • Package the 31 RAR files into a single ZIP file. ZIP files contain internal headers that allow an unzipping program to skip over additional bytes between each file embedded in the ZIP.
  • Arrange extra data between each ZIP component to form a large and correctly formatted, though purposeless, ICC_PROFILE data file.
  • Add the ICC_PROFILE data to a tiny 64×64 pixel JPEG image of William Shakespeare, Esq.
  • Upload the image to Twitter.
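Buchanan’s ICC-profile packaging is what survives Twitter’s processing, but the underlying see-no-evil behaviour of ZIP readers is easy to demonstrate with plain concatenation, which Python’s standard zipfile module happily tolerates. The file names below are invented, and this naive variant is exactly the sort of thing image transcoding would normally destroy:

   # ZIP readers find the archive via a directory at the END of the file,
   # so junk prepended to a valid ZIP - a JPEG, say - is simply ignored.
   import zipfile

   with zipfile.ZipFile("payload.zip", "w") as z:
       z.writestr("shakespeare.html", "<html>To be, or not to be...</html>")

   with open("payload.zip", "rb") as zf, open("cover.jpg", "wb") as out:
       out.write(b"\xff\xd8\xff\xe0 pretend jpeg data \xff\xd9")   # stand-in JPEG
       out.write(zf.read())

   # The combined file still opens as a perfectly good ZIP archive:
   print(zipfile.ZipFile("cover.jpg").namelist())   # ['shakespeare.html']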

If you view his tweet, the image looks like a low-resolution 10KByte image of Shakespeare…

…but if you download the image, you can feed it directly into an UNZIPping program, which will ignore all the interleaved data – including the image itself – that doesn’t belong to the compressed parts of the ZIP:

   $ unzip downloaded-image.jpg
   Archive:  downloaded-image.jpg
   error [downloaded-image.jpg]:  missing 454 bytes in zipfile
     (attempting to process anyway)
    extracting: shakespeare.part001.rar
    extracting: shakespeare.part002.rar
    extracting: shakespeare.part003.rar
    [. . . .]
    extracting: shakespeare.part029.rar
    extracting: shakespeare.part030.rar
    extracting: shakespeare.part031.rar
   $

Thanks to a rather forgiving file format, your unzipper spits out 31 sequentially named fragments of a RAR archive.

If you then feed the first fragment into an UNRARing program, the unarchiver will correctly figure out that it’s supposed to decompress all the numbered fragments in turn, spitting out shakespeare.html:

   $ unrar x shakespeare.part001.rar
   UNRAR 5.61 freeware      Copyright (c) 1993-2018 Alexander Roshal

   Extracting from shakespeare.part001.rar
   Extracting  shakespeare.html               3%
   Extracting from shakespeare.part002.rar
   ...         shakespeare.html               6%
   [. . . .]
   Extracting from shakespeare.part030.rar
   ...         shakespeare.html              99%
   Extracting from shakespeare.part031.rar
   ...         shakespeare.html              OK 
   All OK
   $

You can then open the HTML file directly in your browser and read the Bard’s collected works at your leisure, where you will find, both literally and figuratively, that:

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

Buchanan reported this to Twitter as “a bug”, which we think is a reasonable assessment…

…but Twitter replied that it wasn’t a bug, meaning that it seems fair to treat it as a feature.

Should you do it?

Interestingly, if you look at the etymology of the word steganography – from the Ancient Greek στεγανός [stegaNOS], meaning covered or water-tight – you can argue that this technique does, literally at least, qualify as steganographic.

The stashed data is packaged up safely, and won’t leak out or disappear after you upload it to Twitter.

But the data isn’t κρυπτός [krypTOS], meaning hidden or secret, at all – the bloated size of the image file is a dead giveaway.

In other words, if you are looking for a steganographic technique that actually conceals your data as well as containing it – the way that the word steganography is usually used these days – then be careful.

This technique is not for you if your need is to hide your data in plain sight, and might even attract additional scrutiny if you are spotted using it in real life.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KdLCTURA6_Q/