
Shock! Hackers for medieval caliphate are terrible coders

DerbyCon An analysis of the hacking groups allying themselves to Daesh/ISIS has shown that about 18 months ago the religious fanatics stopped trying to develop their own secure communications and hacking tools and instead turned to the criminal underground to find software that actually works.

Kyle Wilhoit, a senior security researcher at DomainTools, told the DerbyCon hacking conference in Kentucky that while a multiplicity of hacking groups with differing aims have consolidated themselves under the banner of the United Cyber Caliphate (UCC), their coding skills and opsec are “garbage.”

A few years ago the UCC created three apps for its followers to use – some script-kiddie-level malware that was riddled with bugs, a version of PGP called Mujahideen Secrets that the NSA just love, for all the wrong reasons, and a DDoS tool called “Caliphate Cannon” that was laughably poor.

“ISIS is really, really bad at the development of encryption software and malware,” Wilhoit said. “The apps are sh*t, to be honest; they have several vulnerabilities in each system that render them useless.”

Wilhoit said the Daesh-bags have therefore started using mainstream communication systems like Telegram and Russian email services popular with online criminals to communicate. Even so, their lousy security is getting members killed.

He recounted how he’d found an open server online containing photographs of active military operations by ISIS in Iraq and Syria, which were to be used for propaganda purposes. However, the uploaders had left all the metadata in the photographs, including GPS coordinates, making the locations trivial to pinpoint. Little wonder four of the groups’ IT leaders have been killed in the last two years by drone strikes.
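Pulling that metadata out is trivial for anyone who downloads the images. As a minimal sketch – assuming the Pillow library, and with a hypothetical filename – the GPS tags come out in a few lines of Python:

    # Minimal sketch: reading GPS coordinates from a photo's EXIF metadata.
    # Assumes the Pillow library (pip install Pillow); the filename is hypothetical.
    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    exif = Image.open("propaganda_photo.jpg")._getexif() or {}

    # Map numeric EXIF tag IDs to human-readable names
    labelled = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    # The GPSInfo block holds latitude and longitude as degree/minute/second tuples
    gps = {GPSTAGS.get(tag_id, tag_id): value
           for tag_id, value in labelled.get("GPSInfo", {}).items()}

    print(gps.get("GPSLatitude"), gps.get("GPSLongitude"))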

Many become one

Wilhoit also used his DerbyCon presentation to detail the formation of at least four specialised Islamic hacking groups. One, the Caliphate Cyber Army, for example, formed about four years ago and concentrated on online defacement of websites.

The Islamic State Hacking Division concentrates on trying to get into government databases in the US, UK and Australia so that they can compile and publish kill lists of targets. To date there is no evidence that this group has succeeded. Wilhoit said that’s because their technical skills are negligible.

The Islamic Cyber Army focuses on researching basic information about power grids, with a sideline in defacing websites. There’s no evidence they have actually managed to break into a power company; instead, they share basic information about such systems online, Wilhoit opined.

The Sons of the Caliphate Army are another online group who caused a brief stir when they claimed to have plans to kill Mark Zuckerberg. Obviously they’re yet to succeed and now work under the UCC banner.

One unifying theme of these groups’ work is their stunning ineptitude and lack of success. They will deface a website few people visit and claim a success, or try to launch a DDoS attack using a couple of dozen infected PCs.

The terrorists are also fond of using social networks to recruit members. Wilhoit said Facebook takes such pages down within 12 hours and Twitter pulls accounts before the number of followers reaches triple figures.

Even attempts to use the internet for fundraising have been problematic, he said. While some of the groups mentioned above have solicited Bitcoin donations to help them buy weapons, scammers have adopted the same tactics and Islamic State stylings, diluting the donations that actually reach the extremists.

“If UCC gets more savvy individuals to join then a true online terrorist incident could occur,” Wilhoit concluded. “But as it stands ISIS are not hugely operationally capable online. As it is right now we should be concerned, of course, but within reason.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/25/extremist_hackers_dubious_competence/

Guess – go on, guess – where a vehicle tracking company left half a million records

A US outfit that sells vehicle tracking services has been accused of leaving more than half a million records in a leaky AWS S3 bucket.

The Kromtech Security Centre, which has made belling this particular cat its hobby, says it found a total of 540,642 ID numbers associated with SVR Tracking, an outfit that uses GPS devices to track vehicles so they can be found if their owners fall into arrears on payments.

Kromtech says data left lying around includes “logins / passwords, emails, VIN (vehicle identification number), IMEI numbers of GPS devices and other data that is collected on their devices, customers and auto dealerships”.

The passwords were hashed. But the database also records where on a particular vehicle the tracking device was hidden.

Some 339 logfiles from the trove included maintenance records for vehicles, among other data, and identified 427 dealerships using SVR’s tracking devices.

Because SVR Tracking doesn’t know when a car might be stolen (or how long will elapse before a theft is discovered) the tracking devices send their location more-or-less continuously back to the company’s database, where it’s kept for 120 days.

That means if a miscreant had accessed the dataset, they would have been able to pinpoint a vehicle’s location for that period of time.

Kromtech says it informed SVR Tracking of the problem and the bucket has since been secured.
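Misconfigurations of this sort are straightforward to audit for. A rough sketch using boto3, AWS’s Python SDK – the bucket name is invented – that flags any ACL grant made to the world:

    # Rough sketch: flag S3 ACL grants that expose a bucket to everyone.
    # Assumes boto3 with AWS credentials configured; the bucket name is invented.
    import boto3

    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket="example-vehicle-tracking-data")

    for grant in acl["Grants"]:
        grantee = grant["Grantee"]
        # A grant to the AllUsers group means anyone on the internet has access
        if grantee.get("URI", "").endswith("AllUsers"):
            print("PUBLIC:", grant["Permission"])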

SVR Tracking posted a statement on its website:

While SVR is not in a position to confirm the accuracy of everything reported by others, Kromtech contacted SVR on September 20, at which point we immediately began our own investigation into an incident concerning one of our data repositories. Within 3 hours, SVR fixed the repository configuration vulnerability Kromtech identified. SVR’s investigation into potential unauthorized access to the repository is ongoing, and we will take any further steps reasonably necessary to help safeguard sensitive information pertaining to our customers.

What a relief. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/25/svr_tracking_records_leak_from_insecure_s3_bucket/

Adobe security team posts public key – together with private key

Finnish security researcher Juho Nurminen is a bit of a retweet celebrity right now, for all the wrong reasons.

Not his wrong reasons, but the wrong reasons of Adobe’s Product Security Incident Response Team (PSIRT).

To explain.

Most security teams publish encryption keys so that you can communicate securely with them, using a public key cryptography tool such as PGP or GPG.

Public key encryption, also known as asymmetric encryption, is the sort that uses two keys, rather than one: a public key to lock a file, and a corresponding private key to unlock it.

(You generate these keys, which act as a sort of mathematical function and its inverse, as a pair, and although it’s pretty quick to generate a keypair, it’s as good as impossible – computationally infeasible, in the jargon – to figure out the private key given the public key.)

It’s probably obvious from the nomenclature here that the PUBLIC key is for locking files, and that you share it, well, PUBLICLY so that anyone can encrypt data to send it to you securely.

The PRIVATE key is for unlocking data that was encrypted with your public key, and as long as you keep it, you know, PRIVATE, then only you can ever decrypt that data.

At the risk of labouring the point, you make the public key public, and you keep the private key private.

For this reason, PGP/GPG keys, when converted into text format for easy storage and use, look something like this:

PSIRT PGP Key (0x33E9E596)

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: Mailvelope v1.8.0
Comment: https://www.mailvelope.com

[redacted]

-----END PGP PUBLIC KEY BLOCK----- 

-----BEGIN PGP PRIVATE KEY BLOCK-----
Version: Mailvelope v1.8.0
Comment: https://www.mailvelope.com

[redacted]

-----END PGP PRIVATE KEY BLOCK----- 

When you want to send someone your public key, the ---BEGIN--- and ---END--- blocks are there as a visual clue to help you copy and paste the right one.
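The safest approach is never to put the private block on your clipboard in the first place. Here’s a minimal sketch using the python-gnupg wrapper – the email address and passphrase are placeholders – that generates a keypair and exports only the public half:

    # Minimal sketch: generate a PGP keypair and export ONLY the public half.
    # Assumes the python-gnupg package and a local GnuPG install; the email
    # address and passphrase below are placeholders.
    import gnupg

    gpg = gnupg.GPG()
    key = gpg.gen_key(gpg.gen_key_input(
        key_type="RSA", key_length=4096,
        name_email="psirt@example.com",
        passphrase="correct horse battery staple"))

    # export_keys() with default arguments returns the PUBLIC block only;
    # exporting the private block needs an explicit secret=True (plus the
    # passphrase, under GnuPG 2.1 and later).
    public_block = gpg.export_keys(key.fingerprint)

    # If this ever trips, you were about to paste the wrong thing
    assert "PRIVATE KEY" not in public_block
    print(public_block)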

Adobe, it seems, generated the keypair shown above within the past few days, giving it a lifetime of one year, presumably to replace last year’s now-expired key.

Of course, a new public key is no use until you make it public, so Adobe popped it onto the PSIRT blog…

…but posted the whole keypair, public and private.

Someone who has Adobe’s private key can not only post messages that apparently have Adobe’s imprimatur, but also decrypt messages that people sent to Adobe under the assumption that only the PSIRT would ever get to read them.

Fortunately, as far as we can see, Adobe’s (now-revoked) private key was itself encrypted with a passphrase, meaning that it can’t be used without a secret unlock code of its own, but private keys aren’t supposed to be revealed even if they are stored in encrypted form.

If you let your PGP/GPG private key slip, your leak cuts both ways, potentially affecting both you and the other person in the communication, for messages in either direction.

What to do?

  • Don’t use Adobe’s public key with the PGP fingerprint shown below to send information to PSIRT.
  • Don’t trust any messages signed with this key, even though the leaked private key was encrypted.
  • Don’t make this mistake yourself if you use public-key cryptography tools. (It’s an easy mistake to make when you’re copying text – so, to borrow a saying from carpentry, measure twice, cut once.)

We’ll say it one more time: make the public key public, and keep the private key private!

Key details:
   Adobe PSIRT psirt@adobe.com
       public key:   4096R/AF877616 2017-09-18 [expires: 2018-09-18]
       fingerprint:  3EF4 735A 73A6 7BFB DCC1  1BC9 86C0 0FC2 AF87 7616


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dJ9DQLIBDrY/

Tracking phones without a warrant ruled unconstitutional

A Washington DC Court of Appeals said on Thursday that law enforcement’s warrantless use of stingrays—suitcase-sized cell site simulators that mimic a cell tower and that trick nearby phones into connecting and giving up their identifying information and location—violates the Constitution’s Fourth Amendment protection against unreasonable search.

The ruling (PDF) overturned the conviction of a robbery and sexual assault suspect. In its decision, the DC Court of Appeals determined the use of the cell-site simulator “to locate a person through his or her cellphone invades the person’s actual, legitimate and reasonable expectation of privacy in his or her location information and is a search.”

Here’s how the Fourth Amendment reads:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

In 2013, D.C. Metropolitan Police used a stingray to nab a suspect, Prince Jones. The Appeals Court on Thursday rejected the government’s defense, saying that doing so without a warrant “violated the Fourth Amendment.”

From the ruling:

We thus conclude that under ordinary circumstances, the use of a cell-site simulator to locate a person through his or her cellphone invades the person’s actual, legitimate and reasonable expectation of privacy in his or her location information and is a search. The government’s argument to the contrary is unpersuasive.

This could have a wide-ranging impact on policing tactics. According to a December 2016 report (PDF) from the House Oversight and Government Reform Committee, between 2010 and 2014 U.S. taxpayers spent $95 million on 434 cell-site simulator devices. Just one such device costs around $500,000.

But then again, it might not. As CBS News notes, privacy advocates are worried that Attorney General Jeff Sessions’ tough-on-crime stance might favor an approach to stingray surveillance that doesn’t stress the need for warrants to protect privacy rights.

At any rate, this decision follows a trend in court decisions about warrantless use of stingrays.

Last month, a federal judge in Oakland, California, ruled that police must generally have a warrant before they use a cell-site simulator. Notably, though, the judge didn’t suppress the evidence collected via stingray, clearing the way for a man who shot a police officer to be sentenced to 33 years in prison.

Last month’s decision brought an end to US v. Purvis Lamar Ellis, a case against three men who were charged with racketeering and the attempted murder of an Oakland, California police officer in 2013. A fourth man who’d pleaded guilty in April 2017, Damien McDaniel, was sentenced to 33 years in prison on Wednesday.

Yes, police must generally have a warrant, US District Judge Phyllis Hamilton wrote in a 39-page order denying Ellis’s request to suppress his cell phone records. That’s not too surprising: courts have been moving toward this interpretation of cell phone surveillance. In her ruling, Hamilton said that the use of a stingray (two, actually) to find Ellis constituted a search under the Fourth Amendment and therefore required a warrant.

Likewise, in 2015, the US Department of Justice issued a new, enhanced policy about stingray use, aimed at increasing privacy protection. It stated that thenceforth, all federal agencies had to obtain a search warrant – based on probable cause – before using cell-site simulators. That, being a policy change, was never actually written into law.

Nonetheless, within months, also in 2015, California followed suit, passing a law requiring a warrant to search computers, cellphones and tablets.

But there’s a big “but” in this particular case. Hamilton ruled that in the case of US v. Ellis, no warrant was needed, and so the evidence gathered by the stingrays wouldn’t be suppressed.

The reasoning behind dismissing the need for a warrant has to do with what’s known as “exigent circumstances.” In the context of criminal proceedings, that covers emergency situations where swift action is needed to prevent imminent danger to life or serious damage to property, to keep evidence from being destroyed, or to stop a suspect who’s taking flight. In situations with exigent circumstances, no warrant is required.

The case started with the attempted murder of a rival gang member by McDaniel and three other members of the Sem City gang in Oakland.

The men opened fire on the rival gang member when they found him on their turf, at a bus stop, on Jan. 20, 2013. They shot him in the head at close range while he lay on the ground, according to a Department of Justice statement, but the victim survived.

Police got a tip that the suspects’ getaway car was at an apartment complex in East Oakland the next day. Officer Eric Karsseboom went to the area in an undercover car to stake out the suspects when McDaniel saw him, opened the passenger door and grabbed Karsseboom’s off-duty gun from the center console, according to court records.

McDaniel tried to take another gun from Karsseboom’s waistband while other Sem City gang members held guns on the officer. The gang members pistol-whipped Karsseboom, and McDaniel shot him in his left arm.

McDaniel’s co-defendants, Deante Terrance Kincaid and Joseph Pennymon, pleaded guilty last month to charges connected to the shooting and beating of Karsseboom. Purvis Lamar Ellis, the fourth defendant charged in the attack, was the only one who pleaded not guilty. Ellis is the one whose motion to get his cellphone records suppressed was denied.

Ellis was expected to plead guilty at a plea hearing on Thursday.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/i7yNluqNEn0/

NBD: Adobe just dumped its PRIVATE PGP key on the internet

An absent-minded security staffer just accidentally leaked Adobe’s private PGP key onto the internet.

The disclosure was spotted by security researcher Juho Nurminen – who found the key on the Photoshop giant’s Product Security Incident Response Team blog. That contact page should have included only the public PGP key.

Adobe has not returned a request for comment on the matter, possibly because it has slightly more pressing concerns at the moment. Namely, key rotation and internal public-private key education. It has also torn down its private key from the security blog.

It goes without saying that the disclosure of a private security key would, to put it mildly, ruin a few employees’ Friday. Armed with the private key, an attacker could spoof PGP-signed messages as coming from Adobe. Additionally, someone (cough, cough the NSA) with the ability to intercept emails – such as those detailing exploitable Flash security vulnerability reports intended for Adobe’s eyes only – could use the exposed key to decrypt messages that could contain things like, say, zero-day vulnerability disclosures.

Armed with that information, miscreants could infect victims with malware before Adobe had even considered deploying a patch.
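To see the risk in miniature, consider the sketch below, which uses Python’s cryptography library as a stand-in for PGP – the key size matches Adobe’s, but the bug report is invented. Whoever holds the private key can read anything encrypted to the matching public key:

    # Illustrative sketch: whoever holds the private key decrypts anything
    # encrypted to the matching public key. Uses the 'cryptography' package
    # as a stand-in for PGP; the message is invented.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    public_key = private_key.public_key()

    # A researcher encrypts a bug report to the published public key...
    ciphertext = public_key.encrypt(b"0-day in Flash, details attached", OAEP)

    # ...and anyone who obtains the private key reads it at leisure
    print(private_key.decrypt(ciphertext, OAEP))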

On the other hand, PGP isn’t exactly known for being a user-friendly system, and the process of intercepting and decrypting messages would be difficult to do before the keys are changed.

While very embarrassing for Adobe, the likelihood this will lead to any sort of catastrophic incident is fairly low, especially if the key is only being used for email. Still, it’s rather clumsy. We’re all only human after all. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/22/oh_dear_adobe_security_blog_leaks_private_key_info/

Aw, not you too, Verizon: US telco joins list of leaky AWS S3 buckets

Yet another major company has burned itself by failing to properly secure its cloud storage instances. Yes, it’s Verizon.

Researchers with Kromtech Security say they were able to access an AWS S3 storage bucket that contained data used by the US telco giant’s billing system and the Distributed Vision Service (DVS) software that powers it.

“DVS is the middleware and centralized environment for all of Verizon Wireless (the cellular arm of VZ) front-end applications, used to retrieve and update the billing data,” Kromtech revealed today.

“Although no customers data are involved in this data leak, we were able to see files and data named ‘VZ Confidential’ and ‘Verizon Confidential’, some of which contained usernames, passwords and these credentials could have easily allowed access to other parts of Verizon’s internal network and infrastructure.”

The researchers also say they were able to retrieve a number of Outlook messages, router host information, and “B2B payment server names and info.”

The insecure instance, which had been configured to allow access to anyone on the internet, was closed after Kromtech reported the issue to Verizon.

As with previous S3 misconfigurations, this one seems to be down to human error rather than any technical failing on the part of Verizon or AWS: we’re told someone simply forgot to disable public access.
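AWS has since added a belt-and-braces control for exactly this failure mode, S3 Block Public Access. A sketch of switching it on for a single bucket with boto3 – the bucket name is hypothetical:

    # Sketch: enable S3 Block Public Access for one bucket, so stray public
    # ACLs and policies stop working. Assumes boto3; bucket name is hypothetical.
    import boto3

    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="dvs-billing-middleware",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # neutralise existing public ACLs
            "BlockPublicPolicy": True,      # reject public bucket policies
            "RestrictPublicBuckets": True,  # cut off remaining public access
        },
    )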

“Upon analyzing the content of the repository, we identified the alleged owner of the bucket and sent responsible notification email on September 21st,” said the Kromtech team.

“Shortly after that, online archive has been took down and it has been later confirmed that the bucket was self-owned by Verizon Wireless engineer and it did not belong or managed by Verizon.”

Verizon did not return a request for comment on the report.

This is not the first biz Kromtech researchers have spotted keeping confidential data in an insecure storage bucket. In recent months, the company has spotted vulnerable bins run by the likes of Time Warner Cable, and hotel booking company Bookzie. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/22/verizon_falls_for_the_old_unguarded_aws_s3_bucket_trick_exposes_internal_system/

Want to get around app whitelists by pretending to be Microsoft? Of course you can…

DerbyCon A sprinkle of code and an understanding of the Windows digital certificate process is all that’s needed for a miscreant to sneak malware past Microsoft’s application whitelist within a corporate environment.

In a keynote address at the DerbyCon hacking conference in Kentucky, USA, on Friday, Matt Graeber, a security researcher with SpecterOps, detailed how he managed to disguise and run a banned software nasty as a legit whitelisted app, and thus bypass Redmond’s security mechanisms.

Usually anyone trying to fool Microsoft’s defenses in this way, via PowerShell, will be caught by the executable signature checks within the Get-AuthenticodeSignature cmdlet. However, we’re told, there’s also CryptSIPVerifyIndirectData, which can be abused to green-light malicious applications with a counterfeit signature. The only things you need are some coding tools and, oh yeah, admin privileges on the target computer.

“By fooling PowerShell signature checking I could validate myself as anyone,” Graeber said. “I am Microsoft at this point. I can be Google, I can be anyone I want to be. I can do this remotely and it’s not hard to get admin privileges.”

Graeber said that he has since verified that malware using bogus signatures to masquerade as white-listed programs can be validated and run within non-PowerShell environments on Windows. He has detailed the whitelist bypass technique in this here white paper [PDF] if you want all the techie details. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/22/bypassing_app_whitelists_microsoft_windows/

Don’t fear the software shopkeeper: T&Cs banning bad reviews aren’t legal in America

DerbyCon Security vendors are inserting language into their products’ terms and conditions in an attempt to silence critics, folks attending this year’s DerbyCon conference were told on Friday.

More and more infosec software makers now include legal language in their T&Cs insisting that their products cannot be tested for usefulness if the results are going to be published. Effectively, the developers are trying to stop negative reviews from emerging online. Some publishers even specify a fine – up to $25,000 in some cases – if someone speaks out in public about a product’s failings and weaknesses.

“We have a lot of vendors acting like bullies,” said John Strand, founder of Black Hills Information Security, during his DerbyCon keynote. “Researchers are terrified that they are going to get sued. As a result most of the analysis of products you see is either from the vendor or vendor-approved.”

The classic case of this came earlier this year, when CrowdStrike went to court to prevent software scrutineers NSS Labs from publishing a review of the security biz’s Falcon endpoint protection system. The case provides much hope for hackers and testers, Strand said.

Ultimately, the Delaware district judge overseeing CrowdStrike’s lawsuit ruled against [PDF] the Falcon slinger on a number of grounds. Most important for the hacking community is that the security shop’s terms and conditions banning criticism were made illegal nearly a year before.

In 2016, the US Congress passed the Consumer Review Fairness Act, introduced by Representative Leonard Lance (R-NJ). The text of the legislation explicitly bans gagging orders on product reviewers and testers, and the imposition of fines for their comments, on the grounds that such opinions are in the public interest.

The bill was introduced after businesses began suing people who left bad reviews online, and it has been enormously helpful for those interested in information security. It provides solid legal protection for those who test products and find them wanting.

There is a potential loophole for biz bullies: the Electronic Frontier Foundation warns that some outfits are trying to use the Digital Millennium Copyright Act to silence such reviews, but so far the courts have sided with consumers and testers. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/22/dont_fear_the_vendor_tcs_lockdowns_on_comparing_kit_arent_legal/

Using infrared cameras to break out of air-gapped networks

Even the cleverest malware is stranded unless it can communicate with the people who sent it.

This can be hard to achieve without a network’s defenders noticing the malware’s chatter, so stealthy communication is at a premium for malware that wants to go unnoticed.

The most extreme example of this challenge occurs when malware has no direct connection to the outside world at all, such as is the case in isolated networks that are “air-gapped” from the outside world.

In this situation, the malware typically has two ways to communicate: infect storage media used to ferry data and software to and from the protected network (the approach used by the infamous Stuxnet malware), or get an insider to access the gapped systems.

Researchers at Israel’s Ben-Gurion University prefer a third way: they’ve come up with a new proof-of-concept gap-beating attack, dubbed “aIR-Jumper”, based on controlling the infrared (IR) LEDs inside surveillance cameras.

The team wanted to see whether these devices could be used to jump the gap and exfiltrate data (sneak it out of a network), infiltrate data (sneak it in as part of command and control) or, ideally, a combination of the two.

To work, the malware (already inside the air-gapped network using one of the techniques mentioned above) must look for and compromise network-attached surveillance cameras, which are typically fitted with infra-red LEDs to enable night vision.

For cameras facing on to a public car park or street, the researchers discovered that data could be exfiltrated as encoded infra-red flashes at throughputs of 20 bits/sec, per camera, to an attacker with a video camera standing tens of metres away.

Command and control data could then be infiltrated back to the malware by reversing this process at a throughput of 100 bits/sec, per camera, using infra-red LEDs from kilometres away.

This is enough to transmit:

Sensitive data such as PIN codes, passwords, and encryption keys that are then modulated, encoded, and transmitted over the IR signals.

Better still:

The covert channel can be established with more than one surveillance camera in order to multiply the channel’s bandwidth.

Despite its relatively low bandwidth, the attack has the compelling advantage of being incredibly hard for defenders to spot, either visually (infra-red being invisible to humans) or with network security tools (because the traffic never traverses internet gateways).

In a sense, the aIR attack works by creating its own alternative port into and out of the network using surveillance cameras as the medium.
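The modulation needn’t be sophisticated, either. Here’s a toy sketch of the idea – simple on-off keying at the researchers’ reported 20 bits/sec, with the LED control stubbed out because real camera firmware APIs vary:

    # Toy sketch of the aIR-Jumper idea: exfiltrate bytes by on-off keying
    # a camera's IR LED. set_ir_led() is a stub; real camera APIs vary.
    import time

    BIT_PERIOD = 0.05  # 20 bits/sec, the throughput the researchers reported

    def set_ir_led(on: bool) -> None:
        """Stub: in a real attack this would drive the camera's IR LED."""

    def transmit(payload: bytes) -> None:
        for byte in payload:
            for i in range(7, -1, -1):       # most significant bit first
                set_ir_led(bool((byte >> i) & 1))
                time.sleep(BIT_PERIOD)
        set_ir_led(False)                    # go dark between frames

    transmit(b"PIN:4921")  # an attacker filming the camera decodes the flashes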

Of course the success of this approach against air-gapped networks would depend on the network configuration.

You might reasonably expect air-gapped networks to be completely isolated from devices like cameras, in which case the aIR attack would fail. Even if the cameras were accessible, the malware would still have to compromise them, which would hinge on how well secured they were.

On the other hand, researchers at Ben-Gurion University and elsewhere have researched techniques for jumping air gaps directly, including using electro-magnetism to communicate with mobile phones, using heat, via disk and fan acoustics, and even hard drive LEDs blinking at drones.

The best-known example of this class of attack is probably the legendary and strange BadBIOS from 2013, which was said amongst other things to use inaudible sounds to jump air gaps.

As for camera security, one has only to turn to the mass compromise of public surveillance cameras in Washington DC earlier in 2017 to see the potential for trouble.

So, much of this is possible, if a little cloak-and-dagger – would attackers really want to risk standing in car parks at night holding video cameras, though?

The obvious defence is simply to secure surveillance cameras while isolating them from sensitive networks.

The researchers at Ben-Gurion University aren’t really concerned with what’s likely, but with what’s possible, and they seem determined to show that, air-gapped or not, nothing is ever completely out of reach.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mVGvoKQr89k/

News in brief: DDoS threat spam; Army logic bomber; Viacom leak

Phantom Squad’s DDoS threat spam

A DDoS extortion gang calling itself Phantom Squad has hit companies around the world with a spam campaign that demands they pay a ransom or suffer DDoS attacks.

Security researcher Derrick Farmer discovered the spam campaign and told BleepingComputer that the threats started 19 September and haven’t stopped. The publication reported:

The emails contain a simple threat, telling companies to pay 0.2 Bitcoin (~$720) or prepare to have their website taken down on Sept. 30. Usually, these email threats are sent to a small number of companies one at a time, in order for extortionists to carry out attacks if customers do not pay. This time, this group appears to have sent the emails in a shotgun approach to multiple recipients at the same time, a-la classic spam campaigns distributing other forms of malware.

Those who receive the emails are advised to contact their local authorities and not cave in to ransom demands.

Admin plants “logic bomb” in Army computer

The Ledger-Inquirer reports that 48-year-old Mittesh Das has been convicted of planting destructive code in a U.S. Army computer program nearly three years ago.

Das, a contractor, was responsible for a system that handled “pay and other data” for 200,000 reservists. Das reacted to the Army’s decision to switch to a different supplier by planting a “logic bomb” that detonated the day the new company began administering the system.
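For all the expense of the cleanup, the underlying pattern is mundane: destructive code hidden behind a date check. A defanged sketch – the trigger date is invented, and the payload here merely prints:

    # Defanged sketch of the "logic bomb" pattern: a payload hidden behind
    # a date check. The trigger date is invented; the payload only prints.
    from datetime import date

    HANDOVER_DATE = date(2014, 11, 24)  # hypothetical: day the new supplier takes over

    def scheduled_maintenance_job() -> None:
        # ...legitimate-looking housekeeping would go here...
        if date.today() >= HANDOVER_DATE:
            print("(this is where the destructive payload would run)")

    scheduled_maintenance_job()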

The paper quotes Director Daniel Andrews of the Computer Crime Investigative Unit for the U.S. Army Criminal Investigation Command:

Let this be a warning to anyone who thinks they can commit a crime in cyberspace and not get caught. We have highly trained and specialized investigators who will work around the clock to uncover the truth and preserve Army readiness.

Removing the malicious code and recovering lost data cost taxpayers more than $2.5 million.

Viacom data leak

Entertainment giant Viacom exposed “the keys to its kingdom” on an unsecured server, according to Hacker News.

A security researcher found a misconfigured Amazon S3 bucket (a type of cloud storage) with a gigabyte’s worth of credentials and configuration files for dozens of Viacom properties.

From Hacker News:

Among the data exposed in the leak was Viacom’s master key to its Amazon Web Services account, and the credentials required to build and maintain Viacom servers … the unprotected server also contained GPG decryption keys [but] did not contain any customer or employee information.

As damaging as that sounds, Viacom insists that no harm was done. It said in a statement:

We have analyzed the data in question and determined there was no material impact. Once Viacom became aware that information on a server—including technical information, but no employee or customer information—was publicly accessible, we rectified the issue.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ehIJOhZpXkI/