
Chinese crack down on ‘money-sucker’ Androids

The Chinese government is to crack down on “money sucking” mobiles: Android-based handsets that subsidise themselves by stealing from the customer’s account.

The crackdown aims to involve network operators, target retailers and ensure that selling handsets featuring pre-installed Trojans is explicitly illegal, according to the Google translation.

The idea is to set up a central unit to manage complaints, though it seems the scam has been going on long enough to build up considerable momentum.

The handsets concerned are sold cheaply, and are generally unbranded, though some bear forged logos. Once they go into use, the Android-based handsets start quietly sending text messages, or making a silent call or two. Each transaction incurs a fee of only around 20 pence, in the hope the user will never notice, while the miscreant collects the termination fee or other premium charge.

Gamers raid medical server to host Call of Duty

A server storing sensitive patient information for more than 230,000 people was breached by unknown hackers so they could use its resources to host the wildly popular Call of Duty: Black Ops computer game.

New Hampshire-based Seacoast Radiology warned patients on Tuesday that the hacked server stored their names, social security numbers, medical diagnosis codes, addresses, and other details. On a website established after the mid-November breach, the medical group urged patients to monitor their credit reports for signs of identity theft, although there’s no evidence of any misuse of the information.

DUP website translated into Irish by mischievous hacktivist

A mischievous hacktivist broke into three websites run by the Democratic Unionist Party on Wednesday night and replaced the staunchly unionist Ulster party’s site with an Irish-language version.

Party leader Peter Robinson’s welcome message on the site was translated into Irish and amended to include support for the “Irish Language Act”, the BBC reports.

In reality, the DUP has repeatedly blocked the introduction of the proposed law, which is backed by the nationalist party Sinn Fein.

The hacker, who rejoices in the Joycean moniker of Hector O’Hackatdawn (@HectorOHackAtD), also defaced the websites of party bigwigs: peterrobinson.org and jeffreydonaldson.org.

Gawker makes a hash of non-ASCII characters in passwords

Gawker is phasing out the use of email-address-and-password login in favour of more modern OAuth authentication and the use of anonymous one-off accounts.

Tom Plunkett, CTO at Gawker Media, briefly explained the plans while responding to the discovery of another password-related security snafu involving the media news and gossip site. Computer scientists at Cambridge University discovered that, until a fortnight ago, the site was failing to handle non-ASCII characters in passwords. Instead, all non-ASCII characters were mapped to the ASCII ‘?’ before the password hash was generated.

As a result of the cock-up, the accounts of native Korean speakers, to take just one example, might be opened by hackers who simply guessed a string of question marks.

Joseph Bonneau, a computer scientist at Cambridge University, came across the security hole in researching the handling of non-ASCII characters in passwords. “Gawker was using a relatively little-known Java library with the known bug of converting all non-ASCII characters to ‘?’ prior to hashing,” Bonneau explained.
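
To illustrate the class of bug Bonneau is describing, here is a minimal Python sketch – not Gawker’s actual code, which relied on a Java library, and with SHA-256 standing in purely for illustration – showing how flattening non-ASCII characters to ‘?’ before hashing collapses an entirely non-Latin password into a string of question marks that an attacker could simply guess:

    import hashlib

    def broken_hash(password: str) -> str:
        # The bug: every non-ASCII character is replaced with '?' before hashing,
        # so distinct non-Latin passwords collapse to the same digest.
        lossy = "".join(c if ord(c) < 128 else "?" for c in password)
        return hashlib.sha256(lossy.encode("ascii")).hexdigest()

    def correct_hash(password: str) -> str:
        # Hash the password's UTF-8 bytes as supplied (a real system would also
        # salt the password and use a slow KDF such as bcrypt).
        return hashlib.sha256(password.encode("utf-8")).hexdigest()

    korean_pw = "비밀번호123"   # hypothetical, mostly non-ASCII password
    guess = "????123"           # one '?' per non-ASCII character, plus the digits

    print(broken_hash(korean_pw) == broken_hash(guess))    # True: the lazy guess matches
    print(correct_hash(korean_pw) == correct_hash(guess))  # False: proper hashing resists it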

Bonneau credits Gawker with responding quickly to his discovery by applying a fix within three days. The number of exposed accounts is in any case small: Gawker’s blogs are only available in English, and checks by Bonneau suggested fewer than one in 50,000 users had chosen a password made up entirely of non-Latin characters.

The latest glitch follows a far more serious breach last month, when security slip-ups by Gawker resulted in the exposure of millions of user passwords. A database dump containing user login credentials, chat logs, and other Gawker-site collateral was released as a Torrent by hacking group Gnosis. Gnosis extracted the material after gaining root access to Gawker’s servers. The attack was motivated in large part by an online feud between hackers affiliated with anarchic imageboard 4chan and Gawker.

Gawker responded to the breach by asking users to change their passwords, the same approach it has taken to the much less significant non-ASCII hashing snafu. Users affected by the non-ASCII bug are being prompted to change their password as soon as they log in to the site with their old (vulnerable) credentials. Meanwhile, Gawker is making backend changes that will allow it to move to a more secure password system, Plunkett explained.

“Longer term (beginning early February), we will be migrating all of our users to our new commenting platform that will be described on the tech.gawker.com blog later this month,” Plunkett explained. “This will eliminate the need for email addresses or passwords on our platform. Once this change goes live, new commenters will not be able to register with a user/password – we will support only OAuth or anonymous accounts we are calling ‘burners’.”

Gawker earlier said it was going to introduce two-factor authentication logins for its employees, in response to the compromise of the site’s security last month.

Interview: Jailbroken iPhones a vector rather than a vulnerability

Earlier this week, Sense of Security hit the headlines advising against the careless use of jailbroken iPhones in corporate environments. The Register speaks to the company’s security consultant Kaan Kivilcim, who presented his findings at the ASIA conference in December, about what the company found.

Man nabbed nude pics from women’s email accounts

A California man on Thursday admitted breaking into the Facebook and email accounts of hundreds of women and stealing nude and semi-nude pictures of them.

George Samuel Bronk, 23, of Citrus Heights, pleaded guilty to seven felony charges, including computer intrusion, false impersonation and possession of child pornography. He faces a maximum of six years in prison and will have to register as a sex offender.

When Bronk’s home was raided in September, investigators found more than 170 explicit photographs of women stored on his hard drive. The women resided in California and 16 other states as well as the UK.

Bronk acquired the pictures by trawling Facebook for women who included their email addresses and personal information, such as their favorite food, their high school or their mother’s maiden name. He then used those details to reset the passwords for their email accounts. Once in, he searched the victims’ sent folders for nude or semi-nude pictures.

In some cases, he sent the pictures to everyone in the victim’s address book. In other cases, he threatened to make the pictures public unless the women sent even more explicit images. He told one woman he did it “because it was funny.”

The investigation began after one victim notified Connecticut State Police that her account had been breached. The agency then contacted the California Highway Patrol after discovering the perp was likely located there.

Investigators are having a hard time identifying the majority of the victims. In some cases they were able to rely on location tags embedded in the photos. Police have emailed 3,200 questionnaires to potential victims, but so far only 46 women have come forward.

A press release from the California Attorney General’s office is here. ®

SAP buys secure login tech

German software giant SAP has moved into the security market with the acquisition of identity and access management technology from Secude.

SAP, which is best known for its Enterprise Resource Planning and enterprise application software, plans to roll the secure login and enterprise single sign-on technologies acquired in the deal into its portfolio. The Secude deal, announced Wednesday, also involves the transfer of consulting staff to SAP, which will help its sales teams flog SAP-owned security technology as an alternative to third-party add-ons.

SAP plans to bake the basic version of Secude’s secure login into future releases of its enterprise applications – at no extra cost. The technology will sit alongside SAP’s existing NetWeaver identity management technology, as explained in a statement by SAP here.

Switzerland-based Secude, which remains separate from SAP, plans to focus on developing and marketing its remaining data protection products. The financial terms of the deal were not disclosed.

More and more enterprise technology heavyweights are buying into security. SAP’s deal to acquire technology assets from Secude can be loosely compared to EMC’s decision to acquire RSA Security, the market leader in secure remote access, back in 2006, in a much larger and more significant deal. More recently there’s HP’s purchase of security tools firm ArcSight for $1.5bn, a move that strengthens its existing security portfolio, and Intel’s far more puzzling acquisition of McAfee.

Back in the day, security start-ups used to pine for an acquisition by the likes of Cisco or Symantec. These days a much larger range of potential suitors is available. ®

Vodafone sacks login leakers

Vodafone has dismissed “a number of staff” following the misuse of login credentials that allowed unauthorized access to a Web portal meant to be accessible only to its employees and those of its resellers.

The data breach, which a number of Australian media outlets earlier in the week misinterpreted as customer details being published on the Internet, led to accusations that customer data such as credit card numbers was being stolen, and that people were misusing customer data to spy on individuals.

In an announcement made on Thursday, Vodafone re-stated that “customer records are not publicly available or stored on the Internet”.

The company says it is continuing to review its data security.

Questions will, however, remain. As pointed out to The Register by ANU professor and Australian Privacy Foundation chair Dr Roger Clarke, Vodafone will have to satisfy organizations like the Australian Privacy Commissioner that the portal used by staff and partners doesn’t provide access to more customer data than they need to carry out their normal functions. ®

SaaS Survey Results

Hosted apps Throughout this workshop, we have been looking at the factors that affect the acceptance of SaaS. Ultimately what it boils down to is trust, and when we look at what it is that creates trust, you tell us that the most important factors are:

  • A demonstrable track record on privacy and security
  • The quality of service and support

Looking at privacy and security, it is clear that for many of you there is still a long way to go before you are convinced of moving your applications and data beyond your firewalls and into the cloud. From the preliminary results of our survey on SaaS security, most of you are of the opinion, rightly or wrongly, that both SaaS security and privacy are worse than on-premise capabilities. In many cases there seems to be a defensive reaction to SaaS data storage, with apples-to-oranges comparisons that skirt the issue of how to share and collaborate securely and effectively:

“If I have my data stored locally then it can only be stolen / lost through my own incompetence. If it is stored in the Cloud then not only am I risking losing it through my own mistakes but also through those of other people. If files are stored on a device kept in my cupboard then not even the most incompetent network admin on the planet can cause it to get taken.”

Some feel that the SaaS model is still immature and has yet to prove itself worthy, and are waiting for it all to settle down before moving forwards, even if their own infrastructure is far from perfect:

“At least keeping our data in house, the equivalent of hiding our cash under the bed, means we are in control of it and know how it’s being looked after (even if it’s not that well). I think the cloud-computing industry needs to have successfully survived a few crises before we can categorically say that they’re safe enough to entrust with our company’s most precious assets.”

Arguably, the brouhaha surrounding Wikileaks is one of these defining events in the control of data held by a SaaS provider. Regardless of the rights or wrongs of Wikileaks in leaking confidential information, the fact that application and data hosting services were terminated without a legal hearing should be of concern for all companies.

“The high-handed treatment of Wikileaks by Amazon highlighted a weakness of cloud services. They should be run on the principles relied upon by telephone companies and ISPs – they are not responsible for content. Amazon’s intervention was little less than political censorship. If every carrier in the Internet had this attitude nothing would get through.”

Few consider that SaaS can offer better security and privacy, although there are certainly those who have done their homework and are using SaaS confidently, or who, as SaaS providers, have developed trusted solutions that are widely used:

“We have over 1 million users on a PCI/DSS certified cloud platform based in the UK.”

“If you look at most cloud systems they have all the usual stuff. Data centres, firewalls, physical security etc. There is more investment in on demand flexibility and distributed storage, which makes sense for anyone who wants 100% uptime. You are jumping on the back of someone else’s investment.”

The approach that the following respondents have taken is to de-risk SaaS, evaluating it on a par with on-premise solutions or against an even higher standard:

“Yes, risk is an issue, but with the right risk policy and data protection plan you can choose the right provider for your services.”

“Use an appropriate standard that provides a higher level of assurance than your current processes. It is highly unlikely that your current processes will pass PCI/DSS, so if you outsource to someone that passes PCI/DSS you have given the job to someone that has passed a much higher level of vetting than your current operation and is thus lower risk.”

Central to risk management is the question of performance – do you trust the provider to actually do what is agreed, and what actions can you take should something go wrong? Judging by the feedback, there is a lot of concern here:

“And if your data is in Timbuktu, what about your outsourced admins? A UK admin might be approached with an offer of £50k for data/secrets; the likelihood is that he’ll turn it down and report the incident. Offered £1m, you might get a bite. A similar £50k offer to someone who has a fraction of the UK salary and living costs would be just as tempting as £1m to someone in the UK.”

“Do we have any redress when, as they are certain to at some point, things go wrong? Who has a big enough stick to give them a smack on our behalf, occasionally, when they deserve it – or are cloud providers too big or nebulous to hurt?”

Another risk factor, of course, is what to do should you want to move providers or bring everything back in house. This is a real worry, and something that should be agreed upfront:

“Exporting the data if I should decide to leave my provider is almost certainly going to be hideously complex and expensive.”

In practice, we know that some SaaS providers have some pretty good capabilities for data movement and exchange. Judging from the comments we’ve had, it is important not to make assumptions, but to check this out. Another area to look at is the role of emerging standards in easing the movement of data between applications, so that costly integration projects are not necessary when moving to another provider.

We know that service and support are major factors influencing the long-term cost of ownership or service delivery. On an enterprise scale, these need to be both localised and widespread in order to be responsive and relevant.

“A more important issue is can you phone them up if it goes wrong. At the cloud summit someone explained that support from Amazon and Google was non-existent – post in a forum and wait 3 months.”

The experience described above, if encountered in reality, would usually result in a swift termination of service and a move back in-house or to a competing provider. Support is commonly an Achilles’ heel for many IT solutions, not just SaaS, and its quality and capability will vary dramatically. Look for providers that can offer responsive support with local language skills, based on agreed SLAs, or engage with partners that can provide these capabilities on the ground.

It’s clear the jury is still out on SaaS applications, with divided opinions and a lot of gut instinct rather than the cold light of experience influencing the path taken. The evaluation of risk comes down to knowing what it is that you need or want, and how to measure it. This is more easily said than done, and the problem is eloquently summed up by one respondent:

“If you know what you need, you can find it in the cloud. But in my experience, I have not seen too many companies that know what they need.”

For many companies, this leads to implementing IT by default as the accepted path, but it’s not necessarily the best approach for IT or the business. It boils down to knowing what you need, and then selecting the best solution that fits, be it SaaS or a different on-premise solution.

Datacentre Networks

Datacentre The job of a datacentre network is to connect the equipment inside to the outside world, and to connect the internal systems to each other. It needs to be secure, to perform well and to operate with an eye on energy consumption, with a guiding principle of minimising device numbers and costs, so you end up with a system that can do what’s needed while remaining as simple as possible.

Every facility is different, so there’s no off-the-shelf answer as to what exactly goes into a datacentre network. Component selection will vary according to budget, business requirements, site location and capacity, available power and cooling, and a host of other criteria depending on circumstances. That said, you’re likely to find that most datacentre networks arrive at common solutions to common problems and so look fairly similar.

You can conceive of a datacentre network as a series of layers, with the stored data at the bottom. On the first layer is the connection to the outside world – the Internet – and, if it’s an enterprise’s own datacentre, to the rest of the company. If the datacentre is owned by a service provider and is servicing a number of external clients, the Internet connection and any other connections linking clients directly also sit on this outer layer.

The second layer, commonly referred to as the edge or access layer, consists of IP-based Ethernet devices, such as firewalls, packet inspection appliances and switches, that route traffic between the core of the datacentre and the outside world. Here too sit many web servers, in a so-called demilitarised zone or DMZ: hemmed in by firewalls, external visitors are allowed this far into the datacentre network but no further.

Below this is the core, with large, high-performance switches consisting of blades plugged into chassis, with each blade providing dozens of ports. The chassis is likely to be managed by a management blade, while other features such as security and traffic shaping can be provided by further blades. All data passes through these devices.

Closer to the servers will be a further layer, consisting of a series of switches, maybe one per rack or row of racks, depending on density, tasked with distributing data to and between servers in order to minimise load on the core.

Behind the servers, conceptually, is the main storage. This final layer consists of a series of high-performance storage arrays connected via a Fibre Channel network that’s entirely separate from the main network. This means that only the servers can connect directly to the storage, although there’s likely also to be a link from the storage to the IP network for management purposes.

The Fibre Channel network needs separate switches and management systems to configure it, adding to IT staff’s workload, so this situation is slowly changing. In ten years’ time, industry analysts expect that most storage systems will be connected over the IP-based Ethernet network, probably running at either 40Gbps or 100Gbps.

Let’s look at an example of the network’s job. You click on a link in your browser, which generates a request for data that arrives at our datacentre via the Internet connection. The incoming request is scanned for malware, and is re-assembled and decrypted if WAN optimisation and encryption are in use. It’s then sent on to a switch in the access layer. This switch routes the request to a web server in the DMZ, which might be physical or virtual, and which might be fronted by a load balancer to allow a cluster of servers to handle high traffic levels.

The web server receives and processes the request. A response needs information from a database, so the web server calls for data from a database server at the core of the network.

The data demand is passed to a core switch, which routes it to a database server. The query traverses the storage network, the data is pulled off the disks and arrives back from main storage, then is packaged up and sent back to the web server. There it’s assembled into a web page and pushed back out over the Internet connection.
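
As a rough, purely illustrative Python sketch of that round trip – the function names and the one-entry “storage array” are invented for the example, and real datacentre kit naturally does far more at every hop – the layers can be modelled like this:

    # Illustrative model of the request path described above:
    # edge/access layer -> web server (DMZ) -> core/database -> storage, and back.
    # All names and data here are hypothetical.

    STORAGE = {"/products/42": "Blue widget, 9.99"}  # stands in for the storage arrays

    def storage_layer(key):
        # Final layer: the data is pulled off the disks.
        return STORAGE.get(key, "not found")

    def database_server(query):
        # Core: the database server fetches the record over the (separate) storage network.
        return storage_layer(query)

    def web_server(path):
        # DMZ/web tier: builds the page, calling into the core for the data it needs.
        return "<html><body>" + database_server(path) + "</body></html>"

    def access_layer(request_path):
        # Edge/access layer: in reality firewalls, packet inspection and load
        # balancing sit here before traffic is routed on to a web server.
        return web_server(request_path)

    # A click in the browser becomes a request arriving over the Internet link.
    print(access_layer("/products/42"))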

While this is a broad-brush look at network design, it is the template with which a datacentre network designer will approach the problem of building a new network from scratch. ®