
Ransomware Actors Cut Loose on Health Care Organizations

An attack on Allscripts last week that knocked out EHR services to 1,500 clients is the third reported incident just this month.

A string of recent attacks suggests that ransomware operators are sharply ramping up their focus on healthcare organizations.

Last week, electronic health record (EHR) provider Allscripts became at least the third organization in the health sector to get hit by ransomware since the start of this year.  

The other two were Indiana-based Hancock Health, which ended up paying some $50,000 to regain access to critical information systems, and Adams Health Network, also of Indiana, which managed to recover without any disruption. In all three incidents, attackers used different variants of SamSam, a well-known ransomware family, to encrypt critical data.

Of the three victims, the $1.5 billion Allscripts is the largest, providing service to 45,000 physician practices, 180,000 physicians and 2,500 hospitals. The January 18th attack on Allscripts affected systems hosted in the company’s datacenters in North Carolina, resulting in its EHR and Electronic Prescription for Controlled Substances (EPCS) services becoming unavailable to some 1,500 clients.

Most of those impacted by the outage were small healthcare entities and individual physicians, some of whom vented their anger on Twitter and other channels as Allscripts worked over a period of multiple days to restore its systems.

The EHR provider did not respond to a Dark Reading request Wednesday for an update on recovery efforts, nor has the company provided any information on the incident on its website. So it is not clear if all systems have been completely recovered as of Wednesday afternoon.

However, in update calls with providers and in statements to healthcare outlets, Allscripts has described the attack and its response in a fair amount of detail. One of them, the Texas Medical Liability Trust, has provided a relatively detailed timeline of events and recommendations for those impacted by the incident.

Mac McMillan, CEO of CynergisTek, a company that provides security consulting services to healthcare organizations, says the attack left those using Allscripts’ PRO EHR without access to client medical records. Those working in states that have mandated the use of EPCS had to resort to some very difficult workarounds for prescribing controlled substances to those in need of them, he says.

“The ones most impacted were the small practices that traditionally outsource (electronic medical records) and don’t plan for or have a viable backup when their vendor goes down,” McMillan says. “They simply have to wait until the vendor recovers.”

The attack highlights the need for those using such services to re-evaluate critical systems and vendor support and put response plans in place in the event of outages. “We’ll see more of these cloud-based attacks in the future. Their impact is so much greater for those launching them,” McMillan noted.

Ransomware attacks on hospitals and other healthcare organizations are not new. But the flurry of recent incidents suggests a heightened threat actor focus on the sector.

Security vendor Cryptonite in December 2017 released a report on cyberattacker activity in the healthcare sector and noted an explosion in incidents involving ransomware last year. The report, based on data gathered from breaches reported to the Health and Human Services Office of Civil Rights, showed there were 36 publicly reported ransomware incidents among health care institutions in 2017.

The number represented an 89% increase in ransomware attacks from the 19 reported in 2016. Among the top 10 healthcare data breach and hacking incidents last year, the top six were caused by ransomware. The biggest of them—an incident at Airway Oxygen—impacted some 500,000 records.

Mike Simon, CEO and President of Cryptonite, says the reasons for the attacker interest in health care institutions are basic. “Healthcare networks are highly interconnected and this provides a substantial opportunity for cyberattackers to penetrate multiple high-value targets,” he says. EMR and EHR systems used by hospitals and large physicians’ practices are connected to mobile phones and tablets used by ambulatory clinicians, who in turn communicate with labs, nursing facilities, scan and surgical centers, and numerous other facilities.

“Healthcare networks’ architectures typically have a relatively high number of known vulnerabilities [with] missing patches and updates, embedded and exposed processors in medical devices, a large number of internet of things (IoT) devices and more,” he says. “These make them particularly susceptible to a variety of known attacks for which most of these networks have no defense in place.”

Another factor driving heightened interest in the health care sector is the apparent success that ransomware attackers have had extracting money from victims. “When an attacker has success within a particular vertical it’s obviously tempting for them to do more of the same,” says Richard Ford, chief scientist at Forcepoint. “The concept of ‘if it ain’t broke, don’t fix it’ works just as well for attackers” as it does for defenders, he says.

The use of SamSam variants in many of the recent attacks suggests attackers are going after healthcare organizations in a methodical manner, adds Joseph Silva, a member of Cyxtera’s cybersecurity analytics operations.

“Unlike the majority of ransomware families, SamSam isn’t delivered into a victim environment through phishing or malvertising methods,” he says. Rather, it is used to target specific healthcare organizations, with attackers gaining access to the environment and looking for high-value systems to infect.

“The threat actors utilizing SamSam are actively probing the victim environment for vulnerable servers, and then using those servers to enumerate the environment and identify systems that contain high-value data,” Silva says.
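Silva’s description maps to a recognizable pattern in network telemetry: a compromised host suddenly fanning out to an unusually large number of internal hosts and ports as the attackers enumerate the environment. As a purely illustrative starting point (not a description of any vendor’s detection logic), the minimal Python sketch below flags that kind of fan-out in connection logs; the log format, field names, time window, and threshold are all assumptions that would need tuning for a real environment.

# Minimal, hypothetical sketch: flag internal hosts that contact an unusually
# large number of distinct host/port pairs in a short window, a rough proxy
# for the enumeration phase described above. The CSV schema (ts, src_ip,
# dst_ip, dst_port) and the thresholds are illustrative assumptions.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # assumed observation window
THRESHOLD = 200                  # assumed fan-out limit per window

def flag_scanners(log_path):
    """Return source IPs whose fan-out exceeds THRESHOLD within WINDOW."""
    events = defaultdict(list)   # src_ip -> [(timestamp, (dst_ip, dst_port)), ...]
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["ts"])
            events[row["src_ip"]].append((ts, (row["dst_ip"], row["dst_port"])))

    flagged = {}
    for src, recs in events.items():
        recs.sort()                       # order by timestamp
        start = 0
        for end in range(len(recs)):
            while recs[end][0] - recs[start][0] > WINDOW:
                start += 1                # shrink the window from the left
            fanout = len({target for _, target in recs[start:end + 1]})
            if fanout > THRESHOLD:
                flagged[src] = max(fanout, flagged.get(src, 0))
    return flagged

if __name__ == "__main__":
    for src, fanout in sorted(flag_scanners("conn_log.csv").items()):
        print(f"{src}: {fanout} distinct host/port pairs within {WINDOW}")

None of this substitutes for patching the internet-facing servers SamSam operators look for in the first place; it simply illustrates how the behavior Silva describes can be made visible.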


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/ransomware-actors-cut-loose-on-health-care-organizations/d/d-id/1330901?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

NFC card skimming – is it really a thing? [VIDEO]

Last week we published an article entitled Does your credit card need a tinfoil hat to keep it safe on the train?

Sophos expert Matt Boddy set out to answer some modern-day concerns we hear surprisingly frequently: if you’ve got a contactless payment card or passport, could you get digitally pickpocketed, and, if so, can you prevent it happening?

Well, now!

Lots of readers messaged us to say they really enjoyed Matt’s entertaining and practical style (and thanks for your kind words, everyone), but we also managed to stir up a whole load of controversy that we didn’t expect.

So we took to Facebook Live to work through all the talking points we’ve been confronted with since last week…



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CiLr7L3aaAs/

NHS deploys Microsoft threat detection service on just 30,000 devices

NHS Digital has yet to explain why it has taken months to roll out Microsoft’s Enterprise Threat Detection Service (ETDS) to only about two per cent of the UK health service’s targeted installed base.

The ETDS element was included in a custom support agreement that covers all NHS orgs in the UK under a framework penned in August following the crippling WannaCry attack in May.

Today, NHS Digital – the body that oversees information technology provided to the sector – told us the use of Microsoft’s service will give its techies cyber alerts designed to reduce the chance of a major breach or malware infection, and remediation advice should nasties get through.

It said the service contract followed a pilot with NHS Digital and Blackpool Teaching Hospitals Foundation Trust. ETDS has so far been deployed on “over 30,000 machines” and will “eventually” cover up to 1.5 million devices within healthcare across hospital trusts and GP practices.

ETDS is just one area of the framework the NHS signed last summer: it provides patches and updates for devices across the sector running various flavours of Windows, including XP and Server 2003, as well as SQL Server 2005. It runs until summer 2018.

The agreement followed the unwelcome news last summer that at least 81 of the 236 NHS Trusts in England were among institutions across the globe that were hit by WannaCry.

The National Audit Office reported on the attack in October and said the UK health service could have defended itself “if only it had taken simple steps to protect its computers”. The full extent of financial cost remains unquantified.

The Department of Health failed to agree a working process with NHS England to secure computer and medical kit in the event of a cyber attack, meaning “patients and NHS staff suffered widespread disruption, with thousands of appointments and operations cancelled”.

Specifically, 19,494 appointments were shelved, including 139 patients who had had “an urgent referral for potential cancer cancelled”.

The Register asked NHS Digital to detail the cost of the ETDS bought from Microsoft, the cost of the overall year-long framework, why ETDS has only reached 30,000 machines, and if the procurement heads considered alternative suppliers. We were told answers would arrive by the day’s end.

The Department of Health and NHS England are so far yet to respond to our request for comment last week, when the team behind an open-source Linux project called it a day, citing a lack of support for their work and little appetite among some senior healthcare officials to treat their addiction to Microsoft products and services.

For what it’s worth, Dan Taylor, director of security (and clearly a corp-speak expert) at NHS Digital, said: “It is our role to alert organisations to known cybersecurity threats and advise them of appropriate steps to minimise risks; this marks a step change in our capability to provide high quality, targeted alerts to allow organisations to counter these threats and ensure patients’ needs continue to be met.”

Er, well said, that man. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/24/nhs_deploys_microsoft_threat_detection_service_on_30k_devices/

GDPR: Ready or Not, Here It Comes

As organizations all over the world look ahead to May 25 when Europe’s General Data Protection Regulation takes effect, many will fall short.

“Hindsight is 20/20” is an old cliché that laments the clarity of retrospection and the regret that often accompanies having overlooked (or ignored) the now-obvious ingredient that contributed to an unfortunate event. Often the sentiment is one that implies that preventing the mishap was within the speaker’s power but for the making of an ill-informed decision. Implied is the wish that things would be different “if I could do it again…”

Today, organizations all over the world are looking ahead to May 25, 2018, the date that Europe’s General Data Protection Regulation (GDPR) takes effect, and are trying to put in place the means to avoid having to utter those words. They are reading the law, huddling with consultants, and checking with their legal and technical teams so that when May 24 dawns they can go to bed confident they’ve done all they can do.

But there’s evidence that the time and money being spent today may not be going to the right places, and that many companies, despite earnest efforts to prepare in advance, will fall short of GDPR compliance.

The BBC reports that a recent survey of board members at 105 companies listed on the FTSE 350, the 350 largest British companies on the London Stock Exchange, reveals that one in 10 lacks any plan for dealing with a cyberattack, and that more than two-thirds are untrained for such an event, despite the fact that more than half acknowledge that a cyberattack is a primary threat to their organization.

Read that again. The survey didn’t find that one in 10 organizations believes it is unprepared for an attack or lacks confidence in its preparedness. One in 10 companies lacks any plan for dealing with a cyberattack. In the first weeks of 2018, it is unfathomable to consider that 10% of large, global corporations have no plan for dealing with the inevitability of an attack on their networks and an attempt to access data.

What reasoning could there possibly be for dereliction of duty of this kind? With no specific knowledge or insight, I can only speculate. But it’s human nature to make no decision when overwhelmed with an abundance of information. Clearly, even in the age of big data analytics, there are successful businesses and business leaders who find themselves in that situation. They will be in for a rude awakening if, after GDPR takes effect, they experience a data breach and — with no plan on file to prove a good-faith effort at prevention — suffer a steep reputational and financial blow.

Whatever the reason (paralysis over where to start or how to face an invisible threat, a misguided “it can’t happen to me” delusion, or simply being buried at the bottom of a list of more pressing business-critical functions), ignoring the very real possibility of coming under the regulators’ hammer and writing a check of up to 4% of gross global revenue cannot be taken lightly.

There is another cliché appropriate to this situation: forewarned is forearmed. However, with repeated and massive alarms raised and extensive discussion of the issues, forearmed has at this point eclipsed forewarned as an imperative. With so many companies seemingly following horror movie tropes of running toward a threat or simply not evaluating the situation with anything resembling common sense, there are three areas that, if given focus and careful consideration, can not only serve to prevent an organization from falling under the non-compliance blade but can improve overall security posture against any compromise or loss:

  • Communication. Start by ensuring that both business and IT are working toward a common goal of safe and frictionless operations with a clear understanding of how to document the roles of stakeholders in advance of material compromise. This includes discussions, role definition, and process development for executive, legal, communications, security, HR, and even the corporate board.
  • Connect the dots. This will involve mapping the business environment and assessing risk, from infrastructure to the critical assets most likely to be targeted and understanding all the ways in which exposure can occur.
  • Continuous evaluation. Once the risk has been measured and the roles have been defined, it’s necessary to validate the process and plans repeatedly. From technologies that can test and simulate attacks, to tabletop exercises that play out response plans and responsibilities, to engagement with services firms to root out vulnerabilities, it’s important to discover both the points of exposure and the impact of change, to prevent security atrophy and keep the organization continuously in compliance.


Danelle is vice president of strategy at SafeBreach. She has more than 15 years of experience bringing new technologies to market. Prior to SafeBreach, Danelle led strategy and marketing at Adallom, a cloud security company acquired by Microsoft.

Article source: https://www.darkreading.com/cloud/gdpr-ready-or-not-here-it-comes/a/d-id/1330861?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Financial Impact of Cloud Failure Could Hit $2.8B in Insured Loss

A new report highlights the potential financial damage of downtime at top cloud services providers.

The public cloud industry is projected to continue growing revenue at a rate of 36% each year through 2026, according to a new report from AIR and Lloyd’s of London. As demand for cloud services grows, so too do the potential consequences of cloud failure. Downtime at a major cloud provider could hit $19 billion in ground-up losses, researchers report.

New data from AIR and Lloyd’s of London demonstrates the financial impact of downtime incidents at several top cloud service providers. Their report shows how 12.4 million businesses in the US, which it states is the most established cyber insurance market, would be affected if cloud systems were to fail for a span of several hours to eleven days.

The report is intended to help insurers struggling to understand their exposure to systemic risk, and to prepare them for an extreme event with the potential to generate millions of claims.

Today’s insurance industry would be little help if a cloud service provider failed. In the event a cyber incident took a US-based top-three cloud provider offline for three to six days, researchers predict it would cost $6.9 billion to $14.7 billion in ground-up losses and $1.5 billion to $2.8 billion in insured loss.

The gap is due to cyber insurance’s low penetration rate, especially among small and medium-sized businesses, in addition to limited coverage. Most policies have low limits and long waiting periods, with about eight to 24 hours of downtime required before coverage starts to kick in. Researchers say this is an opportunity for insurers to provide coverage for extreme scenarios.

The cloud industry is highly concentrated, researchers say, with the top 15 cloud providers making up 70% of industry market share. More businesses are using these services: 37% of companies are expected to use infrastructure-as-a-service as their primary environment in 2018, and the number primarily relying on traditional on-prem infrastructure will drop to 43%.

As dependence on cloud continues to grow, cyber insurers should be wary of threats with the power to bring it down. The report classifies four types of “silent” threats: environmental (natural disasters, critical infrastructure failure), adversarial (actors with malicious intent), accidental (employee mistakes), and structural (equipment and software failure).

David Bradford, chief strategy officer and director of strategic partner development at Advisen, calls cloud computing “a huge risk that [insurers] haven’t really come to grips with” and “one of the few things putting the brakes on the cyber marketplace.”

“If a large cloud provider goes down, the insurer would be responsible for thousands of different companies using that service,” he says.

Researchers report the most likely causes of interrupted cloud services include malicious attacks by external actors, employee errors, and hardware and software failures. Data indicates 50% of businesses targeted in a DDoS attack were also the victims of theft, and 36% of companies hit in these incidents were infected with malware.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance & Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/cloud/financial-impact-of-cloud-failure-could-hit-$28b-in-insured-loss/d/d-id/1330891?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bell Canada Hit with 2nd Breach in 8 Months

Fewer than 100,000 customers affected in latest incident.

Police are investigating a data breach at Canada’s largest telecom provider, the Canadian Press reports. While the exposed data appears only to be names and email addresses on fewer than 100,000 customers – no financial information – this is the second data breach reported by Bell Canada in the past eight months.

In May, the telecom company announced a breach of 1.9 million customer email addresses, as well as 1,700 names and phone numbers. Ten years ago, Bell Canada also experienced a security incident, in which personal data on 3.4 million customers was stolen, but later recovered.

Bell Canada told customers, in breach notifications, that the company had instituted new security and identity and access management measures, according to the Canadian Press.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/bell-canada-hit-with-2nd-breach-in-8-months/d/d-id/1330894?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security Automation: Time to Start Thinking More Strategically

To benefit from automation, we need to review incident response processes to find the areas where security analysts can engage in more critical thought and problem-solving.

Automation is being hailed as a way to take some of the heavy lifting away from overworked security operations teams. Security vendors are integrating automation into their point solutions to automate tasks such as security policy orchestration, change and configuration management, incident response playbooks, and other labor-intensive tasks.

This is a good start toward solving some of the challenges of managing the modern security stack. But we need to think more strategically about automation if we’re truly going to solve cybersecurity workforce challenges and gain any kind of edge over hackers.

Most automation takes place at the front end of the cycle: the detection and prioritization of security alerts. A combination of threat intelligence feeds, SIEMs, and incident response platforms generate event and incident data and perform some level of automation (correlation, orchestration, change management, etc.). This automation is helpful, but on average I hear from security teams that they spend only about 30% of their time on the front end of the cycle. The real work begins after a threat is detected, prioritized, and sent to the operations team.

In most organizations I’ve worked with, I see an estimated 40% of a team’s resources being poured into manual investigation of incidents. This is often the most painstaking, lengthy part of the security life cycle. Analysts tasked with investigating and remediating security alerts often see more than 1,000 alerts per week from the more than 40 vendors deployed throughout their complex environment. The introduction of threat intelligence compounds this problem, as a single feed can generate more than 3.5 million indicators per month. Given the volume of data that must be evaluated and investigated, the average enterprise is ultimately throwing away more than 90% of its security data.

The remaining 30% of their time is focused on mitigation and reporting of the incident. These last two steps are the most important for learning from an incident and being better prepared for a future incident — yet most teams simply do not have the time or infrastructure to properly follow through on them. Once the lengthy investigation process is concluded, the results of that investigation are retained as independent, isolated reports. The technical details of the security incident are not stored or structured in a way that allows for automated correlations and are often missing the organizational context. Even when enterprises are creating their own indicators, they are manually maintaining lists of malicious IPs or domains in spreadsheets or text files rather than feeding those insights back into the system to be applied to future threats.
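To make that last point concrete: indicators parked in a spreadsheet do nothing until an analyst remembers to consult them, while even a small amount of glue code can apply them to every new batch of events automatically. The Python sketch below is purely hypothetical (the file name, indicator store, and event fields are assumptions), but it shows the basic shape of feeding investigation results back into an automated check.

# Hypothetical sketch: persist indicators from closed investigations and
# automatically check new events against them, instead of keeping IOCs in a
# spreadsheet. The file name and event/indicator schema are assumptions.
import json
from pathlib import Path

STORE = Path("indicators.json")   # assumed local indicator store

def load_indicators():
    """Load the current set of known-bad IPs/domains."""
    return set(json.loads(STORE.read_text())) if STORE.exists() else set()

def record_indicators(new_iocs):
    """Fold indicators from a finished investigation back into the store."""
    STORE.write_text(json.dumps(sorted(load_indicators() | set(new_iocs)), indent=2))

def correlate(events, iocs):
    """Yield events whose source or destination matches a known indicator."""
    for event in events:   # each event is a dict, e.g. {"src": ..., "dst": ...}
        if event.get("src") in iocs or event.get("dst") in iocs:
            yield event

if __name__ == "__main__":
    # After an investigation closes, its findings go back into the store...
    record_indicators(["203.0.113.7", "badhost.example"])
    # ...and every future batch of events is checked against them automatically.
    sample_events = [{"src": "10.0.0.5", "dst": "203.0.113.7", "action": "allow"}]
    for hit in correlate(sample_events, load_indicators()):
        print("indicator match:", hit)

In a production pipeline the store would be a threat intelligence platform or SIEM lookup table rather than a JSON file, but the feedback loop is the same.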

It’s not enough to simply introduce automation. In order to extract the most benefit from automation, we need to holistically review incident response processes to find the areas where security analysts can engage in more critical thought and problem-solving.

Part of that means finding ways to automate the actual intelligence and do more of the analytical work in order to allow analysts to make quicker, more informed decisions. Beyond automating process elements, look for ways to automate correlation rules, historical analysis and coordinated communication between security devices. Intelligence automation will bring incident response to the next level. 


Liz Maida is instrumental in building and leading the company and its technology, which is founded on core elements of her graduate school research examining the application of graph theory to network interconnection. She was formerly a senior director at Akamai Technologies.

Article source: https://www.darkreading.com/vulnerabilities---threats/security-automation-time-to-start-thinking-more-strategically-/a/d-id/1330859?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hawaii Gov. couldn’t flag false missile alert on Twitter – didn’t know password

You’ve probably read about the ins and outs of that 38-minute lag between Hawaii’s false ballistics missile alert and Hawaii’s Emergency Management Agency’s (HI-EMA’s) “false alarm!” correction, right?

You may remember how HI-EMA said there wasn’t a system in place to correct the initial error, and how it had to “double back and work with the Federal Emergency Management Agency (FEMA) [to create the false alarm alert], and that’s what took time.” (Which, by the way, FEMA subsequently said was incorrect: states are authorized to cancel or retract warning messages on their own).

Well, here’s a brand-new raison d’être for the infamous 38 minutes, and it comes fresh from Hawaii Gov. David Ige. Namely, even though the governor knew it was a false alarm within two minutes of it being sent, he couldn’t update the public via Twitter because he didn’t know what his password was.

According to the Honolulu Star Advertiser, Ige was asked about that delay on Monday as he met with reporters after his State of the State address.

Well, see, here’s the thing, Ige said: he didn’t actually know how to log onto Twitter:

I was in the process of making calls to the leadership team both in Hawaii Emergency Management as well as others.

I have to confess that I don’t know my Twitter account log-ons and the passwords, so certainly that’s one of the changes that I’ve made. I’ve been putting that on my phone so that we can access the social media directly.

Yes, you definitely do want to access the social media directly when you’re in a position such as governor. Or, well, at least, somebody in the office should really know how to get into the account.

As the newspaper notes, a lot of politicians – and celebrities, for that matter – have staff who handle all that for their bosses by posting or tweeting on their behalf. Unfortunately, that often means that there are a lot of people sharing login credentials for very tempting accounts that hijackers love to target. A few years ago, Twitter came up with a tool, TweetDeck Teams, to enable teams to delegate different access levels to team mates for as long as they need it. Then, when they don’t, zip! You can take it away.

So, Governor Ige, if we can be so bold as to offer a bit of advice, that’s one tool you might want to consider, in conjunction with sharing access to your account with your staffers so as to avoid another situation like that 38 minutes.

The tool also makes it possible for anyone sharing an account to use Twitter’s two-factor authentication, or what it calls “login verification”.

That will send a one-time login code to a user’s phone that they need to enter in addition to a username and password. It’s another layer of protection against would-be account hijackers, since they’d need not only your login credentials but also your phone to take over your feed.
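Twitter has delivered these one-time codes by SMS and, more recently, via third-party authenticator apps; the general mechanism behind app-generated codes is TOTP (time-based one-time passwords). As a rough illustration only, and not a description of Twitter’s own implementation, the short Python sketch below uses the third-party pyotp library to show how a shared secret turns into short-lived codes that a server can verify.

# Minimal TOTP sketch using the third-party pyotp library (pip install pyotp).
# It illustrates the general one-time-code mechanism, not Twitter's own
# login verification service.
import pyotp

# A shared secret, established once when the user enrols an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)   # six-digit codes that rotate every 30 seconds

# What the user's authenticator app would display right now:
code = totp.now()
print("current code:", code)

# What the server does with the code the user types in, allowing a small
# amount of clock drift between client and server:
print("accepted:", totp.verify(code, valid_window=1))

The point is that the code changes every 30 seconds and is useless to an attacker who has only the password.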

When it comes to getting your Twitter password safely into your phone for easy access, password managers can come in handy. If you don’t already have one on your phone, you might want to take a look at our guide to getting started with LastPass, Keepass or with Smart Lock and iCloud Keychain.

Just please, promise that nobody in your office is going to jot down your Twitter login credentials on a sticky note. That one hasn’t worked out well for HI-EMA in the past!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AM-eg7773iU/

Serious ‘category one’ cyberattack not far off – warns security chief

This week, the head of Britain’s National Cyber Security Centre (NCSC), Ciaran Martin, said something rather alarming in a newspaper interview that generated plenty of headline heat – the UK has never suffered the most serious category one (C1) cyberattack but it is only a matter of time before it does.

I think it is a matter of when, not if and we will be fortunate to come to the end of the decade without having to trigger a category one attack.

It’s the sort of warning people would probably rather not think about but undoubtedly applies in any developed country.

For anyone unsure what a C1 cyberattack is, the NCSC puts it at the top of the following three-stage definition sent to Naked Security:

C1 – “National Emergency – an incident or threat which is causing or may cause serious damage including loss or disruption of critical systems or services.”

Interestingly, this includes not only attacks on critical systems such as power utilities but the democratic process, for example through disinformation, fake news and online voter fraud.

To date, only the US and France have suffered a C1 attack, in both cases involving alleged assaults by foreign nations on their national elections.

C2 – “A significant incident or threat requiring coordinated cross-government response.”

The best example of a C2 would be last year’s WannaCry attack, which disabled computers in enough NHS hospitals that operations had to be cancelled. Since it was founded, the NCSC has recorded 34 of these.

C3 – “Sophisticated network intrusion, cybercriminal campaign for financial gain, or the large scale posting of personal employee information.”

These attacks primarily target single companies, for example through large-scale ransomware or data breaches. To date, the NCSC knows of 762.

On closer inspection, warnings about such cyberattacks are not new if you read the NCSC’s annual report from last October or remember the pointed warnings about Russia’s alleged intentions only weeks later.

It’s more a question of emphasis – by drawing attention to the threat he’s spelling out reality with more urgency.

So if a C1 attack is pretty much a certainty, then the game is really about prediction. There is no point telling citizens of the UK (or those in any country for that matter) about a serious cyberattack after the event when the whole point is to boost preparedness.

The NCSC itself receives real-time reports from organisations via something called the Cyber Security Information Sharing Partnership (CiSP), but this requires registration.

Any C1 cyberattack on the UK would appear on its radar through this channel or from reports submitted by the public sector.

What then?

Although Naked Security understands there are no plans for it at the current time, one possibility is to use a threat warning index similar to that used by the UK and US to alert people to imminent terrorist attacks. In the case of international terrorism in the UK, this has been set at “severe” or higher for virtually all of its existence.

Implementing something similar for cyberattacks would be complex but, in some form, might be inevitable if we are to start taking Martin at his word.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Nyv81-_pc3g/

Apple’s Tim Cook doesn’t want his nephew on social media

Apple CEO Tim Cook is the latest high-tech bigwig to admit to plugging his nose at the overuse of technology.

In Cook’s case, we’re talking about social media, specifically as it applies to his nephew. Cook doesn’t have children himself, but he doesn’t want his nephew (who’s apparently around the age of 13) on it, he said last week at a school coding event.

As The Guardian reports, Cook was talking at the UK’s Harlow college, in Essex, which is one of 70 institutions across Europe that will use Apple’s Everyone Can Code curriculum: a year-long curriculum of coding learned through the use of games, lessons and interactive materials.

As The Guardian tells it, every student is given an iPad loaded chock-a-block with coding apps and tools that they use as teachers guide them through the concepts of coding. It might seem contradictory, but it was during Everyone Can Code at Harlow college that Cook said that overuse of technology can be a problem. The Guardian quoted him:

I don’t have a kid, but I have a nephew that I put some boundaries on. There are some things that I won’t allow; I don’t want them on a social network.

Technology shouldn’t dominate, he said, even in computer-aided courses such as graphic design:

I don’t believe in overuse [of technology]. I’m not a person that says we’ve achieved success if you’re using it all the time. I don’t subscribe to that at all.

There are still concepts that you want to talk about and understand. In a course on literature, do I think you should use technology a lot? Probably not.

As it is, a recent report from the British Children’s Commissioner that looked at social media use among 8- to 12-year-olds found that children aren’t getting enough guidance to cope with the emotional demands that social media puts on them.

Facebook, for its part, last month admitted that social media can be bad for you.

On the plus side, there are those boosts in self-affirmation you get from friends on social media when they like or comment on your content. On the negative side, those boosts are addictive. Facebook ex-president Sean Parker recently said Facebook’s creators all knew, at the dawn of the social network, that they were exploiting a “vulnerability in human psychology” – one that Facebook founders went ahead and exploited anyway.

There have also been multiple studies that have looked at the dark side of Facebook. Five themes emerged from one such study: managing inappropriate or annoying content, being tethered to Facebook, perceived lack of privacy and control, social comparison and jealousy, and relationship tension.

We’ve been witnessing quite a bit of recoil from social media creators as they behold the creature that they’ve created. There was Chamath Palihapitiya, former vice-president of user growth at Facebook, who said that he regrets his part in building tools that destroy “the social fabric of how society works.”

The short-term, dopamine-driven feedback loops that we have created are destroying how society works. No civil discourse, no cooperation, misinformation, mistruth.

Other ex-Facebookers who’ve stepped back to question the repercussions of what they’ve created include Facebook “like” button co-creator Justin Rosenstein and former Facebook product manager Leah Pearlman, who have both implemented measures to curb their social media dependence.

It’s not just Facebook; in the midst of the current analysis of fake news, Snapchat CEO Evan Spiegel, for one, has also been doing some introspection.

Mark Zuckerberg even devoted his yearly personal challenge to “fixing” Facebook, afflicted as it is by abuse and hate, nation states that use it as a propaganda tool, and the danger that it can turn users into passive, miserable couch potatoes.

But although Cook doesn’t want his nephew to get sucked into the social media maw, he’s gung-ho about getting youngsters to learn code – as in, forget the Parlez-vous français? classes. Coding is more universal, Cook said:

I think if you had to make a choice, it’s more important to learn coding than a foreign language. I know people who disagree with me on that. But coding is a global language; it’s the way you can converse with 7 billion people.

True, you can “converse” with people via code. You can communicate, after a fashion. But can you talk to them?

…or do you need to turn yet again to social media, and Google translate, for that?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2-PGyk72aXQ/