STE WILLIAMS

Microsoft Friday false positive: Bluber-A ballsup makes sysadmins blub

Enterprises were faced with all sorts of inconvenience on Friday after a Microsoft security tool incorrectly flagged up benign files as infected with a worm.

Microsoft Defender’s false positive resulted in false alarms that files were infected by Bluber-A, a previously obscure cyber-pathogen. Redmond’s security gnomes reacted quickly by pulling the rogue definition file and pushing out a fresh update, as explained in a note attached to the Bluber write-up.

On March 31, 2017, an incorrect detection for our cloud-based protection for Worm:Win32/Bluber.A was identified and immediately fixed. To ensure that this issue is remediated, you can do a forced daily update to download your Microsoft antimalware and antispyware software. The fix has been deployed in signature build 1.239.530.0 on March 31, 2017, 2:50 PM PDT.

False positives are a well-known Achilles’ heel of security scanner packages. All vendors experience the problem from time to time. Microsoft – as the creator of Windows – ought to be better placed than most to avoid such pratfalls, but the latest incident is far from unprecedented (previous examples here and here). Redmond responded quickly but still not promptly enough for one Reg reader who got roped in to deal with the problem last Friday.

“Friday afternoon wasted [on] this unnecessary crap,” reader Michelle told El Reg. “Thought we had a vicious worm spreading throughout the organisation at high speed.  Turns out that we did… Microsoft updates. :-(“

The issue also generated an animated thread on Reddit. We asked Microsoft to comment on the snafu but were told that this would have to come from its US office. We’ll update this story as and when we hear more. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/03/ms_defender_bluber_false_alarm/

Power plant cyber threat: Lock up your ICSs and SCADAs

Nuclear power stations have been told to tighten their defences after government officials warned of a “credible” cyber threat.

Intel agencies are warning that terrorists, foreign spies and hacktivists are all looking to exploit “vulnerabilities” in the nuclear industry’s internet defences, The Telegraph reports. Security bugs in SCADA systems and associated computer networks are becoming increasingly commonplace. Exploiting them successfully is certainly possible, but far from trivial.

Energy minister Jesse Norman told the paper that civil nuclear strategy published in February already provides guidance about protecting against cyber threats.

John Bambenek, threat intelligence manager at Fidelis Cybersecurity, said that the call for increased vigilance made sense, adding that there was no need to press the panic button.

“It should be noted that the reports suggest terrorist groups want to develop capabilities to attack energy and nuclear facilities, but do not yet have that ability,” Bambenek said. “However, that doesn’t mean vigilance isn’t due. Utility operators need to ensure that their critical systems do not have direct internet access and controls are in place so that no one system could cause a catastrophic outage.

“The power outages in Ukraine – that have been attributed to the Russian government – show us that even commodity tools can be used against critical infrastructure to great effect. Operators need to ensure their safety testing includes scenarios where there are machines controlled by adversary powers to ensure controls still protect against failures. In addition, what is almost more important than monitoring inbound network traffic is monitoring outbound traffic which often yields more valuable intelligence on potentially compromised devices inside a utility company.”

Peter Carlisle, VP EMEA at Thales e-Security, added: “Cyberattacks against critical national infrastructure are set to increase dramatically as criminals develop increasingly heinous methods to jeopardise Britain’s national security.

“From power stations to the transport network, the risk to the public remains severe, especially if hackers are able to gain access to electronic systems.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/03/power_plant_cyber_threat_warning/

Tax Deadline Leads to Heightened Phishing Email Activities

IRS warns tax professionals to watch out for phishing email scams attempting to steal user credentials.

As the April 18 tax deadline nears, identity thieves have stepped up their attempts to steal taxpayer data, and authorities are urging tax professionals to be on their guard. The Internal Revenue Service (IRS), state tax agencies and the tax industry have warned of phishing email scams that try to steal usernames and passwords for IRS e-Services.

The IRS has warned of scam emails “signed” by “IRS gov e-Services” and using subject lines such as “e-Service Account is Blocked” and “Your Account is Closed” that lure unsuspecting victims into opening a link and landing on a fake e-Services login page, where scammers harvest their credentials.
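As a rough illustration, a mail filter could flag messages that combine the subject lines the IRS cited with a sender outside the irs.gov domain. The pattern list and helper below are purely illustrative sketches, not an IRS-published rule set:

```python
import re

# Subject lines the IRS flagged in this scam wave, plus one generic
# account-threat pattern (this list is illustrative, not exhaustive).
SUSPICIOUS_SUBJECTS = [
    r"e-services? account is blocked",
    r"your account is closed",
    r"account (?:suspended|locked|verification)",
]

def looks_like_eservices_phish(subject: str, sender: str) -> bool:
    """Flag messages that mimic IRS e-Services account warnings."""
    subject_hit = any(re.search(p, subject, re.IGNORECASE)
                      for p in SUSPICIOUS_SUBJECTS)
    # Legitimate IRS mail comes from the irs.gov domain; "IRS gov
    # e-Services" signatures from other domains are a red flag.
    spoofed_sender = not sender.lower().endswith("@irs.gov")
    return subject_hit and spoofed_sender
```

A heuristic like this only narrows the funnel; the IRS guidance in the article (contact the e-Services Help Desk directly) remains the real safeguard.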

The IRS recommends contacting the e-Services Help Desk for any information related to account closure and to visit the IRS site for clarity.

More information here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: http://www.darkreading.com/endpoint/tax-deadline-leads-to-heightened-phishing-email-activities/d/d-id/1328541?_mc=RSS_DR_EDT

Georgia Brothers Jailed for $540,000 Corporate Fraud

The two misused corporate registration information to order electronics from small businesses.

A US District Court has handed two Georgia residents prison terms for engaging in a corporate fraud scheme that cost the victims over $500,000 in losses. Antonio Sandridge was sent to prison for 27 months and his brother Rodney for 42 months, and both were also ordered to pay fines.

A US Department of Justice (DoJ) press release says the two men filed new registration details for existing companies on the Georgia Secretary of State’s site, and used the new information to order electronic equipment from victim companies using false credit applications. The brothers received the goods at an address they supplied and resold them without paying the senders, small businesses that reportedly suffered significant losses.

The culprits cheated 16 companies of over $540,000 worth of computer equipment between 2012 and 2014, says DoJ.

DoJ adds that the Sandridges are repeat offenders who were jailed in 2006 for a similar scheme. 

Click here for more.


Article source: http://www.darkreading.com/attacks-breaches/georgia-brothers-jailed-for-$540000-corporate-fraud-/d/d-id/1328540?_mc=RSS_DR_EDT

More than Half of Security Pros Rarely Change their Social Network Passwords

Survey finds IT security professionals don’t practice what they preach at work when it comes to their social network passwords.

Some security professionals apparently find it tough to maintain safe password practices outside of work, with 53% acknowledging that they either haven’t changed their social network passwords in more than a year – or at all, according to a report released today by security firm Thycotic.

According to the survey of nearly 300 security professionals conducted at the RSA Conference in San Francisco in February, 33% of security pros say they have not changed their social network passwords in more than one year, and 20% have never changed their passwords. On top of that, nearly 30% of survey participants rely on birthdays, addresses, pet names, and children’s names for their social network passwords, the survey found.

These practices run counter to the industry’s oft-touted mantra of changing passwords frequently and making them as complex as possible. Needless to say, failure to follow these practices can potentially allow cybercriminals not only to infiltrate the social networks of security pros but also to social-engineer or phish their way into their work accounts.
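For illustration, a password that satisfies the "as complex as possible" advice, rather than a birthday or pet name, can be produced with Python's standard-library `secrets` module. This is a generic sketch, not a recommendation from the Thycotic report:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one character from each class so the result
        # satisfies typical complexity rules; redraw otherwise.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

`secrets` draws from the OS cryptographic random source, unlike the `random` module, which is not suitable for credentials.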

Although 45% of survey respondents believe that at least half of company-related cyberattacks involve privileged passwords, Joseph Carson, Thycotic’s chief security scientist, tells Dark Reading he personally believes the figure is closer to 63% based on his digital forensics research and ethical hacking.

Of that estimated 63% of breaches involving privileged passwords, Carson says roughly 30% involve IT administrators’ passwords and 10% involve someone with some responsibility in security.

“Although 10% may not seem like a high figure, the biggest cost to a company financially will be from this 10% because of the privileges they hold,” Carson says. “The difference between a security breach and a security catastrophe comes down to the level of authorization that the person had.”

Do What I Say, Not as I Do

To understand why security professionals don’t always practice what they preach when it comes to protecting passwords outside of work requires some insight into the particular challenges they face.

Typically, security pros are aware of the dangers of relying on a single password across accounts and will have a separate password for each account they hold, both work-related and personal. In Carson’s case, he has over 400 personal and work-related accounts, each with its own password.

To help manage those hundreds of passwords, Carson says he uses password management tools such as password vaults. But the vast majority of his fellow IT security professionals do not. He noted that in a benchmark survey of more than 1,000 security professionals taken over a year ago, only 10% to 20% of participants indicated they used a password vault or other password management tools.
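Under the hood, password vaults typically derive their encryption key from the user's master password with a slow key-derivation function, so the master password itself is never stored. A minimal sketch using the standard library's PBKDF2; the function name, salt size, and iteration count here are illustrative choices, not any particular vendor's parameters:

```python
import hashlib
import os
from typing import Optional, Tuple

def derive_vault_key(master_password: str,
                     salt: Optional[bytes] = None,
                     iterations: int = 600_000) -> Tuple[bytes, bytes]:
    """Derive a 32-byte vault encryption key via PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt, stored beside the vault
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                              salt, iterations)
    return key, salt
```

The high iteration count deliberately slows down offline guessing if the vault file is ever stolen.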

As a result, it may not be so surprising that security professionals find it hard to maintain the same level of vigilance with their personal accounts as they do with work-related accounts, he says.

“There are many known cases of data breaches from compromised credentials and passwords from security professionals resulting from malware and phishing scams delivered via social networks,” Carson says.

Morey Haber, vice president of technology at security firm BeyondTrust, says he is not surprised by the findings in Thycotic’s RSA survey.

“Most social media accounts require best practices for password complexity but falter when it comes to other security disciplines. For example, they fail to expire passwords after 90 days, require a reset, and allow browsers to ‘Remember Me’ for cached authentications for an infinite duration,” Haber says. “Since these additional security controls are what most people rely on to reset passwords on a periodic basis, I can only assume the transparent approach makes even the best security professionals lax for social media account password changes. I can only hope they follow at least best practices for password reuse, and each social media account has a different password in case one is compromised.”

He says while it’s rare for a breach of a security professional’s account to be attributed as the primary attack vector, the likelihood of their account being compromised due to Pass-the-Hash or other hacking techniques is higher if they log into a compromised system, access from an unsecured remote location, or have legacy accounts that have never had their passwords changed. “The longer a password goes stale, the more likely it will be compromised,” Haber says.
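Haber's "stale password" point can be expressed as a simple policy check; the 90-day threshold mirrors the expiry window he mentions, while the function and data shapes below are hypothetical:

```python
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)  # the common expiry window cited above

def stale_accounts(last_changed: dict, now: datetime) -> list:
    """Return account names whose password is older than the policy allows.

    `last_changed` maps account name -> datetime of last password change.
    """
    return sorted(name for name, changed in last_changed.items()
                  if now - changed > MAX_PASSWORD_AGE)
```

Running a check like this periodically surfaces exactly the legacy accounts Haber warns about, the ones "that have never had their passwords changed."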

Ironically, 25% of the Thycotic survey respondents say that they will change their password at work only when the system alerts them. Such an attitude may contribute to the more than 3 billion user credentials and passwords that were stolen in 2016, according to the Thycotic and Cybersecurity Ventures’ Password report.


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: http://www.darkreading.com/attacks-breaches/more-than-half-of-security-pros-rarely-change-their-social-network-passwords-/d/d-id/1328538?_mc=RSS_DR_EDT

Russian-Speaking APT Recycles Code Used in ’90s Cyberattacks Against US

Researchers discover connection between the Turla cyber espionage gang and a wave of attacks against US government agencies in the 1990s.

KASPERSKY SECURITY ANALYST SUMMIT 2017 – St. Maarten – Some security researchers have long suspected that the hacker group behind a wave of cyber espionage attacks in the mid- to late 1990s against NASA, the US military, the Department of Energy, universities, and other US government agencies is the very same group known as Turla, aka Venomous Bear, Uroburos, and Snake, an especially stealthy and innovative Russian-speaking attack team that has been active since 2007. There has been no solid technical evidence to make that connection – until now.

Researchers from Kaspersky Lab and King’s College London here today announced that they have been able to connect the dots between the Moonlight Maze attackers from the ’90s and the currently active Turla group, a cyber espionage team that, among other novel methods, hijacks unencrypted satellite links to help quietly exfiltrate data stolen from its victims. It appears the two groups may be one and the same, according to the researchers, which would make Turla/Moonlight Maze one of the longest-running attack groups alongside the Equation Group. They discovered that Turla has recycled and reused code it may have had in its arsenal all these years, employing a stealthy backdoor based on an open-source data-extraction tool – known today as Penquin Turla – that shares code with another backdoor the group used in the ’90s attack wave.

King’s College’s Thomas Rid, in his 2016 book “Rise of the Machines,” had already pointed out connections between the two generations of attacks, but the researchers decided to dig further and root out some technical proof. The team was able to obtain a valuable relic from the Moonlight Maze attacks: an old hijacked server that one of the UK victims had saved over the past two decades, since the FBI and US Department of Defense had found forensic evidence showing a link to Russian ISPs. Rid, his King’s College colleague Daniel Moore, and Kaspersky researchers Costin Raiu and Juan Andres Guerrero-Saade then spent nine months analyzing and studying logs and artifacts from the server for clues that could more definitively prove that the ’90s-era attack group lives on today as Turla. The attackers that infiltrated US government and research networks back then had used the server as a proxy. The server provided the researchers a snapshot in time: 1998-1999.

Moonlight Maze exploited open-source Unix tools to target Sun Solaris-based Unix servers, which were popular in those environments at the time. The researchers spotted ties between the Moonlight Maze backdoor, which was based on the open-source LOKI2 program that dates back to 1996, and Penquin Turla, a Linux-based backdoor tool Kaspersky researchers first found in 2014. They found something they hadn’t noticed when they first studied Penquin nearly three years ago: it, too, is based on LOKI2.
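Attribution work like this rests on finding shared code between samples. As a toy illustration only (not the analysis Kaspersky or King's College actually performed, which compared the LOKI2-derived code directly), overlap between two binaries can be roughly estimated with chunk-hash Jaccard similarity:

```python
import hashlib

def chunk_hashes(data: bytes, size: int = 64) -> set:
    """Hash fixed-size chunks of a binary into a set of fingerprints."""
    return {
        hashlib.sha256(data[i:i + size]).hexdigest()
        for i in range(0, len(data) - size + 1, size)
    }

def code_similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity of two samples' chunk-fingerprint sets."""
    ha, hb = chunk_hashes(a), chunk_hashes(b)
    if not ha and not hb:
        return 0.0
    return len(ha & hb) / len(ha | hb)
```

Real tooling (fuzzy hashing, function-level comparison) is far more robust to recompilation, but the principle is the same: reused code leaves matching fingerprints decades later.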

Kaspersky Lab as a policy does not identify cyber espionage groups. Guerrero-Saade, senior security researcher with Kaspersky, confirmed that the Turla gang’s artifacts feature Russian-speaking elements and Russian IPs connecting to the attacked machine, but declined to comment on whether Turla is a Russian state actor. “We found small Russian-language artifacts and connections to Russian IPs,” he says, adding that Moore concluded that the logs jibed with the Russian time zone.

Meanwhile, the researchers had plenty of logs to peruse and study from the old server, he says. “No one working on the incident [in the 1990s] ever got to see how it worked … We now have a comprehensive glimpse at how they were carrying out their operations,” Guerrero-Saade says. It wasn’t until 1999 that word of the FBI’s investigation into the attacks leaked publicly, but most of the information surrounding the attacks has remained classified. The FBI had destroyed much of the traces of the attacks as part of its standard procedure for evidence disposal.

Among the more interesting finds in the logs, according to Guerrero-Saade, was that Moonlight Maze had accidentally trained its own attack tools against itself multiple times. The attackers inadvertently infected their own machines with their sniffer and sent their own sniffed traffic to one of the servers. “This happened several instances,” he says. 

So Moonlight Maze inadvertently recorded its own live terminal sessions on its victims’ servers. That information ultimately got sent back to HRtest, the UK company’s old server that had been used by the attackers as a strategic relay system.

Guerrero-Saade says the team hopes to solicit help from other researchers to find further connections and clues to confirm that Moonlight Maze and Turla are one and the same. But so far, the new findings seem to back that up.

“If we are right – and I think we’re in the right direction – we’re talking about a 20-year-old threat actor,” Guerrero-Saade says. “That would put them in the league of titans, which was only filled by the Equation Group until now.”

But how times have changed for Moonlight Maze/Turla: “Moonlight Maze was trying to find its car keys in ’96,” he says of the group’s nascent phase. Flash forward to now, with Turla able to mask a decades-old backdoor as a new one that continues to mostly evade detection. “Watching the tool evolve and it becomes one of their favorites. So they start to strip it down and add other functionality … and it becomes a main part of their arsenal.”

Second Wave

Penquin Turla today is typically used in a second wave of attacks, using Unix servers as a channel for exfiltration. “I think there is a present-day security concern we need to address: how can it be that a 15-year-old backdoor is still capable of being effective on modern Linux systems?” Guerrero-Saade says.

Turla long has been recognized as one of the more sophisticated and stealthy attack groups. It’s constantly retooling its malware and file names, and other researchers have spotted other examples of this constant reinvention. Take Carbon, another backdoor from the Turla group. In the past three years since the creation of Carbon, researchers at ESET have identified eight active versions of this backdoor. Carbon – which Guerrero-Saade says is not related to the Penquin Turla backdoor – also has been in use by Turla for several years.

Jean-Ian Boutin, senior malware researcher at ESET, says Turla is unlike other Russian-speaking groups. “The tools they are making make more effort to stay under the radar. When information is published about them, they usually change their tactics, whereas APT 28 [aka Fancy Bear] stays on course” even if it’s outed, he notes. APT 28 is thought to be the Russian GRU, its main intelligence directorate.

Another MO with Turla appears to hint at a Moonlight Maze-Turla connection, too. Turla’s Carbon resembles another of its tools, the rootkit Uroburos – an older tool, according to Boutin. The two employ similar communications frameworks, with identical structures and virtual tables. The catch is, Carbon has fewer communications channels, so ESET believes it may be a light version of Uroburos, sans the kernel components and exploits. Like Kaspersky Lab, ESET doesn’t attribute attacks to specific organizations.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: http://www.darkreading.com/threat-intelligence/russian-speaking-apt-recycles-code-used-in-90s-cyberattacks-against-us/d/d-id/1328539?_mc=RSS_DR_EDT

To Attract and Retain Better Employees, Respect Their Data

A lack of privacy erodes trust that employees should have in management.

It’s the first day of your job and you’re filling in the I-9 ID verification form, scanning your passport, and completing a direct deposit sheet. This sounds great until someone realizes the human resources folder is open to the “Authenticated Users” group, meaning every employee and contractor in the company has easy access to your information.
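The misconfiguration described above is a Windows ACL problem ("Authenticated Users" granted read on the HR share). As a rough POSIX analogue, a short sketch can walk a directory tree and flag every file readable by all local users; the helper names are my own:

```python
import os
import stat

def is_world_readable(mode: int) -> bool:
    """True if a POSIX file mode grants read access to 'other' users."""
    return bool(mode & stat.S_IROTH)

def world_readable_files(root: str):
    """Yield paths under `root` readable by every local user."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if is_world_readable(os.stat(path).st_mode):
                yield path
```

On Windows the equivalent audit would enumerate ACLs for over-broad groups, but the idea is identical: find sensitive stores that anyone in the company can open.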

Employees tend to believe their data will remain private because they trust their employer to keep it safe — a faith that is sometimes misplaced. This US tax season, 130 organizations and counting fell victim to W-2 business email compromise scams in which employees were tricked into releasing personal information of other employees, affecting 120,000 taxpayers. This lack of privacy negates any trust that employees have in management to keep their personal data safe.

Employee Data Ends Up in the Darndest Places
Employees place trust in their employers as soon as they hand over their personally identifiable information — name, address, Social Security number, bank account information — and agree to background checks. This data should be considered extremely sensitive because in the wrong hands it can be used to harm employees in many ways, including identity theft, taking out credit, or filing false tax returns, as happened in the American Type Culture Collection W-2 leak this year.

Employees trust their employers to store data in a private and secure manner, but new employees typically provide sensitive information without asking, “How long will you hold on to this? Who will have access to it? Who will review that access? Will you know when something goes wrong?”

Just like transactional data, much of this information is stored in databases or corporate HR systems either on-premises or in the cloud. One mistake organizations make is that they fail to realize personal information often finds its way into files and emails — a PDF of a W-4, a driver’s license image saved in an email, or, worse, a spreadsheet with many employee records. These files are then stored among the millions of other files in file shares, SharePoint, and Exchange platforms on-premises and in the cloud.

These file and email stores were designed for easy collaboration but lack the security controls to protect sensitive information and meet regulatory compliance needs. Just as a new employee may not question her new employer’s security practices, in our eagerness to create and share information quickly, too few have questioned the adequacy of the controls surrounding our information.

Unfortunately, most organizations don’t actually know where all their employee data is stored or how it’s being used. A recent Forrester Consulting survey commissioned by Varonis found that only 41% of security professionals know where employee data is located and only 41% classify it based on its sensitivity. Just 45% audit all use of this data and analyze it for abuse, the same results we found in our 2017 RSA booth survey.

It’s worth noting that Forrester found only 38% of respondents enforce a least-privilege model against this data. This means that 62% of organizations expose their employee data to more individuals than need access, increasing the risk for misuse.

The Cost of Stolen Employee Data
In addition to the hard dollar costs associated with a breach, including cybersecurity insurance premium hikes, damages, and regulatory fines, there are other costs that are difficult to quantify, such as brand and reputational damage. One cost organizations may fail to consider is employee trust. Would you choose to work for a company that was in the headlines because its W-2s were breached over one that wasn’t?

Mitigating the Risks and Attracting Top Talent
Companies that can say “we don’t just claim to take our employee data seriously; here’s exactly how we do it” will have an advantage in the labor market to hire and retain the best employees. This is one way that an effective data security strategy can drive revenue and growth.

There are five key areas every organization needs to focus on when it comes to protecting all of its sensitive data.

  • Classification: It’s imperative that you know where your employee data resides so you can begin to restrict access and monitor for abuse. Most organizations find that manual tagging or classification efforts are insufficient. An automated classification system will look for potential sensitive data, including employee information found in HR documents.
  • Least privilege: Limit access using the principle of least privilege or “need-to-know” — who has a legitimate need to access that data today? To enforce a least-privilege model means to continually make sure that the list of people who have access need access. People’s roles change.
  • Monitor your data: Use of sensitive data must be monitored. It’s impossible to detect abuse and figure out who should have access if the asset isn’t being monitored. If I have access to employee data I never touch because it doesn’t apply to my job anymore, that’s relatively easy to identify based on my access behavior. Otherwise, you’re relying on someone to notice and mention that I don’t need that access, and I usually have other things to do. Monitoring data usage is also key to analyzing and alerting on abuse. Sophisticated user behavior analytics can discern when access is suspicious.
  • Retention policies: Data you don’t need any more is at risk for being stolen or misused. Almost every organization I’ve worked with has policies for retaining employee data and a system for enforcing those policies, including automatic reviews of stale data.
  • Employee training: No matter what controls and technologies you use, make sure that your employees understand the value of the assets they use. Any employee who comes into contact with potentially sensitive information must get training on the systems and controls that protect that data, how to make sure they’re enforced, and the risks associated with mishandling that data. Make it clear that you respect your employees’ data and your employees will know you respect them.
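The classification step above can be sketched as a regex scan for common PII shapes. Real classification products use far richer rules and validation; the two patterns here are illustrative only:

```python
import re

# Illustrative patterns for US PII commonly found loose in HR files.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # 123-45-6789
    "bank_routing": re.compile(r"\b\d{9}\b"),           # 9-digit ABA number
}

def classify_text(text: str) -> set:
    """Return the set of PII categories detected in a block of text."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}
```

Once files are tagged this way, the least-privilege and monitoring steps have a concrete target: restrict and watch exactly the files the scanner flags.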

[Check out the two-day Dark Reading Cybersecurity Crash Course at Interop ITX, May 15-16, where Dark Reading editors and some of the industry’s top cybersecurity experts will share the latest data security trends and best practices.]


Brian Vecci is a 19-year veteran of information technology and data security, including holding a CISSP certification. He has served in applications development, system architecture, project management, and business analyst roles in financial services, legal technology, and …

Article source: http://www.darkreading.com/endpoint/privacy/to-attract-and-retain-better-employees-respect-their-data/a/d-id/1328534?_mc=RSS_DR_EDT

Reactive to Proactive: 7 Principles Of Intelligence-Driven Defense

Black Hat Asia keynote speaker and Net Square CEO Saumil Shah says bug bounty programs and reactive security techniques aren’t enough to protect your business.

BLACK HAT ASIA – SINGAPORE – “Bugs are around, they’re going to be around forever. That’s fine,” admitted Net Square CEO Saumil Shah in his keynote “The Seven Axioms of Security” at Black Hat Asia 2017. This isn’t because all software is buggy, he noted, but because today’s technology is complex.

Shah described how each of today’s systems has a nearly infinite amount of space that cybercriminals can traverse with any manner of non-architectural means.

“If you think you’re going to catch them in all these combinations and permutations, you seriously need to rethink your battle,” he noted.

For more than a decade and a half, the industry has primarily used a reactive approach to security, Shah explained. Businesses tried to “buy back” bugs after exploits, which led to the creation of bug bounty programs. These have become so high-stakes they are starting to backfire, Shah said, using the term “bug purchase programs.” There’s no end to the cost when businesses are willing to pay millions.

“We wait for things to happen and then we react,” he said of today’s security teams. “The industry of defending has now become largely compliance-driven.” Products are marketed as solutions that reduce risk but in reality operate on three principles: rules, signatures, and updates. These are still reactive, said Shah, and they aren’t enough.

“Existing defense measures do not match hacker tactics anymore,” he continued. “Attackers don’t follow standards and certifications. They do whatever they please.”

It’s time for leaders to become less reactive, and more proactive, in their approach to security, Shah explained. This is no easy feat, he said, because businesses’ leaders rarely give security teams the budgets they need and usually don’t understand their priorities.

To point his listeners in the right direction, Shah illustrated his call to action with seven principles security teams should adopt with the goal of intelligence-driven defense in mind: 

  • The CISO’s job is to defend: CISOs should be defending and keeping systems clean. Compliance is not part of their role. Truthfully, says Shah, compliance takes up the majority of the CISO’s time. It would be more effective to split the role of the CISO into two positions: a security-focused officer who prioritizes defense, and a chief compliance officer who handles the cost of doing business.
  • Intelligence begins with data collection: “There is no price you can put on historical data,” said Shah. If data can be correlated, he explained, businesses should collect and save it. Start with a security data warehouse and gather data from sources of security intelligence. This may come from third-party vendors but ideally should come from the organization. “It’s time that organizations who have the muscle grow their own security in-house to suit the needs of their organization,” Shah emphasized. “No one product is going to fit the bill.”
  • Test realistically: Systems can exist in secure and hacked states at the same time, Shah explained. You’ll only know what’s going on if you test, and you should test systems under real-life circumstances.
  • Keep metrics: Make a list of what is quantifiable in your process and keep metrics for them. Metrics demonstrate success and failure; they can also justify budget. You need facts to defend your strategy, Shah explained.
  • Learn from users: We can’t apply the same security measures to all end users – who range from hopeless (those who tweet photos of their debit cards) to rock stars (those whom we can learn from) – Shah explained. Security leaders should identify users who are uninformed but willing to improve, and guide them to be more productive.
  • The best defense is unexpected: “Is your infosec team doing something creative every day?” Shah asked.
  • Progress should be visible: While defenses themselves should be unexpected to attackers, it’s important to make protective measures visible to the business. Improve users’ security knowledge and record money saved.

“If you can demonstrate money savings through defense, that’s money earned,” he emphasized. “It enables you to control your budget.”


Kelly is an associate editor for InformationWeek. She most recently reported on financial tech for Insurance Technology, before which she was a staff writer for InformationWeek and InformationWeek Education.

Article source: http://www.darkreading.com/reactive-to-proactive-7-principles-of-intelligence-driven-defense/d/d-id/1328542?_mc=RSS_DR_EDT

Twitter users hit out at confusing revamp of @ mentions

Here’s what sounds, at first blush, like a positive mini-move from the micro-blog-opolis: Twitter’s no longer counting people’s @usernames toward the character count when we reply to tweets, it said on Thursday.

No more discussions cluttered up with @usernames at the top of a discussion, Twitter enthused. Users will be able to carry on conversations with multiple people without worrying that account names will take up all the real estate allotted for their tweets, Twitter said.

For better or worse, of course!

Twitter founder Jack Dorsey explained in a tweet that the update offers “a cleaner focus on the text of a conversation instead of addressing syntax”.

That’s not how a fair number of Twitter users are seeing it. Rather, it strikes some as more of a bug. In fact, users have been complaining about the new system since it was beta tested in October, because it removes most of the names included in the conversation, making it tough to keep track of who’s involved in the discussion.

Plenty of users pointed out that this is emphatically not the type of improvement they want to see on Twitter. What they’d far prefer is deactivation of abusive accounts, the ability to edit tweets, and/or banning Nazis.

This is far from the first time that Twitter’s tried to distance itself from its 140-character Tweet limit.

Users have come up with their own workarounds: for example, “tweetstorms” break up longer thoughts into multiple consecutive tweets.

“Twitter canoes” – ongoing conversations – are another useful workaround. Until Twitter’s latest move, it was easy to drop somebody from the canoe if they forked off in a new direction that the original participants weren’t up for: all you had to do was delete their username from the text of a tweet.

Now? Those courtesy drops are buried three taps away. You’ll have to click the “reply to” button to see who else is in on a Twitter conversation.

That’s not a plus when it comes to Twitter’s persistent problems with abuse and trolling. What Twitter has done will make it harder for us to see if someone who’s been trolling us is in a reply chain before we respond to a tweet.

It’s like showing up at a party in a house where all the lights are out. Are you sure you want to hang out and chat with those people? It’s always helpful to know who’s actually there before you plunge in.

Beyond that, from the get-go, it’s tough to know that you’re even in a conversation at all, given that the “Others” label has been stripped out. In the header of a multiple-party tweet, you can see who’s quoted and who replied, but that’s it: anybody else who’s still on the canoe isn’t visible.

TechCrunch’s Matthew Panzarino, who was in on the beta test in October, posted an example of the new reply setup, if you want to get an idea of what it looks like.

He calls it a mess. Plenty of other users agree.

Panzarino’s suggestion: remove the usernames from the character count, but leave them in the text, where participants can see them, similar to how a photo can be seen in a tweet but doesn’t count toward the maximum character count.
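Panzarino’s proposal boils down to excluding the leading @usernames from the character budget while leaving them in the visible text. As a rough illustration of that counting rule (the `countable_length` function and its regex are our own sketch, not Twitter’s actual implementation):

```python
import re

def countable_length(tweet: str) -> int:
    """Length of a tweet with leading @mentions excluded from the count,
    much as an attached photo doesn't count toward the limit."""
    # Strip only the run of @usernames at the start of the tweet;
    # mentions mid-sentence still count as ordinary text.
    body = re.sub(r"^(@\w+\s+)+", "", tweet)
    return len(body)

tweet = "@alice @bob Lunch at noon?"
# The two mentions stay visible to participants but cost no characters:
# only "Lunch at noon?" counts against the 140-character limit.
```

Under this scheme the names remain in the tweet body, so readers can still see who is in the canoe at a glance.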

Twitter users, have you been fumbling around blindly in Twitter canoes since the change took place? We’d love to hear your thoughts, if you’d care to share in the comments section below.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wn4FpE-Vqdg/

Europe supplants US as biggest source of child abuse hubs

Europe now hosts the majority of child sex abuse images (60 per cent), pushing North America into second place (37 per cent), according to an annual report from the Internet Watch Foundation (IWF).

In contrast, the UK now hosts less than 0.1 per cent of child sexual abuse imagery globally, something the IWF credits to a “zero tolerance approach” by the UK internet industry.

The majority of all child sexual abuse URLs identified globally in 2016 (92 per cent) were hosted in five countries: the Netherlands (37 per cent), the United States (22 per cent), Canada (15 per cent), France (11 per cent), and Russia (7 per cent).

The biggest increase seen was in the Netherlands, which went from hosting 12,703 abuse URLs to 20,972.

Image hosting sites (72 per cent) and cyberlockers (11 per cent) were the most abused services. Social networks are among the least abused site types (1 per cent).

Susie Hargreaves OBE, IWF chief exec, said: “The shift of child sexual abuse imagery hosting to Europe shows a reversal from previous years. Criminals need to use good internet hosting services which offer speed, affordability, availability and access. Services which cost nothing, and allow people to remain anonymous, are attractive.”

She added: “The IWF offers a quick and effective system of self-regulation; we work with our Members to make the internet safer and we do this on the global stage.”

Criminals are increasingly using masking techniques to hide child sexual abuse images and videos on the internet. Clues for paedophiles on how to find this illicit content are left elsewhere, normally in internet forums.

In 2016, 1,572 websites were found to be using this method to hide child sexual abuse imagery, more than twice the 743 disguised websites identified in 2015. In 2013, just 353 websites were found that used this technique.

The IWF further reports that paedophiles are using newly released domain names to host child abuse content. Five top-level domains (.com .net .se .io .cc) accounted for 80 per cent of all webpages identified as containing child sexual abuse images and videos, the IWF reports. ®

* The IWF has a reporting tool here and also provides countries with a local, customised IWF Portal, which it says “provides a safe and anonymous way to send reports directly to our analysts in the UK”, and says people can email [email protected] for details.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/03/iwf_annual_report/