
Eight years’ worth of police evidence wiped out in ransomware attack

Police in the Texas town of Cockrell Hill have lost eight years’ worth of digital evidence after being hit by a ransomware attack in December and refusing to pay up.

According to a news release posted by local station WFAA, this attack came about the same way that so many do: somebody in the department clicked on an email that had been doctored to look like it was coming from a legitimate, department-issued email address. The email planted a virus that then corrupted all files on the server.

The FBI’s Cybercrimes unit and the police department’s IT support staff determined that the best way to scrub all remnants of the virus was to wipe the server of all affected files.

So that’s what they did: they destroyed all Microsoft Office documents – including Word and Excel files – as well as all bodycam video, some photos, some in-car video, and some police department surveillance video, dating back as early as 2009.

Cockrell Hill Police Chief Stephen M Barlag said in a letter sent to the Dallas County district attorney’s office that the department had tried to save digital evidence from criminal cases, but the lost material is gone for good.

Every attempt was made to recover any potential digital evidence in criminal cases, however if requests are made for said material and it has been lost, there is no chance of recovery or producing the material.

Cockrell Hill police don’t know how much digital data is lost, but Barlag stressed that they’ve still got hard copies of all documents and “the vast majority” of the videos and photographs on CD or DVD.

The digital data wasn’t being backed up automatically, Barlag said. Or rather, it was, but automatic backup didn’t kick in until after the server got infected, “so it just backed up infected files”. He added that of the lost files, “none of this was critical information”.

At least one defense attorney begs to differ. J Collin Beggs, a Dallas criminal defense lawyer, said: “Well, that depends on what side of the jail cell you’re sitting.”

Beggs has been asking for video evidence in a client’s case since the summer. The lost evidence came to light when Beggs questioned a police detective in court.

Why not just pay the ransom?

According to the department’s news release, the malware triggered a webpage that told police employees that their files were locked and that they’d get a decryption key if they forked over Bitcoins and transfer fees that amounted to nearly $4,000.

Don’t do it, said the FBI Cybercrimes unit: paying is no guarantee you’ll ever see that decryption key.

We were told by the FBI that paying doesn’t always get you your information back. They told us that some people whose files are infected pay, and they get their files back, but sometimes it doesn’t work. So we decided it was not worth it to pay, and potentially, not get anything back anyway.

This is all true, much to the chagrin, we’re sure, of the “honorable” ransomware disseminators. After all, they have a “brand” to protect. Most well-known ransomware brands have made sure you’ll get a key when you pay the ransom, in order to maintain a reputation that it’s worth paying up.

In fact, you could say that was what the CryptoLocker crew brought to the ransomware party. Before them, crooks rarely made money because they either got the crypto wrong or failed to deal with payment for, and delivery of, the key.

The “honor among thieves” reputation of ransomware crooks has been ruined recently by newcomers who either screw up the crypto, thus providing free recovery, or who ruin the recovery and fail to return the files after taking payment.

We’ve coined this “boneidleware”: wannabe ransomware thrown up by lazy crooks who take the money and run.

Police departments, just like the hospitals, colleges, TV stations and other organizations that have been victimized by ransomware, have had different reactions. Not all police departments have snubbed the call of the crooks who kidnapped their files, be they makers of ransomware or boneidleware.

For example, in November 2013, the Swansea, Massachusetts, police department paid CryptoLocker crooks $750 for a decryption key after being attacked.

Paying crooks ransom money rankles, says Sheriff Todd Brackett of Lincoln County, Maine, whose system was frozen in 2015: “My initial reaction was ‘No way!’ We are cops. We generally don’t pay ransoms.” After “48 long hours,” Brackett reluctantly paid, he told NBC News, with a big sigh.

Other police departments have held fast. In Durham, New Hampshire, the police chief refused to pay. The files were deleted. He was, however, able to recover most of them from a backup system.

The same goes for the Collinsville, Alabama, police department: the chief refused to pay when attacked in 2014. He never saw the files again.

It’s not an easy choice. Do we applaud cops for refusing to pay, even if it spoils some of the cases they’re working on? Even if this means that some criminals wind up going free, given that the evidence to convict has been wiped clean?

And what about chain of custody? Shouldn’t that evidence have been auto-backed up? Protected from modification or loss?

Those are, unfortunately, Monday morning quarterback questions. What’s more important is to ask them before any data gets locked up by crooks, whether by ransomware or other nasties.


(Paul Ducklin contributed to this report.)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Qu_JgvI4umc/

Draft of Trump’s cybersecurity plan emerges. Here’s what experts think

Editor’s note: This article will be updated as developments unfold.

President Donald Trump hadn’t yet signed it at the time of this writing, but details have emerged regarding his planned executive order on cybersecurity.

Speculation has increased in recent days as to what Trump will do, and he has certainly gotten plenty of advice from security practitioners. Now we have some more insight into his plans, in the form of this executive order draft, which was obtained by The Washington Post.

The executive order includes provisions to:

  • Have the US military review what schools are teaching students about cybersecurity
  • Consolidate responsibility for protecting the government by giving ultimate control to the White House Office of Management and Budget. (Note: every government agency is currently in charge of defending itself. This has proved problematic in recent years, because each agency has different procedures for its own networks instead of a more uniform program.)
  • Place blame for any network security incident squarely on the shoulders of the affected agency’s head.

“I will hold my cabinet secretaries and agency heads accountable, totally accountable for the cybersecurity of their organization,” Trump told reporters yesterday.

A review of all government networks

The draft order calls for a total review of the most critical vulnerabilities in US military, intelligence and civilian government computer networks. This would include examining networks of internet service providers, private-sector companies used by the government and data centers. The White House wants “initial recommendations” within 60 days of the order’s signing.

Meanwhile, the administration wants the Department of Education to start sharing information with the Department of Defense and the Department of Homeland Security on what children are learning about cybersecurity, math and computer science in general. The draft says the goal is “to understand the full scope of US efforts to educate and train the workforce of the future”.

Trump said yesterday that son-in-law and senior advisor Jared Kushner will lead the effort along with former New York Mayor Rudolph Giuliani and homeland security adviser Tom Bossert.

What security experts think

Naked Security reached out to security experts for their initial take on the draft order.

Mike Bailey, a senior Red Team engineer at one of the world’s largest banks, said the plan is very ambitious, particularly the part consolidating complete oversight into one group.

It seems like a great idea, but as most things go in the government sector, it will more than likely just cause strife and infighting between agencies. Long overdue is the need to work with the commercial and private world to secure our nation’s IT infrastructures. As everyone in the industry is aware, the private sector is far outpacing government efforts, so I applaud the recognition of the need to reach out and work together.

Bailey said that, as with most of the things this administration has done so far, the plan is grandiose and disruptive, but that it appears some serious thought was put into it and that it will “hopefully have a bit of teeth”.

Lawrence M Walsh, CEO and chief analyst at New York-based business strategy firm the 2112 Group, said his concern is that this latest push for better cybersecurity will turn into another money grab where government agencies throw cash to companies that are eager to sell a product.

“Previous iterations of this approach resulted in a lot of money being spent and little improvement in government security posture,” Walsh said, adding that security without a defined goal, standards and plan will almost always come up short of expectations.

At the time of writing, there was no word on when President Trump would sign the order.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oPcHKcaumiM/

News in brief: 2.5m gamers’ details stolen; Google chokes on NHS traffic; Facebook moves on fake news

Xbox and PlayStation gamers’ details stolen

Hard on the heels of the news that a game forum for The Witcher has suffered a breach comes the news – from the same source – that 2.5m Xbox and PlayStation gamers’ details have been stolen from another gaming forum.

Security researcher Troy Hunt reported via his Have I Been Pwned breach-tracking website that the breach of the ISO forums, which took place in 2015, has led to the exposure of personal details including email addresses, passwords and IP addresses.

As with the breach reported yesterday, users are wondering why it took so long for them to find out. We discussed why this happens last year, and meanwhile, our advice on what to do if your details are stolen stands: watch out for phishing attempts, change your password and security questions, and enable two-factor authentication.

Google mistakes NHS for a botnet

Google apparently mistook the NHS’s 1.5m users for a botnet on Wednesday, thanks to the volume of queries staff were pushing to the search engine, the Register has reported.

According to an email seen by the Register, the search engine was throwing up Captchas requiring users to confirm that they were in fact human users rather than a botnet: “Google is intermittently blocking access due to the amount of traffic from NHS Trusts nationally. This is causing Google to think it is suffering from a cyber-attack.”

NHS staff were advised to use Microsoft’s search engine, Bing, for the duration instead of Google.

Facebook tweaks newsfeed to beat fake news

As concern rises about the influence of “fake news”, Facebook has said it is tweaking its newsfeed algorithm “to show you more authentic and timely stories”.

Facebook says it will be prioritising stories based on new universal signals, such as whether a page sharing a story is trying to game the newsfeed by asking for likes, comments and shares, as well as signals such as whether a friend has commented on a story.

The tweaks to the algorithm will also temporarily boost stories that are getting a lot of engagement, so, as Facebook puts it, “if your favorite soccer team just won a game, we might show you posts about the game higher up in the newsfeed because people are talking about it more broadly on Facebook”.

Catch up with all of today’s stories on Naked Security

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/S5BNVhYT8i0/

Federal Reserve staffer caught mining bitcoins at work fined $5,000

After using a server at the Federal Reserve to mine virtual currency, planting remote-access software so he could keep an eye on the workings from the comfort of home, and then trying (unsuccessfully) to cover his tracks by remotely deleting the software, a communications analyst for the Federal Reserve’s Board of Governors in Washington wound up getting fired.

The ex-employee, Nicholas Berthaume, pleaded guilty in October to one misdemeanor count of unlawful conversion of government property – namely, for installing unauthorized software on the server.

On Friday, Berthaume was fined $5,000 and given a year of probation, according to a statement from the Federal Reserve’s Office of Inspector General (OIG).

According to the plea agreement, Berthaume installed bitcoin mining software on a server at the Fed’s board. Given the bitcoin network’s pseudonymity, the central bank hasn’t figured out how much he managed to rake in courtesy of the system’s computing power.

We don’t know how much electricity he used, but we do know that when it comes to mining bitcoins these days, the low-hanging fruit was all plucked long ago. It takes some serious kilowatt-hours to get at the high-hanging plums.
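To get a feel for why, here’s a minimal proof-of-work loop in Python – a toy illustration of the double-SHA-256 puzzle Bitcoin miners race to solve, with a made-up header and a difficulty far below the real network’s:

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose double-SHA-256 hash has `difficulty_bits` leading zero bits.

    A toy version of Bitcoin's proof of work: the real network's difficulty
    is vastly higher, which is why mining consumes so much electricity.
    """
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # a valid proof of work
        nonce += 1

# Even at a trivial 20-bit difficulty this takes about a million hashes
# on average; every extra bit doubles the expected work.
print(mine(b"example-header", 20))
```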

Fun bitcoin mining electrical fact: according to Bitcoin.com, the global bitcoin mining economy currently consumes nearly $500m in electrical and operational costs and is on track to consume as much electricity as Denmark by 2020.

That’s a lot of heat, but it’s being turned to virtuous green in at least some cases: warming homes and warehouses, say, or accelerating rum barrel aging.

So yes, Berthaume was undoubtedly heating up the server room. Compounding that environmental crime, of course, was the remote-access software installation.

While the Fed didn’t manage to figure out how much Berthaume made from his mining, it did manage to spot the fact that he tweaked security safeguards so that he could remotely access the server from home.

When confronted, Berthaume tried to deny it. Then, he remotely deleted the software to try to cover his tracks.

#Fail. Forensics confirmed that he was involved, and he was terminated. He ultimately confessed.

Fed inspector-general Mark Bialek said that no Board information was lost and that the Board has since tightened its security:

Berthaume’s actions did not result in a loss of Board information, and we have been informed that the Board has implemented security enhancements as a result of this incident. Additionally, Berthaume’s voluntary admission of guilt and his full cooperation were critical to the timely closure of this matter.

There was a similar case Down Under a few years back, at the Australian Broadcasting Corporation (ABC). An IT staffer got it into his head to slip a bit of bitcoin mining software on to the servers and was using the idle CPU cycles to generate the virtual currency.

Or, at least, that’s what he might have done, were the software not spotted within minutes. The notion got rattled out of his head, and, well, that’s about it. Specifically, his access to production systems was throttled, and he was put under “close supervision” by a manager.

Their punishments differed, but both cases of mining involved employees who decided to appropriate “unused” CPU cycles, regardless of issues around security, authority and operating cost.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RO_rWz8LakI/

Easing of security checks at remote Scottish airports ‘a risk’

Anyone travelling to Glasgow from one of three Scottish airports can now board the plane without baggage checks or a body search. And if their final destination is Glasgow, they may not be searched at any point in their journey.

Telegraph Travel reports that, while the ban on sharp objects, firearms and liquids in containers more than 100ml still applies, “passengers are only required to verbally confirm they are not carrying these items before boarding flights”.

The three airports – Barra (pictured above), Campbeltown and Tiree – are all small, remote airports run by HIAL (Highlands and Islands Airports Ltd). A HIAL spokesperson told Telegraph Travel that the new measures, which have been approved by the Civil Aviation Authority (CAA) and the Department for Transport (DfT), are “proportionate to the size of aircraft involved and number of passengers travelling”.

Putting Scotland at risk?

Airport workers’ union Prospect disagrees. Prospect negotiator David Avery is quoted in several articles as saying that this change is “unreasonable and disproportionate and puts staff and passengers at risk”.

Scotland’s Press and Journal reports that the union has pointed out that the flight path into Glasgow is close to a number of potential targets. These include the nuclear power facilities at Hunterston, the large oil terminal and facilities at Finnart on Loch Long, and Ministry of Defence establishments at Coulport, Faslane and Glen Douglas. Avery confirms: “Lowering security at Highlands and Islands airports could make these sites, and the airports themselves, far more likely to be potential targets.”

Why?

We asked the CAA for a comment on how they came to the conclusion that the new security measures were appropriate. They pointed us in the direction of the DfT, who responded with:

Following an in-depth review of the security measures at airports in the Scottish Highlands and Islands, passenger screening at airports at Campbeltown and Tiree will be streamlined for customers flying to Glasgow airport. All passengers taking connecting flights from Glasgow to other destinations will need to go through standard screening for UK airports. For security reasons, we are unable to comment further on the precise nature of the changes.

We’re still none the wiser.

The flights from these airports only carry a small number of passengers, many of whom are most likely regular local travellers and holidaymakers. And if you’ve ever visited any of the many remote Scottish airports, you’ll know that the experience there is very different from the hustle and bustle of mainland airports across the UK and indeed the world.

But an airplane, however small, is still an airplane … and would be a significant security threat in the hands of the wrong person.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oHiHPoH3qS4/

Fear not, Europe’s Privacy Shield is Trump-proof – ex-FTC bigwig

The transatlantic Privacy Shield data transfer agreement is not at risk from Trump’s executive actions, former FTC Commissioner Julie Brill has promised.

In an article on her law firm’s blog, Brill notes that the recent executive order (EO) from the Oval Office, which expressly limited privacy rights to US citizens only, does not impact the critical agreement between the European Union and the United States.

How come? Three reasons:

  1. The Privacy Act applies only to government databases, whereas the Privacy Shield covers corporate databases.
  2. No presidential Executive Order can override existing laws written by Congress – and Congress has already approved the Judicial Redress Act that grants EU citizens the right to use the US courts in the case of misuse of data.
  3. The other mechanism set up to make the Privacy Shield work legally – an Ombudsman that will look into any requests from Europe about access to data by the US government – remains in place.

Brill played an active role in developing the Privacy Shield with other US government agencies and their counterparts in the European Union, and so has as good an understanding of the law as anyone. The FTC is expected to act as a key enforcer of the agreement.

In arguing why the agreement still holds despite Trump’s actions, Brill and her coauthor Bret Cohen also mention another key component – the Attorney General’s designation of specific countries that are covered by the Judicial Redress Act.

That Act and the accompanying Attorney General list officially become law today, Wednesday February 1, 2017 – and the Trump Administration has done nothing to prevent or stymie what is now a legal reality.

And so the Privacy Shield is up and running, despite President Trump’s isolationist approach. And a good job too, since every large internet company, including Facebook and Google, is heavily reliant on it for a legal foundation on which to offer services outside the United States.

Not so fast

All that said, Brill and Cohen feel obliged to include some caveats – just as European Union officials did last week when they saw the text included in Trump’s Enhancing Public Safety in the Interior of the United States order.

“Going forward, it will be important to pay attention to European officials’ reaction to the EO,” they wrote. “It will also be important to watch how the EO may impact the Attorney General’s designations of countries covered under the Judicial Redress Act or countries that could receive such designation in the future.”

The EU made a similar statement: “We will continue to monitor the implementation of both instruments and are following closely any changes in the US that might have an effect on Europeans’ data protection rights.”

In other words, it is possible that President Trump’s pick for Attorney General, Jeff Sessions, could decide at a later date to revoke some countries’ – or the EU’s – designations under the Judicial Redress Act: a decision that would wreak immediate havoc on the Privacy Shield.

While Sessions appears to be more of a racist than a xenophobe, he has also proven to be fiercely loyal to Trump. The president has already made it plain that he is prepared to fire any Attorney General who does not agree to his executive orders, even if they doubt those orders’ legality.

To that end, government officials in both the US and Europe – as well as the management teams at every major online corporation – will be hoping that Donald Trump never hears about the Privacy Shield.

Not so fast a second time

That may still only be half the problem, however, as Lawfare’s Adam Klein and Carrie Cordero point out in another post on The Register.

The combination of the very old Privacy Act (written in 1974, since which time Europe has rewritten its privacy rules three times) and Trump’s wide executive order could see government agencies insist on access to European citizens’ personal data, having met a very low threshold of proof – a mere “risk to public safety” would be enough, and some agencies are likely to view that very broadly.

Trump’s order actively exhorts government agencies to share such information between themselves – and that could mean an individual’s personal details made available to huge numbers of government officials without any concern given to privacy laws.

One of the key aspects of the Privacy Act is that an individual’s consent has to be sought before personally identifiable material can be shared (subject to a few important exceptions). But if someone is deemed to be outside of that Act, their personal information can not only be readily shared, but the individual in question would not know about it.

In that sense, the value of an Ombudsman is questionable: if someone doesn’t know their personal data is being shared, how are they supposed to question it?

It is possible that the data protection authorities in Europe will take issue with this catch-22 situation when they carry out an annual audit of the new system in just under six months’ time.

Hopefully by then the Trump Administration will have been sufficiently persuaded not to write and sign executive orders without first running them through the machinery of government. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/01/former_ftc_com_brill_says_privacy_shield_not_impacted/

A New Mantra For Cybersecurity: ‘Simulate, Simulate, Simulate!’

What security teams can learn from the Apollo 13 space program, a global pandemic, and major infrastructure disruptions to identify their best responses to attacks.


Over the long holidays in December (and thanks to the massive California storms) I had the chance to re-watch some great movies, including Apollo 13 – one of my all-time favorites. Apollo 13 is well directed and has a great cast (including the amazing Gary Sinise), but most importantly, it features brilliant engineering.

For those who are not familiar with the story, Apollo 13 was the seventh manned mission in the US space program, and was intended to land on the moon. Apollo 13 launched on April 11, 1970 to little fanfare until, two days later, an oxygen tank exploded. The crew abandoned plans to land on the moon, and instead focused on the new objective of returning safely to Earth despite malfunctioning equipment, limited power, loss of heat, and lack of potable water.

Badge from the ill-fated Moon landing, 11-17 April 1970 (Image source: Shutterstock)

In April 2015, Lee Hutchinson wrote an Ars Technica article about Apollo 13, analyzing what went wrong with expert perspective from Apollo flight controller Sy Liebergot. It’s a geeky but enlightening piece covering everything you would ever want to know about oxygen tanks, lunar modules, command modules, flight parameters and Apollo 13. I encourage you to read it. The most poignant part of the article was this:

“The thing that saved Apollo 13 more than anything else was the fact that the controllers and the crew had both conducted hundreds—literally hundreds—of simulated missions. Each controller, plus that controller’s support staff, had finely detailed knowledge of the systems in their area of expertise, typically down to the circuit level. The Apollo crews, in addition to knowing their mission plans forward and backward, were all brilliant test pilots trained to remain calm in crisis (or “dynamic situations,” as they’re called). They trained to carry out difficult procedures even while under extreme emotional and physical stress… The NASA mindset of simulate, simulate, simulate meant that when things did go wrong, even something of the magnitude of the Apollo 13 explosion, there was always some kind of contingency plan worked out in advance.”

In other words, simulations identify gaps and prepare teams for when sh*t hits the fan.

This is not just limited to NASA. In the fall of 2002, Congress mandated that the National Infrastructure Simulation and Analysis Center, or NISAC (officially founded in 1999 as a collaboration between two national laboratories, Sandia and Los Alamos), model disruptions to infrastructure – fuel supply lines, the electrical grid, food supply chains and more. After 9/11, Congress wanted to understand the impact of infrastructure disruptions – how much they might cost, how many lives would be lost, and how the government would respond.

In 2005, when the nation and the world were experiencing the bird flu crisis, NISAC was asked to simulate what a global pandemic would look like, and how best to respond. Based on simulations of complex economic, cultural, and geographic systems, a Sandia scientist named Robert Glass theorized that a pandemic like the bird flu “exhibits many similarities to that of a forest fire: You catch it from your neighbors.” He demonstrated that high school students would be the biggest transmitters, and recommended that thinning out contact among school-age kids by closing schools (rather than closing borders) would be a better way to prevent the pandemic from spreading.
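The “forest fire” intuition is easy to see in a toy model. The sketch below is our own illustrative simplification, not NISAC’s actual simulator: infection hops between neighbouring cells on a grid, and an intervention that cuts the contact rate (a stand-in for closing schools) sharply shrinks the outbreak.

```python
import random

def simulate(grid_size=40, p_infect=0.5, steps=30, seed=1):
    """Toy 'forest fire' contagion: each infected cell may infect each of its
    four neighbours once per step, then recovers. Returns total ever infected."""
    random.seed(seed)
    S, I, R = 0, 1, 2                      # susceptible / infected / recovered
    grid = [[S] * grid_size for _ in range(grid_size)]
    grid[grid_size // 2][grid_size // 2] = I
    for _ in range(steps):
        new_grid = [row[:] for row in grid]
        for y in range(grid_size):
            for x in range(grid_size):
                if grid[y][x] == I:
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < grid_size and 0 <= nx < grid_size
                                and grid[ny][nx] == S and random.random() < p_infect):
                            new_grid[ny][nx] = I
                    new_grid[y][x] = R     # infectious for one step only
        grid = new_grid
    return sum(cell != S for row in grid for cell in row)

# Cutting the contact rate shrinks the outbreak far more than proportionally:
print("high contact rate:", simulate(p_infect=0.5))
print("reduced contact  :", simulate(p_infect=0.2))
```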

This is what breach or adversary simulations allow you to do in cybersecurity as well. Breach simulation is an emerging technology that mimics hacker breach methods to gain the attacker’s perspective. Simulators placed in various security zones and on endpoints play continuous war games against each other to challenge security controls and identify enterprise risks. Unlike vulnerability management systems, breach simulations are safe (simulators only attack one another), focus on the complete breadth of hacker techniques instead of just vulnerabilities, and showcase the kill-chain impact.

Breach simulations may not help you address the thousands of alerts your SOC team has to resolve every day, but you’ll be able to strategically simulate what can occur in your environment, and identify the best option to respond to potential attackers. The benefit is that you can then choose the best possible compensating control to break the kill chain or stop the attackers in their tracks (just like NISAC and the flu pandemic).

For example, if you can’t stop users from clicking on links and thus prevent infiltration, you can compensate and prevent lateral movement via very stringent segmentation and access control policies. Over time, as you proactively identify gaps and challenge your people, technology and processes, you’ll be able to improve your overall security. This is a different mindset – the proactive and continuous versus the tactical and reactive.
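To make the compensating-control idea concrete, here’s a deliberately simplified sketch – not any vendor’s product – that models an attack as an ordered kill chain and reports which deployed control, if any, breaks it:

```python
# Hypothetical, simplified breach-simulation check: the attack is modelled as
# an ordered kill chain, and each deployed control can block one stage.
KILL_CHAIN = ["phishing_email", "user_clicks_link", "malware_install",
              "lateral_movement", "data_exfiltration"]

# Pretend-deployed controls, mapped to the stage each one can block.
CONTROLS = {
    "lateral_movement": "strict network segmentation",
    "data_exfiltration": "egress filtering",
}

def simulate_breach(kill_chain, controls):
    """Walk the kill chain; report the first stage a control stops, if any."""
    for stage in kill_chain:
        if stage in controls:
            return f"attack stopped at '{stage}' by {controls[stage]}"
    return "attack succeeded: no control broke the kill chain"

# Nothing stops users clicking links, but segmentation compensates
# further down the chain:
print(simulate_breach(KILL_CHAIN, CONTROLS))
```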

As we start a new year and face another 365 days of never-ending cybersecurity headaches, consider the “simulate, simulate, simulate” mantra in your cybersecurity strategy. The only way we improve is by challenging ourselves and putting ourselves in the adversary’s shoes – let’s simulate our adversary and increase our probability of success.


Danelle is vice president of strategy at SafeBreach. She has more than 15 years of experience bringing new technologies to market. Prior to SafeBreach, Danelle led strategy and marketing at Adallom, a cloud security company acquired by Microsoft.

Article source: http://www.darkreading.com/threat-intelligence/a-new-mantra-for-cybersecurity-simulate-simulate-simulate!/a/d-id/1328033?_mc=RSS_DR_EDT

The Interconnected Nature Of International Cybercrime

How burgeoning hackers are honing their craft across language barriers, learning from top-tier cybercriminal ecosystems and forums of the Deep and Dark Web.


Flashpoint analysts monitoring a top-tier Russian hacking forum recently observed an actor who goes by the pseudonym “flokibot” developing a Trojan known as “Floki Bot.” While the malware uses source code from the ZeuS Trojan, the actor reinvented the initial dropper process injection to instead target point-of-sale (PoS) terminals. The Floki Bot Trojan is not only representative of the increasingly collaborative nature of cybercrime, it also illustrates the growing presence of “connectors” within the Deep and Dark Web.

Flashpoint defines “connectors” as individuals who interact on Deep and Dark Web forums maintained outside of their country of residence. These individuals make efforts to communicate outside their native language in order to obtain knowledge and tools and import them back to their native communities.

Flokibot is a prime example of a connector who brings capabilities from top-tier cybercriminal ecosystems to the burgeoning Brazilian underground. While flokibot is active on a number of top-tier Russian-hacking and English-language forums, the actor appears to use translation tools and/or intermediaries to communicate, and is most likely a native speaker of neither Russian nor English. In fact, the actor’s use of Portuguese, IP address, user-agent, and compromised victims all indicate that flokibot may be Brazilian.

“Connectors:” A Rising Underground Trend
While flokibot is one notable example, Flashpoint considers the presence of connectors to be a rising trend within the cybercriminal underground. This assessment is based upon a heuristic analysis of actors across several seemingly disparate Deep and Dark Web forums. The proliferation of open-source learning and translation tools has allowed burgeoning hackers and cybercriminals to communicate across language barriers on Deep and Dark Web forums from which more advanced malware development and tools have been known to emerge.

For those seeking to combat cybercrime, the rising prevalence of connectors is problematic in many ways. First, connectors appear to be contributing to an increase in the number of sophisticated malware samples surfacing from regions that have historically not been prone to cybercrime of this nature. In addition, connectors have also been known to perpetuate fraud schemes across international borders. Although these crimes may not necessarily require technical expertise, many do require physical or privileged access to the targeted institutions and can include insider threats, ATM skimmer installations, and bank drops.

Fraud has not only grown more common as a result, but the perceived profitability of certain fraud schemes continues to attract newer, less-experienced actors eager to capitalize on cybercrime and learn from others within Deep and Dark Web forums. While many would anticipate the profitability of these fraud schemes to consequently decrease, the collaboration of connectors is instead driving innovation. As flokibot has illustrated, cybercriminals are collaborating across regions, advancing their skills, and adapting new techniques to victimize and capitalize on larger populations.

The Growing Pool of Victims
While the expansion of Internet infrastructure in developing countries has indeed spawned connectors, it has also created a larger, more vulnerable population of potential victims who may be less aware of common fraud schemes. The growing number of Internet-connected users relying upon the virtualization of commercial activities – such as banking and commerce – has rendered even more individuals susceptible to phishing and other cybercriminal schemes. Although many countries have begun enforcing strict legislation to combat cybercrime, many others – particularly developing countries – have yet to implement effective controls. However, as Internet users and government agencies become increasingly aware of common fraud tactics, connectors will likely look externally to develop new skills for launching different types of cybercriminal schemes.

Above all else, it’s crucial to recognize that the presence of “connectors” on the Deep and Dark Web is steadily growing larger and more influential. Cybercrime’s profitability will keep attracting new entrants into communities outside of their native language and nationality. Additionally, sophisticated actors will continue searching for partners to help them perpetrate increasingly advanced fraud schemes and penetrate new markets. In order to both deter and mitigate the risks associated with connectors and cybercrime, intelligence professionals, security teams, and law enforcement officials alike must be agile and proactive in monitoring the cybercriminal landscape. Otherwise, connectors will continue to evade detection, exert a substantial influence over the Deep and Dark Web ecosystem, and exacerbate the risks of future cybercrime. 


Ian W. Gray is a cyber intelligence analyst at Flashpoint, where he focuses on producing strategic and business risk intelligence reports on emerging cybercrime and hacktivist threats. Ian is also a military reservist with extensive knowledge of the maritime domain.

Article source: http://www.darkreading.com/cloud/the-interconnected-nature-of-international-cybercrime/a/d-id/1328031?_mc=RSS_DR_EDT

Facebook takes steps to boost password recovery security

Facebook has announced a new technology it believes could overhaul the insecure mess of recovering an account where the password or user credential has been forgotten or compromised.

Called Delegated Recovery, the principle behind it is simple enough: use Facebook (or another trusted provider) as the re-authentication mechanism.

As anyone who has ever done it will know, current methods for regaining access to an account, for example resetting a password, don’t inspire much confidence.

Common mechanisms involve sending a password reset link to a registered email address, answering a recovery question, or simply being prompted by a password hint. Each has widely acknowledged weaknesses, starting with the glaring insecurity of email as a channel for initiating a reset and the laughable ease with which security questions can be guessed.
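For contrast, here’s roughly what a sane reset-token scheme looks like – a minimal sketch of common practice, not any particular site’s implementation: a high-entropy, single-use token, stored server-side only as a hash, with a short expiry.

```python
import hashlib, secrets, time

TOKEN_TTL = 15 * 60   # tokens expire after 15 minutes
_pending = {}         # token_hash -> (user_id, expiry); stand-in for a DB table

def issue_reset_token(user_id: str) -> str:
    """Create a single-use reset token; store only its hash server-side."""
    token = secrets.token_urlsafe(32)                      # ~256 bits of entropy
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    _pending[token_hash] = (user_id, time.time() + TOKEN_TTL)
    return token   # emailed to the user; a database leak alone reveals nothing

def redeem_reset_token(token: str):
    """Return the user_id if the token is valid and unexpired, else None."""
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    entry = _pending.pop(token_hash, None)                 # pop => single use
    if entry is None:
        return None
    user_id, expiry = entry
    return user_id if time.time() < expiry else None

t = issue_reset_token("alice")
print(redeem_reset_token(t))   # 'alice'
print(redeem_reset_token(t))   # None: already used
```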

As Facebook’s Brad Hill reportedly said of an unnamed online bank account recovery system during his USENIX Enigma conference presentation on Delegated Recovery: “It asked me what my favourite colour was, and it let me guess as many times as I wanted.”

A decade ago it looked as if federated authentication – accessing different services using a single identity such as Google or Microsoft – might provide an answer but their uptake among consumer-oriented providers remains modest.

Delegated Recovery strips the problem back to a simpler level. In this design, Facebook is used to generate and store an encrypted recovery token for a given website. Should a credential such as a password be forgotten, a time-stamped token (countersigned by the provider’s private key) is sent to restore access after the user re-authenticates to Facebook.

This whole process should take seconds with a browser over HTTPS, Hill told the audience.
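As a much-simplified sketch of that flow – using HMAC as a stand-in for the public-key countersignatures of the real protocol; all names and fields here are illustrative, not Facebook’s actual API:

```python
import hashlib, hmac, json, os, time

# Illustrative shared secrets; the real protocol uses published public keys
# and signatures, not HMAC. Everything below is a made-up approximation.
SITE_KEY = os.urandom(32)       # held by the website (e.g. GitHub)
PROVIDER_KEY = os.urandom(32)   # held by the recovery provider (e.g. Facebook)

def sign(key: bytes, payload: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# 1. Setup: the site mints an opaque recovery token for the account, and the
#    provider stores it (encrypted at rest, in the real protocol).
token = json.dumps({"account": "user123", "issued": time.time()}).encode()
site_sig = sign(SITE_KEY, token)

# 2. Recovery: the user re-authenticates to the provider, which returns the
#    token with a time-stamped countersignature.
recovered_at = str(time.time()).encode()
provider_sig = sign(PROVIDER_KEY, token + recovered_at)

# 3. The site restores access only if both signatures verify.
ok = (hmac.compare_digest(site_sig, sign(SITE_KEY, token)) and
      hmac.compare_digest(provider_sig, sign(PROVIDER_KEY, token + recovered_at)))
print("recovery accepted" if ok else "recovery rejected")
```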

For now, Delegated Recovery has the catch that it is only available for GitHub users. Facebook hopes other websites and identity providers will start using the protocol in time, extending its usefulness.

The initiative serves as a reminder that there is more to password security than length, complexity or how often a password is re-used across different sites. Even the best passwords are vulnerable if the reset system is open to compromise.

That said, Delegated Recovery demands that Facebook users properly secure their accounts to avoid simply shifting the weakness. This can be done by turning on Facebook’s two-step “login approvals” verification in its security settings, which defaults to sending one-time SMS codes.

Alternatively, as of last week, more secure FIDO U2F tokens such as the YubiKey can be used on supported browsers such as Chrome. Unlike simpler two-step verification systems, these are true two-factor authentication (2FA) because they fully separate the first factor (something known, such as a password) from the second factor (something in the user’s possession).
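For a flavour of what second-factor one-time codes look like under the hood, here’s a minimal TOTP generator per RFC 6238 – the scheme used by most authenticator apps, which is related to but distinct from both SMS codes and U2F:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HMAC the current 30-second
    counter with a shared secret, then truncate to a short decimal code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; the code changes every 30 seconds
```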

We discussed the U2F approach in more detail yesterday – it’s good to see Facebook paying attention to improving security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jbnSIWC0AIc/

Why you shouldn’t trust baby health monitors

Consumer gadgets to track the health of your baby could be a waste of money, according to new reports from the US – and worse, they may either lead you to believe your child is in danger when he or she is fine, or offer false reassurance when there’s an actual problem. The issue appears to be that a handful of people are treating shop-bought trinkets as a substitute for proper medical assessment.

One case has emerged on the Philly.com site, in which parents took their child to the Children’s Hospital of Philadelphia because the child’s apnea monitor kept going off. The staff were anticipating readings from an industrial-strength apnea monitor; they were surprised when the gadget turned out to be a monitor in the child’s nappy. Happily, the child was fine.

And of course, there are many concerns around the security of devices that are connected to the internet, all the more so if it’s a device collecting the data of a child. Mark Noctor of IoT security company Arxan says:

Aside from the idea of a malicious stranger accessing their baby’s medical information being unnerving for any parent, it’s also worth remembering that medical records continue to be a lucrative item on the black market. The partial data delivered by a baby sensor may not be worth as much, but if the device is easy to attack it will still be an attractive target for financially motivated cyber criminals.

There is also the ongoing issue of poorly secured devices being co-opted for botnets. Says Noctor:

The Mirai botnet was able to infect millions of connected devices because so many are still using their factory default login information. It should be standard practice for all devices to force users to change their details when they set the device up. Until this is enforced – by law if necessary – it will be child’s play for even an inexperienced attacker to access and manipulate most devices.

One of the doctors involved has co-written an article in the Journal of the American Medical Association pointing out that devices like this are not checked by the Food and Drug Administration (FDA) for safety or efficacy. They are available in forms ranging from inserts for socks or onesies to nappy attachments and a great many other variants, and they communicate with a smartphone app, sending alerts when such a thing is deemed necessary. They are not, however, clinically approved.

The manufacturer behind the widget in the Philly.com story, Owlet, pointed out in a statement that it has a device going through medical accreditation right now. In addition, it pointed out:

Due to innovations developed by Owlet to lessen false alarms, many users will use the Owlet Sock for several months without ever getting a false alarm, greatly reducing the risk of over-diagnosis. Additional product enhancements and features include use of wireless technology to eliminate cords as well as a smartphone connectivity integration that fits parents’ lifestyle.

It has also undergone other safety testing to bring it to American standards.

We’ve been here before. Smart watches offering heart-rate monitoring can be a useful guide but are no substitute for clinically calibrated equipment. Ditto home blood pressure testing gear. Indeed, Owlet points to the fact that it uses the same technology as the Apple Watch and Fitbit as a badge of honour.

These gadgets are fine for a rough idea but not for a formal diagnosis. Trust your child’s health to one of them and take no other input and you’re bound to be in trouble. The question is how many false positives will they throw up (Owlet says you can go “months” without a false alarm, which isn’t all that reassuring) – and worse, if they can err in one direction, will they also fail to detect occasionally when there is actually an issue?

There’s a message in here related to the one about using satnavs for road safety rather than watching the road, or social media apps for social life instead of talking to actual people; a connected device is a useful addition to, but not a substitute for, human experience and scrutiny. Whether connecting your child’s health data, flawed or otherwise, to an app is the most secure thing to do is yet another debate.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SjkWQFjlUuI/