STE WILLIAMS

Attackers Add a New Spin to Old Scams

Scammers are finding new ways to abuse cloud services to make their attacks look more genuine, Netskope says.

Cybercriminals have begun abusing legitimate cloud services in new ways to sneak attacks past security controls and make their scams appear more convincing to intended victims.

Netskope on Monday said its researchers had observed a recent trend among attackers to send phishing emails and SMS messages with links to malicious sites and content hosted on cloud services such as Amazon Web Services (AWS), Microsoft Azure, Alibaba Cloud, and Google Docs.

The security vendor says it has seen the technique used to direct users to scam pharmacy, dating, and tech-support sites designed to steal personal information or blackmail victims. In other instances, attackers are abusing Google Docs to create and share presentations that contain malicious links.

The scams themselves are old and well-known. What’s new is the use of cloud services to host them.

For cybercriminals, cloud infrastructure services such as AWS and Azure provide a cheap, dynamic hosting option, says Abhinav Singh, threat researcher at Netskope. These services provide all of the native features of the Web, including Web hosting.

“Traditionally, these scams were hosted by registering new domain names or by compromising existing ones,” Singh says. “With cloud’s static object stores, you get access to creating an infinite number of domain names with just a few clicks.”

An object store on a cloud infrastructure-as-a-service (IaaS) platform is a system for storing and delivering content to users. Unlike block storage, object storage can be accessed directly over the Web or through APIs.

“When you create an object store, you give it a name, and users access the objects using that name,” Singh says. An AWS S3 storage bucket named “my-test-bucket,” for instance, would be accessible at my-test-bucket.s3.amazonaws.com, he says.

Hosting a malicious site or content in a cloud object store eliminates the need for attackers to keep registering domains as existing ones are discovered and blocked. Attackers can simply use a domain generation algorithm to create a new object store and URL whenever they need one, on a continuous basis, Singh says.
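Singh's point about registrar-free hostnames is easy to illustrate. The toy script below is purely illustrative (the `s3.amazonaws.com` host pattern simply follows the bucket-URL example above; this is not real attacker tooling): a few lines can mint an effectively endless supply of fresh, never-before-seen cloud hostnames.

```python
import secrets
import string

# Characters allowed in DNS labels and S3-style bucket names (simplified).
ALPHABET = string.ascii_lowercase + string.digits

def random_bucket_name(length=12):
    """Return a random bucket name of the sort a domain-generation
    algorithm could mint on demand, with no registrar involved."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def bucket_url(name, host="s3.amazonaws.com"):
    """Public hostname an S3-style object store assigns to a bucket."""
    return f"https://{name}.{host}"

# Each run yields a fresh hostname hosted on trusted cloud infrastructure.
name = random_bucket_name()
print(bucket_url(name))  # e.g. https://k3x9q2m7w1ab.s3.amazonaws.com
```

This is also why blocklisting individual hostnames is a losing game here: the defender's filter chases an unbounded namespace under a trusted parent domain.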

Because these malicious sites are hosted on trusted cloud service infrastructure, any links to them appear more genuine to users while also bypassing content filters. Exacerbating the situation is the fact that many cloud service providers appear unable to identify and take down fake and malicious sites quickly enough. “Based on our analysis of these scams, we have seen them to be alive and running even weeks after they went live,” Singh says.

Growing Trend
According to Netskope, attackers are currently using cloud services largely for hosting well-known and long-running scams targeting individuals. But the same techniques could easily be used to target enterprises that use services such as Google Drive, the security vendor said.

“We believe that the trend of abusing cloud services by attackers is only going to increase in the future,” Singh predicts. The use of cloud assets for cryptomining, malware hosting, and phishing is already common, he says.

Organizations need to start educating users about the dangers of blindly clicking on links to cloud services. They also need to put filters and policies in place for detecting and blocking cloud-based attacks, he says. Based on the change in tactics, “it is clear that attackers are trying to adapt to the changing landscape of the Web in order to victimize as many end users as possible,” Singh said.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/cloud/attackers-add-a-new-spin-to-old-scams-/d/d-id/1334626?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Trust the Stack, Not the People

A completely trusted stack lets the enterprise be confident that apps and data are treated and protected wherever they are.

With great power comes great responsibility. Just ask Spider-Man — or a 20-something system administrator running a multimillion-dollar IT environment. Enterprise IT infrastructures today are incredibly powerful tools. Highly dynamic and dangerously efficient, they enable what used to take weeks to now be accomplished — or destroyed — with a couple of mouse clicks.

In the hands of an attacker, abuse of this power can dent a company’s profits, reputation, brand — even threaten its survival. But even good actors with good intentions can make mistakes, with calamitous results. Bottom line: The combination of great power with human fallibility is a recipe for disaster. So, what’s an IT organization to do?

Answer: Trust the stack, not the people.

I’d love to take credit for coining this phrase, but it belongs to IBM Distinguished Engineer Jerry Denman, the company’s industry platforms chief cloud architect and vice president. Jerry used the term in a recent public forum to assure customers that IBM’s stack is built on a very trustworthy foundation.

To be clear, the stack here refers to the foundation of compute, network, and storage upon which developers build applications. When construction workers erect a skyscraper, they first build a deep foundation and frame of girders on which to hang the structure. That’s the stack. And the workers who add windows, walls, carpeted spaces, etc., are like the app developers. They shouldn’t have to give the stack a second thought. Its availability is a given.

Not all stacks are created equal. Those most deserving of your trust are built by seasoned security professionals and operations specialists who are intimately involved in the design and architecture of the system. The systems and processes they create — and then automate — are the result of extremely thoughtful consideration.

That said, it’s not even about trusting the people who have knowledge of and build the foundation. Rather, it’s about building trust into the foundation as best you can so that the developers and system administrators who manage that stack don’t have to … well, think too much! To use another analogy, it’s like driving a car. You don’t worry about how the suspension, internal combustion engine, or electric motor is working. All of those, including the safety mechanisms, just work. All you need to focus on is driving.

The Rolls-Royce of trustworthy stacks checks several key boxes. It offers unified, policy-based controls for multicloud infrastructures. Let’s break that down a little. Multicloud infrastructure — that is, infrastructure that spans public, private, and/or hybrid cloud environments — is the target. As I explained in a previous column, a security policy is simply what you decide a priori is the correct behavior versus what is wrong. The security controls for these multicloud infrastructures are based on policies that you’ve predetermined are “the right thing to do,” and you have unified them across those infrastructures.

But don’t all IT organizations use controls to secure their stack? Generally, yes. If they use just public clouds such as IBM Cloud or Amazon Web Services, they may have controls for that particular environment. More enlightened organizations might have policy-based controls. But policy-based controls that are unified across multicloud infrastructures? That is unique — and it makes for a truly trustworthy stack.

What are the benefits of protecting the stack with an automated policy, compliance, and reporting solution? Perhaps the most obvious is the ability to assure all parts of your business that there is little to no risk in putting any and all applications and data on said stack. In addition, knowing that the stack is secure allows you to focus on other mission-critical aspects of your infrastructure, such as data protection, data replication, application resiliency, and so forth.

Perhaps less obviously, when you trust the stack over the people running it, it frees you up to allow your most valuable assets — the people you trust — to work on strategic and more complicated problems. That’s because you can now assign the mundane tasks of running your virtual estate to more-junior or less-tenured admins, and in some cases even to outsourced help.

A stack that’s trusted completely allows the enterprise to have total confidence that apps and data are treated and protected regardless of where they are — be that in a VMware on-premises environment, in a VMware hybrid cloud, AWS, containers, or something else. With the right solution, you can ensure that the same security policies and measures are applied across your entire cloud estate while also gaining a correlated view into all administrator activity.

In the 2002 film of the same name, Spider-Man follows those famous words about great power and great responsibility with, “This is my gift, my curse.” But with the right solution — a completely trusted stack — your highly dynamic, securely automated and efficient IT infrastructure can be all gift, no curse.



John De Santis has operated at the bleeding edge of innovation and business transformation for over 30 years — with international and US-based experience at venture-backed technology start-ups as well as large global public companies. Today, he leads HyTrust.

Article source: https://www.darkreading.com/vulnerabilities---threats/trust-the-stack-not-the-people/a/d-id/1334560?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mozilla bug throws Tor Browser users into chaos

Update. Shortly after publishing this article we were able to fetch Firefox 66.0.4, which claims to fix this issue by repairing a broken certificate chain. We haven’t yet received notification of an update to the Tor Browser, but we expect to see one soon. [2019-05-05T22:15Z]

It’s a long weekend here in the UK, so the atmosphere is relaxed…

…except, we suspect, for any British members of the Mozilla Firefox programming squad.

Mozilla is currently stuck in the middle of a cybersecurity blunder involving digital signatures.

The bug reports we’ve seen so far don’t give much more detail than “expired intermediate certificate” problems, but the symptoms are obvious, especially for Tor users.

We didn’t get hit by this bug immediately – we were off the grid yesterday and left our computing kit at home. (Nothing Bear Gryllsy, you understand – we took ourselves off to Bristol on Brunel’s famous Great Western Railway to visit a bicycle show but left our mobile phone behind entirely by mistake.)

But today, not long after firing up the Tor Browser, which is a special version of Firefox with numerous privacy-centric settings turned on and baked into the build, we received a worrying popup warning.

According to the Tor Browser program, one of our browser add-ons could no longer be trusted and had been turned off – the alert didn’t say which one, just that some sort of cybersecurity concern had suddenly arisen.

We were online to look into a couple of untrusted sites, and we’d already started digging around when the warning popped up, which increased our sense of disquiet.

After all, we were already in the middle of various HTTP sessions; we’d been interacting with the sites we wanted to investigate; and we weren’t aware of having allowed those sites to install any new addons.

What had changed?

A trip to the special URL about:addons (Tools > Add-ons from the menu bar) and a click on the Unsupported tab revealed the following:

NoScript could not be verified for use in Tor Browser and has been disabled.

Ow! Ouch! Owowowowowowowowow!

NoScript is an important security addon that’s officially trusted by Tor, as well as being installed by millions of other regular browser users, including – to judge from comments on this site – a significant number of Naked Security readers.

Had the NoScript repository been hacked? Was a bogus NoScript addon circulating amongst the Tor community? Was there some sort of Firefox vulnerability that allowed rogue sites to sneak bogus addons into your browser without popping up any sort of “Are you sure” or “Do you want to do this” dialog?

We assumed that this was just the sort of cybertreachery that Mozilla’s 2016 “addon signing” feature was meant to catch, and so we took the warning seriously.

Back at Firefox version 44 in early 2016 (we’re currently at 66 – updates come out every 42 days, or 6 weeks), Mozilla decided to stop allowing unsigned addons in the browser, effectively appointing itself as the custodian of addons, in the same sort of way that Google decides who gets into the Play Store.

Two questions immediately came to mind:

  • What had caused this apparently hacked version of NoScript to show up now, and where had it come from?
  • Given the importance of NoScript in the Tor Browser’s default protections, was it still safe to have Tor open at all?

A quick search of NoScript’s own web page, plus a minute or two on the various social media channels, revealed the reason, but not the explanation:

NoScript hadn’t changed and its digital signature was still valid and unexpired…

…but Firefox no longer trusted it, and so Tor Browser wouldn’t (indeed, for most users, couldn’t) load it any more.

The bug is somewhere in Mozilla’s signature verification, not in the addon itself – and the bug seems to affect the validation of every addon in pretty much every version of Firefox.

Indeed, ten or fifteen minutes after Tor scared us, our running copy of Firefox decided that its addons were no longer safe and killed them too. (We only use one third-party addon, a screenshotting tool, but reports suggest that any and all addons you have will simply get killed off.)

What to do?

Mozilla has pushed out a temporary patch, referred to as a hotfix, but it only works if you have Mozilla’s Studies feature turned on.

Studies is a bit of a euphemism – what it really means is “let Mozilla collect data from your browser, as well as push out test code that’s not yet part of the main release.”

It’s turned on by default, but we – and probably many of our Firefox-using readers, too – have turned it off, on the grounds that the easiest way to ensure that the data that’s collected about you never leaks is simply not to let it get collected in the first place.

And there’s no way to get hotfixes or temporary patches delivered by means of Studies if you have Firefox’s data collection option turned off.

To check whether you have Studies activated, and to enable it in order to get the hotfix if you wish, go to Preferences > Privacy & Security > Firefox Data Collection and Use:

An interesting – though hardly unexpected – irony for Tor Browser users is that Studies is not just off by default in the Tor build, but actually omitted entirely on the grounds that Tor users never want to be tracked.

So Tor users can’t get the hotfix and need to turn off “addon signing” altogether instead.

Go to the special “don’t try this at home” page about:config, find the option xpinstall.signatures.required and flip it from true to false:
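For those comfortable editing their profile by hand, the same change can be made in a user.js file in the browser’s profile directory, which Firefox reads at startup. This is a sketch of an equivalent change, and the same caveat applies: revert it once the official fix lands.

```javascript
// user.js -- place in the Tor Browser (Firefox) profile directory.
// Equivalent to flipping xpinstall.signatures.required in about:config.
// Delete this line (or set it back to true) once the official fix ships.
user_pref("xpinstall.signatures.required", false);
```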

Here’s what the about:addons page will show, if you have the buggy version of Tor, with “addon signing” turned on (the default setting, above) and turned off (below):

Quick fix

In short:

  • If you use Tor Browser, turn off xpinstall.signatures.required temporarily. Remember to turn it back on when the official fix for this bug comes out.
  • If you use regular Firefox and want the hotfix, check whether you have Studies enabled. If you usually keep data collection off, remember to turn it back off when the official fix for this bug comes out.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/i4kl_OvXSy8/

UK taxman falls foul of GDPR, agrees to wipe 5 million voice recordings used to make biometric IDs

Her Majesty’s Revenue and Customs, aka the tax collector, has agreed to delete five million voice recordings it used to create biometric IDs.

The Voice IDs were used to speed access to its phone line but were created before the implementation of the European General Data Protection Regulation (GDPR) and fell foul of the tougher rules.

HMRC will keep about 1.5 million Voice IDs that are in use, but will delete around five million for which explicit consent was not received and whose owners have never used the system since creating the ID.


The Rev’s chief executive, Sir Jonathan Thompson KCB, said in a letter to his data controller:

“I have informed ICO that we have already started to delete all records where we do not hold explicit consent and will complete that work well before ICO’s 5 June 2019 deadline. These total around 5 million customers who enrolled in the Voice ID service before October 2018 and have not called us or used the service since to reconfirm their consent.”

HMRC followed several banks and other organisations in using a “my voice is my password” system to identify account holders. It will continue to use the system but in line with GDPR rules and its own published privacy policy.

Director of Big Brother Watch, Silkie Carlo, said in a statement:

“This is a massive success for Big Brother Watch, restoring data rights for millions of ordinary people around the country. To our knowledge, this is the biggest ever deletion of biometric IDs from a state-held database.

“This sets a vital precedent for biometrics collection and the database state, showing that campaigners and the ICO have real teeth and no Government department is above the law.”

Thompson said in his letter the Revenue will continue to use Voice ID because it is “popular with our customers, is a more secure way of protecting customer data, and enables us to get callers through to an adviser faster.”

The letter is available as a PDF from this page on the HMRC site. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/03/hmrc_bashed_for_5m_voice_slurp/

Mystery Git ransomware appears to blank commits, demands Bitcoin to rescue code

Programmers say they’ve been hit by ransomware that seemingly wipes their Git repositories’ commits and replaces them with a ransom note demanding Bitcoin.

An unusually high number of developers have griped online about the effects of the software nasty, with at least two reports seen by El Reg referencing the freeware Sourcetree GUI for Git, made by Atlassian. The repos affected are hosted across a number of platforms, from GitHub and GitLab to Bitbucket, so it’s likely the malware is targeting inadvertently poorly secured repositories rather than a particular vulnerability.

“So I was done fixing a bug tonight. I was using sourcetree to push the changes, as soon as I clicked the commit button my laptop freezed (it usually freezes so im not sure if it was due to malware or the usual one) and i immediately restarted it by long pressing the power button,” posted one person on Reddit.

The netizen added that the ransom note they received referenced gitsbackup[dot]com, and demanded about $560 in crypto-currency to un-fsck the repo.

Another posted on Stack Exchange: “One of my repos was wiped today and just a message left in its place with a bitcoin ransom. I’ve no idea how they accessed my account, can’t really see anything on github security page.”

The user added: “I’m at a bit of a loss just now as what to do, 2 factor has been turned on in github, the main server where the code was used. I’ve removed unused scripts etc changed passwords, currently building a new server droplet and moving everything as a precaution in case the server was accessed.”

A third, Stefan Gabos, wrote on Stack Exchange: “I was working on a project and suddenly all the commits disappeared and were replaced with a single text file.”

That file, consistently across all the posts seen by The Register, reads:

To recover your lost code and avoid leaking it: Send us 0.1 Bitcoin (BTC) to our Bitcoin address 1ES14c7qLb5CYhLMUekctxLgc1FV2Ti9DA and contact us by Email at admin[at]gitsbackup[dot]com with your Git login and a Proof of Payment. If you are unsure if we have your data, contact us and we will send you a proof. Your code is downloaded and backed up on our servers. If we dont receive your payment in the next 10 Days, we will make your code public or use them otherwise.

Gabos added that he was “using SourceTree but somehow I doubt that SourceTree is the issue, or that my system (Windows 10) was compromised. I’m not saying it’s not that, it’s just that I doubt it.” He told El Reg he is running the most recent version of Sourcetree (3.1.3), having updated today from the previous version. The changelog is here.

Gabos added on Stack Exchange that his code does not appear to have gone altogether, as accessing his commit’s hash had worked, concluding: “So the code is there but there’s something wrong with the HEAD.” He went on to note that git reflog “shows all my commits”, updating his post as he learned more in his quest to recover his commits. In an edit, he added:

What this means to me is that the attacker doesn’t have the code and there’s no threat of them going over the source code for sensitive data or of making the code public. It also means to me that is not a targeted attack but a random, bulk attack.
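Gabos’s situation (commits still reachable via the reflog, but HEAD pointing at a bogus commit) can be reproduced and repaired on a throwaway repository. The sketch below simulates the wipe locally and then recovers via the reflog. It assumes the attacker only moved the branch and the original objects were never garbage-collected; a wiped remote would additionally need a force push after local recovery.

```shell
# Sketch: simulate a "ransom wipe" of a repo, then undo it via the reflog.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you

echo "real work" > app.txt
git add app.txt
git commit -qm "real commit"

# Simulate the attack: point the branch at a single, parentless commit
# with an empty tree, so the real history *appears* to be gone.
empty_tree=$(git mktree < /dev/null)
ransom=$(git commit-tree -m "ransom note" "$empty_tree")
git reset --hard -q "$ransom"

# Recovery: the old commit still exists in the object store, and the
# reflog remembers where HEAD was before the wipe.
git reflog                          # lists the pre-wipe commit
good=$(git rev-parse 'HEAD@{1}')    # where HEAD pointed before the reset
git reset --hard -q "$good"
cat app.txt                         # back to "real work"
```

The reflog only exists locally, which is why victims whose working copies were untouched could recover even when the hosted repo was overwritten.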

Atlassian, maintainer of Sourcetree, had not responded to The Register‘s inquiries at the time of publication. See the updates on this post for instructions on how to recover your repos if they are wiped by the ransomware. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/03/git_ransomware_bitcoin/

White House issues Executive Order on cybersecurity, including hacker Hunger Games

A year after the White House eliminated the position of cybersecurity coordinator, President Donald Trump called for everyone else to do the opposite and push cybersecurity coordination through worker training and recruitment.

“America built the internet and shared it with the world; now we will do our part to secure and preserve cyberspace for future generations,” said Trump in a statement.

The Executive Order calls for supporting cyber workforce mobility between the private and public sectors, without addressing how that will be accomplished. It calls for more training opportunities, recognition of cybersecurity talent, and holding agency heads accountable for risk management.

It directs the Secretary of Homeland Security to create a cybersecurity job rotation program, so government IT security professionals have an opportunity to learn from and share knowledge with different agencies.

The order calls for the use of the National Initiative for Cybersecurity Education (NICE) and NIST’s Cybersecurity Workforce Framework to gauge the skills of industry practitioners and instructs the Director of the Office of Personnel Management (OPM) to compile a list of cybersecurity aptitude tests that agencies can use to evaluate practitioners.

There’s also to be a Workforce Report to evaluate and make recommendations about government cybersecurity goals and talent development.

This might not end well

Then there’s the Cup. The order demands a plan for an annual tournament, called the President’s Cup Cybersecurity Competition (PCCC), which will be open to government employees and armed service members.

“The goal of the competition shall be to identify, challenge, and reward the United States Government’s best cybersecurity practitioners and teams across offensive and defensive cybersecurity disciplines,” the Executive Order says.

There are to be individual and team events for various sorts of hacking, with cash awards of not less than $25,000. The first PCCC is to be held before the end of this year.

Katie Moussouris, founder and CEO of Luta Security, told The Register that the competition could be tricky to implement.

“From the experience running the BlueHat Prize competition for $250,000 in defensive research, we were forced by gaming law to restrict what we could consider based on the exact rules we published and didn’t get to see some of the entries as a result,” she said.

But Moussouris said overall the Executive Order is a good move, so long as it helps fill in the gaps where talent is scarce. Pointing to her Congressional testimony on the subject last year, she stressed the need for defense and maintenance.

“Our love affair and obsession with offense security skills can’t overtake our practical workforce needs to prevent as many issues as possible and create a workforce of secure builders and maintainers, not just bug hunters,” she said.


In a statement to The Register, Kevin Bocek, VP of security strategy and threat intelligence at security biz Venafi, said the Executive Order represents a positive step toward addressing cybersecurity threats. But he contends that acknowledging the need to address these threats isn’t enough.

“It’s especially noteworthy that this new directive concentrates on addressing the US federal government’s lack of competitiveness when attracting and retaining talent,” said Bocek. “If the government wants to recruit the greatest minds in cybersecurity, it must make sure our tools and technology are the best in the world and demonstrate their commitment to success by partnering with industry on key policy questions.”

For example, Bocek urged the Trump administration to adopt the advice of industry experts and commit to not supporting encryption backdoors in consumer technology.

In March, the US Navy issued its Cybersecurity Readiness Review, in which it warned that the Navy “is preparing to win some future kinetic battle, while it is losing the current global, counter-force, counter-value, cyber war.”

The Navy’s cyber SOS was ignored earlier this week when Vice President Mike Pence told Navy personnel aboard the aircraft carrier USS Harry S. Truman that the aging ship will not be mothballed in 2025, something the Navy proposed in its 2020 budget. It’s estimated that the Navy would have saved $20bn over the coming three decades by retiring the vessel.

Those funds could have gone toward cybersecurity or other more modern systems of interest to the Navy. Now at least Navy personnel will have some motivation to try for the PCCC prize money. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/03/executive_order_cybersecurity/

Open Security Tests Gain Momentum With More Lab Partners

NetSecOPEN, a group of next-generation firewall vendors, has added the first university-based testing facility in its effort to move toward more open security testing.

Open security testing received a boost with the announcement last week that the University of New Hampshire’s InterOperability Lab (UNH-IOL) would become the first university-based testing facility to work with NetSecOPEN, a vendor-based organization that aims to create an open framework for testing next-generation firewall products.

With few exceptions, security testing has been a closed affair — a situation that has often been a sore point for both vendors and security-equipment consumers. Over the past five years, businesses have become more knowledgeable about the type of performance needed to do security at speed, making a stronger argument for open testing, says Timothy Winters, senior executive for software and IP networking at the UNH-IOL.

“Open testing is definitely up and coming. The argument is, ‘Open it up so people on both sides can see what was tested, and then you can do apples-to-apples comparisons,'” he says. “Anyone can see whether they are covering the issues that they need covered and, if not, can request changes, and that can feed back into the open testing.”

Along with similar efforts in anti-malware systems and penetration testing, NetSecOPEN marks a growing movement toward open security testing standards. While impetus for the organization came from vendors, both open testing labs and testing-equipment vendors have signed on as well. To date, the list of members includes large network-security firms — such as Check Point Software, Cisco Systems, Fortinet, and WatchGuard — and testing firms such as UL and the European Advanced Networking Test Center (EANTC).

“Hearing from a lot of vendors, they want options because other programs are closed,” the UNH-IOL’s Winters says. “They have issues with the lack of openness — not knowing what was being tested and how it was being tested. Often, the results come out and people would say, ‘I got a different number,’ and vendors would not know why they got a specific score.”

The testing methodology has been submitted to the Internet Engineering Task Force (IETF), the organization that sets the standards for Internet technologies. The focus on getting the specification accepted as a standard demonstrates the commitment to openness, says Brian Monkman, executive director of NetSecOPEN.

“The big thing is the transparency aspect,” he says. “So much performance testing is done in various labs that don’t share their testing methodology. The goal here is to ensure that there is a lot more information presented with these tests that will provide the enterprises with the ability to reproduce things and understand the philosophy behind the tests themselves.”

The effort will initially focus on quantifying the performance of next-generation firewall technology in a small number of realistic environments. In addition, the group will curate different sets of network data, including attack data that can be used to test the efficacy of products. 

Initially, however, the focus will not be on the capabilities of products to detect the latest attacks, but on their performance while checking traffic under realistic scenarios, says the UNH-IOL’s Winters. Those scenarios will be different depending on customer needs.

“A small business or enterprise firewall is very different than a giant-campus firewall,” he says. “Small businesses and really big companies are going to be looking at very different things. Users have to be a bit more educated, but I don’t know how you get around that.”

So far, independent labs have not had the same problems with the NetSecOPEN initiative that they have had with another vendor-led initiative, the Anti-Malware Testing Standards Organization (AMTSO). Last year, private testing firm NSS Labs, a member of AMTSO, filed suit against the organization, claiming it violated antitrust statutes.

NetSecOPEN has a different model, the company said.

“With regard to NetSecOPEN, a standard, defined traffic mix for testing performance that reaches adoption with the IETF will be useful for consumers and help remove some of the incomparability in vendor datasheets,” says Jason Brvenik, CEO at NSS Labs. “While the draft performance testing standard from NetSecOpen serves a different purpose than our performance testing and comparative testing, we welcome more consistency and transparency in published performance claims for products.”

In its off-campus facilities, the UNH-IOL is building out its test bed. The organization brings in company representatives for several days of head-to-head interoperability testing, works with the US government on standards testing, and is now focusing increasingly on security testing.

“I think the world is different today than it was five years ago,” Winters says. “It used to be that open testing was not considered secure. Today I’m not hearing those concerns.”


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/perimeter/open-security-tests-gain-momentum-with-more-lab-partners/d/d-id/1334610?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Massive Dark Web ‘Wall Street Market’ Shuttered

Europol-led international law enforcement operation led to takedown of world’s second-largest digital underground marketplace.

The world’s second-largest Dark Web marketplace has been dismantled in an international law enforcement operation that also resulted in the arrests of three German nationals indicted in the US.

The so-called Wall Street Market, which hosted the sale of illegal drugs, stolen data, fake documents, and malicious software, most recently had some 1.5 million registered customer accounts and 5,400 registered sellers of the illicit goods. Wall Street Market’s operators earned a 2% to 6% commission on the value of each sale.

Law enforcement officials, led by Europol, were able to see the IP address and ultimately identify one of the defendants when his VPN connection to the marketplace computer network failed. The defendants, who have not been named publicly, are a 23-year-old resident of Kleve, a 31-year-old resident of Wurzburg, and a 29-year-old resident of Stuttgart.

In addition, former Wall Street Market operative Marcos Paulo De Oliveira-Annibale, 29, of Sao Paulo, Brazil, was charged in the US with federal drug distribution and money-laundering charges.

“Taking down this site is a huge win for past and future victims of crimes perpetrated due to the proliferation of illegal products and services being sold,” said Chief Don Fort of IRS Criminal Investigation. “We are committed to using our unique financial investigative abilities to tackle these kinds of threats head on to protect citizens, to promote cybersecurity, and to inform the global community.”

Read more here and here. 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/massive-dark-web-wall-street-market-shuttered/d/d-id/1334612?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Belgian programmer solves cryptographic puzzle – 15 years too soon!

April 2019 was a good month for bold Belgians!

Professional Belgian cyclist Victor Campenaerts broke the world hour record, covering an amazing, unassisted, undrafted 55km in a velodrome (55,089 metres, in fact) in 60 minutes.

The previous record, set by Sir Bradley Wiggins in 2015, had stood for nearly four years.

But professional Belgian programmer Bernard Fabrot conquered an even more durable challenge.

He cracked a computational puzzle that was set way back in 1999 by none other than Professor Ron Rivest of MIT, who’s the R in the well-known public key encryption algorithm RSA.

Fabrot’s achievement is particularly interesting because Rivest specially designed the puzzle in the hope it would take 35 years to solve, assuming you started as soon as it was published.

In the end, Fabrot required 3.5 years of computer running time, thus outpacing Rivest’s estimate by a factor of 10.

The puzzle is what’s known as a “time-lock problem” – a time-consuming calculation that can only be accelerated by tuning your algorithm or by building faster computer hardware.

Time-lock puzzles are interesting, and important, because they can’t be short-circuited simply by splitting the problem into pieces and throwing more computers at it.

Time-lock puzzles are inherently sequential, typically requiring a number of loops through an algorithm where the input to each iteration of the loop can only be acquired by reading in the output of the previous iteration.
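In miniature, a time-lock computation looks like this – a sketch with tiny toy values for the iteration count and modulus, not Rivest’s real parameters:

```python
# A time-lock computation in miniature: t squarings modulo n, where each
# step's input is the previous step's output. The values of t and n below
# are toy parameters for illustration, not Rivest's real ones.
def time_lock(t, n):
    val = 2
    for _ in range(t):
        val = (val * val) % n   # strictly sequential: no step can start early
    return val

print(time_lock(10, 101))   # computes 2^(2^10) mod 101
```

Because each squaring needs the previous result before it can begin, there is nothing to hand off to a second CPU, core or server.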

The idea is to put everyone in the same boat: you can be the biggest, richest, most energy-slurping cloud computing company in the world, but all those servers, CPUs and CPU cores won’t let you buy your way to victory.

Parallelisability

Most problems can be split into parts, and at least some of those parts can overlap.

If you were asked to count all the elephants in Africa, for example, you could fly to Cape Town and then zigzag your way north until you reached Alexandria in Egypt, noting all the elephants as you went along.

But this is an inherently parallelisable task, because – if you ignore complexities such as elephants you’ve already counted wandering across national borders, for example – the number of elephants in, say, Zambia, can be counted at the same time as the number in neighbouring Angola.

Simply put, you can set a different person to counting in each country, without worrying about the order in which they start – and in the biggest countries, you could subdivide the task again, say by using one person in each province, and so on.

The long and winding critical path

Ron Rivest’s 1999 puzzle is, at heart, just a single tight loop – you can only send one person to count all the elephants – with the iteration count devised so that it would require about 35 years to complete.

Rivest even took into account an annual upgrade where you suspended calculations for a moment and updated your computer to the newest and fastest on the market.

Why 35 years?

The puzzle was announced to commemorate the 35th anniversary of MIT’s Laboratory for Computer Science, which opened in the early 1960s, and was expected to take a further 35 years to solve.

In the end, it took 20 years before the puzzle was completed, by the aforementioned Bernard Fabrot; as we said, he needed 3.5 years of actual processing time from start to finish, thus beating the guide time by nearly 15 years.

The algorithm

The algorithm itself is easy to implement.

But the hard part in completing the solution is never losing track of where you are.

There aren’t any “tricks”, other than regular, precise backups of the computational state to let you recover if your computer crashes or an error creeps in along the way.

Without well-managed backup, any glitch or outage means going right back to square one.
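A minimal sketch of that backup discipline might look like this – the file name and checkpoint interval are our own illustrative choices, not Fabrot’s actual setup:

```python
import json, os

# Checkpoint the loop counter and current remainder every so often, so a
# crash costs at most one interval of work rather than years. The file
# name and interval are illustrative assumptions, not Fabrot's real setup.
STATE_FILE = "timelock_state.json"

def solve(t, n, interval=100_000):
    i, val = 0, 2
    if os.path.exists(STATE_FILE):        # resume from the last checkpoint
        with open(STATE_FILE) as f:
            saved = json.load(f)
            i, val = saved["i"], saved["val"]
    while i < t:
        val = (val * val) % n
        i += 1
        if i % interval == 0:             # persist progress regularly
            with open(STATE_FILE, "w") as f:
                json.dump({"i": i, "val": val}, f)
    return val
```

A real run would also want to verify the saved state (a corrupted checkpoint is as fatal as no checkpoint), but the shape of the idea is the same.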

There’s also the challenge, of course, of not losing faith along the way and packing it in.

Here’s the puzzle, as Rivest set it:

The problem is to compute 2^(2^t) (mod n) for specified values of t and n. Here n is the product of two large primes, and t is chosen to set the desired level of difficulty of the puzzle.

To explain, 2^t is Ron Rivest’s text notation for 2 raised to the power of t, and the notation mod n means to take the remainder, or modulus, that is left over after dividing the original number by n.

For example, 3 goes into 6 exactly twice, so 6 / 3 = 2 remainder 0; but if you divide 7 by 3, you get 2 with 1 left over, so that 7 / 3 = 2 remainder 1.
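In Python, the % operator computes exactly this remainder, and divmod() hands back the quotient and remainder together:

```python
# The worked examples above: % gives the remainder left after division.
print(6 % 3)          # 0, because 3 goes into 6 exactly twice
print(7 % 3)          # 1, because 7 = 2*3 + 1
print(divmod(7, 3))   # (2, 1): quotient and remainder in one go
```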

Rivest set t to a value of just under 80 million million and constructed a value of n with 2048 binary digits (512 hexadecimal digits or 256 bytes), like this:

t = 79,685,186,856,218 

n = 0x32052C40E056ED2C9141FC76C060FA685F60C45095EB69930CBE4B2C81B19C33
      55FA9149150D7082284CC3903C12B98DACC7E2FC7C16907F8E946AEFB5FD1240
      77E05D944B6738334E71A9BD37E1C08F2DF3D119EB95182B0F3E87B341A217BB
      433F2114FEAE1555CFB974DA3D56D4AD7C1D83FD789F34143CDD3D502C104639
      EE68DDC8D56D5BC6EAAC7ED16C1F5FF02159B5D52AF94979A18A60EFCABE109E
      E2E90C14B6FC1225B754644D989FC1B9F87552B255997CEE22429CF49E3599DA
      4B3F6D5535B83072A1D4357AE1ABFF8455B80C438EC33A5C7C6CB1ACE22C62FE
      67B3040029B3C37E5EC682363A77D42FB223E194878E146D06739EC4E598A9A1

The idea is that there is no shortcut by which you can compute the final answer, unless you know the two prime numbers that were multiplied together to calculate the value of n.

Ron Rivest, and now Bernard Fabrot, know the prime factors of n, let’s call them p and q, but no one else does.

(Rivest needed a shortcut for his own use so that he could work out the answer in advance in order to decide when a competitor really had solved it.)

So to solve this puzzle without p and q you just have to keep on multiplying over and over and over until you get to the end of the calculation.

Each multiplication in the loop uses the previous output as its input, so you can’t split the computation up between multiple computers, CPUs or even CPU cores.
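To see why knowing p and q is such a powerful trapdoor, here’s a sketch using toy primes (not, of course, Rivest’s secret ones): Euler’s theorem lets you shrink the gigantic exponent 2^t down modulo phi(n) = (p-1)(q-1) first, collapsing t sequential squarings into two quick modular exponentiations.

```python
# The trapdoor, sketched with toy primes rather than Rivest's secret ones:
# knowing p and q, Euler's theorem lets us reduce the exponent 2^t modulo
# phi(n) = (p-1)*(q-1), so the whole job becomes two fast pow() calls.
def shortcut(p, q, t):
    n = p * q
    phi = (p - 1) * (q - 1)
    e = pow(2, t, phi)    # reduce the exponent 2^t mod phi(n) -- fast
    return pow(2, e, n)   # then one ordinary modular exponentiation

print(shortcut(1009, 2003, 79_685_186_856_218))   # instant, even at Rivest's full t
```

Without the factors, there is no known way to perform that exponent reduction, and you are stuck doing every squaring in turn.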

How to code it

Let’s take a stab at the problem, using Python.

In Python, the operator ** means ‘raise to the power of’ or ‘exponentiate’, and % means ‘find the remainder after division by’ or ‘modulo’. We’ll also assume that the function seq(n) loops through the numbers 1, 2, 3, all the way up to n.
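(The seq() helper isn’t a Python built-in, so here’s a one-liner for it using range(), assuming we want the numbers 1 through n inclusive.)

```python
# The seq(n) helper assumed in the snippets: counts 1, 2, 3, ..., n inclusive.
def seq(n):
    return range(1, n + 1)

print(list(seq(5)))   # [1, 2, 3, 4, 5]
```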

   exp = 2 ** 79685186856218
   val = 2 ** exp
   val = val % 0x32052...98A9A1

Easy!

Except that exp in line 1 is a number with about 24 million million decimal digits.

So you need 10 terabytes just to store that value – and yet in the next line of code we aim to raise 2 to the power of that already alarmingly large number.

If we were to implement the ** operation as repeated multiplication, with 2**n calculated as an n-step loop of multiplying 2 by 2 by 2 … n times, you would see just how intractable this calculation looks:

   exp = 1
   for i in seq(79685186856218): exp = exp * 2
   val = 1
   for i in seq(exp): val = val * 2
   val = val % 0x32052...98A9A1

That’s 80 million million loops just to find out how many gadzillion bazillion loops we have to do in the real calculation!

Exponentiation by squaring

Fortunately, there’s a slicker way.

In the looped expression val = val * 2 we increase our exponent by 1 every time, calculating 2^1, 2^2, 2^3, 2^4, 2^5 and so on.

But if we keep squaring val instead of doubling it, like this…

   val = val * val

…then we double the exponent each time instead of incrementing it, so we loop through 2^1, 2^2, 2^4, 2^8, 2^16 and so on.

So we can calculate 2^(2^t) with just t loops instead of 2^t loops, like this:

   val =  1
   for i in seq(79685186856218): val = val * val
   # Now find the remainder
   val = val % 0x32052...98A9A1

It’s too big!

This code is still no good, even though we now have 80 million million loops in total, instead of a ridiculous 2^80,000,000,000,000.

The number val just gets too large to handle as we go along.

Even doing just a few loops in line 2, you can see that the time taken for each extra iteration pretty much doubles at every step, because we’re multiplying numbers that have twice as many digits each time:

   millisecs | value computed
           0 | 2^(2^ 1)
           0 | 2^(2^ 2)
           0 | 2^(2^ 3)
           . . . . . . .
           0 | 2^(2^16)
           1 | 2^(2^17)
           2 | 2^(2^18)
           5 | 2^(2^19)
          11 | 2^(2^20)
          25 | 2^(2^21)
          49 | 2^(2^22)
          97 | 2^(2^23)
         195 | 2^(2^24)
         406 | 2^(2^25)
         833 | 2^(2^26)
        1690 | 2^(2^27)
        3513 | 2^(2^28)
        7182 | 2^(2^29)
       14832 | 2^(2^30)

Fortunately, in modular arithmetic, you can either take the remainder at the end, as above, or repeatedly, at every step along the way.

Only the remainder needs to be fed back from the bottom of the loop to the next multiplication.

So, you can shift the % operation inside the for loop (in Python the indentation shows what loop level the code is at):

   val =  1
   for i in seq(79685186856218):
      val = val * val
      # Calculate the remainder each time round
      # inside the loop, not once at the end
      val = val % 0x32052...98A9A1

The remainders are all a fixed length – in this case, they can’t be larger than n-1, and given that n is 2048 bits long, that means each stage involves multiplying 2048 bits by 2048 bits, and then dividing the product back down to a remainder of 2048 bits.

The accumulated answer val is a 2048-bit number at the end of each iteration, and thus 2048 bits at the start of the next, so each extra turn round the loop adds the same amount of time. Now we are flying:

   millisecs | value computed
           0 | 2^(2^ 1)
           0 | 2^(2^ 2)
           0 | 2^(2^ 3)
           0 | 2^(2^ 4)
           0 | 2^(2^ 5)
           0 | 2^(2^ 6)
           . . . . . . .
           0 | 2^(2^27)
           0 | 2^(2^28)
           0 | 2^(2^29)
           0 | 2^(2^30)
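Incidentally, Python’s built-in three-argument pow() performs this same square-and-reduce dance internally, so you can sanity-check the loop against it on small inputs:

```python
# Sanity check: repeated squaring with reduction at every step agrees with
# Python's built-in three-argument pow(). The modulus here is a toy value.
n = 0xC0FFEE
val = 2
for _ in range(1000):                # 1000 squarings, reducing each time
    val = (val * val) % n
assert val == pow(2, 2 ** 1000, n)   # same answer via the built-in
print(hex(val))
```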

On my Mac, doing 1,000,000 iterations, which takes me to 2^(2^1,000,000), takes just over 16 seconds, so I can do about 62,500 iterations a second.

At that rate, it would take me 80,000,000,000,000 / 62,500 seconds to complete the puzzle, which is just under 15,000 days, or about 40 years.

A top-end Mac and an optimised square-and-divide program might very reasonably double both my CPU speed and the computational efficiency for a 4× overall speedup, but I’d still be looking at 10 years to work it out.
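If you fancy reproducing that back-of-envelope benchmark, a scaled-down version looks like this – the 2048-bit modulus below is a stand-in, not Rivest’s n, and your rate will of course vary by machine:

```python
import time

# Time 100,000 squarings of 2048-bit numbers with reduction at each step,
# then extrapolate to Rivest's full iteration count. The modulus is a
# stand-in 2048-bit odd number, not Rivest's n; rates vary by machine.
n = (1 << 2047) + 1
val = 2
start = time.time()
for _ in range(100_000):
    val = (val * val) % n
elapsed = time.time() - start

rate = 100_000 / elapsed                               # iterations per second
years = 79_685_186_856_218 / rate / (3600 * 24 * 365)  # extrapolated runtime
print(f"{rate:,.0f} iterations/sec -> roughly {years:,.0f} years")
```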

Well done, Bernard

Fabrot did it in 3.5 years, and he didn’t have today’s hardware when he started.

So it was quite a result – and an exciting victory considering that he used commodity hardware.

Following Fabrot’s announcement, a cryptocurrency startup jumped on the bandwagon by publicly claiming that it solved this puzzle in two months using specialised hardware known as a Field Programmable Gate Array (FPGA).

But the link on its website saying the company ‘solved it in 2 months’ doesn’t go anywhere, and the GitHub account hosting the source code still says ‘Code coming soon’ – so they’re just trying to photobomb someone else’s moment of triumph.

The glory goes to Fabrot, who showed that quiet competence is its own reward…

…and is an important reminder that when cryptographers warn us that attacks only ever get faster, they’re not wrong!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PLWtqUQV3u0/

Europol takes down Wall Street market: No, the other cesspool of dark international financial skullduggery

Three people have been arrested in Germany in connection with a dark net souk for drugs, dodgy documents and stolen data called the Wall Street Market.

Earlier this year, Finnish customs and French police worked together to take down Silkkitie, also known as the Valhalla Marketplace. Finnish customs seized a web server and sizeable quantities of Bitcoin. When the “same” traders from Silkkitie, which had been running since 2013, shifted their stalls to Wall Street, German plod were waiting, Europol noted.

Two people accused of selling narcotics via the Wall Street site have been arrested in Los Angeles.

German police said they’d seized over half a million euros in cash along with vehicles, six-digit amounts of Bitcoin and Monero cryptocurrencies as well as computers and storage gear.

Europol’s executive director, Catherine De Bolle, said:

“These two investigations show the importance of law enforcement cooperation at an international level and demonstrate that illegal activity on the dark web is not as anonymous as criminals may think.”

Europol claimed the Wall Street market had over 1 million users and 5,400 sellers. The site’s administrators charged between 2 and 6 per cent commission on sales.

The operation involved cooperation between German Federal Criminal Police, the Dutch National Police, Europol, Eurojust and various US government agencies including the Drug Enforcement Administration, the Federal Bureau of Investigation, the Internal Revenue Service, Homeland Security Investigations, the US Postal Inspection Service, and the US Department of Justice.

Europol has a dedicated Dark Web Team to enable global co-operation in the ongoing whack-a-mole against dark net trading sites.

The full Europol statement is available here. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/03/europol_takes_down_two_big_dark_net_marketplaces/