STE WILLIAMS

Oh, baby! Newborn-care website leaves database of medics wide open

A US health company apparently exposed on the public internet contact information for hundreds of medical professionals.

IT pro Brian Wethern says he told medical company Health Stream nine days ago that one of its now-removed websites had left its database of users out in the open, allowing anyone to slurp medics’ first and last names, email addresses, and ID numbers from its Neonatal Resuscitation Program.

We’re withholding the URL of the leaky website at this stage because its data is lingering in online caches.

Wethern tells The Register he believes the company used the database to deliver messages from instructors to students – for example, to set up or confirm a class. The site hosting the information was taken offline shortly after Wethern reported it, and remains inaccessible.

Spear-phishing opportunities

Had the data been accessed and copied by the wrong person, the email addresses could have been used for specific attacks on relatively high-value targets: medical professionals and instructors. More importantly, says Wethern, the fact that such a database was left open to the public wouldn’t bode well for security on other parts of the site.

“What I found was a front-side database,” Wethern explained. “I don’t need their passwords … because I have the front-side database.”

Health Stream did not respond to multiple requests for comment, so we are unable to get its side of the story. Wethern says he last heard from the company eight days ago, when it sent its first and only response to his notifications.

Now, Wethern says, he’s going public in the hope other companies will be a bit more forthcoming and responsive to researchers who discover these sorts of data leaks.

“Hire a basic researcher, first and foremost. Allow your company to budget for these types of intrusions,” Wethern explains.

“And before this all happens, make sure to have a data breach summary in place. Be current with bug bounty programs, own up to your mistakes, and honor the fact that security researchers can be good people out to do good things.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/20/oh_baby_health_site_leaves_nrp_database_sitting_out_in_the_open/

DNC Sues Guccifer 2.0, Russian Federation & Trump Campaign for Election Conspiracy

DNC first hacked by Russians in 2015, according to the filing.

The Democratic National Committee (DNC) today filed a multimillion-dollar lawsuit alleging a conspiracy by Russia, the Trump campaign, and WikiLeaks to tip the 2016 presidential election in favor of Donald Trump. Among the individuals named in the suit are Guccifer 2.0, the Russian Federation, Donald Trump, Jr., Paul Manafort, and Jared Kushner.

According to the suit, the Russians first infiltrated the DNC computer system on July 27, 2015, a time frame that previously had not been made public. The forensic investigation, according to the suit, found the system was again attacked on April 18, 2016, and the attackers began exfiltrating documents and data on April 22 of that year. 

“In the Trump campaign, Russia found a willing and active partner in this effort” to disrupt and damage the election, while the Trump campaign “gleefully welcomed Russia’s help,” according to the suit.

The Trump campaign called the lawsuit “frivolous” and said it should be dismissed.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/dnc-sues-guccifer-20-russian-federation-and-trump-campaign-for-election-conspiracy/d/d-id/1331609?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

SunTrust Ex-Employee May Have Stolen Data on 1.5 Million Bank Clients

Names, addresses, phone numbers, and account balances may have been exposed.

SunTrust Bank said a former employee may have stolen names, addresses, phone numbers, and account balances of some 1.5 million of its clients. 

The employee tried to download the client contact information six to eight weeks ago in an attempt to provide the data to a criminal outside the organization, Reuters reports.

SunTrust CEO William Rogers said in an earnings call that there was no indication of fraudulent activity using the client information, and it appears the data had not been sent outside the bank.

The bank is now offering free identity protection services to all of its customers for the “potential data threat,” according to a press announcement from SunTrust. 

“The company became aware of potential theft by a former employee of information from some of its contact lists. Although the investigation is ongoing, SunTrust is proactively notifying approximately 1.5 million clients that certain information, such as name, address, phone number and certain account balances may have been exposed,” the bank said in a press statement. “The contact lists did not include personally identifying information, such as social security number, account number, PIN, User ID, password, or driver’s license information. SunTrust is also working with outside experts and coordinating with law enforcement.”




Article source: https://www.darkreading.com/attacks-breaches/suntrust-ex-employee-may-have-stolen-data-on-15-million-bank-clients/d/d-id/1331610?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Trust: The Secret Ingredient to DevSecOps Success

Security practitioners must build trusted relationships with developers and within cross-functional DevOps teams to get themselves embedded into continuous software delivery processes.

RSA CONFERENCE 2018 – San Francisco – As is evident from the speaker tracks and hallway discussions here this week at the RSA Conference, the marriage of DevOps and security principles driving the DevSecOps movement is finally gaining traction in the security community.

Old pros in cybersecurity are finally moving past the denial stage – “No, DevOps isn’t a fad. Yes, it’s possible to bake security into it.” Now comes the hard part of evolving the security mindset, practices, and tools to suit the fast pace of DevOps methods and toolchains. 

There are a lot of logistics that go into the process, but DevSecOps veterans say the foundational effort that should trump all others is trust-building.

“You have to start with building trust. In order to do that, the first thing it takes is listening and understanding the development team’s challenges,” explained Bankim Tejani, senior manager of digital product security for Under Armour. “What are they trying to accomplish? How are they doing it? Why are they using this particular tooling? What kind of scale are they gaining out of it? What kind of efficiencies are they gaining out of it? Then align what you’re doing with security to that, to support that.”

This was a theme that continually came up during Monday’s DevOps Connect event, and it was further expounded upon by appsec experts and DevSecOps-savvy security leaders here this week. Security and development experts explained that at the end of the day, failed attempts to add security into the DevOps process can be chalked up to a fundamental lack of trust from the dev team.

Security’s attempts to force impractical policies and overly ambitious fix rates cement the belief among DevOps teams that security will never understand what it means to deliver software in a modern enterprise. And when developers don’t believe security has their back, they stop answering security team emails. They stop attending meetings called by the CISO’s minions. And they otherwise disintermediate security from the day-to-day processes of delivering software. At that point, the security team enters the “nag zone.”

According to a survey released this week by Sonatype, which primarily questioned the developer community, 72% of participants see security not as trusted partners but as “nags.” That reputation is quickly earned when the security team hands developers binders full of unprioritized vulnerabilities, offers no guidance on how to fix them, and mandates that they all be acted upon.

Caroline Wong, vice president at Cobalt.io and a speaker at the conference, owned up to this kind of activity in past lives at organizations like eBay and Zynga. She explained that the path to DevSecOps epiphany and improved trust was to stop mandating and start asking questions.

“The first thing that we started to try to do was to ask some questions that we had not asked before. Specifically, what’s important to you? Asking the developers and the technology teams, ‘What are you trying to accomplish?'” she says. “It turns out, developers have things like quarterly goals and deadlines and they’re trying to make new features so they can make money for the business. And when we were approaching them with these piles of work to do, that did not instill trust.”

The more her teams could do the work of prioritizing the vulnerabilities that really meant something to the business risk posture and the work of advising developers to find a path to making fixes, the more trust her team could build.

It’s also crucial for security teams to do a reality check and honestly decide what kind of security improvements can be asked for in the context of the demands the business makes on developers. For example, when her appsec team asked developers how much time they could realistically spend reworking vulnerability fixes, the goal they came up with together was a 20% reduction in flaws on customer-facing websites.

“Now 20% is not a number that security people like. Security people like a number like 90%, or a number like 95%,” she says. “But if we’d gone in and said ‘We’re going to try to eliminate 90% of the bugs on the website,’ we probably would’ve gotten the same response that we got before, which was people stop coming to our meetings, people stop inviting us to their meetings, people stop reading our e-mails.”

Ultimately, security people need to keep a couple of core interpersonal elements in mind when they’re establishing trust within DevOps teams, says Larry Maccherone, who is currently driving a DevSecOps transformation at Comcast as the senior director in the company’s Technology and Product division’s Security and Privacy group.

“I have this formula for trust. It’s credibility, plus reliability, plus empathy, all divided by apparent self-interest,” he says. “Now, self-interest is never zero. I wouldn’t be spending my time talking to you if there wasn’t some self-interest. But the more of that there is, the more you need in the numerator to make up for it.”

For security professionals building trust with devs, that means establishing credibility by understanding developer tools and developer lingo inside and out. Maccherone does this by exclusively hiring developers onto his security staff. It means having developers’ backs by doing the stuff you tell them you’re going to do.

And it means really striving to understand developers’ ethos and to empathize with the things they struggle with. All of that makes it a lot easier for devs to trust you as a security person when you turn around and start asking for things, he said.

As a part of that equation, it’s also important to remember the simple life principle that you’ve got to give a little to get a little. As Paula Thrasher, director of digital services at government integrator CSRA, explained, one of the first situations in which she truly established trust was one where embedding security tools into the overall developer toolchain ended up speeding up the development process while also making it more secure.

“That was a huge ‘ah-ha’ moment for me. That to really bring adoption, one, it has to be really collaborative, but two, there’s got to be a compelling case that, for the effort developers have to put in, they’re going to get something back out of it,” she said.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/application-security/trust-the-secret-ingredient-to-devsecops-success/d/d-id/1331611?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cybercrime Economy Generates $1.5 Trillion a Year

Threat actors generate, launder, spend, and reinvest more than $1.5 trillion in illicit funds, according to a new study on cybercrime’s ‘web of profit.’

RSA CONFERENCE 2018 – San Francisco – If cybercrime were a country, it would have the 13th-highest GDP in the world. Attackers generate $1.5 trillion in annual profit, roughly equal to the GDP of Russia, according to a new study on the interconnected economy of cybercrime.

“Into the Web of Profit,” among the first studies to explore the intricacies of revenue and profit in the world of cybercrime, was conducted by Dr. Michael McGuire, senior lecturer in Criminology at England’s University of Surrey. Over nine months of study, he learned how the “economy” of cybercrime sustains itself and overlaps with the legitimate economy.

This wasn’t the original intent behind the Bromium-sponsored study, which began with the idea of learning where cybercriminals spend their money. “It turned into a huge piece of research, which looks at the whole of how money flows around the cybercrime system,” says McGuire. The report pieces together conversations with global organizations, security workers who have infiltrated the Dark Web, international police forces, and of course, the criminals themselves.

His study indicates a rise in “platform criminality” similar to the platform capitalism model in which data is the commodity, used by organizations including Amazon and Facebook. This platform turns malware into a product, simplifies purchase of illicit tools and services, and enables broader criminal activities including drug production, human trafficking, and terrorism.

More than 620 new synthetic drug types have appeared on the market since 2005, McGuire says. Many are created in China or India, purchased online, and sent to Europe in bulk. Evidence shows groups earning revenue from cybercrime are also involved in drug production, he found. The takedown of Dark Web online market Alphabay led to the discovery of listings for illegal drugs, toxic chemicals, malware, and stolen and fraudulent data.

The $1.5 trillion that cybercriminals generate each year includes $860 billion from illicit online markets, $500 billion from theft of trade secrets and intellectual property, $160 billion from data trading, $1.6 billion from crimeware-as-a-service, and $1 billion from ransomware. Evidence indicates cybercrime often generates more revenue than legitimate companies: large multinational operations can earn more than $1 billion, while smaller ones typically make between $30,000 and $50,000.

It’s time to move beyond the idea that cybercrime is like a business. “It’s much, much more than that,” he says. “It’s like an economy which mirrors the legitimate economy. Increasingly, what we’re seeing is the legitimate economy feeding off the cybercrime economy.”

Blurring the Legal Lines

The interdependence between the legitimate and illegitimate economies is driving the “web of profit” fueling cybercrime, McGuire says. Criminal organizations take data and competitive advantages from real companies and use them to accomplish their goals. Part of the problem is that many of these legitimate organizations don’t know their role in furthering cybercrime.

Companies like Facebook and Uber are rich with data, making them a prime target for attackers seeking user information and intellectual property. They give hackers a platform to sell illicit goods and services, and set up fake shops to launder money or connect buyers and sellers. This makes massive companies facilitators in a criminally driven economy.

The owners of cybercrime platforms are the biggest earners, McGuire found. Each hacker might only make $30K per year; however, managers can earn up to $2M per job with as few as 50 stolen credit cards. They aren’t committing crime but they are selling it, and their criminal platforms have evolved to offer services, descriptions, and technical support for their buyers.

McGuire shares some of the numbers behind these earnings. A zero-day Adobe exploit, for example, can sell for up to $30K while a zero-day iOS exploit costs $250K. Malware exploit kits cost about $200-600 per exploit; a blackhole exploit kit costs $700 to lease for a month or $1,500 for a full year. Custom spyware costs $200, an SMS-spoofing service runs $20 per month, and a “hacker for hire” will charge about $200 for a minor hack.

Much of the money is reinvested in new criminal ventures. Criminals put about 20% of their revenues into additional crime, indicating up to $300B is used to drive illegal activity.
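As a back-of-the-envelope check, the category figures quoted earlier do add up to roughly the headline number, and the 20% reinvestment rate produces the $300 billion estimate. A quick sketch (the category values are the article's rounded figures, in billions of dollars):

```python
# Sanity-check the revenue breakdown and reinvestment figures
# quoted from the "Into the Web of Profit" study (all values in $bn).
breakdown_bn = {
    "illicit online markets": 860,
    "trade secret / IP theft": 500,
    "data trading": 160,
    "crimeware-as-a-service": 1.6,
    "ransomware": 1.0,
}

total_bn = sum(breakdown_bn.values())   # ~1,522.6 -> "about $1.5 trillion"
reinvested_bn = 0.2 * total_bn          # 20% reinvested -> "up to $300B"

print(f"total: ${total_bn:,.1f}bn, reinvested: ${reinvested_bn:,.1f}bn")
```

The sum comes to about $1,522.6 billion, which the study rounds to $1.5 trillion, and 20% of that is just over $300 billion.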


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/vulnerabilities---threats/cybercrime-economy-generates-$15-trillion-a-year/d/d-id/1331613?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Chrome anti-phishing protection… from Microsoft!

Microsoft has made its SmartScreen anti-phishing API available to Chrome browser users through an extension called Windows Defender Browser Protection.

If this makes you do a double-take, we’ll repeat it: from this week, Chrome users will, for the first time, be able to protect their browsing using not only Google’s own Safe Browsing technology but Microsoft’s too.

Once installed, there’s not much to the extension beyond being able to turn its protection on or off and send Microsoft feedback on the current version (v1.62).

Technologically, it’s a different matter. From the moment it’s installed, Chrome users will potentially see warnings from two different systems designed to tell them about malicious links, pop-ups, malware downloads, and of course, phishing URLs.

These warning pages will both be coloured red and will probably be indistinguishable, bar the fact that they reference either Google or Microsoft.

Why might Microsoft want to be so generous to users of a rival browser?

Hitherto, there’s been a bit of a gulf in integrated browser protection with Chrome, Firefox, Apple’s Safari, Opera and Vivaldi using Google’s Safe Browsing technology and only Microsoft’s Windows 10 browser, Edge, and legacy Internet Explorer versions using SmartScreen.

For users, these function like a second browser-specific layer of protection that supplements whichever anti-malware software (e.g. Sophos Home) they have installed.

Testing last year by NSS Labs suggested that Microsoft’s platform had a noticeable edge over Google when it came to malware download detection and blocking phishing URLs.

We speculated at the time that this might be explained by the sheer volume of malware and phishing attacks that Microsoft must detect through its large base of Windows users.

Having both detection platforms on the same browser would at least set up a fair comparison of their effectiveness in future tests.

The simplest explanation of Microsoft’s motivation for offering SmartScreen on Chrome is that it gives the company visibility on the bad stuff encountered by the 60% of the market that uses Chrome (Edge is around 4%). This, in turn, helps Microsoft’s Office 365 Exchange email service offer better protection to compete with Google’s rival G Suite.

It also reminds Chrome users that Microsoft is out there even if not many of them use Edge as the number of people on its predecessor, IE, slowly dwindles.

Ironically, for anyone accessing Gmail and G Suite via Chrome, this has a hidden benefit – installing Defender Browser Protection means these users are being protected from phishing attacks before email reaches their inboxes via Google’s platform and, after that, courtesy of both Google and Microsoft (the reverse is already true for Outlook.com on Chrome).


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WMCg4QrA6AU/

Kingpin who made 100 million robocalls loses his voice

How easy is it to download automated phone-calling technology, spoof numbers to make it look like calls are coming from a local neighbor, and robo-drag millions of hapless consumers away from what should be their robot-free dinners?

The question, from Senator Edward J. Markey, was directed at Adrian Abramovich: a Florida man the senator referred to as the “robocall kingpin.” On Wednesday, Abramovich was on the hot seat on Capitol Hill, having been subpoenaed to testify before the Senate Commerce, Science, and Transportation Committee as it examined the problem of abusive robocalls.

The answer: just a click, Abramovich said. The technology is easy to use and can be set up by “anyone” from a home office.

There is open source software available, totally customizable to your needs, that can be misused by someone to make thousands of automated calls with the click of a button.

All you have to do is run an online search for Voice-over-IP (VoIP) providers offering short-duration calls, and you’ll probably come up with five, six, or seven providers, “most of which are US-based,” he said.

Markey then asked, “And how many people would I have to employ to place, say, 10,000 robocalls a day?”

Abramovich’s reply: one.

The Florida man is fighting a $120 million fine proposed last year by the Federal Communications Commission (FCC) for the nearly 97 million robocalls his marketing companies – Marketing Strategies Leaders Inc. and Marketing Leaders Inc. – made between October and December 2016.

That’s over one million calls a day, FCC Chairman Ajit Pai said in a statement that accompanied the proposed fine last year. The fine would be the first enforcement action against a large-scale spoofing operation under the Truth in Caller ID Act.

Abramovich is accused of tricking consumers with the robocalls. Consumers have reported receiving calls from what looked like local numbers, talking about “exclusive” vacation deals from well-known travel companies such as Marriott, Expedia, Hilton and TripAdvisor. Once you pressed “1” as prompted, you’d get transferred to a call center, where live operators gave targets the hard sell on what Pai called “low-quality vacation packages that have no relation” to the reputable companies initially referenced.

Pai said in his statement last year that many consumers spent from a few hundred up to a few thousand dollars on these purportedly “exclusive” vacation packages.

Pai said there are a few things that are truly nasty about the robocalling scheme: first, Abramovich apparently preyed on the elderly, finding it “profitable to send to these live operators the most vulnerable Americans – typically the elderly – to be bilked out of their hard-earned money”.

Secondly, these millions of calls drowned out operations of an emergency medical paging provider, Pai said:

By overloading this paging network, Mr. Abramovich could have delayed vital medical care, making the difference between a patient’s life and death.

Abramovich may well have shown up for the hearing – under the duress of a subpoena – but he stuck to general answers, refusing to answer questions about his particular case. He said he was no kingpin and that his robocalling activities had been “significantly overstated,” given that only 2% of consumers had meaningful interaction with the calls.

Senator John Thune pointed out that 2% works out to 8 million people.

Thune: Does that sound like a small effect?
Abramovich: I am not prepared to discuss my specific case.

Other robocalling cases have involved more calls, Abramovich said. He also denied fraudulent activities, saying that resorts associated with his telemarketing “were indeed real resorts, offering real vacation packages.”

Robocalls increased from 831 million in September 2015 to 3.2 billion in March 2018 – a 285% increase in less than three years, according to testimony from Margot Freeman Saunders, senior counsel at the National Consumer Law Center.

Abramovich told the senators that he himself receives “four or five robocalls a day.” Since the FCC headlines, that’s ramped up, he said.

Senator Markey: And you don’t like it?
Abramovich: I decline the call.

Thune said that the committee would consider holding Abramovich in contempt of Congress for claiming a Fifth Amendment privilege throughout the hearing after speaking about his specific case during opening remarks.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/daZOaYGNUH4/

RSA Conference has a leaky app… again!

You wouldn’t expect the organisers of a seminar on nuclear physics to hand out conference badges that were contaminated with dangerous levels of radioactivity.

You wouldn’t expect to attend a workplace health and safety training course in a conference centre where the fire exits had been padlocked shut.

But cybersecurity conferences can be a bit different – they certainly don’t always practise what they preach.

For example, at the RSA Conference (RSAC) 2010 in San Francisco, one of our colleagues – Graham Cluley, now an independent blogger – was asked to copy his presentation onto a USB key supplied by the organisers for collating speakers’ contributions.

When he inserted the USB drive into his Mac, Sophos Anti-Virus popped up, boop!, to alert him to Windows malware on the USB key.

He quickly figured out that the conference computer had no anti-virus at all, and that the same USB key had been in and out of numerous other presenters’ Windows computers already that day. (This story didn’t say much about those other presenters, either.)

At the AusCERT conference in Queensland, Australia, also in 2010, one of the security vendors – it was IBM, and the company was nominated for a prestigious Pwnie award for this blunder – handed out USB keys with product marketing material on them…

…together with not one but two malware infections.

RSAC was back in the “do as I say not as I do” limelight again in 2014, issuing an official mobile app for the event that hooked into the event database so you could see the schedule of talks, with any last-minute updates or changes automatically shown.

Unfortunately, the database pulled down by the app also included details of all the other conference delegates who had registered to use the app so far – meaning that anyone who installed the app after you would get to see your details, too.

In that breach, the data that leaked out apparently included name, job title, employer, and nationality.

For many delegates, those details were probably public already – or at least easy to figure out or guess – so there wasn’t a huge amount of harm done, but it was still a peculiarly hypocritical cybersecurity blunder for a cybersecurity event company to make.

It happened again

Well, it looks as though it’s happened again: another insecure app published as part of an RSAC cybersecurity event.

At RSAC 2018, Twitter user @svblxyz found similar security problems to those of 2014 in this year’s conference app.

Amongst other things, the app contained URLs from which database content could be downloaded, apparently including the real names of other mobile app users.

RSAC confirmed the breach in a tweet earlier today [at approximately 2018-04-20T06:00Z], admitting:

Our initial investigation shows that 114 first and last names of RSA Conference Mobile App users were improperly accessed. No other personal information was accessed, and we have every indication that the incident has been contained. We continue to take the matter seriously and monitor the situation.

With just 114 names leaked, and given that many conference delegates have probably mentioned their visit to the event publicly anyway, for example on social media or in an out-of-office email, this isn’t a particularly dangerous outcome.

But the leaked names are just a symptom, and it’s the underlying cause that’s worrying: there always seems to “be an app for that”, even when a well-designed web page would be just as good, and even when a well-designed web page already exists anyway.

What to do?

  • As a user, assume the worst, and stick to the web whenever you can. A one-off app for a single event simply won’t have had the same security scrutiny as your browser, so why not simply prefer your browser?
  • As an event organiser, assume the worst, and stick to the web whenever you can. If you need a way to get updated speaker lists and session timetables to delegates, consider publishing a standalone file, such as a PDF, that users can download if they want an offline copy. If you expect to publish regular updates, use a simple solution such as an RSS feed so your users can easily find the latest version.
  • As a mobile app developer, assume the worst, and put app security up front, ahead of looks. You can always improve the look and feel of an app later on, but you can’t get stolen or leaked data back later on: once breached, always breached.
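The RSS suggestion above can be as simple as a static XML file regenerated whenever the schedule changes, with no per-user data anywhere near it. Here's a minimal sketch using only Python's standard library (the feed title, link, and item details are invented for illustration):

```python
import xml.etree.ElementTree as ET

def build_schedule_feed(updates):
    """Build a minimal RSS 2.0 feed of schedule updates."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    # Required channel metadata (hypothetical event and URL).
    ET.SubElement(channel, "title").text = "Example Conference – Schedule Updates"
    ET.SubElement(channel, "link").text = "https://conference.example/schedule"
    ET.SubElement(channel, "description").text = "Speaker and session changes"
    # One <item> per update; RSS readers surface new items automatically.
    for update in updates:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = update["title"]
        ET.SubElement(item, "pubDate").text = update["date"]
    return ET.tostring(rss, encoding="unicode")

feed = build_schedule_feed([
    {"title": "Keynote moved to 09:30", "date": "Fri, 20 Apr 2018 06:00:00 GMT"},
])
print(feed)
```

Serve the resulting file from the existing conference web server and any feed reader can poll it; there is no app, no login, and no delegate database to leak.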


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PxyOhhE6lXc/

Planned European death ray may not need Brit boffinry brain-picking

The EU is planning to build a laser cannon with double the power of Britain’s under-construction Dragonfire zapper, according to reports – but the general state of the tech doesn’t automatically mean Europe will be trying to snaffle Brit raygun smarts.

The Sun broke the news that the EU Commission wants to, er, commission a 100kW laser cannon similar to the British Dragonfire project, the ultimate goal of which is the production of a 50kW laser turret.

€90m from the EU’s Preparatory Action for Defence Research programme is reportedly being spent on the Eurozapper, with a slideshow seen by a Sun source stating: “Current European high power laser effectors rely mainly on non-European technology and are based on architectures that combine incoherent beams on the target.”

Handily, this phrase leads the curious straight to an EU Defence Agency (EUDA) call-for-proposal paper from March (PDF, 47 pages), which says: “The EU thus risks becoming fully dependent on suppliers established in non-EU countries for this critical defence technology. This not only limits the strategic autonomy of the Member States but also generates security-of-supply risks. End-user restrictions imposed by non-EU nations (e.g., the US International Traffic in Arms and Export Administration Regulations (ITAR and EAR)) already endanger the security-of-supply of essential components of such high power laser systems.”

Thus, says the EUDA, a “European” laser must immediately be researched and built, with the focus being on a product that can destroy airborne items including rockets, artillery shells and mortar bombs, drones and missiles, and also “rapid, small boats”. The due date for this is given as 2027.

But an EU country is already doing laser blaster research – and it ain’t the UK

As we reported last year, one of the companies in the Dragonfire consortium, MBDA (known to some cheeky folk as Missiles, Bombs and Dangerous Armaments, though it was actually formed from the merger of Matra BAe Dynamics and France’s Aerospatiale-Matra), has been doing laser weapon R&D in Germany for more than a decade. However, MBDA confirmed to us today that its privately funded research in the land of beer and sausages is not related to the UK work.

Starting in 2008, the German arm of MBDA built and tested a “high energy laser weapon demonstrator”, according to MBDA puffery, including firing it at “mini UAVs” (drones) at distances of up to 2.5km. The laser reportedly drew 10kW during its 2010 trials, with MBDA claiming 50kW would be possible with extra funding.

The Dragonfire laser turret mockup at DSEI 2017. Pic: MBDA


The Royal Aeronautical Society’s magazine also has a handy overview of international laser weapon research, including the snippet that MBDA is already pondering the 100kW power level. This suggests that, far from the EU needing to snaffle British advances in laser weapon tech before Brexit takes place, the Europeans already have a pretty good head start on the UK – albeit using a similar but distinct method of generating a drone-blatting beam.

The Engineer magazine reports that MBDA’s ambition is to build “on earlier investments in the area of coherent beam combining – a technique in which beams from multiple fibre laser modules are combined to form a single, powerful, high quality beam”.

Coherent beaming, we are told, is the art of making a multi-source laser beam more powerful by ensuring the wavelength of each of the weaker beams that make it up is synchronised. Imagine an oscilloscope with the laser output displayed as your classic wiggly line. That’s the output of one beam generator. Now add in a few more wiggly lines to represent each of the different beam generators. Unless you make an effort to synchronise each of the beams’ waves (the peaks and troughs, on our imaginary oscilloscope), you’re effectively losing some of your potential power output.
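A rough numerical sketch makes the point (this is purely illustrative, not a model of any real weapon): summing ten equal-amplitude waves that are perfectly in phase multiplies the field amplitude by ten, and hence the peak intensity by a hundred, whereas random phases largely cancel each other out.

```python
import math, random

N = 10          # number of beam generators (illustrative)
SAMPLES = 1000  # points sampled over one wave cycle

def peak_intensity(phases):
    """Peak of the combined intensity |sum of sinusoids|^2 over one cycle."""
    peak = 0.0
    for s in range(SAMPLES):
        t = 2 * math.pi * s / SAMPLES
        field = sum(math.sin(t + p) for p in phases)
        peak = max(peak, field * field)
    return peak

coherent = peak_intensity([0.0] * N)  # all beams synchronised
random.seed(42)
incoherent = peak_intensity([random.uniform(0, 2 * math.pi) for _ in range(N)])

print(f"coherent peak intensity:   {coherent:.1f}")   # ~N^2 = 100
print(f"incoherent peak intensity: {incoherent:.1f}") # well below N^2
```

The synchronised case delivers the full N² intensity on target; unsynchronised beams waste much of that potential output, which is exactly the loss coherent beam combining is designed to avoid.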

We understand that the EU is more interested in incoherent beam laser tech, on the basis that it doesn’t matter if the beam isn’t synchronised if you’ve only got one of them. In addition, the German firm is also using mirrors and lenses to focus its beam, whereas we are told the UK’s blinding boffins are using mirrors only to reduce losses.

An MBDA spokesman told The Register: “Through the laser programmes in Germany and the UK (Dragonfire), MBDA is the clear European leader in laser weapon technologies. MBDA Germany is not a part of the UK Dragonfire Capability Demonstration Programme (CDP). MBDA is working with its respective national customers in order to assess the technical and operational feasibility for introducing DEW (Directed Energy Weapon). The Dragonfire capability demonstration programme aims to create sovereign capability in the UK and focuses on coherent beam combining technology that is different to that used in the MBDA German programme.”

Although MBDA is the lead company in the Dragonfire consortium, the actual laser is supplied by miltech boffinry outfit Qinetiq, which El Reg will be pumping for future updates on all things bright, shiny and melty. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/20/euro_laser_cannon_research/

At RSAC, SOC ‘Sees’ User Behaviors

Instruments at the RSA Security Operations Center give analysts insight into attendee behavior on an open network.

RSA CONFERENCE 2018 – San Francisco – At RSAC 2018 the SOC is a demonstration site. It has some hard limits — no visibility into the external IP interfaces being the most significant — but it has tremendous visibility into what happens on the wireless network that supports the tens of thousands of attendees using the open system. And that network visibility translates into great insight into the behavior of network security professionals in the wild.

A team of network security specialists, including Cisco’s Jessica Bair, staffs the SOC, watching traffic of various sorts flow to and from the devices carried by attendees, exhibitors, and staff. Because the SOC isn’t blocking any traffic, the focus is on monitoring, which happens courtesy of RSA NetWitness Packets; potentially malicious traffic is then handed to Threat Grid for static analysis.

One of the first things visitors notice in the SOC fishbowl is a screen filled with a rolling list of partially obfuscated passwords. It reveals two important things about conference attendees: one good, one not so much.

Almost all of the passwords are either strong or very strong. That’s great, and shows that security professionals, at least, have acted on the need for stronger passwords.

The problem is that the passwords can be seen to be strong at all: they’re being sent in clear text. It’s a sign of a lesson half-learned, and indicative of problems likely to plague every level of a company’s computer-using population.
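To illustrate why clear text is the half of the lesson that matters (the credentials below are invented, not anything captured at RSAC): a password sent over plain HTTP with Basic authentication is merely base64-encoded, not encrypted, so anyone monitoring an open network can recover it without any key.

```python
import base64

# Hypothetical credentials: a strong password, sent over plain HTTP.
creds = "alice:Correct-Horse-Battery!"

# What actually crosses the wire in an HTTP Basic Auth header:
wire = "Authorization: Basic " + base64.b64encode(creds.encode()).decode()

# Base64 is an encoding, not encryption -- a sniffer reverses it trivially:
encoded = wire.split("Basic ")[1]
username, password = base64.b64decode(encoded).decode().split(":", 1)
print(username, password)  # alice Correct-Horse-Battery!
```

Password strength is irrelevant here; only transport encryption (TLS) keeps the credential out of a passive observer's hands.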

And passwords aren’t the only data being sent in the clear. Other examples of documents analysts have seen traversing the network include business plans, resumes, and information on competitors, according to one of the engineers staffing the SOC. 

While the passwords and documents traversing the network represent a significant security risk, Bair quickly points out that there is no threat of long-term information release; the hard disks from the monitoring and analysis appliances are crushed at the end of the conference.

Of course, the monitoring infrastructure established in the SOC sees more than just potentially embarrassing clear-text documents. Malware and possible malware were identified and analyzed through Cisco’s Advanced Malware Protection (AMP) Anywhere with its Threat Intelligence Cloud. Information on potential malware was shared among all nodes of the security network, and with other security networks related to the RSA Conference infrastructure, for more rapid identification and (potential) remediation.

Ultimately, Bair likened the activity of the SOC to the basic instruction given to the fighting women and men of the U.S. Army. “You have to do three things: Shoot, move, and communicate. If you’re not doing all three, you’re [redacted] dead.”

In cybersecurity terms, the system must actively defend the organization’s assets, be agile in shifting its activities to meet evolving threats, and share information and commands with other networks looking for malware and malicious behavior. With all three, an organization has a chance to defend itself effectively. Without them, sooner or later your organization is truly [redacted] dead.



Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and more.

Article source: https://www.darkreading.com/threat-intelligence/at-rsac-soc-sees-user-behaviors/d/d-id/1331607?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple