Mega medical tester pester: It smacked a big one, that malware scam, if indeed it was SamSam

Analysis One of the largest clinical testing specialists in the US, LabCorp Diagnostics, is coming out of recovery mode a week after being hit with ransomware – reportedly SamSam, the same malware that brought the US city of Atlanta to a standstill earlier this year.

LabCorp has not confirmed that the malware was SamSam, but several reports have cited “people familiar with the matter” saying it was.

News of the attack emerged when the company made a public statement and filed a Securities and Exchange Commission (SEC) report describing the fact of the attack – and little more – on 16 July.

It has since been reported that the attack, which struck around 14 July, may have been more serious than the notification suggested.

Quoting unnamed sources familiar with the probe, CSO Online reported that the attackers had gained access to the network by brute-forcing credentials on a resource accessible via Remote Desktop Protocol (RDP), after which the first server was encrypted.

LabCorp’s Security Operations Centre reportedly contained the spread of the malware in less than an hour. Even in that short window, however, the malware is alleged to have reached thousands of systems and servers, including hundreds of production servers important for day-to-day operations.

Patient data was not thought to have been breached in the attack.

“Work has been ongoing to restore full system functionality as quickly as possible, testing operations have substantially resumed, and we are working to restore additional systems and functions over the next several days,” LabCorp said in a follow-up statement to journalists.

“As part of our in-depth and ongoing investigation into this incident, LabCorp has engaged outside security experts and is working with authorities, including law enforcement. Our investigation has found no evidence of theft or misuse of data.”

If the reports are true and the ransomware was SamSam, several elements of this attack stand out, starting with the unusual aggressiveness of its spread even once it had been detected – clearly, defenders don’t have minutes to mitigate; they have seconds.

This aggressiveness has been noticed in previous SamSam attacks and is a feature of its design: the payload is decrypted manually at runtime by a remote attacker supplying a password. That makes the malware hard to detect, let alone analyse forensically, once it has deleted traces of itself.

A second is the consistent targeting of RDP and VNC, with the attackers hunting for and compromising remote access gateways protected by weak credentials.

In the case of Hancock Health hospital in Indiana, criminals broke in after finding a box with an exploitable RDP server before injecting their ransomware into connected computers.

Finding open ports or outdated software versions isn’t hard to do using public tools such as Shodan, which raises the question of why defenders don’t comb their own networks for open ports in a similar fashion.
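
By way of illustration, a minimal self-audit along those lines takes only a few lines of Python – the addresses below are placeholders, and you should only ever probe kit you own:

import socket

HOSTS = ["192.0.2.10", "192.0.2.11"]              # placeholder addresses
PORTS = {3389: "RDP", 5900: "VNC", 873: "rsync"}  # common remote-access ports

for host in HOSTS:
    for port, name in PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            # connect_ex returns 0 when the TCP handshake succeeds
            if sock.connect_ex((host, port)) == 0:
                print(f"{host}:{port} ({name}) is reachable - review its exposure")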

The final part of the SamSam playbook is the targeting of companies in the medical sector. Although the most infamous incident involving this malware was the city of Atlanta in March, disruptive attacks on medical practice management software provider Allscripts as well as Adams Memorial Hospital, also in Indiana, just a few weeks earlier underlined this targeting preference.

On that score, attackers who use the nasty appear to have achieved a level of success. Ransoms are usually below $50,000, with some victims reportedly paying up, which possibly encourages more attacks.

As LabCorp is doubtless finding out to its cost, the alternative is days or even weeks of disruption. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/24/labcorp_samsam_ransomware_attack/

Robo-drop: Factory bot biz ‘leaks’ automakers’ secrets onto the web

Yet another organization has allegedly been caught accidentally exposing more than 100GB of sensitive corporate data to the open internet.

This time it’s Canadian outfit Level One Robotics, which specializes in building factory robots for automakers. The exposed information includes, it is claimed, confidential documents involving the likes of Toyota, Ford, GM, VW, Fiat Chrysler, and Tesla.

UpGuard, an infosec biz with a forte in uncovering online leakages, said late last week the secret files were available for anyone on the internet to find via a poorly configured rsync server. We’re told Level One’s rsync server had been set to take connections from any IP address without authentication.

This, according to UpGuard, meant that anyone with an rsync client, and the vulnerable server’s IP address, could have potentially connected to the server and downloaded internal – and rather sensitive and valuable – company documents and customer data stored on the box. These files, it is claimed, included robot designs for building cars, NDAs, and so on.
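
For admins wondering whether their own rsync daemon behaves the same way, here is a rough Python sketch of that anonymous handshake – the host is a placeholder, and the daemon protocol exchange is simplified:

import socket

HOST, PORT = "192.0.2.20", 873   # placeholder: one of your own servers

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    greeting = sock.recv(64)     # the daemon speaks first, e.g. b"@RSYNCD: 31.0\n"
    sock.sendall(greeting)       # echo the protocol version back
    sock.sendall(b"#list\n")     # request the list of exported modules
    listing = sock.recv(4096).decode(errors="replace")
    if listing.strip():
        print("Daemon listed modules without authentication:\n" + listing)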

“The 157 gigabytes of exposed data include over 10 years of assembly line schematics, factory floor plans and layouts, robotic configurations and documentation, ID badge request forms, VPN access request forms, and ironically, non-disclosure agreements, detailing the sensitivity of the exposed information,” the UpGuard team claimed.

“Not all types of information were discovered for all customers, but each customer contained some data of these kinds.”

We’re told the schematics also included detailed CAD illustrations of factory layouts and equipment design, as well as animations showing how the robotic equipment was designed to operate.

Level One did not respond to El Reg‘s request for comment on the matter, seemingly because the biz is keeping schtum.

“Level One takes these allegations very seriously and is diligently working to conduct a full investigation of the nature, extent and ramifications of this alleged data exposure,” Level One CEO Milan Gasko told the New York Times. “In order to preserve the integrity of this investigation, we will not be providing comment at this time.”

Not all of those impacted by the breach are so worried, however. A Ford spokesperson told The Register that the company’s exposure appears minimal. “We’ve found no information that would indicate Ford is impacted,” the spokesperson said. “This supplier does not handle confidential information for the joint venture to whom they are contracted and they have not alerted us to any issue.”

Meanwhile, General Motors, Toyota, Fiat Chrysler, Tesla, and Volkswagen have declined to comment or not responded to requests for comment.

Regardless of what was exposed and who it belonged to, the breach is yet another reminder that system and network administrators should pay close attention to what data they have exposed to the public internet and how they lock that data down.

“It’s important to note that this most recent data exposure was not due to a vulnerability in rsync but rather a misconfiguration,” RedLock veep of cloud security Matt Chiodi told The Register. “This is the same type of administrative error we continue to see over and over again both on-premises as well as in the cloud.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/23/car_factory_rsync_server_leak/

Big bad Bluetooth blunder bug battered – check for security fixes

With a bunch of security fixes released and more on the way, details have been made public of a Bluetooth bug that potentially allows miscreants to commandeer nearby devices.

This Carnegie Mellon CERT vulnerability advisory on Monday laid out the cryptographic flaw: firmware or operating system drivers skip a vital check during a Diffie-Hellman key exchange between devices.

The impact: a nearby eavesdropper could “intercept and decrypt and/or forge and inject device messages” carried over Bluetooth Low Energy and Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) wireless connections between gizmos.

In other words, you can potentially snoop on supposedly encrypted communications between two devices to steal their info going over the air, and inject malicious commands. To pull this off, you must have been within radio range and transmitting while the gadgets were pairing.

Curveball

The security weakness crept into pairing implementations that use Diffie-Hellman key exchanges. During pairing, the two devices are meant to create a shared secret key based on an exchange of their public keys, and during that process, the two ends of the conversation agree on the elliptic curve parameters they use.

Some implementations don’t validate all the elliptic curve parameters, and that lets an attacker “inject an invalid public key to determine the session key with high probability,” the CERT note explained. “Such an attacker can then passively intercept and decrypt all device messages, and/or forge and inject malicious messages.”
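
The missing check itself is simple to state. Here is a minimal sketch in Python of public key validation on NIST P-256, one of the curves used in Bluetooth pairing: confirm the received coordinates are in range and actually satisfy the curve equation before doing any key computation.

# Standard NIST P-256 domain parameters
P256_P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
P256_A = P256_P - 3
P256_B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def is_valid_p256_point(x: int, y: int) -> bool:
    # Reject out-of-range coordinates, then test y^2 = x^3 + ax + b (mod p).
    # Skipping this test is exactly what lets an attacker inject an invalid point.
    if not (0 <= x < P256_P and 0 <= y < P256_P):
        return False
    return (y * y - (x * x * x + P256_A * x + P256_B)) % P256_P == 0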

This security shortcoming affects devices that use Secure Simple Pairing and LE Secure Connections. The Bluetooth Special Interest Group, which oversees the communication protocol’s standards, said it will update its specifications to prevent goofy implementations:

Researchers at the Israel Institute of Technology identified a security vulnerability in two related Bluetooth features: Secure Simple Pairing and LE Secure Connections.

The researchers identified that the Bluetooth specification recommends, but does not require, that a device supporting the Secure Simple Pairing or LE Secure Connections features validate the public key received over the air when pairing with a new device.

It is possible that some vendors may have developed Bluetooth products that support those features but do not perform public key validation during the pairing procedure. In such cases, connections between those devices could be vulnerable to a man-in-the-middle attack that would allow for the monitoring or manipulation of traffic.

So far, makers of affected Bluetooth chipsets include Apple, Broadcom, Intel, and Qualcomm.

The bug’s status in Android is confusing: while it doesn’t appear in the operating system project’s July monthly bulletin, phone and tablet manufacturers like LG and Huawei list the bug as being patched in the, er, July security update. Microsoft has declared itself in the clear.

The CERT note says fixes are needed both in software and firmware, which should be obtained from manufacturers and developers, and installed – if at all possible. We’re guessing for random small-time Bluetooth gizmos, it won’t be very easy to prise an update out of the vendors, although you should have better luck with bigger brand gear.

So, make sure you’re patched via the usual software update mechanisms, or just look out for nearby snoops, and be ready to thwart them, when pairing devices. Manufacturers were warned in January, it appears, so have had plenty of time to work on solutions.

Indeed, silicon vendor patches for CVE-2018-5383 are already rolling out among larger gadget and device makers, with Lenovo and Dell posting updates in the past month or so.

Linux versions prior to 3.19 don’t support Bluetooth LE Secure Connections and are therefore not vulnerable, we’re told. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/24/bluetooth_cryptography_bug/

Google Chrome: HTTPS or bust. Insecure HTTP D-Day is tomorrow, folks

Google Chrome users who visit unencrypted websites will be confronted with warnings from tomorrow.

The change arrives with the latest version of Google Chrome, version 68: any web page not served over HTTPS with a valid TLS certificate will show a “Not secure” warning in the address bar. The warning will apply both to internet-facing websites and to intranet sites accessed through Chrome, which has approximately 60 per cent browser market share.

In Chrome 68, the address box will display “Not secure” for all HTTP pages.

The Chrome update is designed to spur sites still stuck on HTTP to move over to HTTPS, as Google explained back in February. The web has made great strides in that direction of late but much work is yet to be done.

Security luminary Troy Hunt is developing a site called whynohttps to coincide with the Chrome 68 launch. The site will list the world’s largest websites that don’t do HTTPS by default.

Hunt and his colleague Scott Helme are looking to list HTTPS laggard sites by industry sector, a task they’d like some help automating, as well as by country. Hunt explained in a Twitter update: “For people offering support on this, I’ve sorted the country data, but what I really need now is data on the category of the site. Is there any service that says ‘Baidu is a search engine, Fox News is media, etc’?”

The majority (542K) of the top one million sites do not redirect to HTTPS and will therefore be labelled as insecure from tomorrow onwards, Cloudflare warned.
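
Checking any one site for that behaviour is straightforward. A minimal sketch in Python, using the third-party requests library (the domain below is just an example):

import requests  # pip install requests

def redirects_to_https(domain: str) -> bool:
    # Follow redirects from the plain-HTTP address and see where we land
    resp = requests.get(f"http://{domain}/", timeout=10, allow_redirects=True)
    return resp.url.startswith("https://")

print(redirects_to_https("example.com"))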

Running secure sites is not only for the big boys and is not necessarily expensive. Let’s Encrypt certs are free. Aside from the security benefit of preventing pages from being tampered with in transit, HTTPS has commercial benefits for site owners too. Both browsers and search bots favour HTTPS sites.

Although Chrome is the first mainstream browser to affix high-visibility warnings to non-HTTPS websites, it’s likely that Microsoft, Apple and Mozilla will follow suit. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/23/https_dday_google_chrome/

IT biz embezzlement brouhaha leaves bloke with $456k migraine

An investor in an IT biz has coughed up $456,000 after America’s financial watchdog accused him of looking the other way while executives at the consultancy he backed allegedly embezzled millions of dollars.

Late last week, Bhusan Dandawate was charged [PDF] by the SEC with allegedly aiding and abetting fraud – after he was said to have let Quadrant 4 CEO Nandu Thondavadi and CFO Dhru Desai cook the company’s books to steal more than $4.1m between 2015 and 2016.

Based out of Schaumburg, Illinois, in the US, Quadrant 4 offered consulting and software development support for retail, financial, and medical companies. The firm is still operating, and has been under new management since the November 2016 arrests of Thondavadi and Desai.

Dandawate was accused of helping the two embezzle money from the business by falsely claiming to own ten shell companies that Thondavadi and Desai were using to shift money out of the company. Additionally, the SEC alleged that Dandawate helped the duo set up false payments and produce phony audit confirmation letters to help cover up the fraud.

In exchange for this, it is said Dandawate received $122,000 in cash, benefited from Quadrant 4’s stock price rising thanks to the fraudulent reports, and had debt liability he jointly held with Thondavadi and Desai paid off.

Last month, Quadrant 4 reached a deal [PDF] with the SEC to settle its part in the case and enter Chapter 11 bankruptcy protection. The watchdog’s case against Thondavadi is ongoing. Also last month, Desai agreed to settle his part in the SEC lawsuit by forking out roughly $1.6m.

To end his involvement in this matter, Dandawate will pay the SEC $131,466 plus a civil penalty of $325,000. He has also agreed to a permanent bar on serving as an officer or director of a publicly traded company, and to an order enjoining him from future violations of US securities law. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/23/sec_quadrant_4/

Spectre rises from the dead to bite Intel in the return stack buffer

Spectre, a class of vulnerabilities in the speculative execution mechanism employed in modern processor chips, is living up to its name by proving to be unkillable.

Despite a series of mitigations proposed by Intel, Google and others, recent claims by Dartmouth computer scientists to have solved Spectre variant 1, and a proposed chip design fix called SafeSpec, new variants keep appearing.

The findings also revive doubts about whether current and past chip designs can ever be truly fixed.

Only two weeks ago, researchers Vladimir Kiriansky and Carl Waldspurger disclosed new data-stealing exploits, dubbed Spectre 1.1 and 1.2 [PDF].

Now there’s another called SpectreRSB that exploits the return stack buffer (RSB), a system in modern CPUs used to help predict return addresses, instead of the branch predictor unit.

In a paper titled Spectre Returns! Speculation Attacks using the Return Stack Buffer, distributed through pre-print server ArXiv, boffins Esmaeil Mohammadian Koruyeh, Khaled Khasawneh, Chengyu Song, and Nael Abu-Ghazaleh detail a new class of Spectre attack that accomplishes the same thing as Spectre variant 1 – allowing malicious software to steal passwords, keys, and other sensitive information from memory it shouldn’t be allowed to touch.

These researchers, incidentally, are among those who developed the SafeSpec mitigation.

The latest data-theft technique involves forcing the processor to misspeculate using the RSB. On x86, a call instruction pushes a return address onto both the software stack and the RSB; by then tampering with the stack, a SpectreRSB attacker ensures the value at the top of the RSB no longer matches the return address on the stack, so the eventual return instruction speculatively executes at the attacker-chosen address.

The paper, dated July 20, outlines the steps involved in the SpectreRSB attack, which itself has six variants:

“(1) after a context switch to the attacker, s/he flushes shared address entries (for flush reload). The attacker also pollutes the RSB with the target address of a payload gadget in the victim’s address space; (2) the attacker yields the CPU to the victim; (3) The victim eventually executes a return, causing speculative execution at the address on the RSB that was injected by the attacker. Steps 4 and 5 switch back to the attacker to measure the leakage.”

The paper also provides some sample code:

1. Function gadget()
2. {
3.   push %rbp
4.   mov %rsp, %rbp
5.   pop %rdi //remove frame/return address
6.   pop %rdi //from stack stopping at
7.   pop %rdi //next return address
8.   nop
9.   pop %rbp
10.  clflush (%rsp) //flush the return address
11.  cpuid
12.  retq //triggers speculative return to 17
13. } //committed return goes to 23
14. Function speculative(char *secret_ptr)
15. {
16.  gadget(); //modify the Software stack
17.  secret = *secret_ptr; //Speculative return here
18.  temp = Array[secret * 256]; //Access Array
19. }
20. Function main()
21. {
22.  speculative(secret_address);
23.  for (i = 1 to 256) //Actual return to here
24.  {
25.   t1 = rdtscp();
26.   junk = Array[i * 256]; //check cache hit
27.   t2 = rdtscp();
28.  }
29. }

The researchers have tested SpectreRSB on Intel Haswell and Skylake processors and the SGX2 secure enclave in a Core i7 Skylake chip. They did not test AMD or Arm cores, but note that both chip designers use RSBs, and so reported their findings to those two just in case, as well as to Chipzilla.

The eggheads claimed “none of the known defenses including [Google’s] Retpoline and Intel’s microcode patches stop all SpectreRSB attacks.”

A spokesperson for Intel told us the Xeon maker believes its mitigations do thwart SpectreRSB side-channel shenanigans:

SpectreRSB is related to Branch Target Injection (CVE-2017-5715), and we expect that the exploits described in this paper are mitigated in the same manner. We have already published guidance for developers in the whitepaper, Speculative Execution Side Channel Mitigations. We are thankful for the ongoing work of the research community as we collectively work to help protect customers.

You can find Intel’s white paper here [PDF]. Spokespeople for AMD and Arm were not available for immediate comment.

ELF’n’safety

Last week, researchers at Dartmouth suggested a defense against Spectre variant 1, for which Intel had previously proposed adding the LFENCE instruction to code to block unsafe speculative execution.

The Dartmouth solution involves using ELFbac policy techniques. In an email to The Register, Dartmouth PhD student Prashant Anantharaman explained that ELFbac lets programmers set policies for memory permissions. Effectively, a program’s ELF executable tells the operating system how to ring-fence particular areas of memory to hopefully thwart Spectre side-channel attacks.

“The permissions set here for the page-tables are also respected by the speculative execution branches, and hence if the developer intends to protect certain secrets within the program, the attacker exploiting Spectre would still not be able to gain access to these secrets,” he said. “These policies are defined at the ABI-level using the existing capabilities of the static linker and the build tool-chain.”

Anantharaman said the technique, which is enforced through Linux kernel memory management mechanisms, can be generalized for use against a larger class of intra-process memory attacks.

Asked via email whether he’s seen the Dartmouth research, Nael Abu-Ghazaleh, professor of computer science and engineering at UC Riverside and a co-author of the SpectreRSB paper, told The Register that he had reviewed it briefly.

“From the information I can find it has a strong flavor of basically doing the KPTI solution but at the user level software (i.e., map regions of memory where the secrets are only when you need them),” he said. “This is likely to have a substantial performance overhead, especially if the secret data is accessed often. Moreover, the programmer has to be disciplined enough to isolate all the secret data.”

Abu-Ghazaleh said there’s value in software-based fixes, but they’re not enough on their own.

Echoing what computer science researcher Daniel Genkin told The Register in January, Abu-Ghazaleh believes that chips will have to be redesigned to eliminate speculative execution flaws.

“Although patching is important, we need to consider this class of vulnerability in the design of new processors to completely address the problem,” he said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/23/spectre_return_stack_buffer/

If at first you, er, make things worse, you’re probably Microsoft: Bug patch needed patching

A remote code execution vulnerability in the Windows VBScript engine was left open for exploitation for two months after it was supposedly patched.

In fact, the fix made things even worse by introducing another remotely exploitable bug in VBScript.

This is all according to researchers at Qihoo 360, who today claimed a security hole in the scripting engine was only partially resolved in Redmond’s May Patch Tuesday, and was not fully fixed until this month’s batch of patches.

Designated CVE-2018-8174, the flaw was a use-after-free() vulnerability in the scripting engine that could be exploited by a booby-trapped web page, when opened with Internet Explorer, or a malicious document, when opened by Office, to execute arbitrary devilish code with the current user’s rights.

Qihoo 360 researcher Yuki Chen said the team detected the programming blunder being targeted in the wild to infect victims’ PCs with malware, and reported the issue to Microsoft along with a short proof-of-concept (PoC) exploit. Redmond techies would go on to attempt to patch the bug in the May monthly security release.

“After analyzing the patch for CVE-2018-8174 carefully, we realized that the fix was not so complete and there still exists similar problems which could be leveraged to achieve reliable remote code execution in VBScript engine,” Chen explained this week.

“We reported the issues we found to Microsoft immediately, and Microsoft addressed them with a new fix (CVE-2018-8242) in July 2018 security update.”

As Chen explained, Redmond’s remedy for the bug in May only addressed part of the flaw outlined in the PoC, and the underlying cockup – a SafeArray that can be accessed after being erased – was still present. Additionally, the researcher said, the May patch actually created a fresh remote code execution vulnerability in the engine, a double-free() flaw that can corrupt memory leading to arbitrary code execution.

That second side-effect flaw was also reported to Microsoft, and addressed in the July update. Now, if you have the latest fixes, you should be safe from these two CVEs.

“Now for VBScript operations which will free a SafeArray’s internal buffer (erase, redim), before clearing elements in the array, it will first set the internal SafeArray descriptor to null,” Chen explained.

“After this fix, you will not be able to access (read/write/free) the SafeArray inside the |Class_Terminate| again because the descriptor has already been cleared when the callback is invoked.

“This patch perfectly fixed the new issues we reported.”

A spokesperson for Microsoft was not available for immediate comment.

The situation underscores just how difficult it can be to develop secure and thorough security patches. As much as we at El Reg like to tease Microsoft for its inability to properly close up security holes, we understand that writing secure non-trivial code is difficult, and even when problems are clearly identified, they’re not easy to completely fix.

Just ask Intel. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/23/qihoo_360_microsoft_patches/

Software is Achilles Heel of Hardware Cryptocurrency Wallets

Upcoming Black Hat talk will detail software vulnerabilities that can put private cryptocurrency wallets and currency exchange services at risk.

Cryptocurrency exchanges and private wallets have been fully in cyberattacker crosshairs as criminals seek to make the most of an exploding new financial market that some analysts say will reach $1 trillion by the end of the year. In response to these attacks, a number of manufacturers have come out with secure hardware wallets meant to harden the storage of the cryptographic keys that serve as proof of ownership of vast sums of money. However, a new piece of research expected out of Black Hat USA next month shows that these secure hardware storage devices may not be as locked down as their users expect them to be.

Presented by Sergei Volokitin, the research will show how software attacks can be used to break the Secure Element, the supposedly tamper-resistant hardware platform upon which these hardware wallets base their protection. Volokitin found vulnerabilities in these wallets’ trusted execution environment (TEE) operating systems that could be manipulated to compromise memory isolation and cause the wallet to give up operating system and application secrets.

“Despite the fact that the device makes use of secure hardware to protect the private keys, a number of flaws in the software design and implementation allowed us to create various attack scenarios, including remote, physical and supply chain attacks,” explains Volokitin, who works as a security analyst for Riscure, a global security test lab based in the Netherlands. 

Using the identified vulnerabilities, anybody who can get physical access to the hardware wallet would be able to steal keys and data from the device. What’s more, an attacker could theoretically create a supply chain attack where they poison the device from the get-go in order to gain full control of wallets on the device once users started putting data into the hardware’s applications. It might sound like a far-fetched attack scenario, but given the stakes it’s not unreasonable to consider.

“In cryptocurrency hardware wallets, the stakes are high, since single private key is the only asset preventing an attacker from stealing the coins and getting away with it,” says Volokitin.

While particularly troubling for the cryptocurrency ecosystem, Volokitin’s research also has broader implications across enterprise security. It’s another lesson that hardened secure devices are only as tamper-proof as the firmware and other software embedded within them. 

“Although the hardware wallets are primarily designed to be used in cryptocurrency-related solutions, from a security point of view they are not much different from any other security devices,” he says. “In fact, one of the compromised applications on the device was the secure application for FIDO authentication, which can be used in many other applications as well.”

He explains that generally it is very difficult for end customers to evaluate the risk of using a hardware security device if it does not require mandatory certification.

“The main questions the end users of such secure devices need to ask themselves/the manufacturer is what manufacturer did to improve the security of the device,” he says, explaining they should be looking for vendors that do extensive testing of their hardware products. “Doing an evaluation of a security solution by a third party, through an evaluation or a bug bounty, is an effective way to improve security of a product.”

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/vulnerabilities---threats/software-is-achilles-heel-of-hardware-cryptocurrency-wallets/d/d-id/1332358?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft, Google, Facebook, Twitter Launch Data Transfer Project

The open-source Data Transfer Project, intended to simplify and protect data transfer across apps, comes at a sensitive time for many of the participating organizations.

Microsoft, Google, Facebook, and Twitter have teamed up to launch a new initiative dubbed the Data Transfer Project (DTP), which is intended to simplify data sharing across services.

The open-source effort is dedicated to building tools that will enable users to directly transfer information from one service to another so they don’t have to download and re-upload it, explains Google, which first mentioned the project in a post about its preparations for GDPR (General Data Protection Regulation). Instead, people can port data from one company to another from within an application.

It’s an interesting and somewhat sensitive time for these companies to be embarking on a data sharing project, given both Facebook and Google have recently been at the center of news involving their use of consumer information. Facebook is still dealing with the aftermath of the Cambridge Analytica scandal, which was centered around its API. Google recently responded to a report stating developers can sift through users’ inboxes using third-party apps.

The participating organizations outlined their plans to secure and protect users’ data in a white paper on the initiative, and described the responsibilities of users and businesses to protect information.

How the DTP works: all organizations involved with DTP are creating tools to convert any service’s proprietary APIs to and from a set of standardized data formats, which can be used by anyone. This will let people move data between any two services using a standard infrastructure and authorization. So far, Google says, they have created adapters for seven providers and five types of user data.

DTP is made up of three main components, as explained on the project’s website. The first is data models, or frameworks to create a common understanding of how to transfer information. Data models are grouped in verticals; for example, photos, emails, contacts, and music.

Each vertical has its own set of data models to facilitate transfer of related file types. The music vertical, for example, would have models for playlists, songs, or music videos. One goal of the DTP is for organizations to use common data models, which would lessen the need for individual businesses to maintain and update proprietary APIs.

The second component is company-specific adapters for data and authentication. Data adapters consist of code that translates a provider’s APIs into data models, and they come in two pairs: one is an exporter to translate from a provider’s API into the data model; the other is an importer to translate from the data model into the API. Authentication adapters let consumers log into their accounts before moving data from service to service.
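
The project’s real adapters are written against each provider’s production APIs, but the exporter/importer split can be sketched in a few lines of illustrative Python – every class and method name below is hypothetical, not part of the DTP codebase:

from dataclasses import dataclass

@dataclass
class PhotoModel:          # a common data model in the "photos" vertical
    title: str
    url: str
    album: str

class ServiceAExporter:
    def export(self, auth_token: str) -> list[PhotoModel]:
        # Would call Service A's proprietary API and translate each
        # photo it returns into the shared PhotoModel format.
        raise NotImplementedError

class ServiceBImporter:
    def import_photos(self, auth_token: str, photos: list[PhotoModel]) -> None:
        # Would translate each PhotoModel into Service B's API calls.
        raise NotImplementedError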

Task management libraries process background tasks: calls between adapters, secure data storage, retry logic, failure handling, individual notifications. DTP has task management libraries as a reference implementation for how to use the adapters for transferring data between apps.

Weighing in on Data Security

Services involved with the project must first agree to the data transfer between platforms, and users must independently authenticate to each account. Authorization mechanisms are up to partners, so they can choose any form currently in their existing security infrastructure.

Users’ data and credentials will be encrypted in transit and at rest, Google explains in a blog post on the news. Further, the DTP will rely on what Google describes as “perfect forward secrecy,” which generates a new unique key for each transfer. Because DTP is open source, anyone is free to check the code and verify data isn’t collected or used maliciously.
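
As a rough sketch of that per-transfer key idea – the white paper’s exact construction isn’t reproduced here, so treat this as illustrative – an ephemeral Diffie-Hellman exchange per session means that even if long-term credentials later leak, past transfers stay unreadable. In Python, with the third-party cryptography library:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(my_ephemeral: X25519PrivateKey, peer_public) -> bytes:
    shared = my_ephemeral.exchange(peer_public)  # fresh shared secret per transfer
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"dtp-transfer").derive(shared)

# Both ends generate throwaway keys used for this one transfer only
a, b = X25519PrivateKey.generate(), X25519PrivateKey.generate()
assert session_key(a, b.public_key()) == session_key(b, a.public_key())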

Microsoft’s Craig Shank, vice president for corporate standards, points out how DTP enables data portability that will be especially important for people with poor Internet access.

“For people on slow or low bandwidth connections, service-to-service portability will be especially important where infrastructure constraints and expense make importing and exporting data to or from the user’s system impractical if not nearly impossible,” he writes in a blog post.

While it may seem weird to see four tech giants working together on a project like this, breaking down the barriers for data transfer would make things easier for users and companies in the wake of GDPR, which requires platforms to provide all available information on a person.

Existing code for DTP can be accessed on GitHub.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/application-security/microsoft-google-facebook-twitter-launch-data-transfer-project/d/d-id/1332360?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DOJ to publicly disclose election tampering schemes

In the months leading up to the 2016 US presidential election, how many of us knew that Russia was tinkering with the race?

…or that Russia had targeted us with propaganda, tried to suppress the vote, or deliberately tried to puncture Hillary Clinton’s chances by leaking stolen information from her campaign, the Democratic National Committee (DNC) and the Democratic Congressional Campaign Committee (DCCC)?

At least one outfit knew much of that: the Obama administration. Too bad it didn’t tell the country.

But if the US Department of Justice (DOJ) stays true to a newly announced policy, we can expect to hear a whole lot more about foreign cyberattacks and propaganda/disinformation campaigns targeting the country’s democracy – hopefully before a given election takes place, not after.

On Thursday, Deputy Attorney General Rod J. Rosenstein announced at the Aspen Security Forum that under a new policy, the DOJ will inform US businesses, organizations and even individuals if they’re being targeted by foreign operations in an attempt to influence the country’s elections.

As the New York Times notes, the Obama administration knew for months before the 2016 election that Russia was trying to interfere in the race. President Obama didn’t reveal the plot, however, given his concern that it would be seen as a partisan move and his reluctance to add fuel to then-GOP-nominee Donald Trump’s fire with regards to Trump’s claims that the election was rigged.

Keeping these schemes in the dark doesn’t help, said Rosenstein. Engadget quoted the Deputy Attorney General:

Exposing schemes to the public is an important way to neutralize them. The American people have a right to know if foreign governments are targeting them with propaganda.

The policy was laid out in a report that comes less than a week after the DOJ indicted 12 Russian intelligence officers connected to attacks on the computers and email systems of the DNC in the months leading up to the election.

This is the first comprehensive report to come out of the US Attorney General’s Cyber-Digital Task Force, and marks the first time that the DOJ has publicly articulated the types of threats posed by malign foreign influence operations and formally described how, in coordination with other federal departments and agencies, it’s responding.

The report identified five types of malign foreign influence intended to harm the US political system: attacks on voting infrastructure, including voter registration databases and vote-tallying systems; theft and weaponization of data; secret assistance of politicians, including how Russians behind the Guccifer 2.0 and DCLeaks Twitter accounts engaged with politicians to offer them damaging information on their opponents; the spreading of false information and propaganda, such as the use of trolls on social media to spread fake news; and unlawful lobbying efforts.

But to get back to the let-us-know-when-you-know department, the report also announced a new policy governing the disclosure of foreign influence operations: a policy that’s governed by the principle that the DOJ has got to stay politically neutral, has to comply with the First Amendment, and has to do its disclosures in a way that maintains public trust.

During his speech at the Aspen Security Forum, Rosenstein said that Russia’s effort to influence the 2016 election “is just one tree in a growing forest. Focusing merely on a single election misses the point.”

Rosenstein cited Director of National Intelligence Daniel Coats, who last Friday said that Russia’s actions didn’t stop with the conclusion of the 2016 elections. Rather, they’re still ongoing.

As Director Coats made clear, these actions are persistent, they are pervasive, and they are meant to undermine America’s democracy on a daily basis, regardless of whether it is election time or not.

One example: Also at the Aspen Security Forum, Microsoft revealed that it’s already detected and helped to block hacking attempts against three congressional candidates this year, marking the first known example of cyber interference in the upcoming midterm elections.

Tom Burt, Microsoft’s vice president for security and trust, as quoted by Politico:

Earlier this year, we did discover that a fake Microsoft domain had been established as the landing page for phishing attacks. And we saw metadata that suggested those phishing attacks were being directed at three candidates who are all standing for election in the midterm elections.

Burt declined to name the targets and didn’t specify whether or not the attacks came from Russia. But he did say that the targets were “people who, because of their positions, might have been interesting targets from an espionage standpoint as well as an election disruption standpoint.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pmfTpBkwx_c/