STE WILLIAMS

Get Organized Like a Villain

What cybercrime group FIN7 can teach us about using agile frameworks.

This past September, Fedir Hladyr, the IT administrator for the cybercrime group FIN7 — which targeted American consumer data and sold it on the black market — pleaded guilty to wire fraud and conspiracy to commit computer hacking. This case stood out because the techniques and tool sets FIN7 leveraged are fundamentally similar to those that most engineering, help desk, and IT departments use to manage their work on a daily basis.

According to court documents, Hladyr coordinated FIN7’s criminal efforts through several platforms that manage tickets, tasks, and real-time chat. Hackers uploaded stolen credentials and assigned next steps through Jira, shared malicious code and stolen PCI data on HipChat, and communicated in real time on JabbR. Through these means, FIN7 stole more than 15 million credit card numbers from US retailers and restaurants.

FIN7’s well-coordinated attacks greatly contributed to its illegal success. Its techniques have inspired us to reflect on the tactics we use to stay organized during red teams and penetration tests, and the benefits we gain from leveraging agile frameworks and ChatOps (the use of chatbots to execute on custom scripts and plugins and receive metrics and alerts from automation) — whether we are a team of three or a large group spread across multiple time zones.

Increasing Efficiency
Properly executed, an agile workflow increases the efficiency of an engagement by eliminating the need to ask, “What should I do next?” After completing a task, pen testers can check whether they’ve been assigned a new job or choose from a selection of unclaimed tasks.

Dividing and conquering tasks also allows team members to play to their strengths — one member may be better at cracking hashes for credentials, while another is great at finding where to use those credentials. When specialized testers can focus on tasks that align with their niche skills, downtime and confusion are reduced, and the whole team is more effective.

Spontaneous task creation is another game changer. If something interesting pops up mid-review, a new task can be added to the backlog and reviewed later. This process captures the spark of hacker intuition while keeping the tester focused on the current objective.

Increasing Transparency
For most security engagements, there are countless starting places, each with a slew of attack vectors to test. Creating and assigning tasks in a centralized location not only provides a flexible structure for building lists of attack venues and monitoring progress, but it also increases transparency for teams and their clients.

During an on-site assessment, our four-person team created an impromptu Kanban board on a conference room wall, placing Post-it notes in three columns: TO DO, IN PROGRESS, and DONE. The initial tasks were based on high-level goals, and as we identified new opportunities, new Post-its were created. This improvised Kanban board helped us track our activities quickly and clearly. And when the client suggested new areas to investigate, those became new Post-its in the TO DO column. This level of real-time transparency communicated our progress, confirmed we were completing their high-level goals, demonstrated our custom approach to their environment, and showed them their input mattered. 

Ensuring Consistency
Inconsistent team behaviors can lead to missed critical exposures. Tickets become a central place to discuss how a task is completed, and templates ensure that jobs are performed in a repeatable way.

Recently, security researcher Tom Hudson (a member of DISTURBANCE, a top bug bounty team) told us that Trello checklists created during team bug bounty challenges helped teams build a strong foundation:

It’s really common to perform the same set of tasks against multiple targets or endpoints; for a given domain, we might want to enumerate subdomains, run port scans, screenshot web-server responses, and so on. Having a template card with a prepopulated to-do list means we can make our process consistent between team members and we don’t forget things.
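Hudson’s template approach is easy to sketch in code. The snippet below is purely illustrative: the checklist steps come from his quote, but the function and data structure are hypothetical, not part of Trello or any real tool.

```python
# Minimal sketch of the template idea: expand one prepopulated checklist
# per target so every teammate runs the same steps, in the same order.
RECON_TEMPLATE = [
    "enumerate subdomains of {target}",
    "run port scans against {target}",
    "screenshot web-server responses for {target}",
]

def tasks_for(targets):
    """One card per target, each carrying the full, identical checklist."""
    return {t: [step.format(target=t) for step in RECON_TEMPLATE] for t in targets}

board = tasks_for(["example.com", "shop.example.com"])
print(len(board))               # 2 cards
print(board["example.com"][0])  # enumerate subdomains of example.com
```

The point isn’t the code itself but the property it guarantees: no target gets a partial checklist, regardless of which team member picks up the card.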

Setting up a reliable, agreed-upon framework includes choosing which ChatOps channels to use (such as JabbR, HipChat, Slack, or IRC) and deciding how to classify and prioritize tasks. A good administrator, like FIN7 had with Hladyr, is also needed to manage shared naming conventions, maintain well-labeled folders, and keep everything running smoothly.

Enabling Continuous Agility
By adopting agile project management techniques for our continuous testing engagements, we create a real-time feed of potential vulnerabilities that we can review in a structured way. Real-time leads generated by automation are automatically turned into tasks and can be immediately picked up by team members across time zones or delegated to specialists. Furthermore, bots can push vulns of a certain type or severity level to a group chat for manual investigation. As a result, we can act on high-impact issues immediately and create a backlog of tickets for other potentially dangerous indicators.
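As a rough illustration of that flow, here is a minimal triage-bot sketch. Every name in it is hypothetical; it stands in for whatever real ticketing and chat integrations a team wires up.

```python
# Illustrative only: hypothetical names, no real ticketing or chat API.
# Shows the flow described above: automation findings become backlog tasks,
# and high-severity ones are also pushed to a group chat for manual review.
from dataclasses import dataclass, field

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

@dataclass
class Finding:
    title: str
    severity: str  # "low" | "medium" | "high" | "critical"

@dataclass
class TriageBot:
    chat_threshold: str = "high"  # push to chat at or above this severity
    backlog: list = field(default_factory=list)
    chat_messages: list = field(default_factory=list)

    def ingest(self, finding: Finding) -> None:
        # Every automated finding becomes a backlog ticket...
        self.backlog.append(f"TODO: investigate {finding.title}")
        # ...and high-impact ones are also announced in the team channel.
        if SEVERITY_ORDER[finding.severity] >= SEVERITY_ORDER[self.chat_threshold]:
            self.chat_messages.append(
                f"[{finding.severity.upper()}] {finding.title} - needs manual review"
            )

bot = TriageBot()
bot.ingest(Finding("outdated TLS on mail.example.com", "low"))
bot.ingest(Finding("exposed admin panel on 10.0.0.5", "critical"))

print(len(bot.backlog))        # 2: both findings become tickets
print(len(bot.chat_messages))  # 1: only the critical one reaches chat
```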

The organization and flexibility that comes with this continuous testing methodology allows us to alert our teams to new publicly disclosed CVEs and track recurring patterns over time.

No One System Fits All
Whether it’s a one-time engagement or a continuous assessment, a minimal, flexible structure amplifies and accelerates the efforts of security professionals on both sides of the law. Whether you’re using Jira, Trello, or a Post-it Kanban board, it’s important to build a robust environment that includes clear ways to organize information and communicate with your team.

FIN7’s infrastructure of tickets, botnets, and ChatOps allowed it to react to evolving situations and complete its backlog of exploit tasks. Without project management processes, organized channels, and tagged items, FIN7’s crime likely wouldn’t have paid as much. Disorganized crime just isn’t as profitable.

Attackers are finding great success adopting these agile techniques. Shouldn’t your offensive security team be doing the same?

Special thanks to Tom Hudson and Ori Zigindere for their insights on this topic and to Brianne Hughes for her editorial guidance.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “The Next Security Silicon Valley: Coming to a City Near You?”

Rob Ragan is a principal researcher at Bishop Fox, where he focuses on solutions and strategy as well as fostering industry relationships. His areas of expertise include continuous penetration testing and red teaming. He is developing research to improve Bishop … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/get-organized-like-a-villain/a/d-id/1336504?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Most, Least Insecure US Cities for SMBs

A new report looks at computer activity in the 50 largest metropolitan areas.

Does where a business is located impact its level of cybersecurity? A recent report says “absolutely.”  

For its research, Coronet, which provides cybersecurity services to small and midsize businesses, analyzed more than 93 million security events across a million endpoints residing on 24 million public and private networks in the 50 largest US metropolitan regions. The company’s analysis generated a composite threat index on which its rankings were based.

According to the report, the most secure US cities are Salt Lake City, St. Louis, Seattle-Tacoma, and Austin. The most insecure cities? Those honors go to Las Vegas, Houston, New York, and Miami-Fort Lauderdale.

Read more here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/risk/the-most-least-insecure-us-cities-for-smbs/d/d-id/1336605?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple iOS 13.3 is here, bringing support for keyfobby authentication

On Tuesday, as expected, Apple released iOS 13.3, iPadOS 13.3, tvOS 13.3, and watchOS 6.1.1 to the public, bringing bug fixes and performance improvements, as well as one big new security improvement: support in its Safari browser for two-factor authentication (2FA) hardware tokens such as Yubico’s Yubikey.

We’ve talked about these tokens before, and we like them.

As we’ve noted – most particularly when crooks have managed to steal people’s 2FA codes – hardware tokens based on the FIDO U2F or WebAuthn specifications, such as Yubico’s Yubikey or Google’s own Titan, are resistant to phishing and can’t be intercepted with a SIM-swap attack.

In its Tuesday updates, Apple added support for FIDO2-compliant security keys that make use of near-field communications (NFC), USB, and Lightning.

With the update, you can now authenticate by using the YubiKey 5 NFC or Security Key NFC: you just tap the YubiKey at the top of an iPhone (available on the iPhone 7 and above). You can also do physical authentication with the YubiKey 5Ci: you just plug it into the Lightning or USB-C port of an iPhone or iPad. ZDNet published a nice little gallery showing these keyfobby gadgets and where you insert them into your iThings.

Apple had detailed official support for the security keys in the iOS 13.3 beta 2 release notes:

Safari

New Features

Now supports NFC, USB, and Lightning FIDO2-compliant security keys in Safari, SFSafariViewController, and ASWebAuthenticationSession using the WebAuthn standard, on devices with the necessary hardware capabilities.

Turn that thing OFF and go do XYZ

In other privacy/safety news, the updates include something that should make parents happy: Communication Limits in Screen Time.

This gives parents more control over whom they – or their kids – can call, FaceTime, or Message. It lets you set specific limitations on when and with whom you can communicate. During downtime, you can limit your list to certain people. Also, a contact list for children lets parents manage the contacts that appear on their children’s devices.

After a certain time of day (say, when the kids are supposed to be doing their homework, or sleeping), parents can use the feature to block their communication with everyone except mom and dad. Screen Time Communication Limits apply to the Phone app, Messages app, and FaceTime.

Incognito Mode in Google Maps

Here’s another piece of privacy news for iOS-ers: Google is rolling out Incognito Mode in Google Maps for Apple iOS users.

On Monday, Marlo McGriff, Google Maps Product Manager, said that Incognito mode will work on iOS the same as on Android:

The places you search for or navigate to won’t be saved to your Google Account and you won’t see personalized features within Maps, like restaurant recommendations based on dining spots you’ve been to previously.

Google, the data gobbler that’s been accused of secretly tracking users’ locations and which has faced allegations of its “deceitful” tracking being against EU law, announced the new Incognito Mode for Maps in October.

Incognito mode in Maps will keep your device from recording your Maps activity on that device.

If you share a device with, say, your partner, they won’t know that you’ve searched Maps for jewellery stores to buy an engagement ring, nor if your route then included a stop at the Golden Banana club before you headed home.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/I3Xzl-C4fcA/

December Patch Tuesday blunts WizardOpium attack chain

December 2019’s Patch Tuesday updates are out, and for the most part, it’s the usual undemanding Christmas load for admins to browse through.

All told, there are 36 CVE-level vulnerabilities, seven of which are marked ‘critical’, 27 important, and one each for low and moderate.

Predictably, the critical flaws are all remote code execution (RCE) flaws, five relating to Git for Visual Studio, one in Hyper-V, and one in the Win32k Graphics subsystem.

The award for most interesting flaw of the month goes to CVE-2019-1458, an elevation of privilege zero-day in the W32k component that’s being exploited in the wild.

The assessment that this is ‘important’ rather than ‘critical’ is misleading given unconfirmed speculation that attackers are using it in conjunction with CVE-2019-13720, a use-after-free zero-day in Google Chrome versions prior to 78.0.3904.87, publicised in October.

The campaign behind their use was labelled Operation WizardOpium and linked to the Lazarus Group that was recently discovered to be separately targeting macOS users with ‘fileless’ malware.

The good news is that the Chrome flaw has already been patched, which just leaves admins to do the same for its apparent Microsoft companion flaw.

The RDP flaw with no patch

A curiosity this month is CVE-2019-1489 – the latest in a long line of Remote Desktop Protocol (RDP) bugs. What’s unexpected is that it affects Windows XP SP3, an operating system that stopped receiving automatic security fixes five years ago.

Unusually, Microsoft patched an RDP flaw in XP SP3 back in May, which at least raised the possibility that one might be offered in this case too. However, when we checked the Microsoft Update Catalogue for a manual patch, none was on offer. That’s because:

Microsoft will not provide an update for this vulnerability because Windows XP is out of support. Microsoft strongly recommends upgrading to a supported version of Windows software.

In other words, anyone suffering this flaw is on their own. Worse still, at least one security blogger thinks this flaw is probably being exploited on the basis of Microsoft’s ambiguous advisory.

SGX Plundervolt

Among 11 security advisories, Intel’s Patch Tuesday update features a fix for a research proof-of-concept attack on the company’s Software Guard eXtensions (SGX) enclave security implemented in all the company’s recent processors.

Identified as CVE-2019-11157, it’s been dubbed ‘Plundervolt’ by the researchers who reported it to Intel earlier this year:

Intel recommends that users of the above Intel Processors update to the latest BIOS version provided by the system manufacturer that addresses these issues.

This is probably not a big deal for the average computer user but the advisory to look for is INTEL-SA-00289.

Adobe

Adobe Acrobat and Reader get fixes for 21 CVEs, 14 of which are RCEs and therefore rated critical. There’s also a smattering of security fixes for ColdFusion, Photoshop CC, Bridge CC, Media Encoder, Illustrator, and Animate, including several more rated critical.

Time to get patching.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PoQEqUB0h3I/

S2 Ep20: Why don’t they send ransomware on floppies anymore?

December 1989 marks 30 years since the first ransomware attack was spammed out on 20,000 floppy disks [1’39”]. We also talk about the Snatch ransomware [8’08”], iPhone 11 tracking concerns [18’10”], and open-source supply chain madness [28’14”].

Host Anna Brading is joined by Sophos experts Mark Stockley, Peter Mackenzie and Paul Ducklin.

Listen below, or wherever you get your podcasts – just search for Naked Security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/L395T7rUrqk/

LightAnchors array: LEDs in routers, power strips, and more, can sneakily ship data to this smartphone app

Video A pentad of bit boffins have devised a way to integrate electronic objects into augmented reality applications using their existing visible light sources, like power lights and signal strength indicators, to transmit data.

In a recent research paper, “LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces,” Carnegie Mellon computer scientists Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison describe a technique for fetching data from device LEDs and then using those lights as anchor points for overlaid augmented reality graphics.

As depicted in a video published earlier this week on YouTube, LightAnchors allow an augmented reality scene, displayed on a mobile phone, to incorporate data derived from an LED embedded in the real-world object being shown on screen. You can see it here.

Unlike various visual tagging schemes that have been employed for this purpose, like using stickers or QR codes to hold information, LightAnchors rely on existing object features (device LEDs) and can be dynamic, reading live information from LED modulations.

The reason to do so is that device LEDs can serve not only as a point to affix AR interface elements, but also as an output port for the binary data being translated into human-readable form in the on-screen UI.

“Many devices such as routers, thermostats, security cameras already have LEDs that are addressable,” Karan Ahuja, a doctoral student at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, told The Register.

“For devices such as glue guns and power strips, their LED can be co-opted with a very cheap micro-controller (less than US$1) to blink it at high frame rates.”

The system relies on an algorithm that creates an image pyramid of five layers, each scaled by half, to ensure that at least one version captures each LED in the scene within a single pixel. The algorithm then searches for possible light anchor points by looking for bright pixels surrounded by dark ones.

Candidate anchors are then scanned to see if they display the preamble binary blink pattern. When found, the rest of the signal can then be decoded.
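The detection steps above can be sketched roughly as follows. This is a simplified reconstruction, assuming 2x2 mean pooling for the pyramid and arbitrary brightness thresholds; the researchers’ actual implementation and parameters differ.

```python
# Rough sketch of the candidate-detection idea described above, in plain
# NumPy. Thresholds and pooling are assumptions, not the paper's values.
import numpy as np

def build_pyramid(img: np.ndarray, levels: int = 5) -> list:
    """Five layers, each scaled by half, via 2x2 mean pooling."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        p = pyramid[-1][: h - h % 2, : w - w % 2]
        pyramid.append(p.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyramid

def candidate_anchors(img: np.ndarray, bright=200, dark=50) -> list:
    """Bright pixels whose 8 neighbours are all dark: likely point lights."""
    hits = []
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            patch = img[y - 1 : y + 2, x - 1 : x + 2]
            if img[y, x] >= bright and (patch.sum() - img[y, x]) <= dark * 8:
                hits.append((y, x))
    return hits

# A tiny synthetic frame: one bright "LED" pixel on a dark background.
frame = np.zeros((16, 16))
frame[5, 7] = 255
print(candidate_anchors(frame))    # [(5, 7)]
print(len(build_pyramid(frame)))   # 5 pyramid levels
```

In the real system each surviving candidate would then be watched across video frames for the preamble blink pattern before any decoding begins.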


Some of the example applications that have been contemplated include: a smoke alarm that transmits real-time battery and alarm status through its LED, a power strip that transmits its power usage, and a Wi-Fi router that presents its SSID and guest password when viewed through a mobile AR app.

Ahuja said the scheme works across different lighting conditions, though he allows that in bright outdoor lighting, device LEDs may get missed. “But usually the LED appears to be the brightest point,” he said.

The initial version of the specification (v0.1) has been published on the LightAnchors.org website. It requires a camera capable of capturing video at 120Hz, to read light sources blinking at 60Hz. The data transmission protocol consists of a fixed 6-bit preamble, an 8-bit payload, 4-bit parity, and a fixed 8-bit postamble. Mobile devices that support faster video frame rates and contain faster processors could allow faster data transmission.
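The v0.1 framing lends itself to a short sketch. The field widths below match the published layout, but the preamble and postamble bit patterns and the parity function are placeholders, since the spec’s actual values aren’t reproduced here.

```python
# Sketch of the v0.1 framing: 6-bit preamble, 8-bit payload, 4-bit parity,
# 8-bit postamble. Bit patterns and parity scheme are placeholders.
PREAMBLE = [1, 0, 1, 0, 1, 1]          # 6 bits (placeholder pattern)
POSTAMBLE = [0, 1, 1, 0, 0, 1, 0, 1]   # 8 bits (placeholder pattern)

def parity4(payload_bits):
    """Placeholder parity: count of set bits, mod 16, as 4 bits."""
    n = sum(payload_bits) % 16
    return [(n >> i) & 1 for i in (3, 2, 1, 0)]

def encode(byte: int):
    """Pack one payload byte into a 26-bit blink sequence."""
    payload = [(byte >> i) & 1 for i in range(7, -1, -1)]  # MSB first
    return PREAMBLE + payload + parity4(payload) + POSTAMBLE

def decode(bits):
    """Return the payload byte, or None if framing or parity fails."""
    if bits[:6] != PREAMBLE or bits[18:] != POSTAMBLE:
        return None
    payload, parity = bits[6:14], bits[14:18]
    if parity != parity4(payload):
        return None
    return sum(b << (7 - i) for i, b in enumerate(payload))

frame = encode(0x5A)
print(len(frame))          # 26 bits per frame
print(hex(decode(frame)))  # 0x5a
```

At 60Hz blinking, a 26-bit frame like this carries one payload byte roughly every half second, which fits the status-readout use cases described above.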

Future versions of the specification may incorporate security measures against potential malicious use, such as a temporary token to ensure that users have line-of-sight to devices. Sample demo code for Linux, macOS, and Windows laptops, along with Arduino devices, can be found on the project website.

The boffins have also created a sample iOS app and interested developers can sign up on the website to receive an invitation to try it out through Apple’s Testflight service. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/12/augmented_reality_led_data_transfer/

Microsoft movie tried to Azure Ignite attendees about CPU side-channel flaws, but biz wouldn’t be drawn on details

How does Microsoft mitigate the risk of speculative-execution bugs on its Azure platform? The US goliath is unwilling to comment, despite running a session at its Ignite conference last month on exactly this subject.

The Ignite session itself was titled “Spectre/Meltdown: An Azure retrospective” and talked about “how the computer industry came together to address this new class of vulnerability, and specifically how Azure responded”.

Spectre and Meltdown were the first examples of data-leaking side-channel flaws involving speculative execution, and blight CPU cores designed by Intel, AMD, Arm, and others, to varying degrees. Speculative execution is an optimisation technique where the processor executes some likely software instructions in advance, and discards the result if it is not needed. During this process, the CPU will access its caches, or otherwise touch resources on the system, in a way that allows an eavesdropper to gradually discern the contents of memory or registers. It’s ingenious stuff, but slow and tricky to exploit in real life.

The risk, though, is that malicious code or rogue logged-in users can potentially access sensitive data belonging to other applications and users across isolation boundaries, even between virtual machines, or between guest machines and the host – a possibility that is particularly alarming for cloud providers.

The session was unusual. The main part was a video describing the build-up to the Spectre and Meltdown reveal in January 2018. The specific problem was discovered by Google’s Project Zero in June 2017, but was kept under embargo for six months. Microsoft was among those companies in the know, furiously patching Windows and its Azure platform, before the embargo on disclosure lifted on 10 January 2018. Open-source systems like Linux are patched in the open, though, and changes to the kernel, along with industry sources, tipped off The Register.

On 2 January, El Reg published details, specifically of Meltdown, that many already secretly knew but that had not yet been aired in full public view.

The Project Zero team announced at 9.30am the following day that it would disclose the vulnerabilities at 3pm, according to the video, causing Microsoft to assemble a crisis meeting. The Azure patching schedule to address the vulnerability had seven days to run, but now there were only four hours to go. “I don’t know the proper collective noun for executives, but we had all of them in a conference room,” said the narrator. Dramatic music played in the background while Ignite attendees learned how the meeting went silent as Azure EVP Scott Guthrie spoke. “The security of our customers is paramount. Accelerate the rollout.”

In a curious blend of technical information and marketing, Microsoft’s intention seems to have been to assure attendees of its focus on security. Corporate VP Julia White is heard to say: “We could never ever put our customers at risk and if we broke that promise, why would anyone trust us ever again? It was this moment that to me proved it.”

There was also some insight into Microsoft’s approach, patching aside. Chief technology officer Mark Russinovich said: “In Azure we’ve adopted a mindset of assumed breach, which means we assume that hackers are going to get into the infrastructure, that potentially there might even be malicious insiders in Azure.”

However, the video had a sting in the tail. Towards the end we heard: “Unfortunately we weren’t done. Shortly after we had released this particular set of updates, we were contacted by another researcher who had discovered another issue of very similar style. And then a few days later another researcher, and then another. It turns out that Spectre and Meltdown were just the very beginning of an entire new class of issues.”

This is the point hammered home by kernel maintainer Greg Kroah-Hartman, who said in October (more than two years after Spectre and Meltdown were discovered): “These problems are going to be with us for a long time; they’re not going away.” He also offered a partial solution, for anyone not running in a secure environment where all users are trusted. “Disable hyper-threading. That’s the only way you can solve some of these issues. We are slowing down your workloads. Sorry.”


The downside of disabling hyper-threading is a performance penalty of 20 per cent or more. Following the Ignite session video, a panel took questions so we asked the obvious one. Does Microsoft disable hyper-threading on Azure to protect its customers, as Kroah-Hartman recommends?

Our question was answered by Igal Figlin, partner program director of Azure Compute. When he said “Thank you for your question,” the room burst into sympathetic laughter. “We do have, of course, our own hypervisor kernel and we do have extra measures beyond completely disabling hyper-threading. I understand the general notion of saying that, it is like, if you don’t go out of the house, you won’t be run over by a car. In a realistic world, not if we mitigate the threats, we need to continue to stay vigilant about potential attack vectors and mitigate attack vectors. Our stance is that every known attack vector we see, and this even includes attacks that we were not able to implement but it seems that it might be an attack vector, are being mitigated at every point in time. I don’t think we need to go to the extreme measures of disabling functionality.”

Figlin added a reference to the “big data that we can analyse” as Microsoft studies attacks on its infrastructure which is helping the company “to make Azure safe and to make Windows hypervisor safe and also to contribute back to Linux. We should be sufficiently safe, and when it becomes not, this is why we have all sorts of measures and processes.”

A fair answer, but it would be good to know more. We asked AWS about the same issue and the director of kernel and operating systems answered in detail. Google has detailed information here about mitigation for a number of its services, and here regarding Kubernetes, where it says: “The host infrastructure that runs Kubernetes Engine isolates customer workloads from each other. Unless you are running untrusted code inside your own multi-tenant GKE clusters, you are not impacted.” And here, where there is similar information about Compute Engine.

At this time, Microsoft will not be commenting

The issue is complex and one way, for example, that a cloud provider can protect customers is by ensuring that concurrent customer workloads do not run on the same CPU core. In the case of “serverless” services like AWS Lambda, this could be expensive. “We’ve been very careful to ensure that we never split cores between customers,” AWS Elastic Compute Cloud VP Dave Brown told us. “We’ve never put workloads from different customers on the same VM.” For serverless, “running functions from different customers within the same VM and just using processor isolation is absolutely not good enough.” Expensive if a customer is making light use of a service? “It absolutely becomes very expensive.” This was a factor behind the development of Firecracker, which can boot lightweight VMs quickly.

It is probable Microsoft has an equally strong story to tell for Azure, but when we asked for more details, the answer was: “At this time, Microsoft will not be commenting.” Could its rationale be that when it comes to security, information might help attackers? Further, while most Ignite sessions are now available to watch on video, the session we attended has not, as far as we know, been published. That said, the company does publish a detailed document about mitigating risk on Windows.

Speculative-execution bugs are real and there are proof-of-concept demonstrations, but to date there have been few reports of successful and damaging attacks. At the same time, in an age of cyberwarfare it is hard to believe that interested parties are not taking careful note, especially since the risk from these vulnerabilities is information theft. It is a difficult problem, and one that the cloud giants are perhaps better placed to address than smaller hosting providers. It is obvious that the three biggest cloud providers take the issue with extreme seriousness, but despite its video and presentation, Microsoft has been the least forthcoming on the subject. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/12/spectre_meltdown_microsoft_azure/

It’s time you were T0RTT a lesson: Here’s how you could build a better Tor, say boffins

Academics in Germany say they’ve found a way to make Tor and similar onion networks more efficient and lower their latency.

The crew at Ruhr University Bochum, Universität Wuppertal, and Paderborn University described their technique in a paper [PDF] this week accepted into next year’s Proceedings on Privacy Enhancing Technologies Symposium: it describes an optimized and simplified approach to constructing circuits through which traffic is sent.

Anonymizing onion networks like Tor work by forming a circuit between a client and a destination via a number of relay nodes. The destination could be a so-called hidden service within the network, or an exit node that connects to a service on the public internet on behalf of the client. The connection between the client and destination is routed through these randomly picked nodes to obfuscate the public IP address of the client. The destination just sees the last hop in this chain, be it an exit or one of the relay nodes, and not the client.

The client wraps its traffic in nested layers of encryption, hence the name onion network, to protect it from eavesdropping and to ensure rogue nodes and snoops can’t walk through the circuit and unmask the client. This requires the client to perform secure Diffie-Hellman cryptographic key pair exchanges with relay nodes during the establishment of a circuit, carefully so to avoid unmasking the client, which takes time and bandwidth.
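The layered wrapping can be illustrated with a toy sketch. A keyed XOR stream derived from SHA-256 stands in for Tor’s real ciphers and handshakes here, purely to show the wrap-per-hop and peel-per-hop structure; it offers no actual security.

```python
# Toy illustration of onion layering only. The keyed XOR stream below is a
# stand-in for Tor's real per-hop ciphers; do not use it for anything real.
import hashlib

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR with a SHA-256-derived keystream."""
    out, counter = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def build_onion(message: bytes, hop_keys: list) -> bytes:
    """Client wraps innermost-first, so the first relay peels first."""
    for key in reversed(hop_keys):
        message = stream_xor(key, message)
    return message

keys = [b"guard-key", b"middle-key", b"exit-key"]  # one shared key per relay
cell = build_onion(b"GET /", keys)

# Each relay, in order, removes exactly one layer:
for key in keys:
    cell = stream_xor(key, cell)
print(cell)  # b'GET /'
```

No single relay sees both who sent the cell and what it contains; only after the last layer comes off does the plaintext appear, at the far end of the circuit.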

What the team suggests is a method they have dubbed T0RTT: Tor zero Round-Trip-Time key exchange. In this system, using a technique known as puncturable encryption, the number of steps to establish a circuit is drastically reduced. According to the paper, this will “allow a sender to transmit cryptographically protected data within the first message, without prior exchange of key establishment messages.”

This single-pass method also provides forward secrecy. If you want to know the mathematics behind it in detail, see the above links.

Co-authors Sebastian Lauer and Kai Gellert told The Register on Wednesday this week that T0RTT would significantly decrease circuit establishment latency for netizens, as it would essentially halve the number of messages needed to set up a connection.


“This ensures that circuits are constructed faster and the number of messages in the network is reduced, which in turn reduces the load on the network infrastructure,” the boffins explained. “In Tor it is also recommended to change the circuit for each connection to prevent profiling attacks. A faster circuit construction thus increases not only efficiency but also security.”

They hope that, in the process, their approach would help ease one of the major complaints of Tor, the drag it adds to web browsing and internet access.

“In addition to usability, efficiency is usually more important to most users than privacy,” they said. “In our work, we would like to bring these two properties closer together and make the use of the Tor network more interesting.”

The technique is not without its faults, however. Because of the complexity of the data being transmitted to each node along the circuit, T0RTT would create a significant burden on RAM and CPUs that Tor nodes may struggle with. Before the protocol would be practical, this trade-off of latency for compute would need to be solved.

“The first solution would be to equip onion routers with better hardware. This would probably be the simplest solution and easy to realize with modern hardware,” said Lauer and Gellert. “The second and more interesting way would be to improve the construction of puncturable KEMs. Since this cryptographic primitive is still relatively young, we hope for further research results in this area in the next years.”

The paper, T0RTT: Non-Interactive Immediate Forward Secret Single-Pass Circuit Construction, was written by Sebastian Lauer (Ruhr University Bochum), Kai Gellert (Universität Wuppertal), Robert Merget (Ruhr University Bochum), Tobias Handirk (Paderborn University), and Jörg Schwenk (Ruhr University Bochum). It is due to be presented in July 2020 at the privacy symposium in Canada. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/12/tortt_research_paper/

Disgrace of Base: Scammy hordes force Keybase to end cryptocoin giveaway

Citing an explosion in fraudulent accounts, Keybase says it is ending its maligned Stellar Space Drop giveaway.

The developer of the secure comms app confirmed this week that the coming Lumen cryptocoin giveaway will be the last and that between now and then, no new accounts will be registered for the event.

In making the announcement, Keybase blamed the move on a flood of bad actors that had been pouring in.

“While this giveaway mostly worked, it’s clear that there will be decreasing returns and massively increased effort required,” the site said.

“Why? Starting in the last week or so, hordes of fake people were beginning to come in, far beyond the capacity of Keybase or SDF to filter. It’s not in the Stellar network’s interest to reward those people; it is also not in Keybase’s interest to have them as Keybase users.”

This comes after users had complained for months about an influx of spam, harassment, and fraud attempts from new accounts seemingly created for the sole purpose of getting in on the coin drop.

Designed as a partnership between Stellar and Keybase, the air drop sought to distribute some 300 million Lumens (roughly $16m or £12m depending on exchange rate) to Keybase users who created a wallet with a verified account.

Unfortunately, the event was also blamed for the rush of new accounts to the site, many of which were only concerned with making a quick buck. Some veteran users were so disgusted with the harassment, they opted to leave the service completely.

While Keybase has promised additional measures to allow users to block unwanted communications from others, it seems additional steps were deemed necessary to clean up the service.

“My gut says [Keybase] were getting tired of people messaging them asking about it on every possible social media platform,” said Noid, a hacker and long-time user of the service.

“Basically they got to experience what it was like to be a regular Keybase user and they didn’t like it one bit.”

The final Keybase Space Drop is set to take place over the next week, beginning on December 15th. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/12/keybase_ends_drops/

Waking Up to Third-Party Security Risk

You can’t rely on the words, intentions, or security measures of others to guard your company, customers, and brand.

In the new realm of third-party risk management, a single misconfiguration can compromise a huge share of a country’s financial data. Witness the attack on Capital One: the financial services giant exposed the sensitive financial data of more than 100 million people because of a poor security configuration in a third-party service, Amazon Web Services. That attack could happen to any company today; most have no inventory of their third-party risk and are prime targets for hackers.

Third parties including partners, suppliers, and cloud hosting services that are directly linked to your networks are one of your largest risks for a damaging security breach. A 2018 study by the Ponemon Institute found that almost 60% of companies surveyed had suffered a data breach caused by third parties or vendors in the last year.

What’s causing more third-party risk? First, the way both internal and external (customer-facing) applications are built today is very different from a decade ago. Today, applications are composed of multiple smaller services: microservices. These may handle simple internal tasks, such as delivering a feed of alert data, or complicated services delivered via software-as-a-service. All services connect, internally and externally, via APIs. When popular finance websites, for example, load in your browser or in mobile apps, the results you see are built by dozens of third-party services for specialized capabilities like calling a news feed or a share price, or pulling location data.
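A rough sketch of that composition model (the service names and payloads here are invented for illustration) shows why each page is only as trustworthy as its weakest upstream service:

```python
# Each function stands in for a call to a separate third-party API.
def fetch_news_feed():
    return ["Markets rally on rate-cut hopes"]

def fetch_share_price(ticker):
    # Placeholder data; a real call would hit a market-data provider.
    return {"AAPL": 270.15}.get(ticker)

def fetch_location(ip):
    # Placeholder; a real call would hit a geolocation provider.
    return "London, UK"

def build_page(ticker, ip):
    # The page the user sees is stitched together from all three services;
    # a compromise of any one of them is a compromise of the page.
    return {
        "news": fetch_news_feed(),
        "price": fetch_share_price(ticker),
        "location": fetch_location(ip),
    }
```

Every one of those upstream calls is a distinct third party with its own security posture, outside your direct control.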

Additionally, web applications, middleware, and other code increasingly are built with third-party code components. Popular JavaScript libraries may be used by millions of sites even though the maintainers of the library are not well known. Third parties may also be tenants or customers of a cloud hosting service or a SaaS service. According to a survey of IT and security pros by Tripwire, 60% of organizations have suffered a container security incident. What does this mean? Hackers see multitenancy and services that share space or resell services to multiple customers as a viable path to a breach.

Worse, services are increasingly nested: a third-party SaaS service is itself composed of multiple additional third-party services and libraries. With these so-called Nth-party services, we are now in an era of exceptionally hard-to-measure and, more importantly, hard-to-manage risk. Any critical information deployed to major cloud hosting services or SaaS applications, or on shared networks, may be exposed to the risks of every other user of those services and networks.

How to Protect Yourself from Third-Party Risk
You cannot rely on the words, intentions, or security measures of others to guard your company, customers, and brand against this growing risk. Protecting your critical IT infrastructure and applications requires a multitiered approach.

Step 1: Protect Your Own Infrastructure
Assume that protecting your own infrastructure is now a 24/7 task. This means vigilance must be continuous. Firewalls, antivirus, and all other security controls must be properly updated and configured all the time. Smart security teams should continuously test their controls and networks for security weaknesses.

The best tool for this testing is found in breach-and-attack-simulation (BAS) platforms that tap frameworks of known attacks, such as those from MITRE, and allow teams to run round-the-clock simulated attacks. Good BAS systems are highly tunable, allowing security teams not only to test against the entire playbook of known exploits but also to create compound exploits focused on the tools and software that their organization uses in its own infrastructure.
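The scheduling idea behind a BAS run can be sketched in a few lines. The technique IDs below mirror MITRE ATT&CK-style naming, but the playbook and the check function are invented stand-ins, not a real BAS agent: iterate a playbook of simulated techniques and record which ones your controls actually blocked.

```python
def run_playbook(playbook, simulate):
    """Run every simulated technique and report control coverage.
    `simulate` is a callable returning True if the control blocked it."""
    results = {tid: simulate(tid) for tid, _name in playbook}
    coverage = sum(results.values()) / len(results)
    return results, coverage

# Hypothetical playbook keyed by ATT&CK-style technique IDs.
playbook = [
    ("T1059", "Command and Scripting Interpreter"),
    ("T1003", "OS Credential Dumping"),
    ("T1566", "Phishing"),
]

def fake_simulate(tid):
    # Stand-in for a real BAS agent; pretend only T1003 slips through.
    return tid != "T1003"
```

Running `run_playbook(playbook, fake_simulate)` flags the unblocked technique and yields a coverage score, which is the kind of continuous metric a real BAS platform feeds back to the security team.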

Step 2: Demand Others Protect Their Infrastructure
Demand that third-party organizations certify they are running similar protections for their own infrastructure as a condition of partnership, purchase, or granting access to your data. This is even better than requiring compliance certifications, such as SOC 2, because those certifications represent a snapshot in time that may not reflect the current reality.

With regard to risk from third-party libraries and open source code, it is critical that organizations actively audit, monitor, and validate this code. This is an extra step. Running static code analysis against open source libraries takes time and effort, for example. Fortunately, a growing number of services are checking and certifying open source code. So rather than run the testing yourself, you can probably pay for one of those services.
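A simple, concrete form of that validation (the file path and digest in the usage below are placeholders) is to pin the SHA-256 digest of each third-party artifact you depend on and refuse to use a file whose digest has drifted:

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Return True only if the file on disk matches its pinned digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

Generate the pins once from a copy you have reviewed, commit them alongside your lockfile, and check them in CI; a mismatch then signals either a legitimate upstream release or a tampered dependency, and either way it deserves a human look before deployment.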

An additional step: Require third-party partners or customers to maintain a database of all known third-party connections and exposures. This may sound cumbersome, but in reality, it is good security hygiene both for you and your service providers, or for the platforms you use. Few companies today can produce this information. But if they could, it would allow them to not only do a better job of proactively guarding against attacks but would also help them identify the source of a breach more quickly.

Step 3: Test More Because We Can’t Go Back
This new way of running our technology infrastructure is beneficial in key ways: It allows teams to build applications faster and scale more quickly, and there is no going back to the old model of fewer, more brittle connections. But the same connectedness means a breach can happen quickly, doing millions of dollars of damage before the attack is stopped. With third-party risks, an ounce of prevention is worth many pounds of cure.


With a distinguished 30-year career at the Central Intelligence Agency (CIA), including 15 years as CISO, Robert Bigman is a pioneer in classified information protection. He developed technical measures and procedures to manage the nation’s most sensitive secrets.

Article source: https://www.darkreading.com/application-security/waking-up-to-third-party-security-risk-/a/d-id/1336567?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple