
DHS Helps Shop Android IPS Prototype

A MITRE-developed intrusion prevention system technology got showcased here this week at the RSA Conference.

RSA CONFERENCE 2018 – San Francisco – Tucked away in one of the large exhibit halls here this week was the US Department of Homeland Security, which, among other things, was showcasing a prototype intrusion prevention system (IPS) for mobile devices.

The IPS, developed by a researcher at MITRE nearly three years ago, is a patented tool for Android mobile devices that operates like a traditional network- or host-based IPS, blocking malicious IPv4 traffic entering or leaving the Android device. The prototype, called APE, takes the form of a mobile app and uses deep packet inspection to filter network traffic from cellular and Wi-Fi networks.

Nadia Carlsten, program manager for DHS’s Transition to Practice (TTP) program, says MITRE’s APE prototype was selected as a promising federally funded cybersecurity project under the TTP program. DHS’s TTP basically facilitates the adoption of technologies, helping spur adoption via partnerships, product development, marketing strategies, and helping accelerate commercialization of the research. It includes funding, training, mentorship, and a connection to the private sector. APE was picked for DHS’s TTP program in May of 2017.

“We’re actually doing all the steps that it takes to get [a] product commercially viable including partnering with industry and the inventor community, getting them to rally around the technology, help with development of the technology, and get this product to market so people can buy it, including government agencies,” said Carlsten, who is with the DHS Science and Technology Directorate, under the Homeland Security Advanced Research Projects Agency.

Mark Mitchell, lead multi-discipline systems engineer at MITRE and the creator of APE, says his concept for the IPS came from what he saw as a gap in mobile attack intelligence. “I was looking at ways to monitor devices … or a honeypot to infer the probabilities of a given attack vector,” he said. “I sort of had this ‘aha’ moment, of well, I have the traffic here on the device, so rather than logging it and analyzing it after the fact, why don’t I just block it if I recognize it’s malicious? And that was the beginning of APE.”

APE relies on a set of rules for identifying malicious behavior, and includes signatures and behavior-based analysis and filtering, Mitchell said.
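
The signature side of that approach can be sketched in a few lines. This is a generic illustration of rule-based payload matching – the rule format and byte patterns below are invented for the example, not APE's actual (unpublished) rule language:

```python
# Toy signature-based filter in the spirit of a mobile IPS.
# The signatures below are hypothetical examples, not real APE rules.
SIGNATURES = [
    {"name": "fake-update-beacon", "pattern": b"GET /update.php?id="},
    {"name": "known-c2-host", "pattern": b"Host: evil.example.com"},
]

def inspect_packet(payload):
    """Return the name of the first matching signature, or None to allow."""
    for sig in SIGNATURES:
        if sig["pattern"] in payload:
            return sig["name"]
    return None

def filter_traffic(packets):
    """Split packets into (allowed, blocked) lists by signature match."""
    allowed, blocked = [], []
    for pkt in packets:
        (blocked if inspect_packet(pkt) else allowed).append(pkt)
    return allowed, blocked
```

A real engine like APE would also reassemble streams and apply the behavioral heuristics Mitchell mentions, rather than naive substring matches.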

MITRE’s Mitchell ultimately hopes to license APE to a vendor that would then build it out as a product for mobile security. He hopes to get firms to test and evaluate the IPS in the meantime. As a research arm, MITRE doesn’t sell products itself.

APE can work alongside mobile device management software, he says, and could eventually be offered as an app in the Google Play store if it becomes an official product.

He picked Android as a starting point for the mobile IPS prototype, but believes APE could in theory work on an iOS device – or even an IoT device – as well. APE isn’t fully baked yet, either: “It continues to evolve,” he says, including as a tool to protect mobile users’ privacy online, as well as enhancing it with machine learning.

Interop ITX 2018

Join Dark Reading LIVE for a two-day Cybersecurity Crash Course at Interop ITX. Learn from the industry’s most knowledgeable IT security experts. Check out the agenda here. Register with Promo Code DR200 and save $200.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/endpoint/dhs-helps-shop-android-ips-prototype/d/d-id/1331591?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Surprise! Wireless brain implants are insecure, and can be hijacked to kill you or steal thoughts

Scientists in Belgium have tested the security of a wireless brain implant called a neurostimulator – and found that its unprotected signals can be hacked with off-the-shelf equipment.

And because this particular bit of kit resides amid sensitive gray matter – to treat conditions like Parkinson’s – the potential consequences of successful remote exploitation include voltage changes that could result in sensory denial, disability, and death.


In a paper, Securing Wireless Neurostimulators, presented at the Eighth ACM Conference on Data and Application Security and Privacy last month, the researchers described how they reverse-engineered an unnamed implantable medical device, and how they believe its security can be improved.

They had help doing so from the device’s programmer, but argue that an adversary could accomplish as much, though not as quickly.

Beyond these rather dire consequences, the brain-busting boffins – Eduard Marin, Dave Singelee, Bohan Yang, Vladimir Volskiy, Guy Vandenbosch, Bart Nuttin and Bart Preneel – suggest private medical information can be pilfered.

That’s hardly surprising given that the transmissions of the implantable medical device in question are not encrypted or authenticated.

What is intriguing is that the researchers suggest future neurostimulators are expected to utilize information gleaned from brain waves such as the P-300 to tailor therapies. Were an attacker to capture and analyze the signal, they suggest, private thoughts could be exposed.

They point to related research from 2012 indicating that attacks on brain-computer interfaces have shown “that the P-300 wave can leak sensitive personal information such as passwords, PINs, whether a person is known to the subject, or even reveal emotions and thoughts.”

Can the brain be a better defense?

To mitigate this speculative risk, the boffins propose a novel security architecture involving session key initialization, key transport and secure data communication.

Implants of this sort, the researchers say, typically rely on microcontroller-based systems that lack random number generation hardware, which makes encryption keys unnecessarily weak.

The session key enabling symmetric encryption for wireless networking between the implant and a diagnostic base station could be generated by a developer and inserted into the implant. But the researchers contend there’s a risk of interception, and potentially a need for extra security hardware that would make the implant bulkier.

They believe there’s an alternative: Using the brain as a true random number generator, a critical element for secure key generation.

“We propose to use a physiological signal from the patient’s brain called local field potential (LFP), which refers to the electric potential in the extracellular space around neurons,” the paper explains.
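
As a rough sketch of the key-generation step only – the paper's actual extractor is not reproduced here, and the sample encoding and salt below are assumptions – noisy LFP measurements can be condensed into a fixed-length session key with a hash-based randomness extractor:

```python
import hashlib
import hmac

def derive_session_key(lfp_samples, salt=b"session-init"):
    """Condense noisy LFP readings (signed 16-bit samples, an assumed
    format) into a 256-bit session key. Hashing many individually biased
    samples concentrates their entropy into a uniform-looking key."""
    raw = b"".join(s.to_bytes(2, "big", signed=True) for s in lfp_samples)
    return hmac.new(salt, raw, hashlib.sha256).digest()
```

The resulting 32-byte key could then drive an authenticated cipher for the implant-to-programmer link, once both sides hold it.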

And to transmit the key to the external device, they suggest using an electrical signal carrying the key bits from the neurostimulator, a signal that can be picked up by a device touching the patient’s skin. Other modes of transmission, such as an acoustic signal, they contend could be too easily intercepted by an adversary.

The lesson here, the eggheads say, is that security-through-obscurity is a dangerous design choice.

Implantable medical device makers, they argue, should “migrate from weak closed proprietary solutions to open and thoroughly evaluated security solutions and use them according to the guidelines.” ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/18/boffins_break_into_brain_implant/

How’s your Wednesday? Things going well? OK, your iPhone, iPad can be pwned via Wi-Fi sync

RSA 2018 The iTunes Wi-Fi sync feature in Apple’s iOS can be potentially abused by cops, snoops, and hackers to remotely extract information from, and control, iPhones and iPads.

This is according to researchers at Symantec, who discovered that, once an iOS device trusts a physically connected computer, the device can, in certain circumstances, be accessed by miscreants sharing the same Wi-Fi network as the device and the computer.

Said miscreants can make backups of the iPhone or iPad’s documents, extract screenshots, and even add and remove applications without the iThing owner’s knowledge.

Speaking at the 2018 RSA Conference today in San Francisco, Symantec operating system research team leader Roy Iarchi and senior veep Adi Sharabani said it’s all because the cryptographic keys generated for accessing devices via USB are also used when authenticating access via Wi-Fi.

Thus if an iThing trusts a computer, or some other terminal, hands over its keys, and those keys land in the hands of scumbags, they can be used to hijack the handheld or fondleslab over the shared wireless network. The iOS gadget must also have iTunes Wi-Fi sync enabled, which can be turned on via social engineering or some tricky app on the device.

It sounds like a bit of a long shot – but could be pretty useful for determined snoops, crime investigators, and so on.


Once an iOS device is plugged into a PC or Mac, and the user has opted to trust the machine, those aforementioned access credentials can be used via Wi-Fi to perform the same tasks possible if the device were connected with a USB-Lightning cable.

What’s worse, said the eggheads, those credentials are permanently saved by the computer, meaning they can be used to get into the smartphone weeks or months after it was paired. An attacker could infect the PC – or just buy a used machine that wasn’t wiped – and reuse those credentials against a targeted victim. Or an airport charging station could ask to be trusted when plugged in, and later pwn devices via shared Wi-Fi. Use your imagination.

Additionally, the duo noted, the technique could be paired with malicious profile attacks to route the device’s network traffic via a VPN, and exploit the vulnerability when the device is not on the Wi-Fi network.

Iarchi said the issue was discovered by accident in 2017 when, while debugging several iOS devices for a different project, he noticed a strange set of logs showing up in his terminal window.

“The problem is those logs didn’t collate to what I did on the devices,” he explained. “It was the logs of another device of one of my team members that wasn’t in the same room with me.”

From there, Iarchi was able to determine that, with a bit of digging, he could use developer tools to access backups, stream screens, and covertly remove and install apps on any iOS device that had previously been connected to his machine.

Symantec said it had notified Apple of the issue, and though iOS 11 now requires a passcode to trust a computer, the so-called “trustjacking” design flaw they found is still present and open to abuse.

Until Cupertino decides to permanently fix the problem, Iarchi and Sharabani recommend users take some basic steps to limit trusted machine access, including encrypting their backups and deleting their list of old trusted machines (this can be done via Settings > General > Reset > Reset Location & Privacy).

Developers can also help to protect their apps from data harvesting by not saving sensitive info to the device nor including it in backup data. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/18/ios_itunes_wi_fi_sync/

Russia’s Grizzly Steppe gunning for vulnerable routers

The Russian Government’s hackers – codenamed “Grizzly Steppe” – stand accused of trying to turn millions of routers against their owners.

After the stream of recent accusations levelled by cyber-authorities in the US, UK and Australia, it was probably inevitable that Russia would be formally accused of targeting network infrastructure at some point.

That happened yesterday, in the bludgeoning co-ordinated style that now marks out every official statement regarding Russia and cyberwarfare.

Stated US-CERT:

Since 2015, the US Government received information from multiple sources – including private and public-sector cybersecurity research organizations and allies – that cyber actors are exploiting large numbers of enterprise-class and SOHO/residential routers and switches worldwide.

These operations enable espionage and intellectual property theft that supports the Russian Federation’s national security and economic goals.

In fact, Grizzly Steppe was first mentioned in late 2016 when the FBI published its first report on the group’s alleged activities.

There will perhaps be two public reactions to this remarkable accusation, the first being to wonder what routers are and why they matter so much that Russia would want to target them.

The second may be to wonder why it has taken these countries so long to point out the phenomenon of co-ordinated router compromise – something that a variety of groups have been engaged in for at least a decade without much fuss being made about it.

In case the alert sounds a bit vague, the UK National Cyber Security Centre (NCSC) followed up the warnings with a document explaining in some detail the hardware weaknesses the Russians are alleged to be exploiting.

Switches, firewalls, and Intrusion Detection Systems (IDS) are all on the Russian target list but the central importance of routers in homes and offices made them prized targets, it said.

Products aren’t named beyond a few generic references to Cisco and Juniper, both of which are of course known to be extremely common in ISP networks.

However, what is made clear is the type of product vulnerable to Russian takeover. This includes:

  • Devices not set up securely (default passwords, too many interfaces/protocols left turned on)
  • Legacy devices using “unencrypted protocols or unauthenticated services” (presumably a reference to managing routers using Telnet or via HTTP)
  • End-of-life devices no longer receiving security patches
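
A minimal self-audit along those lines needs nothing more than TCP connect tests against a router you own. The port list below is an illustrative subset of legacy management services, not the NCSC's exact checklist:

```python
import socket

# Ports whose exposure suggests legacy or unencrypted management -
# an illustrative subset rather than any official list.
RISKY_PORTS = {23: "Telnet", 21: "FTP", 80: "HTTP admin"}

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit_router(host):
    """Map service name -> whether its port is reachable on the router."""
    return {name: port_open(host, port) for port, name in RISKY_PORTS.items()}
```

Any True result for Telnet, FTP, or plain-HTTP admin is a prompt to disable the service or move management to an encrypted protocol.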

It lists numerous technical mitigations that a well-informed engineer would already know about and a series of Grizzly Steppe Indicators of Compromise (IoCs) they might not.

Reflecting the number of vulnerable devices, a Reuters report quotes a source at the British government’s National Cyber Security Centre as numbering targeted systems in the millions.

A separate warning put out by Australian authorities said that “potentially 400 Australian companies were targeted”, although without “any exploitation of significance.”

The alerts are best understood as part warning, part political theatre.

For the Russians, it’s about making crystal clear that the defenders can see what they’re up to, which holds an implicit threat in return – if you target our routers we can do the same to yours.

The idea that we might be on the edge of an age of cyberattacks followed by retaliation is pretty scary if, indeed, that line hasn’t been quietly crossed already.

For companies, equipment makers and service providers, it’s a way of saying that the good times are over, you can’t take router security for granted.

Everyone should take basic precautions to defend their customers, and themselves, and not just hope for the best or assume the government will step in to save them.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uDteDN4vQHc/

Why ‘remote detonator’ is a bad name for your Wi-Fi network

Tell us, XFINITY, CableWiFi and HOME-7F0C-2.4, did it ever occur to you that your Wi-Fi names are really, really boring?

No offense, though! Generic is good! It’s so much better than “Quick, everybody out, NOW – before somebody connects to ‘remote detonator’!!!”

As the Michigan news site M Live reports, a patron of a Planet Fitness in Saginaw Township was looking through available Wi-Fi connections on Sunday evening when he noticed one named just that – “remote detonator.”

He brought it to the attention of the manager, who promptly evacuated the 24-hour gym and called police. According to Saginaw Township Police Chief Donald Pussehl, a bomb-sniffing dog made a sweep of the premises, but it didn’t turn up any explosives.

Nothing can be done to make the Wi-Fi naming wit change his or her alarming network name, Pussehl said: it’s speech that’s protected under the First Amendment. Pussehl:

Everything is perfectly legal from a police standpoint. There was no crime or threat. No call saying there was a bomb.

Fine, “remote detonator,” fine. We’ll see your incendiary title, and we’ll raise you a virtual fortune-cookie plea for help. Naked Security’s Paul Ducklin says he was once war-training through Sydney – that’s like war-biking, as in, measuring Wi-Fi security, but without the bike – when he came across a very not-boring Wi-Fi name:

I was travelling to Rooty Hill. I wanted to see how well Wi-Fi scanning worked at 100km/hr from inside a train carriage – very well, it turned out – and I came across the ESSID “Help I’m stuck in this router”, which made me laugh.

But perhaps I should have called the emergency services?

Maybe! But if we’re going into reactionary mode in response to Wi-Fi names, somebody really needs to call a urogynecologist for whoever owns that It Hurts When IP network that Steve Aoki came across.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jGQtBHt1hIE/

Hacking like it’s 1999 (oh, and what to do about Facebook) [PODCAST]

Here’s Episode 3 of the Naked Security podcast.

Charlotte Williams asks the questions this week, quizzing Sophos experts Matt Boddy and Paul Ducklin about old-school malware, how to judge Patch Tuesday, and what to do about Facebook.

If you enjoy the podcast, please share it with other people interested in security and privacy and give us a vote on iTunes and other podcasting directories.

Further reading

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Intro music: http://www.purple-planet.com

Closing music: https://codices.bandcamp.com


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qBVVFutz1Sc/

ID theft in UK hits record high as crooks shift to more vulnerable targets

Identity fraud in Blighty hit a record high of 174,523 incidents last year – and the vast majority of it happened online.

According to the latest report by fraud prevention service Cifas, ID theft rose 1 per cent on the previous year. However, that is an increase of 125 per cent on 2007, the Fraudscape (PDF) report shows.

Eight out of 10 cases took place online, and Cifas noted that the increase came from fraudsters targeting telecoms, online shopping and insurance – rather than bank account or credit card fraud as in previous years.

It said this “retargeting” by fraudsters can be seen as a shift towards exploiting more accessible products such as mobile phone contracts, online retail accounts, retail credit loans and short-term loans, all of which are less likely to be subject to the same strict checks as bank accounts and credit cards.

Separate research has found that fraudsters operating on the dark web could buy a person’s entire identity for just £820.

Thieves are gaining information by targeting individuals directly through phishing, malware attacks, social media, or other forms of social engineering, the report said.

According to the research, bank accounts bearing marks of money mule activity – where folk are recruited online to unwittingly transfer cash from the proceeds of crime – were up 11 per cent. There were 32,000 such cases in 2017.

Youngsters are most at risk – there was a 27 per cent growth in people aged 14-24 being recruited as mules.

More than a third of bank account takeover victims were over 60 years old. That was put down to the increasing popularity of online banking, and more fraudsters phoning victims claiming to be from the bank and asking to “verify” online passwords.

However, organisations prevented more than £1.3bn in fraud losses through non-competitive data sharing, the report added.

Conor Burns MP, chairman of the All-Party Parliamentary Group on Financial Crime and Scamming, said:

“Fraud is the 21st century volume crime and the issue is not going to go away. With more and more people sharing data, transacting, setting up businesses, dating and chatting online this trend is only going to continue.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/18/id_theft_in_uk_at_record_high_cifas_report/

How to Leverage Artificial Intelligence for Cybersecurity


AI and predictive analytics should be used to augment a company’s security team, not replace it. Here’s why.

Artificial intelligence (AI) and machine learning (ML) are two interrelated concepts that people across the tech landscape know hold important implications. While the benefits of these capabilities are many, experts in the field are also known to float dire theories about them – from threats of dystopian futures à la The Matrix films to more tangible fears like wide-scale data breaches.

Machine learning falls under the broader AI umbrella as a technology that can enable computers to learn and adapt through experience, essentially, mirroring human cognition to recognize patterns. Successful examples of recent ML deployments include Google’s evolving search algorithms and Amazon’s product recommendations, along with the many “news feeds” that are common across social media.

But similar initiatives can have big dividends where cybersecurity is involved, especially in freeing up many of the more rote activities with which security staff are tasked. Predictive analytics and greater automation, for instance, are being employed via AI as an innovative means to fill the skills shortage that’s prevalent across the industry. This allows teams to off-load basic tasks in favor of high-priority or more technical initiatives.

Matching Human Capabilities at a Speed We Can’t Touch

Any technology that can lessen the burden of an enterprise security team is extremely useful. Further to that, any time there is a defined data set that can be analyzed and categorized into a defined set of actions, AI will be successful. Some of the benefits that are already being enjoyed by security teams include things like enhanced behavioral analysis, email security and malware prevention.

For instance, businesses can use AI to help establish “known knowns” and “known unknowns” – that is, traffic behavior that follows an expected baseline of activity, and the traffic, users, or devices that appear anomalous by comparison. Even if an individual was given a single pane of glass to monitor all the traffic crossing the network perimeter, spotting anomalies would be nearly impossible given the number of users and devices that, on average, leverage contemporary enterprise networks.
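
That baseline idea is simple enough to sketch: learn normal per-device traffic statistics, then flag large deviations. This is a toy z-score illustration, not any vendor's algorithm:

```python
from statistics import mean, stdev

def baseline(history):
    """Learn a device's traffic baseline (e.g. bytes sent per hour)
    from historical observations; returns (mean, standard deviation)."""
    return mean(history), stdev(history)

def is_anomalous(value, mu, sigma, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations from baseline."""
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold
```

A “known known” stays inside the band; a device suddenly exfiltrating gigabytes lands far outside it and gets surfaced for a human analyst.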

The bottom line is that computers simply absorb information at a greater speed than humans while adhering to the same rules and protocols.

Tread Carefully at First

Of course, that isn’t to say there aren’t pitfalls to implementing AI and ML in a security workflow. It’s important to remember that AI and predictive analytics should be used to augment a company’s security team, not replace it.

As explained above, quality data sets will inform the success of an AI or ML program. If a business is collecting the wrong type of information from the get-go, or is storing it incorrectly, AI and predictive analytics will be drawing conclusions based on incomplete or inaccurate information, leading to a reduction in performance.

Organizations achieve the greatest benefits from these technologies when they are used to free teams up from manual tasks in order to focus on higher-level problems that require “human” brain power to assess context and nuance. Those assessments can’t be passed onto computers – at least not yet.

It’s also important to expect that hackers will inevitably ramp up their use of AI in response to the new tactics being deployed by security teams. This underscores the point that while cybersecurity tactics will change over time, threats will always be present, requiring dedicated human capital to help businesses remain secure for years to come.

Joe Cosmano has over 15 years of leadership and hands-on technical experience in roles including Senior Systems and Network Engineer and cybersecurity expert. Prior to iboss, he held positions with Atlantic Net, as engineering director overseeing a large team of engineers and … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/iboss/how-to-leverage-artificial-intelligence-for-cybersecurity-/a/d-id/1331550?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Data Visibility, Control Top Cloud Concerns at RSA

As the traditional perimeter dissolves and sensitive data moves to the cloud, security experts at RSA talk about how they’re going to protect it.

RSA CONFERENCE 2018 – San Francisco – Businesses moving their data and processes to the cloud are worried about the ability to view and secure them, as indicated by trends and announcements at RSA. Visibility and control were two commonly voiced concerns related to cloud security.

At this year’s Cloud Security Alliance (CSA) Summit, a group of security experts discussed the transition process in a panel entitled “Getting to Mission Critical with Cloud.”

“Moving to cloud is a business enabler for a couple of different reasons,” said Stephen Scharf, CISO of DTCC. “It allows you to go rebuild in a new environment, which some of us never get a chance to do.” Many security leaders inherit their own historical infrastructure, he explains, and trying to secure that “is almost impossible.”

“I think there’s an opportunity with the cloud that we’ve never been given before,” chimed in Jerry Archer, CISO at Sallie Mae. “I think it’s a gas pedal for the business.”

However, the transition is fraught with challenges, noted Dan Solero, assistant vice president of technology security at AT&T. Many businesses are adopting cloud services and tools before understanding how to secure them. It’s their responsibility to understand the risk, create awareness, and collaborate to get ahead of cloud security threats.

Data visibility and control are two primary cloud concerns, said CSA CTO Daniele Catteddu in an interview with Dark Reading. “The need for a more granular view of what’s going on in the organization will be necessary,” he notes, as businesses connect more devices to the cloud.

Indeed, many IT departments are flying blind in the cloud. In a survey of more than 570 security and IT pros, Bitglass found 78% have visibility into user logins but only 58% have visibility into file downloads, and 56% into file uploads. Less than half (44%) have visibility into external sharing and DLP policy violations, and only 15% can view anomalous behavior across apps.

Top Cloud Concerns

Manuel Nedbal, founder and CTO at ShieldX Networks, pointed to six types of cloud security threats likely to challenge cloud-enabled businesses: “cross-cloud” attacks between the private and public cloud, attacks within the data center, attacks between cloud tenants, cross-workload attacks, orchestration attacks, and serverless attacks.

In describing these threats, Nedbal pointed to a common theme pervading the week’s discussions: the perimeter is moving into “unprotected territory” within cloud-based environments, and its new shape can put businesses at risk if the right steps aren’t taken. Traditional multi-layer security tools like firewalls and intrusion prevention systems are less effective in protecting against lateral attacks because they can’t move into public cloud.

“If you have multilayered security there, you’re in pretty good shape in terms of traffic from the outside,” he said of traditional defenses. However, if an attacker slips through the cracks, “they have the run of the place.” If a threat actor enters the data center, often there is no defense to stop them from accessing sensitive data and resources, an example of a cross-data center attack.

Many organizations think they don’t need to buckle down on security if they don’t host sensitive data in the cloud; however, attackers commonly use public clouds to enter on-prem environments. Once your business brings workloads to the cloud, your on-prem perimeter extends into the public cloud, exposing on-prem data to attackers. As a result, many businesses adopt a fragmented security approach, which is often complex to maintain and leaves the enterprise exposed to attackers if no lateral defense is in place.

Security Defense: Starting with Basics, Moving to Cloud

“This is a year that we’re starting to see more willingness to consider having security services delivered from the cloud than in the past,” says Patrick Foxhoven, CIO and vice president of emerging technologies at Zscaler.

The growing adoption of cloud services is making businesses more comfortable with the idea of cloud-based security, he explains. If a company is willing to trust the cloud with their email and other sensitive data, it’s less of a stretch to ease them into cloud-based security tools.

However, businesses still need to make sure they have basic security steps in place. David Weston, principal security group manager at Microsoft, points to common attacks he sees in today’s threat landscape.

“The stuff we’re seeing is the unpatched public-facing services, and misconfiguration,” he said in an interview with Dark Reading. “There’s also trends in credential targeting, at least rolling credential attacks.” In these public cloud attacks, threat actors take the identities of everyone they’d like to target and use one password across all of them.
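
The pattern Weston describes – one password tried across many accounts – is commonly called password spraying, and the defender’s counterpart is to flag sources whose failed logins span unusually many distinct accounts. A hypothetical log-analysis sketch (the event format and threshold are assumptions):

```python
from collections import defaultdict

def detect_spraying(failed_logins, account_threshold=10):
    """Given (source_ip, account) failed-login events, return sources that
    failed against at least `account_threshold` distinct accounts - the
    signature of a spray, where one password is tried across a user list."""
    accounts_by_source = defaultdict(set)
    for source, account in failed_logins:
        accounts_by_source[source].add(account)
    return sorted(s for s, accts in accounts_by_source.items()
                  if len(accts) >= account_threshold)
```

A spraying source trips the distinct-account threshold even though it generates only one failure per account, which per-account lockout counters would miss.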

“By my count, we still don’t have a major breach that’s been attributed to a flaw in the cloud infrastructure itself,” says Misha Govshteyn, senior vice president of products and marketing at Alert Logic. “I’m not aware of any breaches attributed to underlying flaws in their cloud platforms.”

“The biggest thing we’re still battling is misconfiguration in cloud environments,” he continues, adding that businesses have “a tremendous amount of control” over cloud configurability. “Every time we see a data leak or compromise, it’s because a customer has failed to do something, as opposed to a cloud provider themselves has failed.”

“There should be no reason to miss these flaws,” says Govshteyn. “It’s all configuration-level issues.”

Services Buckle Down on Cloud

Companies this week announced products and services to help secure companies making the move to cloud. Kaspersky announced a hybrid cloud security offering, a management tool that integrates with Amazon Web Services and Microsoft Azure.

The idea is to recognize that businesses may not be fully ready to move to the cloud due to poor visibility. The tool combines exploit prevention, vulnerability assessment, automated patch management, anti-ransomware, and behavior detection into a single system.

A new partnership between FireEye and Oracle will focus on cloud security. FireEye Email Security is now available on the Oracle Cloud Marketplace, and customers can evaluate the email security tool running on Oracle Cloud Infrastructure via the Oracle Jump Start demo lab.


Interop ITX 2018

Join Dark Reading LIVE for two cybersecurity summits at Interop ITX. Learn from the industry’s most knowledgeable IT security experts. Check out the security track here. Register with Promo Code DR200 and save $200.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/data-visibility-control-top-cloud-concerns-at-rsa-/d/d-id/1331573?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Latest News from RSAC 2018

Check out Dark Reading’s exclusive coverage of the news and security themes that are dominating RSA Conference 2018 this week in San Francisco.

RSA CONFERENCE 2018 – San Francisco – Kelly Jackson Higgins, Sara Peters, Kelly Sheridan, Curt Franklin, and Ericka Chickowski offer news and analysis of keynote presentations, press conferences, and interviews with speakers and attendees. Content will be updated regularly through the week.

Cyber War Game Shows How Federal Agencies Disagree on Incident Response
Former officials at DHS, DOJ, and DOD diverge on issues of attribution and defining what constitutes an act of cyber war.

Data Visibility, Control Top Cloud Concerns at RSA
As the traditional perimeter dissolves and sensitive data moves to the cloud, security experts at RSA talk about how they’re going to protect it.

Trump Administration Cyber Czar Rob Joyce to Return to the NSA
First year of Trump White House’s cybersecurity policy mostly followed in the footsteps of the Obama administration.

Microsoft to Roll Out Azure Sphere for IoT Security
Azure Sphere, now in preview, is a three-part program designed to secure the future of connected devices and powered by its own custom version of Linux.

DevOps May Be Cause of and Solution to Open Source Component Chaos
DevOps is accelerating the trend of componentized development approaches, but its automation can also help enforce better governance and security.

Large Majority of Businesses Store Sensitive Data in Cloud Despite Lack of Trust
Researchers report 97% of survey respondents use some type of cloud service but continue to navigate issues around visibility and control.

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/careers-and-people/latest-news-from-rsac-2018/d/d-id/1331575?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple