STE WILLIAMS

Turning Vision to Reality: A New Road Map for Security Leadership

Among the takeaways from a Gartner Symposium/ITxpo session: who should be accountable for data security, why security groups should stop thinking of themselves as protectors, and the consequences of locking down ‘dumb’ users.

(Image: iconicbestiary via Adobe Stock)

Whenever executives gather, words like “leadership” and “vision” get thrown around — a lot. Gartner Symposium/ITxpo is no exception, but at a Monday morning conference session in Orlando, Florida, a Gartner analyst added specifics to the high-level vision around security and risk management.

Tom Scholtz, distinguished vice president and analyst at Gartner, began by pointing out how security has become a board-level issue at successful companies. He then spent the next 40 minutes talking about how executives should be turning their vision into something more concrete and how to lead with that solid vision.

He began at the top and worked down through a process that, he said, makes sense in today’s highly dynamic business environment.

Why Are We Here?
Scholtz began with a typical “mission” statement for IT security: “to establish and maintain the organization, its digital platforms, people, partners, services, and things as trusted participants in the digital economy.” Turning this into strategy, he said, begins with understanding the effects of at least three critical drivers: business, technology, and environment.

The next step is to understand where security stands today. At least two assessments will be necessary to get an accurate picture of the “now,” he said. For some companies, external vulnerability and technology maturity assessments will do the job. Other companies, though, may need to add regulatory compliance, risk, or other assessments to build the proper foundation for a strategy.

Four additional steps will take a company to a strategy, according to Scholtz: Find the gaps between where you are and where you need to be; prioritize changes and actions based on the specific needs of the organization; gain approval from all stakeholders and the board; and execute on the plan.

Who’s on First?
Those are standard parts of any strategy development, but Scholtz had some specific ideas to deal with the rapidly changing nature of an increasingly digital company.

One of the keys to building a solid leadership vision for security and risk management, Scholtz said, is establishing a solid governance plan. That is, figure out who is responsible for making decisions about data and who is accountable for those decisions. The answer, Scholtz said, should not be the CIO or the CISO. Rather, the business unit that has decided to collect and analyze a particular set of information should be the owner of the data, and it should also be accountable for its security, he said.

Tom Scholtz, Gartner distinguished vice president and analyst (Image: Curtis Franklin, Jr. for Dark Reading)

“Too many organizations say the CIO or CISO is accountable, but a key characterization of digitalization is that many changes are being driven by the business,” Scholtz said. “We want the business to be innovative, but we also want the business to understand that if something goes wrong, they will be accountable.”

Principle Ingredients
Another key Scholtz stressed was basing decisions on the proper foundation. Too many businesses build implementation plans on processes, he said. A better answer is to base them on principles, and the reason has everything to do with flexibility.

One of the primary principles Scholtz recommended is a shift from protecting infrastructure to protecting business outcomes. A focus on business outcomes, he said, allows for consistency through times of shifting technology. Business outcomes also scale much more successfully than do technology protections, especially if an organization goes through times of rapid growth.

Another principle is that the security group should see themselves as enablers rather than protectors. If the business unit is going to be accountable for its data, then security’s job is to enable the unit to make decisions that keep it secure. In most organizations, that enabling will also involve implementing pieces of security infrastructure, but the key to this principle is the accountability and cooperation of the business unit in security operations.

Enabling is important in a third principle, too: Security should become people-centric. Scholtz mentioned the traditional view of a “dumb” user as the weakest link in a security system and a factor to be minimized as much as possible.

“Increasingly, the ‘dumb users’ aren’t so dumb — they’re the ones driving innovation. We can’t just lock them down,” Scholtz explained.

Instead, he suggested, the current generation of technology-savvy users should be given (and be assumed to have) a certain level of knowledge about their systems and security.

“We must give them the knowledge that corresponds to the level of knowledge required to safely operate an automobile,” Scholtz said, referring to the combination of “book learning” and practical experience that has to be demonstrated before someone is given a driver’s license.

All of the plans and strategies that make up security leadership should be revised every year, rather than the three- to five-year cycle long considered sufficient, he added. And they should be reviewed quarterly to make sure business conditions haven’t left them behind.

The keys to successful security leadership, Scholtz said, are remaining flexible, understanding the importance of context for decisions, and focusing on broad principles rather than prescriptive policies.

Related Content:

This free, all-day online conference offers a look at the latest tools, strategies, and best practices for protecting your organization’s most sensitive data. Click here for more information and to register.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/turning-vision-to-reality-a-new-road-map-for-security-leadership/b/d-id/1336133?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Surviving Security Alert Fatigue: 7 Tools and Techniques

Experts discuss why security teams are increasingly overwhelmed with alerts and share tactics for lightening the load.

It’s an all-too-common problem for today’s security teams: Alerts stream from a range of tools (sometimes misconfigured) and flood operations centers, forcing analysts to sift through them and prioritize which ones deserve attention. Suffice it to say, major problems arise when critical alerts slip through the cracks and lead to a security incident.

“One of the biggest drivers of alert fatigue is the fact that people are unsure or unconfident about the configuration that they have or the assets they have,” says Dr. Richard Gold, head of security engineering at Digital Shadows. “What happens is you end up with a lot of alerts because people don’t understand the nature of the problem, and they don’t have time to.”

Dr. Anton Chuvakin, head of solution strategy at Chronicle Security, takes it a step further: Many businesses are overwhelmed by alerts because they have never needed to handle them.

“I think a lot of organizations, until very recently, still weren’t truly accepting of the fact they have to detect attacks and respond to incidents,” he explains. Now, those that never had a security operations center or security team are adopting threat detection and are underprepared.

The proliferation of security tools is also contributing to the alert fatigue challenge, Chuvakin notes. “Today we have a dramatically wider scope of where we are looking for threats,” he continues. “We have more stuff to monitor, and that leads alerts to increase as well.” The most obvious risk of alert overload, of course, is companies could miss the most damaging attacks.

Security staff tasked with processing an unmanageable number of alerts will ultimately suffer from burnout and poor morale, security experts agree. What’s more, overwhelmed employees may also be likely to simply shut off their tools.

It isn’t the technology’s fault, notes Chris Morales, head of security analytics at Vectra. “We don’t have a detection problem – we have a prioritization problem,” he explains. Any given person in a commercial security environment is tasked with multiple jobs: parsing data, writing scripts, knowing the ins and outs of cloud – and managing a range of tech in their environment.
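
To make Morales’ point concrete, here is a minimal sketch of risk-based triage: scoring alerts by detection confidence and asset criticality rather than working the queue first-in, first-out. The field names, weights, and asset tiers are illustrative assumptions, not any vendor’s scoring model.

```python
# Minimal sketch of risk-based alert triage: score each alert by
# detection confidence and the criticality of the affected asset,
# then work the queue highest-risk first. All names and weights
# here are illustrative assumptions.

ASSET_CRITICALITY = {
    "domain-controller": 1.0,
    "payment-gateway": 0.9,
    "employee-laptop": 0.4,
    "test-vm": 0.1,
}

def triage_score(alert):
    confidence = alert["confidence"]          # 0.0-1.0 from the detection tool
    criticality = ASSET_CRITICALITY.get(alert["asset"], 0.2)
    dedup_penalty = 0.5 if alert["seen_before"] else 1.0  # demote repeats
    return confidence * criticality * dedup_penalty

alerts = [
    {"name": "beaconing", "asset": "domain-controller", "confidence": 0.7, "seen_before": False},
    {"name": "port-scan", "asset": "test-vm", "confidence": 0.9, "seen_before": True},
]

for alert in sorted(alerts, key=triage_score, reverse=True):
    print(f"{triage_score(alert):.2f}  {alert['name']} on {alert['asset']}")
```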

“The amount of data being pushed through corporate networks today is unlike anything we could have imagined 10 years ago,” says Richard Henderson, head of global threat intelligence at LastLine. Organizations are struggling, and the onslaught of alerts is putting them at risk.

Here, security experts share their thoughts on the drivers and effects of alert fatigue, as well as the tools and techniques businesses can use to mitigate the problem. Which strategies have you used to combat alert overload? Which are effective? Feel free to share in the Comments section, below.

(Image: VadimGuzhva – stock.adobe.com)

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/surviving-security-alert-fatigue-7-tools-and-techniques/b/d-id/1336128?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bugcrowd Enters the IT Asset Discovery Business

New service searches for errant or vulnerable devices on the Internet.

Bug bounty program provider Bugcrowd today added a new service in which selected white-hat hackers help root out an organization’s exposed and vulnerable network devices on the Internet.

The new Attack Surface Management (ASM) service also analyzes the risks these devices pose and provides remediation recommendations for the findings.

Mapping and amassing a full inventory of devices on a network sounds like an obvious practice, but most organizations struggle to get a handle on what’s living on their network — a problem exacerbated by the explosion of mobile and Internet of Things devices in the typical enterprise — and how attackers could abuse them if they’re vulnerable or misconfigured.

Casey Ellis, founder, chairman, and CTO of Bugcrowd, says ASM differs from traditional asset discovery tools in that it focuses on the Internet view of the devices rather than on an internal network view. “We’re at a point right now where pretty much everyone is part of the way in migration to the cloud, which means you can’t really find” everything, he notes. “We’re doing it for them.”

Bugcrowd, which launched in 2012 as a crowdsourcing model for finding vulnerabilities in software, offers bug bounty, vulnerability disclosure programs, and penetration testing. The company relies on vetted independent security researchers to discover security weaknesses.

“This [new ASM offering] doubles down that we’re not just focused on bug bounties and vulnerability disclosure. … There are more things we can do with the crowd. This cements us bringing this crowdsourced security approach” more widely, he notes.

ASM’s rollout comes on the heels of Metasploit creator and renowned security expert HD Moore’s recent release of his new IT asset discovery tool, Rumble Network Discovery, which detects an organization’s devices and their status on a network without requiring administrative access to reach them.

While Rumble maps out devices from inside an organization’s network, Bugcrowd’s ASM detects asset exposure on the public Internet. “The thing HD is solving first is the idea of an internal view of a corporate network, something [he’s] beginning to address from the inside-out. We’re taking it from the outside-in,” Ellis says.

ASM will essentially provide a benchmark of network assets and can be set to detect devices on a continuous basis, he says. New devices are often placed on the Internet outside the purview of the security team, he notes, and that makes it difficult to keep tabs on them.

Moore, founder and CEO of Critical Research Corp., says Bugcrowd’s new service should augment the bug bounty program as well. “Many bounty programs are limited by unrealistically small scopes because the folks running the program aren’t aware of how much stuff they have exposed to the Internet,” he says. “This should be a good thing for Bugcrowd, as it gives the crowd more things to look at, and great for their customers, as they get visibility into their overall exposure, and not just what they happen to know about.”

Moore notes that there are several other vendors currently monitoring the external attack surface, including Censys.io, Asset Note, Expanse.co, RiskRecon, and BitDiscovery. “In the case of Asset Note, the team started the company as the result of doing bug bounty work and realizing how big the gap was between perceived and actual exposure for most organizations,” he says. “Visibility is a big deal for security and it’s great to see another company making Internet-wide asset discovery part of their platform.”

Profiles and Context
Ellis notes that many organizations today merely consult DNS records to track the external exposure of their devices. But those lists contain only the systems they already know about, he says.
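
A toy illustration of that gap: resolving candidate hostnames that never made it into the documented inventory. The domain and wordlist below are hypothetical placeholders — real attack-surface tools draw on certificate transparency logs, passive DNS, and internet-wide scan data rather than a short wordlist — and this should only be pointed at domains you own.

```python
import socket

# Hosts the organization knows about (e.g., from its own DNS records).
documented = {"www.example.com", "mail.example.com"}

# Candidate names an outside-in scan might try. This short wordlist is a
# stand-in for the far richer data sources real ASM services use.
candidates = ["www", "mail", "vpn", "staging", "jenkins", "dev"]

for name in candidates:
    host = f"{name}.example.com"
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        continue  # name doesn't resolve; nothing exposed under it
    tag = "known" if host in documented else "NOT IN INVENTORY"
    print(f"{host} -> {addr}  [{tag}]")
```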

Bugcrowd’s new asset discovery service stops short of exploiting any vulnerable devices it discovers, he says. It’s more about profiling the assets and providing context on how risky they are and what would happen if they were attacked.

ASM’s findings can be used in Bugcrowd’s bug bounty and penetration testing programs for more targeted testing, the company says.


Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/risk/bugcrowd-enters-the-it-asset-discovery-business/d/d-id/1336135?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Keeping Too Many Cooks out of the Security Kitchen

A good security team helps the business help itself operate more securely — soliciting input while adhering to a unified strategy, vision, goals, and priorities.

I’ve been fond of the idiom “too many cooks in the kitchen” for quite a while now. The Free Dictionary defines the idiom as “Too many people are trying to control, influence, or work on something, with the quality of the final product suffering as a result.” In some sense, this phrase goes hand in hand with another favorite of mine: “A camel is a horse designed by a committee.”

Those of us who are familiar with these sayings are aware of the lesson they aim to teach us. Sometimes, we need to lead. Other times, we need to follow. And still other times, we need to move out of the way of those already engaged in an activity. Knowing which type of behavior is required at any given moment is difficult. Here are five ways these lessons apply to security:

Maturation rate: The goal of any security organization should be to continuously mature. For more mature security organizations, this means keeping pace with changing risks and threats and not being lulled into a sense of complacency. For those organizations that are less mature, it provides an opportunity to take a step back, thoroughly assess gaps in the organization’s security posture, and work to address those gaps. In both of these cases, leadership, informed from a variety of sources, needs to set a strategic direction. Responsibility for implementing the different elements of the strategy needs to be delegated to the management within the security organization, and individual team members need to be given clear direction regarding what to execute, along with the requisite resources.

This may sound straightforward, but, perhaps surprisingly, many organizations struggle with this. Lack of clear direction from leadership creates a cacophony of voices weighing in on and trying to affect strategy and vision. Lack of training, skills, or proficiency within the management ranks causes a failure to properly implement the strategy, leading various individual team members to attempt to take charge from the bottom up. Lack of guidance causes team members to overstep bounds or to perform redundant or overlapping activities. Any or all of these indicate that there are too many cooks in the kitchen, and this has the effect of slowing down the maturation rate.

Policy: Formal security policies are and have always been an important part of a mature security program. While setting and enforcing policies is important, so is drafting them properly. When drafting policies, it’s important to solicit and incorporate feedback from a variety of sources both inside and outside of the organization. Equally as important, however, is maintaining strong leadership throughout this process. While the feedback and guidance we receive from others is valuable, we cannot allow it to drive the process. That will only stall us — preventing us from progressing as we need to. Take input for what it is — data, not leadership.

Process: Setting up practical, efficient, and productive processes is one of the best ways a security organization can mature. As you might expect, this is easier said than done. While attaining the right processes is in itself a process, there are a few tips that can help make the journey smoother. When designing a process, it helps to have in mind the initial conditions, as well as the desired end state. It is also useful to gain executive, management, and stakeholder buy-in. Lastly, industry best practices, third-party guidance, and expert advice can also come in handy. When leveraged properly, all of these inputs can greatly improve processes. On the other hand, when the inputs begin to drive process, rather than the organization’s vision and strategy, the result is rarely the practical, efficient, and productive process we seek.

Technology: As time marches on, technology changes, often for the better. As such, it makes complete sense to consider new and improved technologies as they prove fit for our respective security programs. The trick with technology is to procure it to address process inefficiencies, programmatic gaps, regulatory requirements, and business needs. The chorus of voices suggesting every technology under the sun can’t be allowed to drown out the matrixed operational needs of the security organization. Giving in to the noise can steer the security team off course and slow or even harm the maturity of the organization. There is a balance here between soliciting feedback around technology and managing technology procurement in line with the organization’s strategic direction.

Business: The job of the security organization is to reduce and mitigate risk in order to facilitate the business operating as securely as possible within the bounds of accepted risk. There will always be business functions that necessitate accepting a certain amount of risk. That being said, there are often creative ways to reduce and mitigate some or all of that risk without adversely affecting the business. The security organization needs to work with the business, but it also needs to be firm. It can’t be the Department of No, but it can’t be a pushover either. Walking this fine line means understanding, listening to input from, and accepting feedback from the business. It does not mean, however, allowing the business to take the wheel and steer security. A good security team helps the business help itself to operate more securely — soliciting input while adhering to its strategy, vision, goals, and priorities.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Turning Vision to Reality: A New Road Map for Security Leadership.”

Josh (Twitter: @ananalytical) is an experienced information security leader who works with enterprises to mature and improve their enterprise security programs.  Previously, Josh served as VP, CTO – Emerging Technologies at FireEye and as Chief Security Officer for … View Full Bio

Article source: https://www.darkreading.com/operations/keeping-too-many-cooks-out-of-the-security-kitchen-/a/d-id/1336101?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Autoclerk Database Spills 179GB of Customer, US Government Data

An open Elasticsearch database exposed hundreds of thousands of hotel booking reservations, compromising data from full names to room numbers.

Security researchers have detected a leak in an Elasticsearch database belonging to Autoclerk, a reservations management system recently acquired by Best Western Hotels and Resorts Group.

The 179GB database was linked to multiple online travel and hospitality platforms. This leak exposed personal data and travel arrangements of thousands of hotel guests and members of the US government, military, and Department of Homeland Security (DHS), vpnMentor reports. A team led by Noam Rotem and Ran Locar calls it a “massive breach” of government security.

It was difficult to pinpoint the database’s owner due to the sheer amount of data exposed and number of external origin points, the team says in a writeup of their discovery. Autoclerk is a reservations system for hotels, accommodation providers, travel agencies, and other companies. It includes server- and cloud-based property management systems, an online booking engine, central reservations systems, and hotel property management systems. The database vpnMentor discovered was connected to several hotel and travel platforms mostly based in the US.

Much of the data came from external property management systems, booking engines, and data services in the tourism and hospitality industries, including HAPI Cloud, OpenTravel, myHMS, CleanMeNext, and Synxis. These systems used the database owner’s platform to interact with one another.

This leak compromised travelers all over the world. Autoclerk’s database held hundreds of thousands of booking reservations, which contained data including full names, birth dates, home addresses, phone numbers, dates and costs of travel, and masked credit card details. Guests who had checked into a hotel had the check-in time and room number exposed.
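
Leaks like this one are typically found because older Elasticsearch deployments answer unauthenticated HTTP on port 9200 by default. As a hedged sketch of how to check whether one of your own instances is exposed (the hostname below is a placeholder; query only systems you own):

```python
import requests  # third-party: pip install requests

# Hypothetical host; point this only at infrastructure you own.
BASE = "http://es.example.internal:9200"

try:
    # An unauthenticated instance will happily describe itself...
    info = requests.get(BASE, timeout=5).json()
    print("cluster:", info.get("cluster_name"))

    # ...list its indices...
    print(requests.get(f"{BASE}/_cat/indices?v", timeout=5).text)

    # ...and return documents to anyone who asks.
    hits = requests.get(f"{BASE}/_search?size=1", timeout=5).json()
    print("total hits:", hits["hits"]["total"])
except requests.RequestException as exc:
    print("not reachable or not open:", exc)
```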

One of the compromised platforms was a contractor for the US government, military, and DHS that manages travel for government and military personnel. The leak exposed personally identifiable information of employees and travel plans, including logs for US army generals going to Moscow, Tel Aviv, and other places, along with email addresses and phone numbers.

Read more details in vpnMentor’s post here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/autoclerk-database-spills-179gb-of-customer-us-government-data/d/d-id/1336143?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

NordVPN Breached Via Data Center Provider’s Error

The VPN company said that one of its 3,000 servers in a third-party data center was open to exploitation through a misconfigured management tool.

NordVPN, a popular consumer VPN provider, has provided details on an intrusion that affected one of its 3,000 servers. According to the company, the March 2018 cyberattack did not expose any user credentials or history to outsiders.

In a NordVPN blog post, blog editor Daniel Markuson wrote that the breach was due to the actions of a third-party provider. He explained that the breach occurred in “…an insecure remote management system account that the datacenter had added without our knowledge. The datacenter deleted the user accounts that the intruder had exploited rather than notify us.” NordVPN has since terminated its contract with the provider.

NordVPN was not alone in being hit by the attack; VPN providers TorGuard and VikingVPN were also affected. All three have said that no user data or credentials were exposed by the events.

For more, read here and here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/nordvpn-breached-via-data-center-providers-error/d/d-id/1336144?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The AI (R)evolution: Why Humans Will Always Have a Place in the SOC

In cybersecurity, the combination of humans and machines can do what neither can do alone — form a complementary team capable of upholding order and fighting the forces of evil.

Amber Wolff, campaign specialist at McAfee, also contributed to this article.

The 20th century was uniquely fascinated with the idea of artificial intelligence (AI). From friendly and helpful humanoid machines — think Rosie the Robot maid or C-3PO — to monolithic and menacing machines like HAL 9000 and the infrastructure of the Matrix, AI was a standard fixture in science fiction. Today, as we’ve entered the AI era in earnest, it’s become clear that our visions of AI were far more fantasy than prophecy. But what we did get right was AI’s potential to revolutionize the world around us — in the service of both good actors and bad.

Artificial intelligence has revolutionized just about every industry in which it’s been adopted, including healthcare, the stock markets, and, increasingly, cybersecurity, where it’s being used to both supplement human labor and strengthen defenses. Because of recent developments in machine learning, the tedious work that was once done by humans — sifting through seemingly endless amounts of data looking for threat indicators and anomalies — can now be automated. Modern AI’s ability to “understand” threats, risks, and relationships gives it the ability to filter out a substantial amount of the noise burdening cybersecurity departments and surface only the indicators most likely to be legitimate.

The benefits of this are twofold: Threats no longer slip through the cracks because of fatigue or boredom, and cybersecurity professionals are freed to do more mission-critical tasks, such as remediation. AI can also be used to increase visibility across the network. It can scan for phishing by simulating clicks on email links and analyzing word choice and grammar. It can monitor network communications for attempted installation of malware, command and control communications, and the presence of suspicious packets. And it’s helped transform virus detection from a solely signature-based system — which was complicated by issues with reaction time, efficiency, and storage requirements — to the era of behavioral analysis, which can detect signatureless malware, zero-day exploits, and previously unidentified threats.
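
As a toy illustration of the word-choice analysis mentioned above, here is a crude phishing scorer built on hand-picked indicator phrases. Production systems learn their features and weights from labeled mail corpora; every pattern and weight below is an assumption for demonstration only.

```python
import re

# Hand-picked indicator phrases with illustrative weights. A real system
# would learn both features and weights from a labeled mail corpus.
INDICATORS = {
    r"verify your account": 3.0,
    r"urgent|immediately": 2.0,
    r"click (here|below)": 1.5,
    r"password|credentials": 1.5,
    r"dear (customer|user)": 1.0,
}

def phishing_score(message: str) -> float:
    """Sum the weights of every indicator pattern found in the message."""
    text = message.lower()
    return sum(w for pat, w in INDICATORS.items() if re.search(pat, text))

email = "Dear customer, your account is locked. Click here immediately to verify your account."
score = phishing_score(email)
print(f"score={score:.1f} -> {'suspicious' if score >= 4 else 'probably fine'}")
```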

But while the possibilities with AI seem endless, the idea that they could eliminate the role of humans in cybersecurity departments is about as farfetched as the idea of a phalanx of Baymaxes replacing the country’s doctors. While the end goal of AI is to simulate human functions such as problem-solving, learning, planning, and intuition, there will always be things that AI cannot handle (yet), as well as things AI should not handle. The first category includes things like creativity, which cannot be effectively taught or programmed, and thus will require the guiding hand of a human. Expecting AI to effectively and reliably determine the context of an attack may also be an insurmountable ask, at least in the short term, as is the idea that AI could create new solutions to security problems. In other words, while AI can certainly add speed and accuracy to tasks traditionally handled by humans, it is very poor at expanding the scope of such tasks.

There are also tasks that humans currently excel at and that AI could potentially perform someday. But these are tasks in which humans will always have a sizable edge, or ones AI shouldn’t be trusted with. This list includes compliance, independently forming policy, analyzing risks, and responding to cyberattacks. These are areas where we will always need people to serve as a check on AI systems’ judgment, verify their work, and help guide their training.

There’s another reason humans will always have a place in the SOC: to stay ahead of cybercriminals who have begun using AI for their own nefarious ends. Unfortunately, any AI technology that can be used to help can also be used to harm, and over time AI will be every bit as big a boon for cybercriminals as it is for legitimate businesses.

Brute-force attacks, once on the wane due to more sophisticated password requirements, have received a giant boost in the form of AI. The technology combines databases of previously leaked passwords with publicly available social media information. So instead of trying to guess every conceivable password starting with, say, 111111, only educated guesses are made, with a startling degree of success.
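
A defender-side sketch of the “educated guess” idea: given tokens scraped from a public profile, generate the candidates such a model would try first — useful for auditing your own password policy. The tokens and mutation rules are illustrative assumptions based on patterns commonly observed in leaked-password corpora.

```python
from itertools import product

# Tokens an attacker could scrape from a public profile (illustrative).
tokens = ["rex", "jordan", "1987"]

# Suffix mutations seen over and over in leaked-password corpora.
suffixes = ["", "!", "123", "2019"]

def mutate(word: str):
    """Yield common variants of a single token."""
    yield word
    yield word.capitalize()
    yield word.replace("a", "@").replace("o", "0")  # common substitutions

# Single-token variants plus two-token combinations.
candidates = {m + s for t in tokens for m in mutate(t) for s in suffixes}
candidates |= {a.capitalize() + b for a, b in product(tokens, tokens) if a != b}

print(len(candidates), "educated guesses, e.g.:", sorted(candidates)[:5])
```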

In a similar way, AI can be used for spearphishing attacks. Right now, spearphishing typically must be done manually, limiting its practicality. But with a combination of data gathering and machine learning technologies, social media and other public sources can be used to “teach” the AI to write in the style of someone the target trusts, making it much more likely that the target will perform an action that allows the attacker to access sensitive data or install malicious software. Because the amount of work required for spearphishing will drop significantly at the same time the potential for payoff skyrockets, we’ll no doubt see many more such attacks.

Perhaps the biggest threat, however, is that hackers will use their AI to turn cybersecurity teams’ AI against them. One way this can be done is by foiling existing machine learning models, a process that’s become known as “adversarial machine learning.” The “learning” part of machine learning refers to the ability of the system to observe patterns in data and make assumptions about what that data means. But by inserting false data into the system, the patterns that algorithms base their decisions on can be disrupted — convincing the target AI that malicious processes are meaningless everyday occurrences, and can be safely disregarded. Some of the processes and signals that bad actors place into AI-based systems have no effect on the system itself — they merely retrain the AI to see these actions as normal. Once that’s accomplished, those exact processes can be used to carry out an attack that has little chance of being caught.
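
A numerical toy model of that retraining attack: a detector that flags events far from the mean of its training data can be drifted by feeding it “normal” samples that creep toward the attacker’s target behavior. This assumes numpy, and the distributions and threshold are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_anomalous(x, baseline, k=3.0):
    """Flag x if it sits more than k standard deviations from the baseline mean."""
    return abs(x - baseline.mean()) > k * baseline.std()

# Legitimate activity clusters around 10; the attack behaves like 25.
baseline = rng.normal(10, 1, 1000)
attack = 25.0
print("before poisoning:", is_anomalous(attack, baseline))   # True: caught

# Poisoning: benign-looking samples that creep toward the attack value are
# slipped into the retraining data, widening what counts as "normal".
poison = rng.normal(19, 4, 500)
retrained = np.concatenate([baseline, poison])
print("after poisoning: ", is_anomalous(attack, retrained))  # False: missed
```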

Given all the ways AI can be used against us, it may be tempting for some to want to give up on AI altogether. But regardless of your feelings about it, there’s no going back. As cybercriminals develop more sophisticated and more dangerous ways to utilize AI, it’s become impossible for humans alone to keep up. The only solution, then, is to lean in, working to develop and deploy new advancements in AI before criminals do, while at the same time resisting the urge to become complacent. After all, the idea that there’s no rest for the wicked seems to apply double to cyberattackers, and even today’s most clever advancements are unlikely to stem tomorrow’s threats.

The future of cybersecurity will be fraught with threats we cannot even conceive of today. But with vigilance and hard work, the combination of man and machine can do what neither can do alone — form a complementary team capable of upholding order and fighting the forces of evil.

Maybe our AI isn’t so different from the movies, after all.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Turning Vision to Reality: A New Road Map for Security Leadership.”

Dr. Celeste Fralick has nearly 40 years of data science, statistical, and architectural experience in eight different market segments. Currently, the chief data scientist and senior principal engineer for McAfee, Dr. Fralick has developed many AI models to detect ransomware … View Full Bio

Article source: https://www.darkreading.com/careers-and-people/the-ai-(r)evolution-why-humans-will-always-have-a-place-in-the-soc-/a/d-id/1336102?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google chief warns visitors about smart speakers in his home

Apparently caught off-guard by a question from the BBC, Google hardware chief Rick Osterloh made up a privacy etiquette rule on the spot last week when he said that yes, homeowners should tell guests that they’ve got smart speakers running in their homes.

At any rate, that’s what he does, he said.

Here’s his reported response after being asked whether homeowners should tell guests that smart devices, such as a Google Nest speaker or an Amazon Echo display, are in use before they enter a building:

Gosh, I haven’t thought about this before in quite this way.

It’s quite important for all these technologies to think about all users… we have to consider all stakeholders that might be in proximity.

After a bit of mulling, Osterloh said that the answer is yes, and that he himself discloses the use of the always-listening devices, which record conversations when they hear their trigger words… or something that more or less sounds like one of their trigger words. Or a burger advertisement. Or, say, a little girl with a hankering for cookies and a dollhouse.

Not only should a homeowner disclose the presence of the devices, Osterloh said. The devices themselves should also – “probably” – let people know when they’re recording:

Does the owner of a home need to disclose to a guest? I would and do when someone enters into my home, and it’s probably something that the products themselves should try to indicate.

“Probably?” One would imagine that Google learned the necessity of having its gadgets disclose their surveillance during the prolonged debate over whether Google Glass always indicated that it was capturing images.

Back in 2014, before Google Glass got taken out of the running as a consumer product, Google went on the defensive with a list of “Google Myths”. Google would have had us believe that Glass would indicate that it’s on and recording by virtue of its green camera-on light.

Indicator lights are good things. Perhaps not so relevant when talking about smart speakers, though, given that they don’t need line of sight to record us.

And granted, multiple researchers went on to show that they could tinker with the Glass recording indicator light, including a Glass spyware app that could surreptitiously take photos of the Glass wearer him- or herself.

And then too there was the work done by Android and iOS developer Jay Freeman, in which he found that he could root Glass and thus install any software he wanted.

That’s pretty creepy, given what rooting can allow a wearer to do, Freeman said, including turning off the recording indicator light.

The question about smart speaker disclosure came at the end of a one-on-one interview to mark the launch of Google’s Pixel 4 smartphones, a new Nest smart speaker and other products.

Lessons learned (or not) from the Glass privacy debates are one thing. And to be fair, Google’s Nest cameras shine an LED light when they’re in recording mode, Osterloh noted – a light that can’t be overridden. In fact, in August, Google removed the option to turn off the status light that indicates when newer Nest cameras are recording: a move that some owners called “absurd,” given that they use the cameras for covert surveillance.

But more recently, there’s been even more relevant privacy backlash, specifically about smart speakers.

In August, following news that smart speakers from both Apple and Google were capturing voice recordings that the companies were then letting their employees and contractors listen to and analyze, both companies suspended their contractors’ access.

The BBC says it’s a “big ask” to expect homeowners to inform visitors about smart speakers being in use.

Is it?

Is it unreasonable to expect homeowners to disable their gadgets if visitors object to the potential privacy implications of being recorded?

Is it unreasonable to wish to keep your, or perhaps your children’s, voices and utterances from being recorded, by mistake or otherwise?

Readers, your thoughts?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qRZbDKOCYpk/

Woman ordered to type in iPhone passcode so police can search device

An Oregon appeals court last week decided that a woman who was high on meth when she crashed into a tree, seriously injuring one adult and five children passengers, can be forced to unlock her iPhone.

It’s not a violation of her Fifth Amendment rights against self-incrimination, the court said on Wednesday, because the fact that she knows her phone passcode is a “foregone conclusion.” Oregon Live reports that the court’s rationale is that police already had reason to believe the phone in question is hers, given that they found it in her purse.

The foregone conclusion standard keeps cropping up in these compelled-unlocking cases. It allows prosecutors to bypass Fifth Amendment protections if the government can show that it knows that the defendant knows the passcode to unlock a device.

The woman in question, Catrice Cherrelle Pittman, was sentenced to 11 years in prison in March 2017.

According to court documents, Pittman was 27 when she drove off the road and into a tree in June 2016.

She pleaded guilty to second-degree assault, third-degree assault and driving under the influence of intoxicants (DUII). Prosecutors had wanted to use evidence from Pittman’s iPhone to help them build a case that she was also allegedly dealing meth, but that charge was later dismissed.

What does this change?

Ryan Scott, a criminal defense attorney in Portland who closely follows appeals cases, told Oregon Live that the ruling is an example of a continuing erosion of rights, given that federal case law has been heading in this direction for the past few years:

Our rights are a little less than they were yesterday. But for those of us following this area of law it’s not a surprise.

He’d be one to know, but it’s worth pointing out that the courts are far from marching in lockstep on this issue.

Some, but certainly not all, courts have decided that compelled password disclosure amounts to forcing the defendant to disclose the contents of her own mind – a violation of Fifth Amendment rights against self-incrimination.

One example is the decision that came out of the Florida Court of Appeal in November 2018 regarding a case that’s similar to that of Pittman, in that it involved an intoxicated person who crashed their car, leading to the injury or death of passengers, then refused to unlock their iPhone for police.

In Florida, the court refused a request from police that they be allowed to compel an underage driver to provide the passcode for his iPhone because of the “contents of his mind” argument about the Fifth Amendment.

But the Florida court also went beyond that, saying that whereas the government in the past has only had to show that the defendant knows their passcode, with the evolution of encryption, the government needed to show that it knew that specific evidence needed to prosecute the case was on the device – not just that there was a reasonable certainty the device could be unlocked by the person targeted by the order.

If prosecutors already knew what was on the phone, and that it was the evidence needed to prosecute the case, they didn’t prove it, the Florida court said at the time. From the order to quash the passcode request:

Because the state did not show, with any particularity, knowledge of the evidence within the phone, the trial court could not find that the contents of the phone were already known to the state and thus within the “foregone conclusion” exception.

Regardless of the “foregone conclusion” standard, producing a passcode is testimonial and has the potential to harm the defendant, just like any other Fifth Amendment violation would, the Florida court said. It’s not as if the passcode itself does anything for the government. What it’s really after is what lies beyond that passcode: information it can use as evidence against the defendant who’s being compelled to produce it.

In short, the Oregon decision doesn’t change the way all courts interpret these gadget-unlock, Fifth Amendment cases. What it does is create yet another decision to which prosecutors and courts can refer when arguing and deciding future cases.

Can’t they just use their cracking tools?

As Oregon Live points out, some police departments already have tools to bypass the iOS lock screen – tools believed to be used by companies such as Grayshift and Cellebrite.

But the tools don’t always work. One example came up a year ago in a case concerning an investigation into a pedophile ring in the US state of Ohio.

With search warrant in hand, investigators searched a suspect’s house, demanding that he use Face ID to unlock the iPhone X that they found. He complied, which gave the FBI access to photos, videos, correspondence, emails, instant messages, chat logs, web cache information and more on the iPhone.

Or, at least, that’s what the search warrant authorized investigators to seize. However, they couldn’t get everything that they were after before the phone locked. A device can be unlocked by using Face ID, but unless you know the passcode, you can’t do a forensic extraction. The clock starts ticking down, and after an hour, the phone will require a passcode.

According to the suspect’s lawyer, the FBI wanted to use Cellebrite tools to get more data from his client’s phone, but they weren’t successful.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DdWnZ_mEEBY/

Just say the ‘magic password’: Boffins turn up potential backdoor in SQL Server 2012, 2014

Security researchers at ESET have published details of a backdoor into Microsoft’s SQL Server via hooks and the splendidly named “magic passwords”.

The backdoor, which targets SQL Server 2012 and 2014, has the ability to leave a miscreant with stealthy access to a compromised server and forms part of the arsenal of a malware group dubbed “Winnti” by researchers.

The Register spoke with ESET malware bod Mathieu Tartare about the research and the risk posed by the backdoor.

Before any administrators get too panicked, it is important to note that actually getting the backdoor running on a server requires administrative-level privileges. If that’s a risk in your organisation, you arguably have quite a bit more to worry about than someone blowing a hole in SQL Server authentication.

However, assuming the nefarious code somehow finds its way into the directories of a SQL Server, it has the potential to leave a handy backdoor silently swinging in the breeze for bad guys to sneak in, copy, modify or delete data. This is assuming SQL authentication is being used and they know a username with sufficient privileges.

You did remember to do something about that default admin account, didn’t you?

Oh, and the technique does away with phishing for passwords, so long as the attacker knows the “magic password” embedded into the malware. Unsurprisingly, ESET has redacted that particular nugget.

Dubbed “skip-2.0”, the malware relies on being injected into a running process, in this case the venerable sqlserver.exe from where it grabs a handle to sqllang.dll and hooks multiple functions, principally those linked to authentication and event logging.

One function hijacked is CPwdPolicyManager::ValidatePwdForLogin, which validates the password for a given SQL user.

You can probably guess where this is going.

If the user enters the “magic password”, the original function is not called and the user is connected. Worse, a global flag is set for the other hooked functions to pick up to silence event logging and auditing.
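
The hook logic is easier to see in miniature. Below is a conceptual Python model of the control flow ESET describes — not the actual implant, which hooks C++ functions inside sqllang.dll in-process, just a monkey-patch reproducing the behavior: a magic password short-circuits real validation and raises a global flag that the logging path consults.

```python
# Conceptual model only: a monkey-patched login check reproducing the
# control flow ESET describes, not the actual in-process C++ hooking.

MAGIC_PASSWORD = "not-the-real-one"   # ESET redacted the real value
suppress_logging = False              # the global flag the hooks share

def real_validate(user, password):
    return password == "s3cretAdminPw"   # stand-in for the genuine check

def log_login(user):
    if suppress_logging:
        return                           # hooked logger: stay silent
    print(f"audit: login by {user}")

def hooked_validate(user, password):
    global suppress_logging
    if password == MAGIC_PASSWORD:
        suppress_logging = True          # silence auditing for this session
        return True                      # skip the original check entirely
    return real_validate(user, password)

print(hooked_validate("sa", "wrong")); log_login("sa")         # rejected, logged
print(hooked_validate("sa", MAGIC_PASSWORD)); log_login("sa")  # accepted, silent
```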

It’s all a bit horrid and means that if you know the username of a SQL account, you can connect no matter how strong the admin thinks the password might be.

Of course, the approach does not work for Windows Authentication and, Tartare told us, will not work for disabled accounts.

There is some good news: as well as the level of privilege required to actually get the code required for this technique running on the server, it is also pretty easy to detect (for something that goes to such lengths to cover its tracks in SQL Server).

The success of the attempt to install the hook is written in cleartext to a logfile, hardcoded to C:\Windows\Temp\TS_2CE1.tmp.
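
That hardcoded path makes for a trivially cheap indicator-of-compromise check on Windows hosts. A minimal sketch follows; the file’s presence warrants investigation rather than proving compromise, and its absence proves nothing.

```python
from pathlib import Path

# Logfile ESET reports skip-2.0 writes when its hooks install successfully.
IOC = Path(r"C:\Windows\Temp\TS_2CE1.tmp")

if IOC.exists():
    print(f"possible skip-2.0 artifact: {IOC} (investigate this host)")
else:
    print("IoC file not present (absence proves nothing)")
```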

It seems a bit of an oversight on the part of the attackers. Tartare agreed, saying that it was “quite surprising” of the miscreants to leave the log file, but added that it was “usual to see some debug information in the malware – at the end of the day they are developers…”

We could think of an alternative name or two for the ne’er-do-wells.

Tartare told us that the exploit had been tried on other versions of SQL Server without success. He also added that if the malware is detected (by spotting that logfile or through an antivirus application), removing the injected code and restarting would clear the backdoor.

Unlike recent SSH backdoors (PDF), skip-2.0 is installed in-memory rather than needing an .exe modified prior to execution.

All told, it’s a nasty and stealthy password bypass backdoor. However, to make it work administrative privileges are needed to actually install the thing and, of course, the attacker needs to know a valid SQL username. So a victim really would have all kinds of other security hygiene problems on their hands.

We asked Microsoft for its thoughts on the bypass, and a mouthpiece said: “This technique will only work on SQL servers that have already been compromised. We advise customers to ensure their systems have the latest security updates installed, to use strong passwords and to only expose MSSQL servers to the internet if necessary.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/22/eset_sql_server_backdoor/