STE WILLIAMS

Tories change Twitter name to ‘factcheckUK’ during live TV debate

The Tories changed their verified Twitter press account’s display name to read “factcheckUK” for Tuesday’s live TV general-election debate between Boris Johnson and Jeremy Corbyn, switched it back right after, and triggered much gleeful parodying of the attempt to pull on the mask of nonpartisan fact-checkers.

Hey, if the UK’s Conservative Party gets to do that with its @CCHQPress account, then “@BorisJohnson_MP” (a parody account) evidently feels that they get to rename their account “CCHQ Press” and issue this apology on the party’s behalf:

Twitter has officially tsk-tsk’ed the Tories, telling the BBC that it plans to take “decisive corrective action” if they pull that stunt again … though it apparently didn’t do anything at all in response to this particular incident.

A Twitter spokesperson:

Twitter is committed to facilitating healthy debate throughout the UK general election.

We have global rules in place that prohibit behavior that can mislead people, including those with verified accounts. Any further attempts to mislead people by editing verified profile information – in a manner seen during the UK Election Debate – will result in decisive corrective action.

Twitter told the BBC that according to its terms of service, it can remove an account’s “verified” status if the account owner is “intentionally misleading people on Twitter by changing one’s display name or bio”.

At least one genuine fact-checking organization, Full Fact, was not amused. Don’t mistake @factcheckUK for a real fake-news-sniffer-outer, it said in a statement:

It is inappropriate and misleading for the Conservative press office to rename their twitter account ‘factcheckUK’ during this debate.

Please do not mistake it for an independent fact checking service such as FullFact, FactCheck or FactCheckNI.

Well, it wasn’t my decision to rename the account, Conservative Party chairman James Cleverly said in response to criticism. It was the party’s digital team, not me. Besides, he told BBC Newsnight, the party’s handle stayed the same, so “it’s clear the nature of the site.”

Mind you, renaming it was a sweet move, he said, telling the BBC that he was “absolutely comfortable” with the party “calling out when the Labour Party put what they know to be complete fabrications in the public domain”.

That attitude is in keeping with Cleverly’s response to criticism of a doctored video the party put up a few weeks ago. The video was edited to make it look like the Labour Party’s Sir Keir Starmer got stumped by a question on Brexit and just sat there in silence.

Oops, said the Conservative Party’s Johnny Mercer – the interview had “inexplicably” been doctored, he said in an apology.

Cleverly’s response to that same episode: the video was meant to be “light-hearted.”

Was redubbing the party’s Twitter account “factcheckUK” another “light-hearted” move? Was it in fact a parody, or satire? If so, the intent was lost on people who took the account for a genuine fact-checking service when it gave opinions on Jeremy Corbyn, according to the BBC’s media editor, Amol Rajan.

The responses the Twitterverse coughed up in “we can do that too” fashion included multiple people pointedly changing their Twitter display names.

One was Charlie Brooker – creator of the tech-dystopia TV show Black Mirror – who changed his display name to “factcheckUK” and then tweeted out a reference to George Orwell’s Nineteen Eighty-Four, borrowing a line about a made-up war from the novel’s fictional ruling party:

Others, including actor Ralf Little, found their accounts suspended by Twitter for allegedly doing the same thing:

… and many users agreed with Technically Ron: Twitter’s using a double standard. Why stomp on individuals, but let a major political party slide for the same thing? Why wasn’t @CCHQPress suspended?

But while some were appalled, others were bored by the whole thing, calling it a waste of airspace.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tYYIJoeFjEw/

Former White House CIO Shares Enduring Security Strategies

Theresa Payton explains the strategies organizations should consider as they integrate layers of new technology.

Infosecurity ISACA North America Conference and Expo – New York, NY – In her two-and-a-half years as White House CIO, Theresa Payton learned valuable long-term lessons about securely adopting new technology, which she shared today with an audience of cybersecurity industry pros.

“Every year we’re layering in new technologies to be considered, and it feels like we have to change our strategy every year, but [we] don’t,” she said in her opening keynote talk at the Infosecurity ISACA Conference and Expo, held here this week. Payton, the first woman to hold the position of White House CIO, is now the founder and CEO at Fortalice.

There are 3,000 staff supporting the Executive Office of the President, Payton continued, and they don’t all fit in the White House. “They’re flying all around the world. My job was to make sure that I could extend the desk in Washington, DC, to them wherever they were so that they could do their jobs.”

In the age of the smartphone, employees wanted everything to be simple – to work like an app. And not only did they want the latest technology, they wanted it secured.

As Payton pointed out, the challenges she faced then are similar to the challenges her audience of security professionals face every day. Today’s employees are also “mobile and global,” and it’s the responsibility of security practitioners to ensure the technologies they’re using are secure.

“We have never actually designed security with the human in mind,” Payton said, pointing to strong passwords as an example. Nobody ever thanks the security team for enforcing strong passwords, which have to be complex and regularly changed. Thinking about human-centered design, Payton would say to her team, “What’s the warm hug around the user?”

“How do we basically assume that technology will fail us, that humans will fail us, and therefore what are we going to do differently?” she asked. “What are the safety nets we’re going to put around everybody?” It’s “up to us,” she said, to break the tradition and think differently.

More Tech, More Problems
Payton pointed to the Internet of Things (IoT) as an example of new tech challenging security pros and encouraged the audience to think about incident response playbooks. As an example of why, she described a client running transportation centers undergoing a renovation that involved installing IoT lightbulbs and collecting travelers’ data and preferences from a mobile app. An engineer called it an “incredible customized experience.” Payton called it “really creepy.”

And what if an incident like the 2016 Mirai botnet occurred and knocked the connected devices offline? It’s an important factor to consider in an incident response playbook: What will you do? “It will happen again,” she said. “There will be a virus, there will be malware that puts your operations at risk.”

When it happens, what limited functionality will your company have left? Payton encouraged attendees to reconsider their playbook and discuss it with their teams.

Another factor to consider in an incident response playbook, she said, is what to do if your company’s data is discovered on the Dark Web. As an example, Payton told the story of a client whose payment card vendor had its source code for sale on the Dark Web. A cybercriminal had stolen the source code and was seen on Reddit, bragging that it was for sale. Using a Dark Web alias, Payton communicated with the criminal online. Her research revealed the code was legitimate, albeit a few versions old, and the attacker had discovered vulnerabilities in it.

To pay or not to pay? This is a tricky subject to broach. As Payton pointed out, she doesn’t advise paying cybercriminals in ransomware attacks. But as she told her client, if they could get the source code off the marketplace, it would be for “the greater good”. The client paid extra for exclusivity; no other attackers bought the code, and it didn’t become a bigger problem.

Companies should be asking themselves what their position would be in a situation like this. Would you consult an outside firm or handle it yourself? Do you have alternate identities for communicating on the Dark Web, and have you discussed this with your legal team? Using alternate identities on the Dark Web is a difficult topic to address with legal, she added.

Blocking BEC Attacks
As technology evolves and deep fake AI grows popular, business email compromise (BEC) attacks are growing more common and sophisticated. Payton gave the audience a piece of advice: Do not use your public-facing domain name for moving money.

Cybercriminals do their open source intelligence. They know your CEO. They know your CFO. They can figure out who your vendors are and your marketing campaigns. With knowledge gleaned from an Internet search, they have enough to send a social engineering email and transfer money.

“Get a domain name that is not your public-facing domain name,” Payton said. Get a set of email credentials only for people who are allowed to move money. Tell your bank you’re no longer using the public-facing domain name for anything to do with wire transfers and money movement. From there, create a template to be used among employees sending and fulfilling financial requests.

Decide on a code word you text to each other that isn’t a term shared on social media, Payton advised. This way, a request that doesn’t come with a code word will appear suspicious. A large healthcare provider adopted the method, she said – and it has already worked. The same strategy can be used for transferring intellectual property.
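To make the out-of-band checks concrete, here is a minimal sketch of how a finance team might screen money-movement requests. The domain name, code word, and helper function are all hypothetical illustrations of Payton’s advice, not anything she or Fortalice prescribes:

```ts
// Minimal sketch: flag a wire-transfer request unless it clears two
// out-of-band checks. The domain "payments.acme-internal.example" and
// the code word are hypothetical placeholders.

interface TransferRequest {
  senderAddress: string;           // email address the request came from
  codeWordReceived: string | null; // code word texted over a separate channel
}

// The non-public domain reserved exclusively for money movement.
const MONEY_MOVEMENT_DOMAIN = "payments.acme-internal.example";

function isSuspicious(req: TransferRequest, expectedCodeWord: string): boolean {
  const parts = req.senderAddress.toLowerCase().split("@");
  const domain = parts.length === 2 ? parts[1] : "";
  if (domain !== MONEY_MOVEMENT_DOMAIN) return true;          // not the private domain
  if (req.codeWordReceived !== expectedCodeWord) return true; // missing or wrong code word
  return false;
}

// A BEC attempt from the public-facing domain, with no texted code word:
console.log(isSuspicious(
  { senderAddress: "ceo@acme.example", codeWordReceived: null },
  "bluebird",
)); // true -> route to a human for verification
```

Anything flagged this way would go to a person for verification rather than being rejected silently, which is the point of the exercise: the attacker has to guess both the private domain and the out-of-band secret.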

“If you have this creativity in your design, you make it very hard for the cybercriminals to figure it out,” she said.

BECs are also growing more sophisticated with deep fake artificial intelligence (AI), Payton continued, noting four cases in which an attacker created a deep fake of a CEO’s voice. It’s a trend that is likely to grow: CEOs are public figures. Think about all the time they spend talking to media and speaking in public. “Creating a deep fake AI of a voice is not that hard to do,” she added.

Most of these cases aren’t successful, but in one the attacker was wired £250,000.


Article source: https://www.darkreading.com/threat-intelligence/former-white-house-cio-shares-enduring-security-strategies/d/d-id/1336415?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Employee Privacy in a Mobile Workplace

Why businesses need guidelines for managing their employees’ personal information — without compromising on security.

Consumer privacy has long been the focal point of controversies regarding how companies handle personal data. While this is clearly an important matter, it has kept the spotlight off of another important issue: the way businesses handle the personal data of their own people.

Consumer privacy is typically associated with the way companies use personal data to make a profit. But employee data is used by companies to monitor for things such as security threats, risky online behavior, and productivity drains. Because monitoring for these types of issues is essential for any business, it’s easy to see how some companies might justify higher levels of employee surveillance. 

The US Electronic Communications Privacy Act (ECPA), enacted in 1986, prohibits companies from carrying out certain privacy infringements, such as monitoring their employees’ personal phone calls without consent. Even if the federal government were to overhaul employee privacy legislation, the result would be extremely complex and would probably become antiquated shortly after its enactment — given the rate at which technology advances.

Employee privacy is a difficult subject. And for today’s increasingly mobile workplace, it’s becoming even more difficult.

When Smartphones Are Put to Work
It wasn’t too long ago that monitoring people through their phones was synonymous with wiretapping. And it also wasn’t too long ago that a company-issued desktop computer or laptop was the primary or only computer one would use in the workplace.

Today, mobile Internet traffic eclipses desktop/laptop traffic, accounting for 52% of combined traffic worldwide, according to a September 2019 report from Statista.

When cellular phones first transformed into handheld computers, the way companies distributed them to employees generally followed the way any other work equipment was distributed. And while the intent may have been for company-issued smartphones to be looked at by employees the same way company-issued desktops or laptops are typically considered — as company property for company purposes — there’s a much greater mix of personal and business use on smartphones compared with desktops/laptops. According to our research, 50% of all corporate data usage on mobile devices is not business critical.

Given this degree of personal and business activity on the same device, businesses are constantly adapting the ways they approach mobility, and employee privacy is becoming a major factor in this ongoing process.

The method of ownership for mobile devices used by employees has expanded beyond the traditional corporate owned, business only (COBO) model to include bring your own device (BYOD) and company owned, personally enabled (COPE) models. Today, many businesses use mixed combinations of these ownership models across individual employees and departments.

BYOD has been widely adopted in recent years and, on the surface, it makes a lot of sense. As of 2018, 81% of Americans owned a smartphone, according to Pew Research Center. Why spend money on new phones for employees when they probably already have their own? Why not just offer a fixed stipend to compensate them for work-related activity conducted on their devices? It’s like the difference between providing a company car and reimbursing an employee for gas expenses.

There are obvious privacy ramifications to monitoring what a person does on his or her personal device, even if there are reasonable grounds from a security perspective when it comes to business data and applications. An organization has a strong basis for seeing what its employees do with files stored in their corporate Dropbox account, for example, but should it also be able to see its employees’ private messages, social media activity, and photos?

Regardless of whether an employee is using a corporate-issued device or their own device for work, there is bound to be some crossover between personal and business data. And when businesses collect and monitor employee data, how can they approach this crossover in a responsible way that does not infringe on personal privacy or compromise any security measures?

Employee Privacy Framework
To find the right balance, organizations can utilize a framework made up of four pillars to inform the way they handle their employees’ personal information as they establish privacy policies, develop their internal infrastructures, and implement new products.

These pillars are: User Identity, User Activity, Policy, and Transparency.

Under each pillar are best practices for how organizations should collect, store, and use their employees’ personal information. These best practices can account for many gray areas that currently make it difficult for certain security processes to be conducted without infringing on employees’ personal privacy.

[Figure: the four-pillar employee privacy framework. Source: Wandera]

For example, in monitoring network activity, a company needs to understand who is accessing corporate resources and from what devices, as well as determine “normal” parameters so that anything unusual can be detected and flagged for inspection. However, in COPE and BYOD environments, what behaviors are OK for businesses to monitor?

By applying the principle of data minimization, organizations can limit the amount of information collected to only what is needed for the intended purpose. This could mean that user information would only be collected on the work profile of a device, or only when corporate applications are in use. It could also mean that location data is not continuously tracked or that the access of personal applications (such as social media, messaging, photos, etc.) is not monitored.
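As a rough illustration of data minimization in code, consider a filter that drops everything outside the stated purpose before anything is stored. The event shape and app identifiers below are invented for illustration and don’t correspond to any particular MDM or monitoring product:

```ts
// Sketch: keep only telemetry that serves the stated purpose --
// events from corporate apps in the work profile -- and strip
// location before storage. All names here are illustrative.

interface TelemetryEvent {
  app: string;           // package/bundle identifier
  workProfile: boolean;  // did the event originate in the work profile?
  location?: { lat: number; lon: number };
  payload: Record<string, unknown>;
}

const CORPORATE_APPS = new Set(["com.example.mail", "com.example.storage"]);

function minimize(events: TelemetryEvent[]): TelemetryEvent[] {
  return events
    // Purpose limitation: personal apps and the personal profile are ignored.
    .filter(e => e.workProfile && CORPORATE_APPS.has(e.app))
    // Data minimization: location is never retained.
    .map(({ location: _dropped, ...rest }) => rest);
}
```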

Taking this approach of only collecting the most necessary data, when it’s necessary, should help organizations approach their employees’ personal information in a more responsible manner, helping them establish better trust with their people as they keep the business secure.


Article source: https://www.darkreading.com/application-security/employee-privacy-in-a-mobile-workplace/a/d-id/1336374?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Cloud Update Gives Users Greater Data Control

External Key Manager and Key Access Justifications are intended to give organizations greater visibility into requests for data access.

Google Cloud today debuted new capabilities, External Key Manager and Key Access Justifications, to give customers greater visibility into who requests access to their information and the reasoning behind those requests. Customers also have the ability to approve or deny them.

Google Cloud encrypts customer data-at-rest by default; users have several options to manage encryption keys. External Key Manager, coming soon in beta, is the next level of control. It works with Cloud KMS and lets users encrypt data in BigQuery and Compute Engine. Encryption keys are stored and managed in a third-party system outside Google. The idea is to let companies separate data and encryption keys while still using cloud compute and analytics.

Key Access Justifications is a new capability designed to work with External Key Manager. When an encryption key is requested to decrypt data, this tool provides visibility into the request and its justification, along with a mechanism to approve or deny the key in the context of that request, using an automated policy set by the administrator via third-party functionality.

This feature is coming soon to alpha for BigQuery and Compute Engine/Persistent Disk, and it covers the transition from data-at-rest to data-in-use, Google reports.
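Conceptually, the administrator’s policy reduces to a function from a request’s stated justification to an approve-or-deny decision. Here’s a toy sketch of that idea; the reason strings and the function are illustrative placeholders, not Google’s actual justification codes or API:

```ts
// Toy policy: the external key manager approves or denies each key
// request based on the justification attached to it. Reason strings
// are invented for illustration, not the product's real codes.

type Justification =
  | "CUSTOMER_INITIATED_ACCESS"   // e.g. a user's own BigQuery query
  | "SYSTEM_MAINTENANCE"
  | "UNSPECIFIED";

const APPROVED_REASONS = new Set<Justification>(["CUSTOMER_INITIATED_ACCESS"]);

function decide(justification: Justification): "APPROVE" | "DENY" {
  // Deny by default: the key never leaves the external manager unless
  // the stated reason is one the administrator has pre-approved.
  return APPROVED_REASONS.has(justification) ? "APPROVE" : "DENY";
}

console.log(decide("SYSTEM_MAINTENANCE")); // "DENY"
```

The design point is that the decision happens outside Google’s infrastructure: because the key lives in a third-party system, a denial means the data simply cannot be decrypted.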



Article source: https://www.darkreading.com/cloud/google-cloud-update-gives-users-greater-data-control/d/d-id/1336422?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Patch ‘Easily Exploitable’ Oracle EBS Flaws ASAP: Onapsis

Organizations that have not yet applied a pair of months-old critical patches from Oracle for E-Business Suite are at risk of attacks on their financial systems, the application security firm says.

Two highly critical vulnerabilities in Oracle’s E-Business Suite could put firms that haven’t patched the flaws at risk of having their systems hacked for illicit payments and other financial fraud.

Exploitation of the vulnerabilities could allow, for example, an attacker to create a supplier in the system, add a bank account, and then issue payments to that supplier — all without approvals, according to cybersecurity firm Onapsis, which issued an advisory today that details the possible exploitation techniques attackers could employ against the EBS vulnerabilities.

Oracle fixed the EBS issues in its April 2019 critical patch update, but companies are often slow to apply such fixes because they cannot risk disruption to their enterprise resource planning (ERP) software, a critical component of operations, says Juan Perez-Etchegoyen, chief technology officer at Onapsis.

The vulns, which affect two components of Oracle’s EBS, are “easily exploitable,” according to the official description in the National Vulnerability Database.

“We don’t have any numbers, but we know that customers tend to take months to years to apply (ERP software) patches — that is a reality for ERP customers,” he says. “They need to get into a more frequent cadence, because otherwise it is just too slow.”

The issues are the latest to plague ERP software: highly complex platforms that are often critical to business operations. The platforms have often been used only on-premises, with Internet capabilities added afterward, exposing them to threats.

Onapsis, a provider of cybersecurity for enterprise applications, highlighted the issue more than 18 months ago, informing Oracle and then working with the company to fix the issues, Perez-Etchegoyen says. The company only released public information on the issue on Nov. 20, after Oracle customers were given time to patch.

The flaws, one in Oracle’s General Ledger component (CVE-2019-2638) and another in Oracle’s Work in Progress component (CVE-2019-2633), involve Oracle’s Thin Client Framework (TCF), which is installed by default on E-Business Suite systems. Anywhere from 15,000 to 21,000 companies, mostly small businesses but also including businesses with more than 10,000 employees, use the software. At least 1,500 companies also expose the software directly to the Internet, Perez-Etchegoyen says.

“We waited for a few months to issue a public notice, because it is such a great risk,” Perez-Etchegoyen says. “If the system is accessible to a Web browser, then it is totally exposed. We decided to go public and increase the awareness.”

‘Full Control’

“Successfully exploiting any of these vulnerabilities could lead to full control over the entire Oracle EBS system,” the company stated in its alert. “An attacker with this type of access could be detrimental in any application, but represents the worst case scenario when an ERP system is attacked.” 

Because the vulnerabilities are in components that cannot be disabled, patching the system is critical. 

Onapsis notified Oracle of the security issues affecting the Thin Client Framework in September 2017, and Oracle issued a Critical Patch Update (CPU) fixing the issues in April 2018. By December 2018, Onapsis had found more vulnerabilities and a way of bypassing one of the previous patches, according to the company’s advisory.

“Even though multiple bugs were fixed, starting with the April 2018 CPU up to the most recent CPU, the most critical patches have a CVSS score of 9.9,” the advisory stated. “All of them could be exploited remotely and, depending on the patch applied, by an unauthenticated attacker.”

The company expects that many businesses have not installed the patches, because ERP systems are often critical enough that the firms do not want a misstep.

“In our experience, we see this as an industry problem,” Perez-Etchegoyen says. “Because the data is so critical, and often customized, changing or updating or applying patches can be a significant challenge for organizations.”

Still, companies should not wait any longer to apply the fixes, he says.


Article source: https://www.darkreading.com/application-security/patch-easily-exploitable-oracle-ebs-flaws-asap-onapsis/d/d-id/1336421?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

What’s in a WAF?

Need a 101 lesson on Web application firewalls? Here’s your crib sheet on what a WAF is, how it works, and what to look for when you’re in the market for a new solution.

(Image source: Zsolt Biczo, via Adobe Stock)

Spring chickens they’re not, but Web application firewalls (WAFs) are surging in popularity as more industries connect critical business functions to the Internet — and attackers inevitably follow.  

So what exactly is a WAF, and what are the tool’s benefits and drawbacks?  

What a WAF Is
“A WAF has two primary uses: visibility into incoming malicious HTTP(S) attack traffic [and] the ability to fend off attacks, especially where a Web application is known to be vulnerable, until the underlying code can be properly fixed,” says Jeremiah Grossman, CEO of Bit Discovery and founder of WhiteHat Security.  

Traditionally, WAFs have existed in the form of physical or virtual appliances, and “increasingly are delivered from the cloud, as a service (cloud WAF service),” according to Gartner.

What a WAF Isn’t
“WAFs cannot ‘fix’ Web application vulnerabilities,” Grossman says. “It can only shield them.”

Further, a WAF product might perform a wider variety of tasks than described above — but it might not.

As Eric Parizo, senior analyst at Ovum, explains, WAF vendors have begun to wrap in capabilities often provided by other tools, like runtime application security, anti-bot protections, anti-DDoS services, and API abuse prevention. However, if you’re in the market for a WAF, you shouldn’t assume all products will come with these capabilities.

How a WAF Works
“Similar to a network firewall that inspects and discriminates traffic based upon IP address and port, a web application firewall inspects and discriminates based on HTTP(S) traffic,” Grossman explains. “Specifically, input parameter data and format, cookie data and format, and so on. … Incoming HTTP(S) traffic is analyzed and parsed where traffic can be optionally denied.”

“WAFs can be functionally deployed in-line with a website, as an out-of-band deployment, or as a software component on the Web server itself,” he adds.
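As a toy illustration of that parse-and-deny flow, here’s a sketch of parameter inspection against a couple of crude signatures. Real WAFs rely on much richer parsing, normalization, and rule sets (the OWASP Core Rule Set, for instance), so treat this as a teaching aid rather than a WAF:

```ts
// Toy WAF check: inspect each query parameter of an incoming request
// URL and deny anything matching crude SQLi/XSS signatures.

const SIGNATURES: RegExp[] = [
  /('|%27)\s*(or|and)\s+\d+\s*=\s*\d+/i, // classic ' OR 1=1
  /union\s+select/i,                     // UNION-based SQL injection
  /<script\b/i,                          // reflected XSS attempt
];

function shouldDeny(requestUrl: string): boolean {
  // URLSearchParams percent-decodes parameter values for us.
  for (const [, value] of new URL(requestUrl).searchParams) {
    if (SIGNATURES.some(sig => sig.test(value))) return true;
  }
  return false;
}

console.log(shouldDeny("https://shop.example/search?q=toys"));       // false
console.log(shouldDeny("https://shop.example/search?q=' OR 1=1--")); // true -> block
```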

Why You Might Want a WAF
Although a WAF won’t “fix” Web app vulnerabilities, it can identify them and apply security controls to incoming HTTP(S) traffic that might threaten those vulnerable apps.

Although new vulnerability classes are emerging, the old standby vulns are still going strong. For example, the enterprise DevOps movement is inspiring attackers to create more API-based attacks, but good-ol’ SQL injection and cross-site scripting attacks are still plaguing security teams.

A WAF also might help satisfy compliance needs.

“The WAF was given life by the payment card industry,” Ovum’s Parizo says. As he explains, WAFs didn’t really begin to catch on until the release of PCI DSS 6.6, which established new requirements for automated technical solutions to protect Web apps and put forth WAFs as a way to satisfy the new rule. PCI DSS 6.6, in Parizo’s words, “shoved the technology like a boulder downhill.”

Why You Should Be Careful With Your WAF
Like any security solution, a WAF will not solve all your problems.

Although 75% of respondents to a recent study by Radware, which provides WAFs and other Web appsec solutions, had WAFs deployed (among other Web app security tools), 90% of respondents nevertheless experienced appsec-related breaches.

Further, the recent Capital One breach that exposed extensive personal data of over 106 million people was enabled by a misconfigured WAF. The breach was allegedly perpetrated by a former Amazon Web Services (AWS) employee who exploited a weakness in the WAF, which Capital One had misconfigured, to gain access to files stored in an AWS database. The WAF was apparently granted too many permissions, which allegedly allowed the attacker to use a server-side request forgery (SSRF) attack to exploit the vulnerable Web app.

The Most Common Mistakes People Make When Using/Configuring a WAF
“The No. 1 challenge by far,” says Grossman, “is underestimating the deployment time and difficult configuration, and ongoing management of the device.”

Capital One’s WAF was apparently given too many permissions when granted access to the AWS database.

To avoid incidents like this and others, Dr. Richard Gold, head of security engineering at Digital Shadows, provided this advice in a column for Dark Reading: “It’s critical to continuously assess cloud environments for security issues, especially those at risk of external access from the public Internet. Reviewing security group configurations regularly can help ensure that services are not accidentally exposed and access controls are correctly applied.”

Companies That Provide WAF Solutions
Traditionally, WAF vendors provided on-premises appliances (Parizo cites Imperva and F5 as examples), but more companies are spinning up cloud-based WAF offerings (he mentions Akamai and Cloudflare). However, several companies now provide both appliances and cloud offerings, sometimes by making careful acquisitions. Other players include Barracuda, Radware, Trustwave, Qualys, and Signal Sciences, not to mention open source offerings such as ModSecurity (which was at the root of the Capital One incident).

According to Parizo, if these WAF providers don’t offer both appliances and cloud offerings now, and if they don’t offer cloud-agnostic support, they will be heading in that direction soon.

“The future is going to be a combination,” he says.


Article source: https://www.darkreading.com/edge/theedge/whats-in-a-waf-/b/d-id/1336402?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

As Retailers Prepare for the Holiday Season, So Do Cybercriminals

Online shoppers need to be wary about domain spoofing, fraudulent giveaways, and other scams, ZeroFOX study shows.

Retailers aren’t the only ones looking forward to a busy holiday shopping season this year. So are cybercriminals.

With all signs pointing to another record-breaking year for online merchants, crooks have begun ramping up their efforts to divert dollars their way via malicious domains, coupons, gift card scams, counterfeit goods, and other means.

Security vendor ZeroFOX recently analyzed threat data gathered from its retail customers over a period of 12 months. Data was analyzed across assets that a retailer wanted monitored, such as specific domains, brands, high-value executives and employees. For purposes of the research, ZeroFOX also gathered data from social media platforms, web marketplaces, the Dark Web, mobile app stores, and other sources.

ZeroFOX’s analysis showed that retailers face a diverse and multifaceted threat landscape, says Ashlee Benge, a threat researcher at ZeroFOX. Most threats attempt to abuse the brand in some way. But the way it happens varies widely, she says. “The diversity in this landscape makes it more difficult for retailers to defend themselves and their brands from these attacks,” Benge says.

Domain-based attacks top the list of threats that retailers — and, by extension, consumers — face this shopping season. These are attacks where threat actors set up websites spoofed to look like the domains of popular brands — and where users can land if, for example, they make a single typo or misspelling when entering the URL of the original site. Users tricked into interacting with these domains can end up giving up account and payment card information and other sensitive data.

Ninety-two percent of the nearly 1.4 million alerts involving retail customers that ZeroFOX encountered last year involved domain-related issues. On average, ZeroFOX generated over six domain alerts per asset monitored, per day, over the 12-month period.

“A domain alert would be an alert indicator to possible impersonation or infringement of a brand, a product, or other asset,” Benge says. “The findings showed this to be the most common alert type with a very significant number of these per legitimate instance of the underlying brand, product, etc.,” she notes. The high incidence of these attacks makes it imperative for retail organizations to monitor domains related to their brands.
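That monitoring usually starts by enumerating plausible look-alike names. Here’s a bare-bones sketch of typo-variant generation; dedicated tools such as dnstwist cover many more permutation types (homoglyphs, TLD swaps, bit flips), so this is only the idea in miniature:

```ts
// Generate crude typo variants of a brand name to watch for:
// character omission, duplication, and adjacent transposition.
// A monitoring job would then check which variants actually resolve.

function typoVariants(name: string): Set<string> {
  const variants = new Set<string>();
  for (let i = 0; i < name.length; i++) {
    variants.add(name.slice(0, i) + name.slice(i + 1));       // omission: "exmple"
    variants.add(name.slice(0, i) + name[i] + name.slice(i)); // duplication: "exxample"
    if (i < name.length - 1) {
      // transposition: "examlpe"
      variants.add(name.slice(0, i) + name[i + 1] + name[i] + name.slice(i + 2));
    }
  }
  variants.delete(name); // transposing identical neighbours reproduces the original
  return variants;
}

console.log([...typoVariants("example")].map(v => `${v}.com`));
```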

Proactive retailers can request takedown of domains that abuse their brand, though the actual time needed to accomplish that can vary with hosts, networks, and registrars, Benge says. Retailers attempting to take down spoofed domains can sometimes find the process takes longer than expected, and they end up frustrated.

Fraudulent Giveaways and Brand Impersonation
Fraudulent giveaways, coupons, and gift cards are another major concern, as are counterfeit goods. ZeroFOX counted 2,900 such scams across its retail customer base over the last year — or roughly five scam alerts per brand asset monitored. Of these, 86% were giveaway scams, where users are tricked into parting with sensitive personal information under the belief they will get free holiday gifts, gift cards, or other products in exchange.

Here again, though it is not the retailer that is directly responsible for the scam, victims can often end up blaming the retailer by association, according to ZeroFOX. “When scams and counterfeits are identified, particularly on social media platforms, the retailer has the right to request takedown of the content,” Benge says. But as with domain takedown requests, content removal requests can be an arduous process, depending on the volume of content, she says.

Brand impersonation is another issue that could trip up holiday shoppers this year. ZeroFOX identified over 33,000 instances where attackers tried to impersonate a brand by mimicking its pages, logos, and images in order to trick users. It counted another nearly 9,000 instances of executive impersonation among customers in the retail sector.

Impersonation accounts are often used to promote phishing campaigns and other scams such as directing users to sites that download malware. “By impersonating well-known individuals like executives, attackers are able to establish credibility and gain access to a wider pool of potential victims than they would be able to otherwise,” Benge says.

Another report from One Identity this week shows that online scammers are not the only concern for retailers. The report, based on a survey of over 1,000 IT professionals, says that retailers feel more at risk from unsecured third-party access than other organizations do.

Nearly three in 10 retailers in the survey said that a third party — such as a supplier or business partner — had successfully accessed files they were not supposed to, and 25% admitted to giving all third parties privileged access to their systems.

Todd Peterson, security evangelist at One Identity, says the reason retailers likely feel this way is high employee turnover, a lot of seasonal workers, and a heavy reliance on third parties for key business operations that cannot be staffed at each retail location.

“The nature of their workforce and the fact that they are typically not in business for data security is the biggest factor that puts them at risk,” Peterson says. “Basic security practices such as managing third-party access or deprovisioning users is often forgotten about from an operational standpoint, which puts most retailers at a higher risk.”


Article source: https://www.darkreading.com/attacks-breaches/as-retailers-prepare-for-the-holiday-season-so-do-cybercriminals/d/d-id/1336424?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

XSS security hole in Gmail’s dynamic email

Did Android users celebrate loudly when Google announced support for Accelerated Mobile Pages for Email (AMP4Email) in its globally popular Gmail service in 2018?

Highly unlikely. Few will even have heard of it, nor have any idea why the open source technology might improve their webmail experience.

They might, however, be interested to learn that a researcher, Michał Bentkowski, of Securitum, recently discovered a surprisingly basic security flaw affecting Google’s implementation of the technology.

The intention behind AMP4Email, called ‘dynamic email’ in Gmail, was to reduce tab-clutter and make viewing email more like viewing and interacting with web pages, by allowing, for example, filling out reservation forms or searching Pinterest from within an email.

For examples of what dynamic email looks like in Gmail, scroll through Google’s 2018 YouTube demo featuring AMP4Email examples taken from Doodle, Booking.com and Pinterest.

AMP4Email beats plain HTML hands down, but from the start Google knew this could potentially open the door to a security wrangle – the more things an email can do, the more likely someone will abuse those capabilities maliciously.

That’s why dynamic email senders are required to use TLS encryption, as well as deploying email authentication using DKIM, SPF, and DMARC so not just anyone could spray users with empowered malicious spam.

As for the content, to avoid the possibility that attackers might execute JavaScript to attempt a Cross-Site Scripting (XSS) attack, senders must also build email content using an allow list of tags and attributes or risk validation errors that stop it rendering.

XSS is bad enough when users are lured to a vulnerable website. Embedding this in an email is even more dangerous because the threat is being delivered straight to users’ webmail inboxes.

DOM clobbering

Stopping such threats by limiting the possibilities requires a form of sanitising. In AMP4Email, this included blocking custom JavaScript but not, apparently, the HTML id attribute.

Bentkowski cites this as an example of Document Object Model (DOM) ‘clobbering’ – basically an oversight in the sanitisation process, caused by the attempt to balance the need for webmail content to display without opening an XSS hole.

Having uncovered the id attribute issue, all he had to do was use trial and error to find a vulnerable condition.
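To see DOM clobbering in miniature: when injected markup survives sanitisation with its id attribute intact, the browser exposes that element as a named global property, which can shadow state the page’s own scripts rely on. The snippet below (runnable in any browser console) uses an invented property name, not the specific one Bentkowski abused:

```ts
// Attacker-controlled markup -- note: no <script> tag required.
document.body.innerHTML += '<img id="validationConfig" src="x">';

// Elements with an id become named properties of window, so a script
// that expected window.validationConfig to be unset (or to be an object
// it created itself) now sees the attacker's <img> element instead.
const cfg = (window as any).validationConfig;
console.log(cfg instanceof HTMLElement); // true: the global has been "clobbered"
```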

Google was told of the issue in August and was impressed enough to reply:

The bug is awesome, thanks for reporting!

What to do?

The bug was fixed at least a month ago, so users receiving AMP4Email/dynamic email content have one less thing to worry about.

In fairness to Google, AMP4Email only reached general availability in July, so it’s early days. Gmail users can turn it off via Settings > General > Dynamic email > Enable dynamic email.

This should be turned on by default unless the user has previously disabled image display through the ‘Ask before displaying images’ setting, which has the side effect of disabling dynamic email.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XM0wx0yUQ-A/

Instagram stalker app Ghosty yanked from Play store

Ever wanted to view hidden profiles on Instagram? To stalk users who’ve chosen to make their profiles private?

Up until Tuesday morning, you could do that by using a stalker service called Ghosty. Here’s what the app developer promised on versions available on Google Play and Apple’s App Store:

Ghosty – View Hidden Instagram Profile. You can view all the profiles you want to view including hidden profiles on Instagram. You can download or share photos or videos from your Instagram profiles to your gallery. In addition, you will soon be able to access many new features related to your instagram account.

“Soon” won’t come for the app, the logo for which was the profile of snooper extraordinaire Sherlock Holmes. Ghosty was removed from Google’s Play store after Android Police found the service creating what the publication called a “stalker paradise.” Nor could I find it on Apple’s store.

In that stalker paradise/privacy dystopia, anyone could view the many private profiles Ghosty amassed by signing up users who handed over their own accounts’ data – including whatever private accounts those users follow.

As Android Police tells it, this was the deal you had to make with the devil: in order to view whatever private accounts Ghosty had managed to crowd-source, you handed over your Instagram login credentials. You also had to invite at least one other person to Ghosty in order to view private profiles. Thus did Ghosty keep expanding the pool of content it could show its users: if any of those users followed a private account, that profile got added to the content Ghosty would make available.

Android Police noted that when it looked into the app, the media outlet managed to skip past that invitation step and was still able to view at least one private profile.

Not only was the service brazenly exploiting users’ desires to get at private accounts; it was also charging them for bundles or flinging ads at them.

Ghosty isn’t new; it appeared on the Play Store in April 2019. It had been downloaded over half a million times as of 13 November.

That’s a long time for an app to be amassing content while breaking Instagram’s rules. The relevant terms of service clause that forbids what Ghosty was up to:

You can’t attempt to buy, sell, or transfer any aspect of your account (including your username) or solicit, collect, or use login credentials or badges of other users.

As Android Police points out, during the half year that Ghosty was operating, neither Facebook (Instagram’s owners) nor Google apparently did anything about it – at least, not until now.

On Saturday, a Facebook spokesperson sent a statement to Android Police saying that no, Ghosty wasn’t exploiting Instagram’s application programming interface (API), as has been done by at least one other Instagram follower app that was recently yanked. But then, why would Ghosty even need Instagram’s API, when users were simply handing over their logins to enable the service to get at the private profiles the users follow?

The Instagram spokesperson said that the company would send a cease and desist letter:

We will be sending a cease and desist letter to Ghosty ordering them to immediately stop their activities on Instagram, among other requests.

We are investigating and planning further enforcement relating to this developer.

Last week, Apple pulled another Instagram-watching app from its store. That one, called Like Patrol, was reportedly charging users a yearly fee of $80 in exchange for access to their Instagram friends’ activities on the platform, including which posts they liked and from whom. It was also reportedly offering notifications of a person’s interactions with users of specific genders. None of that information required the consent of the person being monitored.

Android Police reports that following Facebook’s cease and desist letter, Ghosty disappeared from Google’s Play store. It’s not clear whether the developer made it go poof! or if Google pulled the app.

FTC cracks down on stalker apps

The removals of Ghosty and Like Patrol follow close on the heels of the Federal Trade Commission (FTC) settling charges with stalker app maker Retina-X Studios in October.

Retina-X Studios, former maker of the snooper tools PhoneSheriff, TeenShield, SniperSpy and Mobile Spy, put the kibosh on the products in March 2018 as a result of two hacks: the first in April 2017 and the second in February 2018.

A breach of a spyware app means that data for both the snooper users and their surveillance targets get compromised, and with these tools, that’s saying a lot: Retina-X’s tools were used to track targets’ call logs (including deleted ones), text messages, photos, GPS locations, and browser histories, as well as to eavesdrop on victims, wherever they might be.

Fortunately, the FTC has said that it’s going to be paying close attention to what spyware apps get up to. Retina-X was the first stalker app the FTC has ever gone after, but it likely won’t be the last, going by what the Commission had to say about its determination to…

…hold app developers accountable for designing and marketing a dangerous product.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SR3O4ii82mU/

Update WhatsApp now: MP4 video bug exposes your messages

WhatsApp’s pitch: Simple. Secure. Reliable messaging.

Needed marketing addendum: Hole. Update. Now. Evil. MP4s.

Facebook on Thursday posted a security advisory about a seriously risky buffer overflow vulnerability in WhatsApp, CVE-2019-11931, that could be triggered by a nastily crafted MP4 video.

It’s rated as a high-severity vulnerability – scoring 7.8 on the CVSS scale. Understandably so: if left unpatched, it can lead to remote code execution (RCE), which can then enable attackers to access users’ files and messages. The security hole also leaves devices vulnerable to Denial of Service (DoS) attacks.

Facebook said that this one affects WhatsApp versions for iOS, Android and Windows phones. The problem isn’t just on the regular WhatsApp; it’s also found on WhatsApp for Business and WhatsApp for Enterprise.

That’s an enormous number of users: With over 1.5 billion monthly active users, WhatsApp is the most popular mobile messenger app worldwide, according to Statista.

Facebook has issued a fix, so if you haven’t already, it’s time to update. Here’s Facebook’s technical explanation about the vulnerability:

A stack-based buffer overflow could be triggered in WhatsApp by sending a specially crafted MP4 file to a WhatsApp user. The issue was present in parsing the elementary stream metadata of an MP4 file and could result in a DoS or RCE.

A WhatsApp spokesperson told The Next Web that as far as the company can tell, the vulnerability hasn’t yet been exploited in the wild:

WhatsApp is constantly working to improve the security of our service. We make public reports on potential issues we have fixed, consistent with industry best practices. In this instance there is no reason to believe users were impacted.

These are the versions of the app that are affected (a rough version-check sketch follows the list):

  • Android versions prior to 2.19.274
  • iOS versions prior to 2.19.100
  • Enterprise Client versions prior to 2.25.3
  • Windows Phone versions before and including 2.18.368
  • Business for Android versions prior to 2.19.104
  • Business for iOS versions prior to 2.19.100
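Checking a build against those thresholds is ordinary dotted-version comparison. Here’s a rough sketch; the helper is illustrative, and it covers only the “prior to” entries (the Windows Phone threshold is inclusive, so it’s omitted):

```ts
// Compare dotted version strings numerically: negative if a < b.
function cmp(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (d !== 0) return d;
  }
  return 0;
}

// First fixed builds, taken from the list above ("prior to" entries only).
const FIRST_FIXED = {
  android: "2.19.274",
  ios: "2.19.100",
  enterprise: "2.25.3",
  businessAndroid: "2.19.104",
  businessIos: "2.19.100",
} as const;

function isVulnerable(platform: keyof typeof FIRST_FIXED, installed: string): boolean {
  return cmp(installed, FIRST_FIXED[platform]) < 0; // "prior to" the first fixed build
}

console.log(isVulnerable("android", "2.19.273")); // true -> update now
```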

Links in exploit chains

While it’s good news to hear that the bug hasn’t yet been exploited, it’s no reason not to stomp on it hard and fast. Such flaws can be incorporated into exploit chains that link vulnerabilities: a technique reportedly used by companies that advertise tools that can break even Apple’s iPhone encryption.

In fact, WhatsApp last month sued the spyware maker NSO Group over what’s known as a zero-click vulnerability: one that allowed attackers to silently install spyware just by placing a video call to a target’s phone.

The attack let somebody or somebodies call vulnerable devices to install spyware that could listen in on calls, read messages and switch on the camera.

WhatsApp users were getting hacked over that zero-click hole in an attack that WhatsApp says was enabled by NSO Group’s off-the-shelf spyware tools – specifically, the notorious Pegasus.

Update your phone!

You’re OK if you have a newer build of WhatsApp installed. Do run a check to see if any updates might be available for your device, though.

And please do that check regularly: if you’re using WhatsApp, you’re expecting secure messaging. To get that secure messaging, you have to harden your defenses against attackers who want to punch a hole in your encryption wall.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jIBF0sl6Kuo/