
Equifax CISO: ‘Trust Starts and Ends with You’

Organizational culture is key to good enterprise security posture, Jamil Farshchi told Black Hat attendees.

BLACK HAT USA — Las Vegas — One of the main takeaways from major data breaches like the one at Equifax in September 2017 is that organizational culture is fundamental to a good security posture, said Jamil Farshchi, the credit monitoring bureau’s CISO, in a talk here today.

Farshchi was CISO at Home Depot when the breach at Equifax happened. He was hired at Equifax less than six months later and has been in charge of rebuilding the company’s beleaguered security program. It’s the same role he was called in to play at Home Depot following the 2014 data breach that exposed data on over 50 million payment cards.

“Equifax was meaningfully impacted right out of the gate,” Farshchi said. The company experienced a 40% loss in market cap in the immediate aftermath of the breach. It also lost its CEO, CIO, and CSO and had over $1.25 billion in incremental transformation costs. Recently, Equifax also agreed to pay $700 million to compensate victims of the breach.

Incidents like these tend to focus a lot of attention on the immediate causes and less so on the underlying, systemic issues, he said. At a Senate hearing on the Equifax breach, for instance, many of the questions that Farshchi received were focused on technical issues, such as the company’s patching processes, certificate management habits, and asset inventory-handling capabilities.

While all of the questions were meaningful and relevant, they did not touch on root-cause issues that often have to do with organizational culture and attitudes toward security, he said. “If you are looking at individual breaches, you are missing the bigger picture,” he said.

Farshchi said his experience has shown that five things are key to having an effective security organization. First, the head of security or the CISO needs to have the ability to influence and drive change as required across the entire enterprise. Second, this person also needs to be able to regularly interact with the board of directors and senior leadership on security strategies and direction.

The third big driver is economic incentive. The organizations that are doing well at security tie economic incentives to the effectiveness of the security program.

Farshchi identified the fourth and fifth key factors to security success as risk management and crisis management. Security teams that conduct regular crisis management exercises with executive leadership and the board are often better prepared to deal with an actual crisis, he noted.

Security organizations need to have the ability to identify meaningful risks, and security leaders need to have the conviction to escalate concerns even if doing so means halting or delaying a business initiative, he said. It is absolutely critical for security groups not to just identify an issue but to do something about it, Farshchi said.

He noted a new “say/do” motto within his own organization that emphasizes the idea that if you say something, you absolutely need to deliver on it. “Trust starts and ends with you,” Farshchi said.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/equifax-ciso-trust-starts-and-ends-with-you/d/d-id/1335479?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Behavioral Data Shaped a Security Training Makeover

A new program leveraged employees’ behavioral data to determine where they excelled at security and where they needed improvement.

BLACK HAT 2019 – Las Vegas – As human error continues to top data breach causes, security leaders grapple with how to get employees to care about, and adopt, stronger security habits.

“When you think about the ways how you could lower that number, the first thing that comes to mind is training,” said Aika Sengirbay, current security awareness program manager at Airbnb and former senior security engagement specialist at Autodesk, in the Black Hat briefing “It’s Not What You Know, It’s What You Do: How Data Can Shape Security Engagement.”

“But compliance-focused trainings are not enough to change human behavior, and especially not enough when it comes to security behaviors,” she added. Noticing the old way of training was “broken,” Autodesk sought new ways to improve its employee security training strategy.

Companywide trainings, often done to “check the box” on security awareness, are not typically measured and don’t offer a way to track improvement. All employees receive the same general training, which fails to engage and rarely drives progress. Autodesk wanted its new methodology to recognize employees’ skill levels, respect their time, and motivate them to learn about security.

To accomplish this, Autodesk teamed up with Elevate Security. Their first step was to create a list of desired employee behaviors: handle sensitive data, patch, increase reporting, and use multifactor authentication and VPNs, said Elevate co-founder Masha Sedova.

“If you had a magic wand, what would your employees be doing right now?” Sedova asked the audience. “These end up actually being mindsets; they’re not things you can measure in a tangible way.” This “master list” became a bank of open-ended behaviors they wanted to see.

Step two drilled into “vital behaviors,” which required the team to create a list of questions to prioritize worker activity: “What would be the most damaging to your company?” for example. “What are your most frequent incidents?” “What do your stakeholders care most about?”

Step three was to find data to measure progress and inform future strategies, Sengirbay said. The team ran internal phishing assessments, worked with incident response teams to identify suspicious messages, and consulted with enterprise device admins to see who used password managers. They pulled from the learning management tool to see who had completed training.

An employee was considered “successful” if they had not submitted their credentials to a phishing page, had sent sensitive data only through appropriate channels, had installed and used a password manager within the 30 days prior, and had completed the required training.
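Neither speaker shared the underlying formula, but criteria like these map naturally onto a weighted, sliding score. A minimal Python sketch, with entirely hypothetical field names and weights:

from dataclasses import dataclass

@dataclass
class Behavior:
    phished: bool              # submitted credentials to a simulated phish
    reported_suspicious: bool  # flagged a suspicious message to responders
    pw_manager_days_ago: int   # days since the password manager was used
    training_done: bool        # completed the required training

def snapshot_score(b: Behavior) -> int:
    # Fold the behavioral signals into a 0-100 sliding score.
    score = 50
    score += 15 if not b.phished else -30
    score += 10 if b.reported_suspicious else 0
    score += 15 if b.pw_manager_days_ago <= 30 else -10
    score += 10 if b.training_done else -15
    return max(0, min(100, score))

print(snapshot_score(Behavior(False, True, 7, True)))    # 100: 'indestructible'
print(snapshot_score(Behavior(True, False, 90, False)))  # 0: 'flimsy'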

This all contributed to the “Individual Security Snapshot,” a program designed to present employees with prioritized security behaviors, identify strengths, provide recommendations for training, and reward behaviors. A considerable amount of effort went into creating a dynamic scoring system that was culturally relevant and ongoing, and urged people to change their actions.

“How do we communicate this in a way that actually shifts behavior?” said Sedova of employees’ security feedback. The team wanted their scoring system to be on a sliding scale so people would know they could change it with ongoing good behavior, similar to a credit score. They illustrated the scale with dragons: Poor security habits earned them a “flimsy” dragon on one end of the spectrum; strong habits made them an “indestructible” dragon on the other.

To add incentive, they leveraged social proof, which uses the context of what other people are doing to influence someone’s decision. For example, one alert informed employees they were “3.2 times more likely to fall for a phish” than others in the department. Another said “12% of your department has installed LastPass” and mentioned Autodesk’s CEO was using the tool.

“We’re tapping into the things that make us all human,” Sedova explained. Amazon reviews work the same way: If you know someone else is using something, you’re likely to try it.

Security Snapshot reinforces good behavior by awarding employees virtual badges when they do things like detect all of their phishing email, report suspicious behavior, or complete a training. This intrinsic motivation doesn’t work for everyone, Sedova said, but it’s effective for many.

“As a security professional, I’ve seen security teams do a great job of punishing people who do the wrong thing” but rarely tell employees when they do something right, she added.

The Snapshot approach worked. Sixty percent of employees were willing to engage with Snapshot emails, Sengirbay said. Each email shifted the average security score across the organization. Autodesk was ultimately able to increase the number of people with scores of 70 or above by 170%. Researchers also noticed that low performers opened the emails at only a 17% rate, while those with higher scores and better security practices had a 58% open rate.

“Data can help us see what reality is and stop driving our awareness programs based on assumptions we have,” Sengirbay added. With contextual information to inform their training strategy, the researchers saw opportunity for changes they didn’t know they needed.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology.

Article source: https://www.darkreading.com/endpoint/how-behavioral-data-shaped-a-security-training-makeover-/d/d-id/1335480?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Transport for London Oyster system pulled offline after credential-stuffing crooks board customers’ accounts

Exclusive Transport for London’s online Oyster travel smartcard system has been accessed by miscreants using stolen customer login credentials, The Reg can reveal, forcing IT bods to pull the website offline for a second day.

The UK capital’s transport authority has blamed the intrusions on passengers who used the same email address and password combinations for their Oyster accounts as on one or more hacked websites: criminals who have nicked login details from other sites can use that information to get into the Oyster accounts of people who reuse the same usernames and passwords everywhere. The technique is known as credential stuffing.
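One defence a site can mount against credential stuffing is to check whether a chosen password already circulates in public breach dumps. A minimal sketch using the Have I Been Pwned range API, whose k-anonymity design means only the first five characters of the password’s SHA-1 hash ever leave the machine (nothing suggests TfL uses this particular service):

import hashlib
import urllib.request

def breach_count(password: str) -> int:
    # Return how many times this password appears in known breach corpora.
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
    return 0

if breach_count("password123") > 0:
    print("Seen in breach data - reject it at signup or force a reset")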

A TfL spokesperson told us: “We believe that a small number of customers have had their Oyster online account accessed after their login credentials were compromised when using non-TfL websites. No customer payment details have been accessed, but as a precautionary measure and to protect our customers’ data, we have temporarily closed online contactless and Oyster accounts while we put additional security measures in place.”

In fiscal year 2018/19 nearly a billion rail, tram and bus journeys were made using Oyster cards, netting TfL a cool £2.3bn in revenue, according to its own statistics.

Over the past couple of days, increasing numbers of users noticed that they could not log in online and check their smartcards’ balances or top them up with cash.

In tweets from Londoners asking why they could not access their online accounts and do things like cancel standing orders or change card details, TfL repeatedly insisted that the problem was “performance issues impacting users”.

TfL’s response to the attack on the accounts included taking down staff access to Oyster systems as well, though Londoners using ticket machines to top up at stations seem unaffected so far.

TfL also told us: “We will contact those customers who we have identified as being affected and we encourage all customers not to use the same password for multiple sites.”

The transport authority did not say how many users had been affected. ®

Updated to add at 1629 UTC 8 August

TfL got in touch to tell The Reg: “We have identified around 1,200 accounts that have been accessed maliciously.

“While this is a very small proportion of our 6 million online Oyster card account holders, we want to be absolutely safe and to protect our customers’ accounts so have temporarily suspended online contactless and Oyster accounts while we put additional security measures in place.”

In short, don’t use the same username and password combination across multiple websites.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/08/tfl_oyster_card_outage_online_topup/

Siemens S7 PLCs Share Same Crypto Key Pair, Researchers Find

Researchers at Black Hat USA reveal how security authentication weaknesses in popular Siemens ICS family let them control a PLC.

BLACK HAT USA — Las Vegas — Security researchers who built a phony engineering workstation that was able to dupe (and alter) the operations of the Siemens S7 programmable logic controller (PLC) found that modern S7 PLC families running the same firmware also share the same public cryptographic key, leaving the devices vulnerable to attacks like the ones they simulated.

“All PLCs of the same model have the same key, which means if you crack one, you’ve cracked all of them,” said Avishai Wool, a professor at Tel Aviv University’s School of Electrical Engineering, of the S7-1500 PLCs he and his fellow researchers studied. “So if you are able to talk to one of them, you are able to talk to all of them.” 

Wool, Eli Biham and Sara Bitan of Technion, and Uriel Malin of Tel Aviv University reverse-engineered the S7’s cryptographic protocol and were able to attack the S7-1500 PLC with a fake engineering workstation posing as a Siemens TIA (Totally Integrated Automation) Portal system that forced the S7 to power on and off and follow other commands, as well as download rogue code. An attacker sending a rogue command to the PLC could cause a disruption to a plant’s physical process, the researchers said.

They gained control of the PLC by surreptitiously downloading rogue command logic to the S7 PLC and hiding it so that it was unnoticeable to an engineer. If the engineer were to check the code, he or she would only see the legitimate PLC source code, unaware of the malicious code running in the background and controlling the PLC.

The security weakness here is that in the S7 cryptographic handshake, the TIA does not authenticate to the PLC, according to Wool and Biham. The PLC just authenticates to the TIA, which allowed them to operate the fake TIA engineering workstation.

Overall, the Siemens S7 cryptographic protocol basically falls short, according to Biham, due to its key pair issue. “It authenticates only the device family, not the devices themselves. So it becomes quite easy to impersonate whatever side you wish, especially when you look at the engineering station,” Biham said.
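To see why the one-way handshake matters, consider a toy challenge-response model (an illustration, not the real S7 protocol): the PLC proves it holds the family-wide key, but nothing ever forces the client to prove anything back, so a fake engineering station is accepted simply by speaking the protocol.

import hmac, hashlib

FAMILY_KEY = b"one key shared by every PLC of this model"  # toy stand-in

def plc_prove_identity(challenge: bytes) -> bytes:
    # The PLC answers the workstation's challenge: authentication runs
    # in this direction only.
    return hmac.new(FAMILY_KEY, challenge, hashlib.sha256).digest()

def plc_accept_command(command: bytes) -> str:
    # The PLC never issues a challenge of its own, so any client that can
    # reach it and speak the protocol gets its commands executed.
    return "executed: " + command.decode()

print(plc_accept_command(b"STOP"))  # a rogue client needs no credentials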

The researchers will detail the Siemens security issues here today; they have already reported them to the PLC vendor.

Turn on S7 ‘Access Protection’
Siemens recommends that its S7 customers activate the Access Protection security feature, which it said helps protect against unauthorized changes to the PLC. “No update is necessary,” a Siemens spokesperson told Dark Reading.

The company did not specifically confirm that it would alter the S7 protocol to address the security issue Wool and Biham’s teams uncovered, but said it’s looking at updates: “Siemens constantly enhances the security of its products. Further steps to improve security of the communication are under consideration,” the company said.

Attacks exploiting the S7’s crypto weaknesses would require a well-resourced threat group to pull it off, Wool and Biham note. It took them several years of work, with teams of crypto and ICS SCADA security experts. And Siemens’ protocols are proprietary and not documented publicly, so they had to reverse-engineer them. “Siemens also modified their protocols and software a number of times over the years” while the researchers were studying it, Wool said. So some of their early work actually became obsolete with new updates.

Jacob Baines, principal researcher at Tenable Security, whose team recently hacked a Siemens TIA workstation, calls the research “impressive.”

“But I’m sure it took months of research and reverse-engineering and required them to build upon years of experience in SCADA and network security,” Baines said. “To actually deploy such an attack at an ICS plant, assuming the plant follows the most basic physical and network security, would be incredibly difficult.”

The researchers said the S7’s authentication weaknesses could be improved by ensuring each TIA has its own private key, while the PLC retains and shares the public key. Or the Siemens PLC and TIA could be configured to use a pairing mode using a shared secret.
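A toy sketch of the pairing-mode idea, with a per-pairing shared secret and challenges flowing in both directions (again an illustration, not Siemens’ design):

import hmac, hashlib, os

def answer(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth(plc_key: bytes, tia_key: bytes) -> bool:
    # Each side must answer a fresh challenge, so a party holding the
    # wrong per-device secret fails in either role.
    c1, c2 = os.urandom(16), os.urandom(16)
    tia_checks_plc = hmac.compare_digest(answer(plc_key, c1), answer(tia_key, c1))
    plc_checks_tia = hmac.compare_digest(answer(tia_key, c2), answer(plc_key, c2))
    return tia_checks_plc and plc_checks_tia

secret = os.urandom(32)                     # provisioned at pairing time
print(mutual_auth(secret, secret))          # True: both hold the secret
print(mutual_auth(secret, os.urandom(32)))  # False: impostor rejected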

To prevent an attacker from installing malicious code, operators should activate a password-protected mode on each PLC, they said. In addition, because the shared key pair leaves the Siemens PLCs vulnerable to attacks, the S7 crypto protocol should be updated to address the weaknesses, they said.

But like any other industrial system update, it’s not a given that plants will be able to install any upcoming Siemens S7 patches or updates given the risk of potentially disrupting operations. “Every deployment, especially in SCADA, is different. Patch cycles can be very long. Given the varied nature of patch cycles, I can’t speculate as to how many customers are likely to apply updates,” Tenable’s Baines said.

Meanwhile, Siemens said it will publish “further information regarding product security” on its Siemens ProductCERT site.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing and Secure Enterprise.

Article source: https://www.darkreading.com/vulnerabilities---threats/siemens-s7-plcs-share-same-crypto-key-pair-researchers-find-/d/d-id/1335452?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft puts another nail in VBScript coffin

Listen up, VBScript fans: your favourite scripting language’s days are numbered. Microsoft has announced that it will turn off support for the language by default in pre-Windows 10 versions in its Patch Tuesday updates on 13 August.

Microsoft first began killing off VBScript in December 2016, when it deprecated the language in Internet Explorer 11 for pages displayed in IE11 mode. However, it still ran in webpages displayed in legacy document modes – display modes designed to support older versions of IE while web developers transitioned to the standards used in IE11.

The support for legacy document modes was a temporary solution, though. Those modes are deprecated in Windows 10 and the Edge browser doesn’t support them at all. In a 12 April 2017 post, Microsoft announced that it would be further stamping out VBScript in IE11 by blocking VBScript in all document modes. It added:

In subsequent Windows releases and future updates, we intend to disable VBScript execution by default in Internet Explorer 11 for websites in the Internet Zone and the Restricted Sites Zone.

Now, it is delivering on that promise. On 2 August, it announced that cumulative updates for Windows 7, 8, and 8.1 due next week will disable VBScript by default across the board. It already made that change for Windows 10 in its July 2019 cumulative update, it said.

Created in 1996, VBScript is a dynamic scripting language that Microsoft modelled on the Visual Basic programming language. Windows sysadmins could use it to automate computing tasks, although many have since switched to PowerShell. It was also often used for server-side processing in web pages, typically in Microsoft Active Server Pages (ASP).

Microsoft considers VBScript a thing of the past and calls it a legacy language in its latest post. It abandoned VBScript in its Edge browser because JavaScript had become the de facto standard.

There seems little reason to use VBScript unless it is embedded in a legacy website that a company absolutely must use and for some reason can’t update. But there are definite reasons to turn it off. Attackers love VBScript, because it offers an easy way to manipulate a machine.

This doesn’t mean that you can’t use VBScript if you really have to. You can still change the settings for VBScript execution manually in IE11 in three ways. You can change it on a per-site basis by configuring the site security zone, you can alter the registry, or you can make a Group Policy change.
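For the registry route, the per-zone setting can also be driven programmatically. In this sketch, the zone number (3 = Internet zone), the value name 140C for VBScript execution, and the data value 3 for “disable” are all assumptions drawn from Microsoft’s zone-settings scheme; verify them against Microsoft’s documentation before relying on this:

import winreg

# Assumptions: zone 3 = Internet zone; value "140C" = VBScript URL action;
# data 0 = allow, 1 = prompt, 3 = disable.
ZONE = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "140C", 0, winreg.REG_DWORD, 3)  # disable VBScript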

Microsoft also blocked activation of VBScript controls in Office 365 client applications last year.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OFyXmnmBnSQ/

S2 Ep3: Ransomware, surveillance and data theft – Naked Security Podcast

Episode 3 of the podcast is now live. This week, host Anna Brading is joined by Paul Ducklin, Mark Stockley and Ben Jones.

In this episode: Duck gives a short cybersecurity-flavoured eulogy for his father, who died last week [1’10”]; we lament the woeful state of stock imagery in the cybersecurity industry [3’27”]; Ben tells you how to keep the crooks out of your home network [8’21”]; we discuss whether the government should be able to read our private messages or not [18’10”]; and Mark shares the latest research from Sophos about the Baldr malware and the cybercrooks behind it [29’15”].

We love answering your questions on the show, so please comment below or ask us on social media!

Listen now and share your thoughts with us.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FNaVR9sPAMM/

More than 2m AT&T phones illegally unlocked by bribed insiders

The US has indicted a 34-year-old citizen of Pakistan, accusing him of being the leader of a conspiracy to illegally unlock more than 2 million AT&T cell phones for profit – a conspiracy that led to the phones slipping loose from their phone service and/or payment plans and which has cost the company millions in revenue.

The Department of Justice (DOJ) announced on Monday that Hong Kong police arrested Muhammad Fahd on 4 February 2018 and, at the request of the US, extradited him on 2 August 2019 (last Friday). He appeared in federal court in Western Washington on Monday to face 14 counts, which are outlined in this indictment.

A web of insiders

According to the indictment, Fahd allegedly paid more than $1 million in bribes to AT&T workers to plant malware and misuse the company’s computer networks to illegally unlock cellphones. To do that, the insiders disabled proprietary software that locked AT&T phones and prevented them from being used on other carriers’ systems. Unlocked phones are a hot commodity: they can be resold and used on any compatible network around the world.

When people slip out of the proprietary locking software, they’re also slipping out of the long-term service contracts that bind them to AT&T’s wireless network. That’s a lot of lost profit for AT&T – as in, millions of dollars. As the indictment describes, the company makes expensive phones affordable – top-end iPhone models, for example, sell for over $500 – by subsidizing the purchase price or allowing customers to buy them on interest-free installment plans. Either way, customers agree to enter into one of those long-term service plans.

Fahd allegedly recruited and paid AT&T insiders to use their computer credentials and access to disable the locking software that kept customers tied to the network and/or the payment plans. He allegedly paid them hundreds of thousands of dollars, with one co-conspirator making $428,500 over the five years Fahd allegedly ran this scheme.

The scheme also involved planting malware that could be used to issue bogus unlock requests.

What happened?

Between 2012 and 2017, Fahd allegedly recruited AT&T employees at the company’s call center in Bothell, Washington. Some of the early recruits were paid to point out other employees who could be bribed and who would be good candidates for participating in the scheme. The DOJ says that so far, three of those co-conspirators have pleaded guilty, admitting they were paid thousands of dollars for helping out with Fahd’s alleged scheme.

Fahd allegedly started out by sending the bribed employees batches of international mobile equipment identity (IMEI) numbers for cell phones, none of which were eligible to be removed from AT&T’s network. The insiders would then unlock the phones. Some of the employees got fired, so the next step was for the remaining co-conspirators to work with Fahd to allegedly develop and then install tools that would enable Fahd to remotely get into AT&T computers and unlock cell phones. Fahd and a second co-conspirator (who the DOJ says is now deceased) allegedly dropped off bribes to the insiders in person or via payment systems such as Western Union.

Starting around April 2013, the alleged ringleader bribed employees to plant malware on AT&T computers so that he could gather information on how the company’s network and software worked. Using that information, he and his conspirators allegedly developed malware that could generate bogus unlock requests – requests that Fahd and others could generate remotely.

Then, from November 2014 to September 2017, the conspirators allegedly bribed insiders to install hardware such as wireless access points (WAPs) in AT&T’s physical facilities. The hardware was used to process those unauthorized unlock requests. Both the malware and the hardware relied on the use of insiders’ network credentials.

Up to 20 years

Fahd has been charged with conspiracy to commit wire fraud, conspiracy to violate the Travel Act and the Computer Fraud and Abuse Act (CFAA), four counts of wire fraud, two counts of accessing a protected computer in furtherance of fraud, two counts of intentional damage to a protected computer, and four counts of violating the Travel Act. If convicted, he’ll be looking at up to 20 years in prison, though maximum sentences are rarely handed down.

Stomping on bugs

The day after the DOJ announced Fahd’s extradition, AT&T announced the launch of a new public bug bounty program on the HackerOne bug-reporting/bug bounty platform.

There’s far more to AT&T than phones, of course, which is reflected in the new bug bounty’s program guidelines: it applies to “security vulnerabilities found within AT&T’s Environment, which includes, but is not limited to, AT&T’s websites, exposed APIs, mobile applications, and devices.”

According to Bleeping Computer, AT&T launched the program in July. It was initially invite-only, with AT&T reaching out to between 100 and 150 researchers whom it’s worked with in the past on the AT&T Developer API Platform.

Since its launch, AT&T has received 49 submitted bug reports and paid out a total of $8,150 in bug bounties. The average bounty is currently at $150, with the highest hitting $750.

This is what the telecom giant is paying for varying levels of bug severity:

  • Critical: $2,500
  • High: $750
  • Medium: $300
  • Low: $150

HackerOne told Bleeping Computer that AT&T is the first communications company of its size to launch a public bug bounty program of this scale with HackerOne.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5KmgBCGFT2U/

Cisco 220 Series Smart Switch owners told to apply urgent patch

Businesses running any of Cisco’s 220 Series Smart Switches have some urgent patching work on their hands after the company announced that three serious flaws have been discovered by a researcher in its web management interface.

Two of the three flaws – CVE-2019-1913 and CVE-2019-1912 – are rated ‘critical’ because they allow remote code execution (RCE) and authentication bypass, respectively.

This means that hackers could gain root and compromise the switches by sending malicious requests.

Being able to do this depends on whether HTTP/HTTPS is enabled (the switches can also be managed using the command line or SNMP), which can be determined by entering the show running-config command via the command line.

The switch is not vulnerable if the following lines appear in the configuration:

no ip http server

no ip http secure server

The web management feature is enabled under Security > TCP/UDP Services.

The third flaw – CVE-2019-1914 – is a command injection issue that is less dangerous because an attacker would first need to authenticate themselves by stealing or cracking the management credentials.

Announced in 2014, the 220 Series makes an inviting target for hackers simply because it’s a small-business product used by large numbers of customers across the world.

The list of affected 220 Series models: SG220-50P, SG220-50, SG220-26P, SG220-26, SF220-48P, SF220-48, SF220-24P, SF220-24.

All three vulnerabilities were discovered by a researcher identified as ‘bashis’ and fed to Cisco through the disclosure program run by Israeli company VDOO, which last year discovered security flaws in Foscam webcams.

What to do

For networks using the web management interface, the only solution is to apply firmware version 1.1.4.4 (all previous versions are vulnerable). Although Cisco says no mitigations are possible, logically, a short-term mitigation would be to turn off the management interface.
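Pending the firmware update, turning the web UI off from the command line amounts to negating the same two settings the advisory checks for; a sketch (exact syntax may vary between firmware versions):

configure
no ip http server
no ip http secure-server
exit
copy running-config startup-config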

The 220 Series appears to have been patched for another medium-rated flaw in early 2019, with four further flaws fixed during the course of 2016.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/K1mhEB2NowY/

Twitter may have shared your data with its ad partners without your permission

Twitter may have been sharing some users’ data without their say-so, it announced.

There are two issues, both related to Twitter finding out that it’s been ignoring user preferences for how our data gets used and/or shared. As of Monday, both problems had been fixed.

Glitch No. 1: Sharing data without permission

The first bug has been active since May 2018. It had to do with users clicking or viewing an ad for a mobile app and then interacting with that app. If you did, Twitter says it may have shared certain data about you – for example, your country code, your device type, and information about the ad and when you interacted with it.

Even if you didn’t give it permission to share your data, it may have done so, passing it along to the business partners that it uses for ad tracking and measurement.

Twitter has published a full list of the user data that was exposed, as well as of the partners who may have gotten the information.

Twitter didn’t identify which mobile app(s) triggered the bug. Nor did it say how many users were affected. Its investigation is ongoing, Twitter said, and once it finds out more details such as number of affected users, it will let us know:

We know you will want to know if you were personally affected, and how many people in total were involved. We are still conducting our investigation to determine who may have been impacted and if we discover more information that is useful we will share it.

Glitch No. 2: Showing device-specific ads without permission

Twitter says that it messed up on a second issue starting in September 2018. This one has to do with a process it uses to tailor ads. It may have shown some users ads based on “inferences” it made about the devices they use, again without their say-so.

This time, the data stayed within Twitter instead of getting shared with its partners. It didn’t contain anything highly sensitive, such as passwords or email accounts.

As Twitter explains on its page about personalization, when you log in to Twitter on a browser or device, it associates that browser or device with your Twitter account. Even when you’re not logged in to Twitter, it may also get information about your devices or browsers. For example, one of its partners might share the information, or you might visit twitter.com, or you might visit one of its advertiser’s sites or use their mobile apps.

Most commonly, it will use your IP address and the time that it got the information and will infer that certain browsers or devices are associated with one another or with your account.

In order to personalize your Twitter experience, the platform figures out what browsers and devices are associated with your account. For example, if you use Twitter on Android around the same time and from the same network where you browse sports websites with embedded Tweets on a computer, Twitter might infer that that Android device is related to your laptop and will then suggest sports-related content, such as sports-related ads and Tweets, on your Android device. It makes similar inferences relating to your email addresses, which may share first names, last names or initials, and will later serve you advertisements from advertisers that were trying to reach email addresses with those elements.

You’re supposed to be able to control whether Twitter does all this. (You can customize this in your Twitter personalization and data settings.) You should be able to customize whether or not it can make inferences based on the browsers or devices you use when you’re not actually logged in to Twitter, or the email addresses and phone numbers that are similar to the ones that you’ve linked to your Twitter account.

Well, oops redux. That problem, like the data-sharing with partners one, was fixed as of Monday.

What to do?

Nothing. Twitter says it doesn’t believe you need to do anything, besides check your settings.

Months of multi-goofs

These are the latest in a laundry list of Twitter’s “oops!” A few months ago – May 2019 – Twitter admitted to a similar sharing gaffe with a partner: it had been mistakenly collecting and sharing some iOS accounts’ location data with one of its partners, even if a user hadn’t opted in to sharing the data.

Before that – January 2019 – Android users had their own run-in with mistaken, unauthorized sharing. Twitter said that it found a bug that exposed some Android private tweets to public view. That one went unnoticed for more than 4 years.

And before that? In September 2018, Twitter admitted to senators that it was still allowing external app developers to access its users’ Gmail accounts, months after the Wall Street Journal published a story about it. The senators had sent queries about Twitter amassing large amounts of user data and sharing it, a la Cambridge Analytica.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zUx2z5TCqdU/

Update your iPhone – remote control holes revealed by researchers

Google Project Zero researcher Natalie Silvanovich has just published a fascinating blog article entitled The Fully Remote Attack Surface of the iPhone.

This work, carried out by Silvanovich and research colleague Samuel Groß, was also the topic of a presentation she gave at this year’s Black Hat conference in Las Vegas.

Silvanovich’s article is technical but not overly so, making it well worth a look even if you don’t have any formal coding experience.

Notably, she reminds us all how easy it is to open up software to remote attacks, even if that software isn’t what you’d conventionally think of as server-side code, and even if it’s running on a device that you wouldn’t think of as a server.

By the way, despite the revelatory nature of the article and her talk, there’s no need to panic.

At least, you don’t need to be too worried if you’ve already applied the latest Apple updates, because the holes that Silvanovich is now talking about in detail are already patched.

If you haven’t brought your iPhone up to iOS 12.4 yet, do it now!
Settings > General > Software Update is the quick way to check.

To explain.

An exploit that gives RCE, short for remote code execution, does exactly what its name suggests: by doing something unexceptionable, and without seeing any warnings, even well-informed users can be tricked into giving crooks access to their device.

A fully remotable exploit is even worse, because there’s no need for users to do anything except have their devices turned on and running normally.

A booby-trapped website that crashes and takes over your browser gives the crooks RCE.

Likewise, before Microsoft turned off AutoRun by default for USB devices, the proverbial USB-stick-in-the-car-park attack was considered a reliable way to achieve RCE because the chosen malware typically launched as soon as someone plugged in the booby-trapped USB key.

There wasn’t any sort of Are you sure? or [Cancel]/[OK] popup to sound a warning and give you a chance to head off the malware.

But even though visiting a web page or plugging in a USB device isn’t a difficult bridge for crooks to talk you into crossing, those attacks aren’t quite the holy grail of RCE, because some user engagement is needed.

A fully remote attack “just happens”, like the infamous Internet Worm of 1988, or the super-widespread SQL Slammer worm of 2003.

Those attacks sent network data that your computer was deliberately listening out for – no trickery required to get a foot in the door – but that your computer mishandled.

This allowed the crooks to package executable code inside their data packets and to achieve RCE in an entirely unattended and automatic way.

One of the Internet Worm’s attack methods, for example, exploited badly-configured email servers on which debugging mode was incorrectly enabled.

If you’d inadvertently left the debug option turned on, emails laid out in a certain way were treated as commands to execute (!), not as messages to be passed on, so the email server ran the malware immediately after accepting it.

The worm’s emails were directly dangerous without any user ever needing to receive them, let alone to open them or extract and run attachments from them.

Phones ≠ Servers

You might imagine that devices such as mobile phones, which generally don’t operate as servers themselves, would largely be immune to this sort of fully remote attack.

After all, you don’t generally run a mail server or a SQL server on your phone, and even if you wanted to, Apple probably wouldn’t let that sort of software into the App Store.

Even if you were to jailbreak your phone to install server software, your ISP might not allow incoming network connections to reach your phone at all, even if you were willing to accept them.

But, as Silvanovich reminds us, phones are all about messaging, and there are many sorts of message that we expect to be told about even before they arrive in full.

(An incoming call is the most obvious example: we expect the phone to ring, and the calling line’s number to be extracted and displayed, not only before we tap any icon to accept the call but also even when our phone is at the lock screen.)

In other words, even though we think of phones as network clients rather than network servers, there are plenty of client-side apps that download, process, act upon and display data that came from an arbitrary outside source.

We’re not just talking about things like automatic software or anti-virus updates that come from a known, trusted and well-regulated service, but also about content such as text messages or emails that were carefully and maliciously crafted by an unknown, untrusted and deliberately malicious creator.

Silvanovich identified five main application areas of interest on the iPhone, covering iOS subsystems that are specifically designed to fetch, process and tell you about incoming content: SMS, MMS, Visual voicemail, email and iMessage.

In the end, the researchers didn’t find any exploitable holes in SMS or MMS, perhaps because these subsystems are rather old-school and therefore have functionality that is both well-understood and somewhat limited.

But the others weren’t so robust.

As you can imagine, the more features, the more message types, the more different options, the more plugins and the more file formats an app supports, the more likely it is for a bug to exist in handling unusual, little-known or malevolently crafted files.

For example, you’d expect image processing software that can only display old-school BMP files (simple structure and plain, uncompressed data) to be less likely to crash on weird files than software that can handle 72 different image formats with varying levels of complexity.

The more code you need to write to process incoming data and to handle all the possible variations, the harder it is to get it right; the harder it is to test thoroughly; the more likely it is to contain subtle bugs; and the longer it will take for every possible path through the maze of code to get tried out when handling real data in the real world.

Simply put, we say that its attack surface area is larger.
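As a toy illustration of attack surface, consider a renderer’s format dispatch in Python: every extra handler is more code that runs on untrusted bytes before the user does anything at all:

def parse_bmp(data: bytes) -> str:
    return "rendered BMP"   # simple format, one well-worn code path

def parse_png(data: bytes) -> str:
    return "rendered PNG"   # compression, chunking, many more corner cases

def parse_gif(data: bytes) -> str:
    return "rendered GIF"   # ...and so on for dozens of further formats

HANDLERS = {b"BM": parse_bmp, b"\x89PNG": parse_png, b"GIF8": parse_gif}

def render(data: bytes) -> str:
    for magic, handler in HANDLERS.items():
        if data.startswith(magic):
            return handler(data)  # a bug in any single handler is enough
    raise ValueError("unknown format")

print(render(b"BM plus fake bitmap bytes"))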

More code, more bugs

Although Silvanovich and Groß did find vulnerabilities in Visual voicemail and in the iOS’s email-handling system, these weren’t terribly significant.

But via iMessage they found at least eight security holes, listed by their CVE numbers: CVE-2019-8624, -8663, -8661, -8646, -8647, -8662, -8641 and -8660. (That’s the order in which they are covered in the article, which is why they are not in numeric sequence here.)

Note that even though Apple lists CVE-2019-8661 as patched in its latest iOS security advisory, the Googlers haven’t disclosed details of this one because they don’t think Apple’s update has fully fixed the problem yet.

What to do?

  • Get the latest iOS update if you haven’t yet. Many or most of the bug numbers listed above become irrelevant once you’ve applied the patches.
  • Get the next update as soon as it comes out. It sounds as though Apple is still working on CVE-2019-8661, and that Google is giving the company some more time to knock the bug on the head completely.
  • Less is more. If you are a programmer yourself, beware of writing code that does more than it needs to, or that itself depends on so many other modules or plugins that you can’t easily vouch for the whole thing, no matter how confident you are that your code is bug-free.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ri0849b3RDo/