STE WILLIAMS

Airbus Employee Info Exposed in Data Breach

Few details as yet on a cyberattack that hit Airbus’ commercial aircraft business.

Airbus has announced that its commercial aircraft business information systems were breached and some personal information of its employees was accessed.

According to the company’s announcement of the breach, “This is mostly professional contact and IT identification details of some Airbus employees in Europe.”

Airbus said that it is investigating the breach and strengthening its existing IT security measures. The company said it informed all affected employees and relevant authorities within 72 hours of becoming aware of the breach, in compliance with Europe’s General Data Protection Regulation (GDPR).

For more, read here and here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/airbus-employee-info-exposed-in-data-breach/d/d-id/1333770?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

8 Cybersecurity Myths Debunked

The last thing any business needs is a swarm of myths and misunderstandings seeding the common errors that organizations of all sizes make in safeguarding data and infrastructure.

Cybersecurity plays an integral role in any good business model. You’d be hard-pressed to come across an enterprise that doesn’t have some form of cybersecurity policy as part of its infrastructure. But even cybersecurity programs built with good intentions can fall short. Why? The best intentions are often based on an array of myths perpetuated by a combination of mistrust, misunderstanding, and lack of information. These are the myths of cybersecurity, and I’m going to break down some of the most common ones found throughout the tech industry.

Myth 1: You’re Too Small to Be Attacked
You read about data breaches all the time. Big companies suffer penetration attacks, with millions of user records compromised by hackers. “Well,” you think, “that’ll never happen to my business; there’s not enough value, we’re too small.” And that’s just wrong. In 2016, 43% of all cyberattacks were conducted against small to medium-sized businesses. This is a growing trend, with malware and malicious attacks escalating in both complexity and frequency. You’re as likely a target as any major enterprise, so don’t buy into this line of thinking.

Myth 2: Passwords Are Good Enough
The downfall of any security policy is the lazy “set it and forget it” mentality. One symptom of this lethargic approach is adopting complex passwords and believing they’re good enough. You have your staff memorize a 12-character login phrase with special characters, caps, and numbers? That must be enough!

It’s not, because a mix of social engineering and complex malware attacks can circumvent it with alarming ease. Password reuse across multiple platforms makes you dependent on the security of other organizations: a breach of their password database places accounts at risk on your systems. Malicious third parties employ a wide range of bots and automated attacks to hasten the process, and without two-factor authentication and a level of encryption (especially on vulnerable public networks), one password just isn’t sufficient in today’s dangerous cyber world.
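To see why a second factor helps, here is a minimal sketch (not from the article) of how a time-based one-time password (TOTP, the common second factor) is derived per RFC 6238, using only Python’s standard library. Even a stolen or reused password is useless without the shared secret that generates the rotating code.

```python
# Minimal TOTP (RFC 6238) sketch: a rotating 6-8 digit code derived from
# a shared secret and the current time, used alongside a password.
import hashlib
import hmac
import struct
import time


def totp(secret, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password from a shared secret."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                       # moving time factor
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: secret "12345678901234567890", T = 59 s, 8 digits
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds, an attacker who phishes or cracks the password alone cannot replay a captured login.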

Myth 3: Antivirus Is Good Enough
The “set it and forget it” philosophy applies equally to your antivirus setup. It’s tempting to believe the fancy software your enterprise invested so much capital in will thwart any and all attackers, but again, that’s not true. Antivirus is of foundational importance, but good cybersecurity requires a rigorous program that includes protection, detection, and response preparation along with safe practices for user behavior.

Myth 4: It’s IT’s Problem
Computers are hard, so let IT handle everything, right? This, again, is a foolish way to look at cybersecurity. Some businesses lack the capital to hire experienced staff. And, even with a good IT team, said staff are limited in what they can handle. If you expect your IT team to manage every single tech-related problem, from resetting logins to managing network infrastructure and dealing with potential intrusions, you’re asking for trouble. Every staff member should be familiar with good cybersecurity practices.

Myth 5: BYOD Is Safe
While a BYOD (bring your own device) policy is popular and cost-effective, it’s a whole new avenue of risk for a business. Assuming smartphones and mobile devices brought by staff are secure is a serious error in judgment. Apps with personal data, logins, and business-related info are easy to compromise, and every unsecure device is just another potential hole in your cybersecurity foundation. It’s important that employees follow rigorous guidelines when using their own hardware.

Myth 6: Total Security Is Possible
The eternal struggle of cybersecurity is its constant need to adapt to new threats. As security teams adapt strategies and tactics to meet those threats, attacks evolve to counter the changes. It’s a constant battleground, meaning total security is impossible to achieve. A business should always expect some form of cyberattack and should always have backup and disaster recovery (BDR) measures in place, along with incident and crisis preparedness. You can only take a proactive approach toward malicious threats, not counter them in their entirety.

Myth 7: You Don’t Need Assessments and Tests
I couldn’t think of a more disastrous approach to a cybersecurity plan. This is like working on a term paper and submitting it with zero revisions, edits, or extra eyes. You cannot reasonably expect your current cybersecurity plans to be foolproof without conducting assessments and penetration tests. These self-evaluations are invaluable, revealing where you’re weakest and strongest.

Myth 8: Threats Are Only External
Competent security requires just as hard a look at internal staff and policies as at third-party attacks. This is because — whether from human error or malign intent — cybersecurity risks are as likely to emerge from inside your enterprise as from outside of it. More is at risk, too, considering staff are the pathway to the most sensitive info.

Brian Engle’s role as CISO/Director of Advisory Services allows him to lead the delivery of strategic consulting services for CyberDefenses’ growing client base with risk management support, information security program assessment, and cybersecurity program maturity …

Article source: https://www.darkreading.com/vulnerabilities---threats/8-cybersecurity-myths-debunked/a/d-id/1333746?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple kicks Facebook’s snoopy Research app out of the App Store

For three years, Facebook has been secretly paying volunteers – including teens – to install a virtual private network (VPN) app called Facebook Research that plants a root certificate on their phones, according to TechCrunch.

That certificate gets the company “nearly limitless access” to the device, TechCrunch reports.

It’s unclear exactly what data the Facebook Research app is sniffing for, but Will Strafach, a security expert with Guardian Mobile Firewall, said that it can get anything it wants:

If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.

When the BBC visited one of the app’s sign-up pages, it stated that Facebook would use the information to improve its services, and that there are “some instances” when the data is collected “even where the app uses encryption, or from within secure browser sessions”.

Yes, this is for real, Facebook says, but it was so not secret. The app’s name had “Facebook” in it, the company said in a statement:

Key facts about this market research program are being ignored. Despite early reports, there was nothing ‘secret’ about this; it was literally called the Facebook Research App. It wasn’t ‘spying’ as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. Finally, less than 5 percent of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms.

As far as enrolling teens goes, when BuzzFeed’s Ryan Mac tried to sign up, he found that the parental consent process was a bit of a joke: all it required was an email address and a click.

And as far as “how secret is it when it says Facebook in the name” goes, the page that the BBC came across stated that participants had to agree…

…[not to disclose] any information about this project to third parties.

The report from TechCrunch’s Josh Constine is very detailed and very much worth a read, but here are some of the takeaways:

Oh no, here we Onavo go again

If news about a snooping VPN app from Facebook is giving you déjà vu, it’s because Facebook Research is a kissing cousin to the company’s Onavo VPN. It was Strafach who detailed, in March 2018, how Onavo Protect was snooping on users even when the VPN was turned off, telling Facebook:

  • When users’ mobile device screens were turned on and off
  • Total daily Wi-Fi data usage in bytes
  • Total daily cellular data usage in bytes
  • How long the VPN was connected to Facebook even when a user’s screen was on or off.

As the Wall Street Journal had reported in 2017, Facebook had used the Onavo-supplied data to track its competition and scope out new product categories. Private, internal emails from Facebook staff that were published last month revealed that Facebook had relied on the Onavo data when it decided to purchase WhatsApp, for example. The company also used the Onavo data to track usage of its rivals and to block some of them – including Vine, Ticketmaster, and Airbiquity – from accessing its friends data firehose API.

In August 2018, Apple politely suggested that the privacy-violating app shove off. Facebook agreed and pulled it out of the App Store.

That was good for the privacy of iOS users, but the past few weeks have brought new revelations about Android apps secretly sharing data with Facebook, even when users are logged out or don’t even have a Facebook account.

TechCrunch reports that it got a tip that Facebook was paying users (up to $20) to sideload a similar VPN app after Apple gave Onavo the boot.

Sure, Apple banned Onavo, but that didn’t cure Facebook’s data thirst. TechCrunch’s investigation found that starting in 2016, Facebook had been working with three app beta testing services to distribute Facebook Research: BetaBound, uTest and Applause. Following the Onavo backlash, since at least mid-2018, the company’s been calling Facebook Research “Project Atlas.” It had yet another similar program called “Project Kodiak.”

Worming into Apple

Like Onavo before it, the Facebook Research app took advantage of Apple to get where it wanted to go. Namely, it circumvented the App Store by using testing tools from Apple that are typically used to install software that’s still in development. Those tools are supposed to be used only in certain, specific cases, such as when companies want to install internal apps on iPhones – including, for example, monitoring apps or those that add extra security – that they provide to their employees.

But as the BBC reports, Apple’s Developer Enterprise Program License Agreement makes it clear that the installation of root certificates must only be used for “specific business purposes” and “only for use by your employees”…

…not for people recruited by app beta-testing companies in ads that deeply bury Facebook’s involvement.

On Tuesday, seven hours after TechCrunch published its report, Facebook said it would shut down the iOS version of its Research app. On Wednesday, an Apple spokesperson told TechCrunch that the company had already blocked Facebook Research the day before, before Facebook “voluntarily” pulled the app.

Apple confirmed that Facebook had violated its policies:

We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.

Facebook Research will still run on Android, adding yet more botheration to a month in which some Android users have found that they can’t scrape Facebook off their devices.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Pe_LaBgHBQo/

Phone cloner gets 65 months in jail

A US court has sentenced a man to over five years for his part in a massive telecommunications fraud involving stolen cellphone accounts and reprogrammed phones.

After pleading guilty last November, 54-year-old Braulio De la Cruz Vasquez received a 65-month sentence this week for participating in a scheme that used stolen cellphone credentials to route tens of thousands of international calls to countries including Cuba, Jamaica, and his home country the Dominican Republic, from which he was extradited in 2018.

Vasquez, formerly of West Palm Beach, Florida, was the fifth person to be sentenced for his part in the phone fraud ring, which investigators codenamed Operation Toll Free.

Vasquez was indicted in 2016 for committing wire fraud and for aggravated identity theft. According to the original 2016 indictment, Vasquez worked the scheme in Florida with fellow defendants Ramon Batista (aka ‘Porfirio’), Edgar Lopez, Farintong Calderon, Edwin Fana and Jose Santana (aka ‘Octavio Perez’).

From approximately August 2009 until February 2013, the group routed international telephone calls through stolen cellphone accounts, which they shared via email and online chat. Vasquez received more than 700 emails containing 2,158 identifying numbers associated with cellphone account holders around the US.

They got control of the numbers by stealing personal information, which they used to either set up a new account in a victim’s name or convince a phone carrier to give them control of the victim’s cellphone account so that they could clone the victim’s SIM. They would also sometimes change phone-specific serial numbers, incrementing them and checking to see if they could use them to access an existing cellphone account.

The gang used specialist hardware to program new phones with the stolen numbers and then connected those phones to VOIP telecommunications equipment at call sites that they ran, which would take incoming calls and route them to the phones.

The criminals acted as a telecommunications reseller, charging carriers to connect international calls on their behalf. Vasquez, who operated a call site in West Palm Beach, admitted to collecting tens of thousands of dollars in payments from a single VOIP communications carrier.

The stolen accounts would carry the calls, the phone bills for those calls would go to the victims, and the victims’ carriers would usually be unable to retrieve payment for the stolen calls, effectively making them the ultimate victims of the fraud.

Vasquez’s indictment could land him a maximum penalty of 20 years’ imprisonment on one count of wire fraud and five years for conspiracy to commit wire fraud, along with a two-year sentence for identity theft that would have to be served consecutively with any other sentence. Other defendants have already received penalties ranging from 36 months to 75 months.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RcCVb78a_XM/

14k HIV+ records leaked, Singapore says sorry

For the second time in seven months, Singapore has lost control of its citizens’ private medical records. This time, it’s the records of people diagnosed with HIV.

The Ministry of Health (MOH) on Monday announced that police had alerted the ministry that the HIV status of 14,200 people, plus confidential data of 2,400 of their contacts, is in the possession of somebody who’s not authorized to have it and who’s published it online.

The records were those of 5,400 Singaporeans diagnosed with HIV up to January 2013 and of 8,800 foreigners diagnosed with HIV up to December 2011. They included names, identification numbers, phone numbers, addresses, HIV test results, and related medical information. Also included were names, identification numbers, phone numbers and addresses of 2,400 of the patients’ contacts.

The MOH has been notifying, and offering help to, affected people since Saturday. The ministry says that it’s also worked with “relevant parties” to disable access to the records.

However, whoever published the confidential records still has them, so they could be published yet again. The ministry is scanning the internet for signs of further disclosure.

The MOH didn’t specify whether the sensitive information had been seized in a cyberattack or whether the US man it suspects of having the records got his hands on them some other way.

Regardless of the “how,” the ministry thinks it knows the “who.”

It’s accusing two men: the one who it says possesses, and who allegedly published, the HIV records is Mikhy K. Farrera Brochez. Brochez is a male US citizen who was living and working in Singapore from January 2008 until June 2016, when he was arrested. In March 2017, Brochez was convicted of fraud and drug-related offenses and sentenced to 28 months in prison.

The fraud offense relates to Brochez lying about having HIV to the Ministry of Manpower so that he could work. Singapore in the past banned and blacklisted foreigners with HIV from working, though it amended that regulation in 2015 to allow holders of short-term visit passes who have HIV to enter.

Brochez was also convicted for furnishing false information to the police during a criminal investigation, and using forged degree certificates in job applications. When he finished his prison sentence, he was deported, though the ministry didn’t say where he was sent.

The other man who the ministry says was responsible for what would ultimately be the doxxing of the HIV records was Brochez’s then-boyfriend and current spouse, a male Singaporean doctor named Ler Teck Siang. Ler, who headed up MOH’s National Public Health Unit (NPHU) from March 2012 to May 2013, had access to, and responsibility for the safekeeping of, the HIV Registry.

Two years after Ler resigned in January 2014, he was charged with failing to take reasonable care of HIV records, with helping Brochez to pull off his fraud, and with lying to the police and the MOH.

According to Channel News Asia, Ler also helped Brochez by supplying his own blood for government tests. Ler was sentenced to two years in jail in September 2018 – a sentence that he’s now appealing.

The MOH said that it filed a police report in May 2016 after learning that Brochez was in possession of confidential information that appeared to be from the HIV Registry. Police searched his, and Ler’s, properties. But it wasn’t until after Brochez had been deported that the MOH learned he still had part of the HIV records – data he had held, but apparently not yet doxxed, two years prior.

That brings us up to last Tuesday, 22 January, when the MOH got a heads-up about Brochez potentially still having data from the HIV registry… information that, this time around, he allegedly published online.

Singaporean police are currently looking for assistance from unspecified foreign countries as they continue to investigate Brochez. The ministry is appealing to the public: if anybody comes across related information, please don’t share it, the MOH asks.

The second medical records pratfall

The last time Singapore suffered a medical-records pratfall was in July 2018, when 1.5 million patients’ records – including those of Prime Minister Lee Hsien Loong – were seized and illegally copied in a malicious cyberattack on the databases of the city state’s SingHealth hospital group.

A committee of inquiry published its report into the hack earlier in January, saying that the attacker(s)’ success in obtaining and exfiltrating the records wasn’t inevitable. From the report:

IHiS and SingHealth should have been better prepared and more robust in their actions. If they had done so, the cyber-attack could have been limited or even stopped.

The inquiry found that staff had inadequate cybersecurity awareness, training, and resources; that there were a number of vulnerabilities, weaknesses, and misconfigurations in the SingHealth network and database system that could have been remediated before the attack; and that SingHealth “had no management line of sight with regard to the assessment of cybersecurity risks.”

Although it’s not clear that a cyberbreach took place in the doxxing of the HIV records – the records could have been in paper form, for all we know – the government’s assessment of the SingHealth breach makes clear that there were ample vulnerabilities and lack of preparedness that could allow breaches to happen.

That’s the core problem of any potential breach: without the right precautions in place, you have little choice but to assume the worst. Even if nothing “cyber” actually led to the HIV records being breached, it sounds as if the fruit was ripe for the cyber plucking.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/J-dZlSD3Z9E/

Update now! Chrome and Firefox patch security flaws

It’s 2019’s first browser update week with both Google and Mozilla tidying up security features and patching vulnerabilities in Chrome and Firefox for Mac, Windows, and Linux.

But for Chrome security in version 72, it’s more about what’s being taken out than what’s being added.

One of these changes is the deprecation of the obsolete TLS 1.0 and 1.1 protocols, with a view to removing support completely in Chrome 81, scheduled for early next year (the same will apply to Firefox, Microsoft Edge, and Apple’s Safari). This will affect developers rather than users, who will still be able to connect to the tiny number of sites using TLS 1.0/1.1 for another year.
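Developers don’t need to wait for the browser cut-off to enforce the same floor in their own clients. A minimal sketch (not from the article, assuming Python 3.7+ built against OpenSSL 1.1.0g or later):

```python
# Sketch: configure a TLS client context that refuses the legacy
# TLS 1.0/1.1 protocols, mirroring the floor browsers are moving toward.
import ssl

ctx = ssl.create_default_context()

# Handshakes below TLS 1.2 will now fail with an SSLError at connect time.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Wrapping a socket with this context (`ctx.wrap_socket(...)`) will then reject any server that only speaks TLS 1.0 or 1.1, which is a quick way to find out whether a service you depend on is among the holdouts.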

However, one standard that is completely banished in Chrome 72 is HTTP-Based Public Key Pinning (HPKP), deprecated from version 67 last May.

An IETF security standard designed to counter digital certificate impersonation, HPKP suffered not so much from obsolescence as from doubts about the unintended problems it could cause. Consequently, uptake was low.
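For context, HPKP worked by having a site send a response header that pinned the hashes of acceptable public keys; a config fragment along these lines (the pin values below are placeholders, not real keys) shows why it was risky – lose the pinned keys and you lock out your own visitors for the entire max-age period:

```
Public-Key-Pins: pin-sha256="cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs=";
    pin-sha256="M8HztCzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGPWE=";
    max-age=5184000; includeSubDomains
```

That foot-gun potential, rather than any flaw in the cryptography, is what drove browsers to drop it in favor of Certificate Transparency-based approaches.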

Also on the slippery slope is FTP, which Google considers a legacy protocol that it’s time to migrate away from. The latest version will render only directory listings and download everything else.

An interesting tweak is the integration of WebAuthn APIs to allow users to authenticate using FIDO U2F keys and Windows Hello. Although the APIs are still not enabled by default – and no major websites offer WebAuthn in anything other than a test state – it’s a necessary stage toward enabling this by default in a future release.

Security fixes

Chrome 72 fixes 58 CVE-level flaws, including 17 rated ‘high’ severity and one ‘critical’, identified as CVE-2019-5754 and described simply as an “inappropriate implementation in QUIC Networking.”

Continuing its six-week schedule, the next version, Chrome 73, is due out on 12 March, with version 74 appearing on 23 April.

Part of this update will see Chrome warn users when they visit lookalike URLs meant to resemble popular websites.

Firefox 65

Naked Security has already covered the new content blocking setting added to Firefox 65, but this also patches seven CVEs, including three marked ‘critical’ and two ‘high’.

The criticals include CVE-2018-18500 (reported by SophosLabs’ researcher Yaniv Frank), described as:

A use-after-free vulnerability that can occur while parsing an HTML5 stream in concert with custom HTML elements.

Also fixed are CVE-2018-18501 and CVE-2018-18502, both memory safety flaws, plus CVE-2018-18504, a memory corruption issue, and CVE-2018-18505, a privilege escalation flaw affecting Inter-process Communication (IPC) authentication.

Continuing the memory theme, Linux, macOS and Android versions get protection against ‘stack smashing’, which attackers can use to take control of a browser process.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/1e672i106UQ/

Personal data slurped in Airbus hack – but firm’s industrial smarts could be what crooks are after

Comment Airbus has admitted that a “cyber incident” resulted in unidentified people getting their hands on “professional contact and IT identification details” of some Europe-based employees.

The company said in a brief statement published late last night that the breach is “being thoroughly investigated by Airbus’ experts”. The company has its own infosec business unit, Stormguard.

“Investigations are ongoing to understand if any specific data was targeted,” it continued, adding that it is in contact with the “relevant regulatory authorities”, which for Airbus is France’s CNIL data protection watchdog. We understand no customer data was accessed, while Airbus insists for the moment that there has been no impact on its commercial operations.

Airbus said the target was its Commercial Aircraft business unit, which employs around 10,000 people in the UK alone, split between two sites. The company said that only people in “Europe” were affected.

The Broughton site “focuses primarily on manufacturing but also houses engineering and support functions such as procurement and finance”, according to the company website, while Commercial Aircraft’s other site at Filton, near Bristol, does “the design, engineering and support for Airbus wings, fuel systems and landing gear”.

Airbus is also the design authority for the Eurofighter Typhoon fighter jet’s landing gear, though detailed design work is done by a subcontractor, French aerospace firm Safran’s Landing Systems division.

Airbus sent us a prepared statement but did not respond to follow-up inquiries.

It might not be who you think

Such an “unauthorised access to data” incident immediately raises suspicions about industrial espionage.

Airbus has a growing manufacturing presence in China, with a final assembly line for both its A320-series narrowbody airliners and A330 long haul, twin-aisle aircraft. Francois Mery, COO of Airbus’s Chinese commercial aircraft division, told state news outlet China Daily last year that Airbus wants to “form a vertical integration supply chain in China”. This suggests that state-backed hackers from China, at least, would have comparatively little to gain; Airbus technology is already making its way to the Middle Kingdom through entirely legitimate channels.

On the other hand, the A330 is also used by western air forces as a troop-carrier-cum-airborne-refuelling aircraft. Filton also assembles wings for the A400M military transport aircraft (Airbus’s attempt to compete with Lockheed Martin’s famous C-130 Hercules). Those wings make use of advanced composites to save weight, as a 2011 Composites World feature explained in detail – and that kind of technology is still valuable today.

This could also have been an attempt to harvest personal data for later targeting of individuals, perhaps as a means of getting inside the company’s networks for further exploitation. Airbus is one of the Eurofighter consortium members and its military division is also responsible for a number of helicopter designs in military service worldwide, including the Puma medium-lift helo flown by the RAF. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/31/airbus_hacked_eurofighter_link/

Texas lawyer suing Apple over FaceTime bug claims it was used to snoop on a meeting

A Texas lawyer is suing Apple over its FaceTime eavesdropping bug, claiming it allowed someone to overhear a meeting with a client.

Larry Williams II filed the case in Harris County, Houston, following revelations that it was possible for callers to listen in to the mic on a person’s phone or Mac before that person accepted or rejected the call.

The gaping security defect, in iOS 12.1 and 12.2 at least, affected FaceTime Group calls. A caller could go through a few simple steps to activate the microphone of a recipient’s device without that person having agreed to enter the call.

Williams said in the filing (PDF), published by the Courthouse News Service, that the bug “essentially… converts a person’s personal iPhone into a microphone” for a third party to listen in without consent.

The lawyer said he was in a private deposition with a client when “this defective product breach allowed for the recording” of the meeting.

The suit states that Apple – which it holds entirely responsible for the design, testing, and distribution of the iOS update – had failed to provide warnings or instructions to alert the public to the risks.

Apple failed to exercise reasonable care in the design, the suit said. And the vendor “knew, or should have known, that its Product [affected iOS] would cause unsolicited privacy breaches and eavesdropping” and that users “would foreseeably suffer injury” as a result.

Making his case for damages, Williams said he had suffered “permanent and continuous injuries” that included a “lost ability to earn a living” that would continue into the future.

After the news went public on 29 January, Apple disabled FaceTime Group until it could push out a software fix. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/31/texas_lawyer_sues_apple_eavesdropping_facetime_bug/

For a Super Security Playbook, Take a Page from Football

Four key questions to consider as you plan out your next winning security strategy.

The Big Game is just days away. Whether it’s the Patriots or Rams who win the Super Bowl, we know for sure that the end of the season brings with it a period of turnover and uncertainty – feelings familiar to many of us in cybersecurity.

After trophies and parades, bloggers and talk radio turn to a favorite staple: forecasting which teams’ assistants will earn head coaching jobs based on the perceived power of their playbooks. This parallels playbook buzz in security, in which a host of community voices are touting playbook-style approaches to security challenges, from expediting repetitive tasks to identifying malware to simulating attackers. Playbooks appeal to the emotional needs of anyone facing high-stakes, must-win scenarios, whether in a stadium or a security operations center (SOC). It is only natural to seek an edge by studying someone’s winning formula.

Yet history is full of coaches taking a winning scheme to a new city, where their vaunted playbooks fall short because of different talent, timelines, and owner idiosyncrasies. The same applies to security leaders. So how can you avoid that outcome? Here are four key questions to ask as you study your playbook options.

1. What Does Your Organization Look Like?
Playbooks are supposed to create mismatches – but not in locker rooms and team meetings. Many a coaching guru finds it hard to align trainers, scouts, general managers, and players around their strategies.

However, there are no “rebuilding years” in cybersecurity. Every new tool or formula you introduce has to make a positive difference from Day One. Make sure any playbook approach you are signing up for pairs well with your team, as well as with your executive sponsors’ culture and timetable. What are the stakes? If you just received the resources to pick up MITRE ATT&CK and tinker with a few offensive exercises, that has very different blowback risks compared with swapping out part of your production security stack. Make sure you are on the same frequency with “owners” so that everyone can be upfront about purpose, needs, and benefits.

2. Is It Your Playbooks – or the Play-Calling?
The entire premise of a playbook’s value is the idea that a valid body of experience and community – coaches, athletes, or security experts – found that “in situation [X], action [Y] is usually the most productive option.” On the gridiron, it could be a designated quarterback run out of a four-receiver set to fool the defense. On a network, it could be rapidly initiating processes to find and contain files meeting a range of attributes before a payload detonates. But how do you know which play to call and when?

Coaches rely on sideline or press box views to compare what their eyes see with options on a clipboard. In the SOC, the field of action is defined by the complex plumbing of layered security products’ consoles, threat intelligence feeds, SIEM dashboards, and other monitors. Hiccups and misalignment in this plumbing prevent security coaches from knowing the true “down and distance,” offsetting any playbook’s value. Before replacing your plays, make sure you are calling the game with clear eyes and ears.

3. What Do Position Coaches and Players Think?
The best coaches adapt systems to fit their players’ unique mix of skills and experience. The same is true in cybersecurity. When you go all-in on a new playbook, you are bound to introduce new roles and assignments. Staff will have to shift how they spend their time, get trained on new tools, or become comfortable handing some of their work over to software. Seek out the players and coaches on your team who will tackle these changes head-on.

In football, certain plays are routine, such as a running play meant to gain the last few yards; the outcome of the game does not hang in the balance. Many plays in security are routine, too, like updating rulesets and filters. Conversely, just as a blocked punt or a kickoff returned for a touchdown can change the whole complexion of a game, as the cliché goes, SOC teams need to make sure new wrinkles like automation and playbook twists do not trip up the most important things to execute when they matter most.

4. What Do the Numbers Say?
In the metrics-driven sports world, scoreboards are all that matter. If a newly installed offense coincides with a spectacular season, fans thank the playbook before wondering whether fewer injuries or rival teams’ down years made the difference.

Unfortunately, there are no universal closing whistles or scoreboards in the art and science of cyber risk. Wins and losses are subjective labels handed out according to organizations’ different risk tolerances, assets, and industries. Security leaders have to crunch the right numbers necessary to give boardroom and C-suite decision-makers both skybox and sideline views of the game. Before you swap out playbook code or approaches, consider how they impact the data you must or want to collect and compare.

Vital numbers can take many forms. Consider immediate hard figures, like the rate of incidents detected and investigated and the time to remediation, but press for a sense of incident responders’ time and stress levels as well. There needs to be sound correlation: if a playbook seems to be crushing the numbers but the team still feels overwhelmed, or unsure whether new actions are getting to the root cause of issues, you might not have the metrics necessary to back up your coaching decisions, so you’ll still need to press playbook developers for improvements.

Winning Strategy
In sports and cybersecurity, change management is the true test of champions. Players get hurt, free agency steals veterans, and opponents get stronger. In every organization, shifts in the business, IT fabrics, and third-party risks constantly send us back to the whiteboard. Accept that no playbook can replace leadership, bypass all constraints, or anticipate the fundamentally unthinkable.

I am optimistic about playbooks these days. Many of us in security were drawing our own plays up in the dirt years ago, comparatively speaking, so the advent of engaged collaboration and communities distilling new security workflows is a good thing. But we need to keep any playbook in perspective. Focus on what improves your day-to-day outcomes, but be careful of falling into a near-sighted obsession with tactics in a game where alignment and organization are the variables between you and success.

Andy Singer is a security industry veteran, with more than 20 years of experience igniting growth, bringing products to market, and entering new markets while also developing strong customer relationships. Prior to joining enSilo, Andy held global marketing leadership roles … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/for-a-super-security-playbook-take-a-page-from-football/a/d-id/1333747?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The D in SystemD stands for Danger, Will Robinson! Defanged exploit code for security holes now out in the wild

Those who haven’t already patched a trio of recent vulnerabilities in the Linux world’s SystemD have an added incentive to do so: security biz Capsule8 has published exploit code for the holes.

Don’t panic, though: the exploit code has been defanged so that it is defeated by basic security measures, and thus shouldn’t work in the wild against typical Linux installations. However, Capsule8 or others may reveal ways to bypass those protections, so consider this a heads-up, or an insight into exploit development. Google Project Zero routinely reveals the inner magic of its security exploits, if you’re into that.

Back to SystemD.

In mid-January, Qualys, another security firm, released details about three flaws affecting systemd-journald, a systemd component that handles the collection and storage of log data. Patches for the vulnerabilities – CVE-2018-16864, CVE-2018-16865, and CVE-2018-16866 – have been issued by various Linux distributions.

Exploitation of these code flaws allows an attacker to alter system memory in order to commandeer systemd-journal, which permits privilege escalation to the root account of the system running the software. In other words, malware running on a system, or rogue logged-in users, can abuse these bugs to gain administrator-level access over the whole box, which is not great in uni labs and similar environments.

Nick Gregory, a research scientist at Capsule8, explained in a blog post this week that his firm developed proof-of-concept exploit code for testing and verification. As in, testing whether or not computers are at risk, and verifying the patches work.

“There are some interesting aspects that were not covered by Qualys’ initial publication, such as how to communicate with the affected service to reach the vulnerable component, and how to control the computed hash value that is actually used to corrupt memory,” he said.

Manipulated

The exploit script, written in Python 3, targets the 20180808.0.0 release of the ubuntu/bionic64 Vagrant image, and assumes that address space layout randomization (ASLR) is disabled. Typically, ASLR is not switched off in production systems, making this largely an academic exercise.

The script exploits CVE-2018-16865 via Linux’s alloca() function, which allocates the specified number of bytes of memory space in the stack frame of the caller; it can be used to manipulate the stack pointer.

Basically, by creating a massive number of log entries and appending them to the journal, the attacker can overwrite memory and take control of the vulnerable system.


“Our general approach for exploiting this vulnerability is to initially send the right size and count of entries, so as to make the stack pointer point to libc’s BSS memory region, and then surgically overwrite the free_hook function pointer with a pointer to system,” explains Gregory. “This grants us arbitrary command execution upon the freeing of memory with content we control.”

One of the challenges in creating this exploit involves controlling the output of the hash function used to encode the journal entries. The PoC code has been tuned to this specific Vagrant image, meaning those values have been computed in advance.

To adapt the PoC to other Linux distributions requires hash preimaging, something that can be done with available tools thanks to the fact that the hash is not cryptographically secure. Capsule8 intends to explore this further in a follow-up post, though the company may withhold some details to avoid helping script kiddies defeat ASLR defenses.

“We are also considering providing a full ASLR bypass, but are weighing whether we are lowering the bar too much for the kiddies,” Gregory added.

In a phone interview with The Register, co-founder and chief scientist Brandon Edwards said, “We provide enough information under certain conditions to exploit the Vagrant image. There will be at least one more post, depending on how we feel about disclosing an ASLR bypass. We will be writing up how to compute the preimage required.”

Edwards said PoC code was developed to verify the efficacy of Capsule8’s real-time attack detection. The flaw enabling the exploit code, he said, demonstrates that alloca() is not how memory should be dynamically allocated.

“The other thing this highlights is there are some compiler flags that would have prevented this from being exploitable,” he said, pointing to GCC’s -fstack-clash-protection option. “Some distros compile with it out of the box, and others don’t.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/31/systemd_exploit/