STE WILLIAMS

NSA Issues Advisory for ‘BlueKeep’ Vulnerability

The National Security Agency joins Microsoft in urging Windows admins to patch ‘wormable’ bug CVE-2019-0708.

The National Security Agency has issued a press release and an advisory urging Microsoft Windows administrators to patch “BlueKeep” (CVE-2019-0708), a critical remote code execution bug in Remote Desktop Services (RDS) present in both supported and unsupported versions of Windows.

BlueKeep affects Windows 7, Server 2008, Server 2008 R2, Vista, XP, and Server 2003. When it patched the vulnerability earlier this month, Microsoft also released fixes for out-of-support versions of Windows. In a blog post published this week, company officials said they are “confident” an exploit exists for the bug; research shows 1 million devices are still vulnerable.

NSA officials echo Microsoft’s concern that BlueKeep could be “wormable” if exploited. The vulnerability is pre-authentication, requires no user interaction, and can spread across machines in the same way WannaCry did when it caused global damage back in 2017.

“It is likely only a matter of time before remote exploitation code is widely available for this vulnerability,” NSA officials wrote in a news release. “NSA is concerned that malicious cyber actors will use the vulnerability in ransomware and exploit kits containing other known exploits, increasing capabilities against other unpatched systems.”

In its advisory, the NSA provides additional measures businesses can take while they patch and upgrade larger networks. Officials suggest blocking TCP port 3389 at the enterprise firewall; this is the port RDP listens on, and blocking it stops external attempts to establish a connection. They also advise enabling Network Level Authentication (NLA), which forces attackers to authenticate before they can attempt to exploit BlueKeep. Finally, they recommend disabling Remote Desktop Services if employees don’t require it.
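For admins working through that checklist, whether the port-3389 block is actually in effect can be spot-checked with a short script like the following (a minimal sketch, not from the advisory; the address shown is a placeholder, and you should only probe systems you administer):

```python
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Useful for confirming that a firewall rule blocking RDP's
    TCP port 3389 is actually in effect for a given host.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable: the port is
        # effectively blocked or closed from this vantage point.
        return False

# Example: check a host you administer (placeholder address shown).
host = "127.0.0.1"
state = "reachable" if rdp_port_open(host) else "blocked or closed"
print(f"TCP 3389 on {host}: {state}")
```

Note that a check like this only tells you what is reachable from where the script runs; a port blocked at the perimeter may still be open from inside the network.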

Read the full NSA advisory here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/nsa-issues-advisory-for-bluekeep-vulnerability/d/d-id/1334880?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Get the Most Benefits from Biometrics

Providing an easy-to-use, uniform authentication experience without passwords is simpler than you may think.

Biometric systems have many benefits that enhance cybersecurity. But organizations must learn how to leverage and simplify this complex environment — consisting of mobile devices and sensors that are unified under the common FIDO standard — to reap the most benefits from it.

First of all, IT and cybersecurity teams must take a firmer stance on mobile security because mobile devices are where biometric functions are most often found. Second, having the right user experience (UX) for biometrics is essential because many users may reject an approach that is counterintuitive or too cumbersome. Cybersecurity and UX are no longer mutually exclusive, and many of today’s new password-free solutions can provide a uniform experience that is accessible to all users regardless of their technical acumen. Over time, biometrics will be part of the physical world, allowing users to unlock their laptops, devices, offices, and conference rooms, all underpinned by the FIDO standard, which will ultimately deliver the best protection across the enterprise.

Here are some recommendations for leveraging the biometric ecosystem in the most beneficial ways possible.

Mastercard, Aetna, and First Citrus Bank used biometrics to eliminate the risk of holding centralized credentials and passwords for a portion of their users, reducing cybersecurity attacks and breaches. The primary benefit these enterprises and we practitioners observe is that the “something you know” factor of user authentication is replaced with the more difficult-to-reproduce “something you are.” The majority of sensors on modern mobile devices have a false acceptance rate of at most 1 in 50,000, which makes it difficult to mimic a biometric template — that is, an image of a fingerprint or a face, or a subset of a voice.

Using these sensors paired with standards-based authentication (such as FIDO Alliance protocols) that eliminates shared secrets means service providers can slow down adversaries while making the user experience simpler. This disrupts the hackers’ fraud model: to obtain a single user’s credentials, they would have to go from device to device, and those credentials are often encrypted and isolated in the most secure area of the device. Alternatively, they must gain physical access to the device of each user they want to target. This approach makes a mass credentials breach, such as those we’ve been seeing on a regular basis, unfeasible.

But what of the fragmented array of choices among operating systems, authentication modes (e.g., touch, face, voice, behavioral), and devices, particularly in the Android space? The best way to defragment this ecosystem is by adopting an open standard — such as FIDO — that uses biometric capabilities, meeting security targets while providing a uniform UX. Consumers know how to use the biometric capability of their mobile device or laptop without issue, and the UX is similar across devices even though they come from different manufacturers and function across different operating systems. 

Take a Tougher Stance on Mobile Security
Many of our financial services are available as mobile apps, which has led to a rapid increase in the attack surface.

Enterprises must take a much harder stance on mobile security: they continue to be hit by breaches because of credential reuse, which currently succeeds at a rate of 2% to 4%, according to Shape Security’s “Credential Spill Report.” The mobile platform is not immune to this growth in credentials-based fraud. Therefore, we should get ahead of fraud’s migration to mobile by ending shared-secret forms of authentication to mobile apps. Standards like FIDO are mobile-centric and make the marriage of device biometrics and public key infrastructure the cornerstone of secure, seamless experiences across mobile devices, desktops, and the Internet of Things.

But we shouldn’t stop there. As hackers realize their wholesale model of mass credential breaches has been disrupted, they will target devices with malware — for example, keyloggers. So, while mobile security will see an improvement with strong authentication without shared secrets, we’ll need more robust malware intrusion, device health, and defense capabilities on mobile devices.

Make User Experience a Top Priority
Any method of access alternative to passwords should be simpler and faster, or consumers will balk at adoption. In today’s business atmosphere, keeping the user’s attention is critical because it’s easy to lose it. Ease of use should be the top priority for every organization. 

Providing an easy-to-use, uniform experience for biometrics is simpler than one may think. Most employees already have a company or personal smartphone with one or more biometric capabilities. Cybersecurity teams should ensure that all mobile devices across their enterprise can be leveraged seamlessly to authenticate to workstations, to apps using single sign-on, and to physical access systems. Organizations can then remove the password from the login process — and from existence with FIDO standards — and provide a seamless UX by having the user authenticate with the familiar biometric capability on their mobile device. 

An Iterative Process
Cybersecurity teams will succeed with biometrics if they embrace it as a gradual process. Find areas of your business where biometrics can have the greatest effect quickly and deploy the capabilities there. This can be for internal use cases or consumer-facing apps.


Bojan Simic is the Chief Technology Officer and Co-Founder of HYPR. Previously, he served as an information security consultant for Fortune 500 enterprises in the financial and insurance verticals conducting security architecture reviews, threat modeling, and penetration …

Article source: https://www.darkreading.com/application-security/how-to-get-the-most-benefits-from-biometrics/a/d-id/1334869?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

SentinelOne Raises $120M in Series D Funding

The endpoint security company already has specific plans for the new funds.

SentinelOne has closed a Series D funding round of $120 million. This brings the endpoint security company’s total funding to more than $230 million.

Insight Partners led this latest funding round, with participation from investors including Samsung Venture Investment Corp., NextEquity, Third Point Ventures, Redpoint Ventures, Granite Hill, and Data Collective.

“One important thing to note is that it’s been two-and-a-half years since we had gone out and done a raise,” says Nicholas Warner, SentinelOne COO. “That speaks to the sustainability of what we built — we’re not unnecessarily burning and blowing through cash.”

SentinelOne already has specific plans for the new funds. “We’re going to be using the funds to continue to foster deep innovation in our product and our solutions,” Warner says. “And then we’re also going to be using some of the funding for marketing and additional go-to-market efforts.”

Read more here.


Article source: https://www.darkreading.com/perimeter/sentinelone-raises-$120m-in-series-d-funding/d/d-id/1334883?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple battles Facebook and Google with rival sign in service

Apple’s Worldwide Developers Conference (WWDC) on Monday was full of surprises. One of them was a new feature designed to make signing in to apps and websites more private: ‘Sign In with Apple’.

You know how you’ve signed up for dozens of accounts on websites over the years? You have to enter your email address, choose a password that meets requirements, store it (hopefully with a password manager)… and soon after comes the flood of junk mail from the site’s needy marketing team.

Some folks use a throwaway-email address service for each new account. But what if you want to see some of that mail? And how sure are you that the dummy address won’t get reused in the future by someone else? And how do you know if the website’s going to store your password securely?

The other option is to use a single sign-on service from one of the two big providers: Google or Facebook. When you see a ‘Sign In With Google’ or ‘Sign In With Facebook’ button on a website, it’s offering to let you use your Google or Facebook ID for a quick, one-click sign-up or sign-in, no password required, as long as you’re signed into Google or Facebook.

The problem with services like these is that the companies running them (and their hidden partners) end up knowing more about you than your grandmother.

Sign In with Apple is Cupertino’s privacy-conscious version of those services. The idea is to make signing in – and signing up – to websites as simple as possible, without having to provide any personal information.

When a website or a mobile app supports Sign In with Apple, you’ll be able to register for an account by authenticating on your device (with a suitably-specced iOS device, that means FaceID or TouchID). So just like Facebook and Google’s social sign-in features, you can create an account with a single button. Apple then acts as a proxy for you, managing your login credentials for that website or app.

Privacy-focused

Unlike Google and Facebook’s sign-in features, though, Apple’s focuses on privacy in addition to convenience. It won’t send the third-party app any data about you, and it even gives you the option to use an email address that it randomly creates and manages for you instead of your real address. When the app mails that address, Apple forwards it to you, but you can choose to kill the address at any time so that you don’t have to unsubscribe from a needy app’s email list.

Is this a direct broadside at Facebook and Google? Apple CEO Tim Cook told CBS:

We’re not really taking a shot at anybody.

The fact that Apple software engineering chief Craig Federighi displayed the Sign In with Facebook and Sign in with Google buttons on a big screen when announcing the feature suggests otherwise. But we digress. Cook added:

We focus on the user. And the user wants the ability to go across numerous properties on the web without being under surveillance.

What’s under the hood?

What’s the technology behind this service? At the time of writing, Apple hadn’t revealed if it’s using an industry standard service to support this operation, or if it’s going it alone.

Google and Facebook both use OAuth 2.0, an IETF industry standard for delegated authorization, as the basis of their single sign-on services.

However, Apple has been experimenting with Web Authentication (WebAuthn), which is another password-free sign-in mechanism supported by the FIDO Alliance.

WebAuthn, combined with version 2 of another protocol called the Client to Authenticator Protocol (CTAP), makes up the FIDO2 standard, which also streamlines two-factor authentication. It lets you use USB keys to sign into browser-based apps without using a password. That’s what Apple shipped in a preview version of the Safari browser in December.

A blow for monetization?

Sign In with Apple sounds very neat, but there’s a small catch: It’s an offer that developers can’t refuse. In an update to its developer guidelines, Apple said:

It will be required as an option for users in apps that support third-party sign-in when it is commercially available later this year.

So, as with most things Apple, developers are in a kind of gilded cage. Those supporting third-party sign-in from Facebook or Google will have no choice but to add this feature, effectively ceding their direct relationship with the user, just as App Store subscriptions put Apple between the content or service provider and the user. It could force online content and service providers to rethink their monetization models overnight. Maybe that’s no bad thing.

On the other hand, this looks like a good thing for many users fed up with handing over their privacy when they sign up for online services. It’s also fantastically convenient because it makes it even easier to sign up for (and into) a service on an iOS device. You won’t even have to bother storing a password in Apple’s keychain now. It will also work via the browser on other platforms, Apple guarantees us.

What do you think? Will you use this service? Let us know in the comments.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CipFXfX1nhs/

ATM skimming crook behind bars after draining bank accounts for 2 years

A Boston federal court on Monday sentenced a Romanian national to 65 months in federal prison for a multi-state ATM card-skimming scheme through which he and his gang drained $868,706 from 531 people’s bank accounts.

The Justice Department said that Bogdan Viorel Rusu, 38, was also sentenced to five years of supervised release and ordered to pay restitution and forfeiture of $440,130.

Rusu pleaded guilty in September 2018 to one count each of conspiracy to commit bank fraud, bank fraud, and aggravated identity theft. He was arrested in November 2016 and has been in custody since then.

ID’ed through his asylum application photos

According to court documents, video surveillance cameras picked up a man installing a pinhole camera and a skimmer device on a bank ATM in Chicopee, Massachusetts, in August 2014.

Thomas Roldan – a special agent with US Immigration and Customs Enforcement (ICE), part of the Department of Homeland Security (DHS) – said in an affidavit that he identified Rusu from photos Rusu had submitted in support of an asylum application to US Citizenship and Immigration Services, as well as from Roldan’s own physical surveillance of the suspect.

The skimming devices were plugged in at around 16:26, and then the video cameras picked up footage of somebody else picking up the pinhole camera and skimmer a few hours later, at 20:01. Bank records showed that 85 customers used the ATM during that time, and 12 of them later reported losses totaling $8,399.43.

Next day, same thing, but this time, Rusu plugged in the skimming devices and picked them back up himself after a few hours. That time, customers lost $9,823.50.

It went on like that for almost two years: from August 2014 until his arrest in November 2016, Rusu and his skimming buddies skipped from bank to bank – from Massachusetts down to New York and on to New Jersey – grabbing people’s account details through ATMs and then using those details to steal money from their bank accounts.

Their take: they lifted $364,419 from Massachusetts banks, $75,715 from New York banks, and another $428,581 from New Jersey banks.

The devices

According to the DOJ, Rusu and/or his co-conspirators installed electronic skimming devices on the banks’ card readers – at the vestibule door, on the ATM itself, or both – to surreptitiously record customers’ bank account information.

They also installed other devices – generally, either pinhole cameras or keypad overlays – to intercept the PINs people typed in to access their bank accounts.

Then, the skimming crooks came back, removed their devices, and went on to transfer the details onto counterfeit payment cards. From there, they’d visit other ATMs to use counterfeit cards – before the bank or the customers became aware of the ripoff – in order to withdraw money.

They used the risky type of skimmer

There are multiple types of card skimmers, and Rusu and his gang were apparently using the kind that sets crooks up to get caught, since they have to physically install the devices and then come back to the scene of the crime to retrieve them and their valuable stolen data.

Say hello to the nice people scrutinizing video camera footage, guys!

There are other types that enable crooks to get the stolen information via text message or from Bluetooth. From a thief’s point of view, Bluetooth has limitations, notably that the wireless technology has limited range, so any thief who uses a Bluetooth-enabled skimmer needs to hang around nearby.

It also means that anybody else using Bluetooth in the vicinity could see the payment card details and perhaps intercept them, thereby beating the crooks to the punch.

Speaking of which, no, you can’t really sniff out gas station card skimmers using Bluetooth, though there was a Facebook half-hoax (mostly a bunch of half-truths) that promised you could. That one made the rounds back in February.

Software skimmers

We’ve also seen incidents of credit card skimming code planted on websites: in April, skimming code showed up on the ecommerce site for the Atlanta Hawks basketball team.

The obfuscated code turned out to be keylogging software.

There are more varieties still of skimming tools. Security journalist Brian Krebs has cataloged all sorts of them installed at all manner of locations, from self-checkout lanes at some Walmart locations to gas stations to Safeway grocery stores to yes, bank ATM machines.

What to do?

You can wiggle the card point of entry on the reader device to see if it’s a fake that’s been installed over the authentic slot – is it a bit too big? Color or texture’s not quite a match? However, that won’t help with keylogging software like that found on the Atlanta Hawks’ site.

So make sure that you also grab and wiggle your bank account and credit card statements to see if any phishy transactions fall out. If they do, notify your card-issuing institution as soon as possible.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ux-9yjQV-4w/

Apple bans ads, third-party tracking in apps meant for kids

On Monday, at its Worldwide Developers Conference (WWDC), Apple had a big on-stage announcement of its new Sign In with Apple offering.

But it also made a less ballyhooed tweak: the company swept kids up in its privacy march.

On Monday, Apple updated the Kids category in its App Store developer guidelines to include a new ban on third-party advertising or analytics (which are ostensibly used for tracking) in content aimed at younger audiences.

Previously, the guidelines only restricted behavioral advertising tracking – e.g., advertisers weren’t allowed to serve ads based on kids’ activity, plus ads had to be appropriate for young audiences.

The current guidelines also (still) stipulate that apps can’t include links that take a user outside of the app, or other things that would “distract” kids, unless they’re behind a parental gate: a feature used in apps targeted at kids that keeps them from buying stuff or following links out of an app to websites, social networks, or other apps without the knowledge of their parent or guardian.

Apple also reminded developers to pay attention to privacy laws around the world when it comes to the data they collect from kids.

Is Apple a hypocrite over privacy claims?

Before giving Apple the thumbs-up on this move to protect kids’ online privacy, we should point out that other tech players have found its strenuous privacy marketing a bit disingenuous. In April, for example, Mozilla called out Apple over its “Privacy. That’s iPhone.” slogan. Mozilla said Apple’s done great with privacy, but it could do more: specifically, tweaking a little-known feature in iOS devices that could make it harder for advertisers to track mobile users.

Mozilla compared the Identifier for Advertisers (IDFA) – a hexadecimal code unique to every iPhone – to “a salesperson following you from store to store while you shop and recording each thing you look at. Not very private at all.” Mozilla wants Apple to change the IDFA on its phones every month: doing so would still allow advertisers to track what you do on your phone, but only for a few weeks, instead of forever.

Besides Mozilla, Apple’s come in for a good amount of heat in other quarters over its recent marketing campaign, in which it claims that “what happens on your iPhone stays on your iPhone.”

Oh, really? Well, maybe not so much. Last week, the Washington Post reported that its “privacy experiment” showed that in a single week, 5,400 hidden app trackers were guzzling iPhone data. From the article:

Even though the screen is off and I’m snoring, apps are beaming out lots of information about me to companies I’ve never heard of. Your iPhone probably is doing the same – and Apple could be doing more to stop it.

Technology columnist Geoffrey A. Fowler found that several iPhone apps were tracking him, passing information to third parties while he was asleep – including, perhaps unsurprisingly, IBM’s The Weather Channel app, the one Los Angeles sued over selling users’ location data.

Another investigation earlier this year – this one by TechCrunch – found that some apps were using so-called session replay technology: a type of analytics software that records the screen when an app is open. Apple told developers to knock it off, TechCrunch later reported, noting that apps using the technology included ones built by some big names.

But think of the children

Kids’ privacy has been jeopardized by all manner of internet-enabled products targeted to them, be they eavesdropping Barbies, GPS-tracking smartwatches that are vulnerable to being taken over by attackers, or devices with flaws that could allow hackers to remotely take control and spy on the 3- to 11-year-old children for whom they’re marketed.

A complaint to the Federal Trade Commission (FTC) filed by consumer advocacy groups recently prompted Google to slap its Google Play policies into shape when it comes to kids’ apps.

Apple’s latest move will hopefully truly prevent apps from tracking kids – with, say, their iPhones while they sleep, or anytime, for that matter – rather than turning out to be only a skin-deep marketing move.

If not, Apple will potentially hear from the FTC.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SPewCcuL9E0/

Patch Android! June 2019 update fixes eight critical flaws

Unbeknown to most users, devices running supported versions of Android are supposed to get small amounts of new software every month, mostly security updates.

Unfortunately, as we pointed out in May, when and whether that happens is a matter of whim for each device’s manufacturer.

Updates for Google’s Pixel smartphones will arrive sometime this week – covering functional issues as well as security patches.

But if your device is made by another vendor, June’s Android patches could turn up any time from next month to some point later this year.

Given that June’s two patch levels (2019-06-01 and 2019-06-05) comprise only 13 CVEs plus another 9 from Qualcomm, this might not sound like that big a loss.

But if the same device is also missing previous updates, as many will be, the number of missing patches rises to dozens.

Amplifying the update confusion is Android’s version fragmentation, which gave Apple CEO Tim Cook cause to gloat when he mentioned at this week’s WWDC 2019 conference that the newest version of Android is still running on only 10% of Android devices, compared with 85% of iPhones running the latest iOS.

June patches

Despite the modest vulnerability count, the fact that 8 are marked ‘critical’ and 14 ‘high’ is good enough reason to want them as soon as possible, with 2 of the criticals (CVE-2019-2094 and CVE-2019-2095) affecting only version 9.

Seven are elevation of privilege (EoP) flaws and four are remote code execution (RCE); the remaining flaws carry no designation.

By policy, Google doesn’t furnish much detail on individual flaws, but it does mention that the most serious of this month’s vulnerabilities is in the media framework, which might allow:

A remote attacker using a specially crafted file to execute arbitrary code within the context of a privileged process.

Meanwhile, CVE-2019-2097 in the Android system could:

Enable a remote attacker using a specially crafted PAC file to execute arbitrary code within the context of a privileged process.

Luckily, the advisory continues, Google has had no reports that any of the serious flaws are being exploited.

What to do

Anyone looking to understand the difference between Android’s two patching levels should read the explanation we offered as part of April’s Android patch coverage.

Individual vendors often publish their own advisories, which can offer clearer information than Google’s official Android updates. For instance, here are the June 2019 updates for Samsung, Nokia, Motorola, LG, and Huawei.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/M2hhOxFx0rE/

Crime doesn’t pay? Crime doesn’t do secure coding, either: Akamai bug-hunters find hijack hole in bank phishing kit

Exclusive Phishing kits – used by miscreants to build webpages that steal victims’ personal information and money by masquerading as legit websites – harbor vulnerabilities that can be exploited by other miscreants to pilfer freshly stolen data.

It’s not far off burglars breaking into a mafia den to steal loot swiped just hours earlier from a jewelry store.

And while it’s not unknown for software developed by criminals for criminals to be buggy and exploitable, proof of such bungling comes this week from researchers at Akamai who have been studying crimeware for vulnerabilities. They’ve found holes in installations of phishing kits that allow other hackers to sneak in and commandeer operations.

Phishing kits are typically bought or otherwise obtained by criminals to build webpages that are designed to look and function exactly like a legit website, such as a bank, in order to fool marks into typing in their usernames and passwords to login or hand over personal information, such as driving license or passport scans.

These bogus webpages collect this cyber-booty and pass it along to its masters, and are usually installed on hacked websites for a while, with links spammed out to victims as phishing emails. The key thing, for the crooks, is that the emails and webpages look as genuine as possible.

Akamai senior security researcher Larry Cashdollar, with the help of colleague and researcher Steve Ragan, has found a bunch of phishing kits – particularly those that invite victims to upload files – with classic security vulnerabilities that can be exploited by hackers to take over the installation. That means sites belonging to small businesses, government departments, and so on, that have been compromised to host these phishing pages can wind up being hacked a second time by opportunist thieves seeking to swipe victims’ information for themselves once all the luring emails have been sent out.

“The real risk and concern in this situation goes to the victims: the server administrators, bloggers, and small business owners whose websites are where phishing kits like these are uploaded,” said Cashdollar in a research memo shared with The Register ahead of publication.

“They’re getting hit twice and completely unaware of the serious risk these phishing kits represent.

“While Akamai hasn’t determined if there have been successful secondary attacks due to these vulnerabilities, it’s a real possibility. Many phishing kit developers have a background in application security, and chase bugs like these for money and notoriety. The idea that they would search for, discover, and exploit such flaws for their own gain isn’t a stretch.”


Ragan told El Reg the vulnerable kits studied were observed being used by miscreants to impersonate “two known commercial banks, a file storage and sharing service, and one online company that deals with payments,” with at least one of them promoted via phishing emails.

These kits used insecure 2017-era source code lifted from a GitHub repository to implement file uploads: people would be enticed into handing over to fraudsters scans of sensitive documents and similar data via these web forms. However, the code behind the forms performed no security checks nor input sanitization, meaning it is possible to upload code to the web server hosting the phishing kit via these forms, such as a PHP webshell, and then open it in your browser to start running it. To open it, you’ll need to figure out the resulting URL for the uploaded file, which shouldn’t be too hard.

At that point, you now, hopefully, have code execution within the phishing site’s environment, with no authentication or passwords needed, and you can launch whatever commands and cron-scheduled scripts you like as the web server process. From there, you can try to elevate your privileges, or simply snoop on victims hitting the phishing site. Most sites hacked to host phishing pages have lax security, making all of this possible.

“These vulnerabilities are exploitable during the upload process, which is where the kit will ask the victim to upload pictures of their IDs, bank card, etc,” Ragan explained. “So if you’re on a domain, find one of these kits, and get to the upload stage, you can instead send a shell as there are no checks with regard to file type.”

Specifically, the kits featured insecure PHP scripts named class.uploader.php, ajax_upload_file.php, and ajax_remove_file.php.

“A user could upload executable code to the web root. If the upload path doesn’t already exist, the uploader class file will create it,” as Cashdollar put it in his research note.

“The code in the file remove script doesn’t sanitize user input from ‘..’ allowing directory traversal, enabling a user to delete arbitrary files from the system if they’re owned by HTTPd. Code cloning and copying is as common in the criminal world as it is in traditional, legitimate application development.

“Server security configuration is rarely hardened, and often file permissions are left wide open allowing full read and write access to directories. Attackers compromising these kits using this vulnerability could gain additional footholds on the web server. One PHP shell and an improperly secured script ran by cron is all an attacker needs to take over the whole server.”
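The traversal flaw Cashdollar describes boils down to passing a user-supplied name straight into a filesystem delete. A minimal sketch of the containment check the remove script lacked might look like the following; it is illustrative Python under an assumed upload directory, not the actual ajax_remove_file.php logic:

```python
import os

UPLOAD_ROOT = "/var/www/uploads"  # hypothetical upload directory

def safe_delete_path(user_supplied: str) -> str:
    """Resolve a deletion request, refusing anything outside the upload root."""
    root = os.path.realpath(UPLOAD_ROOT)
    candidate = os.path.realpath(os.path.join(root, user_supplied))
    # realpath collapses "..", so a traversal attempt resolves to a path
    # that no longer sits underneath the upload root.
    if os.path.commonpath([candidate, root]) != root:
        raise PermissionError(f"traversal attempt blocked: {user_supplied!r}")
    return candidate

safe_delete_path("receipt.png")          # fine: stays inside the root
# safe_delete_path("../../etc/passwd")   # raises PermissionError
```

Without such a check, a request naming `../../config.php` walks right out of the upload directory and deletes whatever the web-server user can touch.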

By the time you read this, Akamai should have more details up online over here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/05/akamai_phishing_kit_vuln/

CISOs & CIOs: Better Together

An overview of three common organizational structures illustrates how NOT to pit chief security and IT execs against each other.

For certain critical IT deliverables, CIOs and CISOs embody the inherent tension between cybersecurity and operational requirements. Where the CIO is charged with delivering efficient IT infrastructure at low cost, the CISO is charged with ensuring that the same IT infrastructure operates within the risk tolerance parameters set by the board and CEO. Organizational structure has a lot of influence over how these functions operate and interact, and it can either exacerbate power struggles or facilitate alignment. Let’s look at three common organizational structures and how CIOs and CISOs can work together to achieve their objectives.    

Most Challenging: CIO controls CISO budget and rates CISO performance
When the CISO reports to the CIO, the onus is on the CIO to decide whether to fund cybersecurity initiatives or the core deliverables the CIO is charged with producing. If a compromise has to be made, the CIO may be tempted to sacrifice security in favor of functionality or infrastructure improvements.

This reporting structure can create an environment that discourages the CISO from fully disclosing risk to the CEO and board. In other words, CISOs who answer to CIOs are more likely to shape their message to please the boss.

Advice: Create a safe environment where honesty is valued
A CIO must make it safe for the CISO to be honest without fear of retaliation, and in turn, the CISO must have the courage to trust the CIO and communicate openly about risk. CISOs are responsible for helping CIOs understand risk and making it easy for them to mitigate that risk. If the CIO chooses to ignore the risk and can’t articulate why, then the CISO must be prepared to escalate the issue to other executives and/or risk owners.

Better: The CIO and CISO are separate roles, reporting to different execs
When the CIO and CISO report to different executives, some of the challenges discussed above are removed, but tension between the two missions can simply move up a level. The unpatched Apache Struts vulnerability that led to the Equifax breach of 2017 illustrates the point. As Richard F. Smith, then CEO of Equifax, explained to Congress:

On March 9, Equifax disseminated the U.S. CERT notification internally by email requesting that applicable personnel responsible for an Apache Struts installation upgrade their software. Consistent with Equifax’s patching policy, the Equifax security department required that patching occur within a 48-hour time period. We now know that the vulnerable version of Apache Struts within Equifax was not identified or patched in response to the internal March 9 notification to information technology personnel.

While it’s not clear why IT personnel did not patch the vulnerability, it is clear that the cybersecurity department’s warning and the security patching policy were not followed. This type of breakdown is more likely to occur where IT personnel report up to the CIO and cybersecurity personnel report to the CISO, each with a separate sponsoring executive. Neither has complete and unambiguous responsibility for patching, which is not conducive to decision-making.

Advice: Rise above the conflict
In this scenario, the CISO and CIO must be careful not to amplify whatever misalignment exists between the executives above them. A good CISO and CIO will be “bigger” than the roles they’re in and decide between themselves what’s best for the business. The priority should be visibility and effective execution, even if it means compromise. Constant, open communication in this scenario is crucial.

Best-Case: Separate roles reporting to a single executive
Ideally, the CIO and CISO are two separately defined peer roles that report to one executive responsible for delivering a secure IT environment that supports the business strategy. This helps ensure that the CIO and CISO have mutually complementary requirements. When a disagreement arises, one executive is accountable for making a decision that is beneficial to the business.

Advice: Maintain transparency across the organization
Everyone needs to be on the same page when it comes to evaluating and prioritizing different types of risk (information security, operational, and financial). Ideally, transparency and healthy communication exist across the environment. When there’s transparency across all types of risk, the business can make high-level executive decisions regarding which ones to transfer, mitigate, and assume. The CISO isn’t in a position to minimize or overstate risk. Everyone puts their cards on the table, and decisions are made based on what’s best for the business.

To be successful in their missions, CIOs and CISOs must be in alignment. A vulnerable IT infrastructure won’t withstand today’s threats, and without an IT infrastructure, there’s nothing to secure. At the end of the day, it’s about enabling the business, and that can only be done together.

Related Content:

Leo Taddeo is responsible for oversight of Cyxtera’s global security operations, investigations and intelligence programs, crisis management, and business continuity processes.  He provides deep domain insight into the techniques, tactics and procedures used by … View Full Bio

Article source: https://www.darkreading.com/network-and-perimeter-security/cisos-and-cios-better-together-/a/d-id/1334850?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Labs are for nerds, it’s simply Kaspersky now – just hold still while we cyber-immunise you

Logowatch The strategy boutique opened a pop-up shop on the wild steppe of Kaspersky Lab yesterday as the Russian antivirus developer revealed a daring redesign that involves dropping the word “Lab”.

Gone too are the beautifully random red triangles, the evocative over-sized “SS”, and the multiple fonts – thrown overboard in favour of tasteful mint-green minimalism.

Kaspersky's new logo, captioned "Inoffensive"

Helping onlookers to fully digest the idea that labs and proper nouns are for nerds, the company said the new look “reflects the evolution of our business focus from ‘cybersecurity’ towards the wider concept of ‘cyber-immunity’.”

The logo is constructed from “geometric and mathematically exact letter forms, representing the top-class software engineering expertise that the company originated from and to which we remain committed. In line with the name change, we have also dropped the word ‘Lab’.”

And just when people got used to using the singular.


On the rebrand, the man himself, Eugene Kaspersky, graced us with this fizzy can of quote: “Since we founded our company more than 22 years ago we’ve seen both the cyberthreat landscape and our industry evolve and change beyond recognition, while witnessing the growing role of technology in our lives both at work and at home.

“Today the world has new needs, and our rebranding reflects our vision to meet those needs – not just for today, but well into the future. Building upon our successful track record in protecting the world from cyberthreats, we’ll also help build a safer world that’s immune to cyberthreats. A world where everyone is able to freely enjoy the many benefits that technology has to offer.”

Perhaps as a follow-up to Kaspersky’s recent escapades in finding out why industry experts are critical of the company, its marketing department also gently enquired what people hated about its logo.


Indeed, the security house is no stranger to controversy. Before the new hotness was banning Huawei kit from everything, Kaspersky fell prey to a US campaign forbidding the vendor from bringing its red triangles anywhere near federal boxen in 2017. Even more controversially, serif and sans serif (pic) lay together like lamb and lion across its branding portfolio.

But “in pursuit of its mission to save the world” its visual identity now reflects its “core values” and the essence of what Kaspersky stands for as an organisation.

The jury’s out as to whether this beautiful logo and mission will sweeten up Uncle Sam, but The Register, for one, looks forward to our updated socks and Eau d’Eugene. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/05/not_kaspersky_lab_kaspersky/