Why FedRAMP Matters to Non-Federal Organizations

Commercial companies should explore how FedRAMP can help mitigate risk as they move to the cloud.

The word “compliance” is not a popular one for any technology provider. Whether an organization is in retail, manufacturing, healthcare, or government, navigating complex regulatory environments and processes can hinder technology advancement. While it’s often considered a “necessary evil,” compliance helps companies maintain high security and data management standards and mitigate security risk, which in the long run is good for everyone. We need to look for better ways to be compliant and still innovate.

Consider the Federal Risk and Authorization Management Program (FedRAMP). FedRAMP is a “government-wide program that provides a standardized approach to security assessment authorization” for cloud service provider (CSP) solutions. FedRAMP ensures cloud offerings are secure enough to be used by federal agencies, including agencies handling sensitive information and data.

FedRAMP was introduced in 2011 to support the Cloud First policy (now the Cloud Smart policy), which aimed to accelerate the adoption of secure cloud solutions among federal agencies. Rather than allowing each agency to maintain its own individual requirements, FedRAMP streamlines authorizations by creating unified standards based on agencies’ risk needs. The goal is to improve agencies’ protection of information, streamline and reduce the costs of security assessments, and better enable agencies to modernize operations with secure cloud solutions.

This movement to capitalize on the benefits of the cloud is not exclusive to federal agencies — far from it. FedRAMP was created to accelerate adoption within federal government, but non-federal organizations are starting to benefit as well, including critical infrastructure and commercial entities.

FedRAMP for Critical Infrastructure
Any non-federal organization working directly with the government will likely need to comply with the same FedRAMP requirements as agencies themselves. This includes universities and state agencies receiving federal grants, as well as defense contractors that support agency operations in cloud environments. Any federally funded program run by a non-federal organization needs to adhere to FedRAMP guidelines, and so do the technology providers involved in that program.

Perhaps the biggest non-federal group of organizations looking at FedRAMP solutions today are those in critical infrastructure. Some organizations — for instance, a manufacturer providing components for a missile system — work directly with federal agencies and are required by law to use FedRAMP solutions. However, other organizations across banking, healthcare, insurance, and utility industries are being directed to FedRAMP, too, because these are highly regulated sectors. So, federal agencies are pushing companies in these industries to better ensure the security of citizens’ personal information.

Like federal agencies themselves, critical infrastructure companies are feeling the pressure to modernize operations with the cloud while simultaneously being cautious because of security concerns. These organizations are wary, and rightfully so, of being tomorrow’s data breach headline. FedRAMP-authorized cloud solutions can ease those concerns, letting these companies explore the latest cloud solutions vendors have to offer while getting the same level of security assurance as federal agencies.

Using FedRAMP to Guide Commercial Investment
Commercial businesses across industries understand the benefits that cloud solutions offer in terms of greater operational flexibility and reduced costs, but they don’t face the requirements or direct government pressure to use FedRAMP-authorized products. In fact, not all FedRAMP-approved solutions are even available to commercial customers. This depends on how agreements are set up between the software-as-a-service provider and the federal agency that sponsored the authorization, according to FedRAMP guidelines. Of the roughly 17,000 cloud applications available, only around 300 are approved by FedRAMP, and only a limited selection of those will be available for commercial use.

So, why should commercial organizations pay attention to FedRAMP? The answer is trust and confidence.

By understanding the federal cloud marketplace, commercial organizations can identify cloud solutions that can benefit their adoption strategy and be assured those solutions meet the highest security standards. Not only do FedRAMP solutions undergo hundreds of security control checks, but the program also includes continuous monitoring that requires providers to immediately flag any product issues to the sponsoring agency. This security transparency gives everyone — federal agencies, critical infrastructure, or other commercial companies — more confidence in both the cloud solution and the CSP providing it.

And while a given FedRAMP solution may not be authorized for commercial use, many CSPs are taking the security improvements gained from the validation process and incorporating them into their commercial solutions. In other words, many commercial products are becoming more secure as a result of CSPs investing in FedRAMP. Tracking the technology providers that have achieved FedRAMP authorization gives commercial companies a good sense of where to look for advanced, highly secure cloud solutions.

A “Universal FedRAMP” Future
The FedRAMP program today is a primary driver in the acceleration of cloud adoption for federal agencies. While the program is still evolving, it has undoubtedly delivered, providing organizations with trusted cloud solutions that federal agencies can be confident using.

As the program matures, there will be an opportunity to embrace a more consistent, “universal FedRAMP” experience that guides secure cloud investments across non-government sectors as well. For now, commercial companies should be proactive in exploring how FedRAMP can help mitigate risk on their journey to the cloud.

Daniel Kent is the Chief Technology Officer for the Cisco Systems Public Sector organization. His group assists government transformation by delivering technologies, services, and offerings into public sector vertical markets including DoD, education, government, and … View Full Bio

Article source: https://www.darkreading.com/cloud/why-fedramp-matters-to-non-federal-organizations/a/d-id/1334849?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Today’s Cybercriminals Sneak into Your Inbox

The tactics and techniques most commonly used to slip past security defenses and catch employees off guard.

Cybercriminals are constantly altering their techniques to bypass increasingly advanced technical controls in order to deliver credential phishing attacks, business email compromise (BEC), and different forms of malware to unsuspecting users who rarely think twice about clicking.

Between Oct. 2018 and March 2019, researchers with the Cofense Phishing Defense Center analyzed 31,429 malicious emails. Credential phishing attacks fueled the bulk of emailed cyberattacks at 23,195, followed by malware delivery (4,835), BEC (2,681), and scams (718). Subtle tactics, like changing file types or using shortened URLs, are giving hackers a hand.

“We do continue to see them evolve with simple adjustments,” says Cofense co-founder and CTO Aaron Higbee. Credential-phishing emails using fake log-in pages are tough to stop at the gateway because the associated infrastructure often doesn’t seem malicious. Some campaigns disguise their intent by sending emails from genuine Office 365 tenants using already compromised credentials or legitimate accounts – and a fake login page hosted on Microsoft infrastructure is “nearly impossible” to distinguish from a legitimate one.

Researchers report many secure email gateways don’t scan every URL; instead, they only focus on the type of URLs people actually click. But with more phishing attacks leveraging single-use URLs, enterprise risk grows. Attackers only need one set of legitimate credentials to break into a network, which is why credential phishing is such a popular attack technique, they explain.

Cloud adoption is changing the game for attackers hunting employee login data. Businesses are shifting the location of their login pages and, consequently, access to network credentials.

“As organizations continue to move to cloud services, we see attackers going after cloud credentials,” Higbee says. “We are also seeing attackers use popular cloud services like SharePoint, OneDrive, Windows.net to host phishing kits. When the threat actor can obtain credentials, they are able to log into the hosted service as a legitimate user.”

It’s tough for organizations to defend against cloud-based threats because they don’t always have the same visibility into logs and infrastructure as they do in the data center. It’s a complex issue, Higbee continues, because businesses may engage with cloud providers and fail to include security teams, which need to be kept in the loop on monitoring and visibility needs.

Special Delivery: Malware

More attackers are using atypical file types to bypass email gateways’ attachment controls and deliver payloads. As an example, researchers point to Windows 10’s change in file handling for .ISO files, which gave hackers an opportunity to shift away from the .ZIP or .RAR files usually inspected by security tools. In April 2019, Cofense saw attackers rename .ISO files to .IMG, successfully transmitting malware through secure gateways.

“The gateway sees this as a random attachment, but when you download the file to the device, Windows 10 treats it as an archive and opens it in explorer allowing the victim to click the contents within,” says Higbee. “Nothing changed in the malware, just the file extension name.”
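
Because the payload itself is unchanged, defenders can catch this trick by inspecting file content rather than trusting the declared extension. Below is a minimal sketch of that idea in Python, assuming the standard ISO 9660 layout (the “CD001” signature at byte offset 32769); it is an illustration of content-based detection, not Cofense’s tooling or a complete gateway control.

  # Sketch: identify a disk image by its content signature instead of its file
  # extension. ISO 9660 images carry the "CD001" magic at byte offset 32769
  # (inside the primary volume descriptor), whether the file is named .iso,
  # .img, or something else entirely.
  import sys

  ISO_MAGIC = b"CD001"
  ISO_MAGIC_OFFSET = 32769  # sector 16 * 2048 bytes, plus the 1-byte type field


  def looks_like_iso(path: str) -> bool:
      """Return True if the file carries an ISO 9660 volume descriptor."""
      try:
          with open(path, "rb") as fh:
              fh.seek(ISO_MAGIC_OFFSET)
              return fh.read(len(ISO_MAGIC)) == ISO_MAGIC
      except OSError:
          return False


  if __name__ == "__main__":
      for attachment in sys.argv[1:]:
          if looks_like_iso(attachment):
              print(attachment + ": ISO image (inspect contents, whatever the extension says)")
          else:
              print(attachment + ": not an ISO 9660 image")

Real gateways layer many such signatures together; the point is simply that renaming the extension changes nothing a content check cares about.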

There is a challenge in defending against these types of threats because, as Higbee points out, there are legitimate attachment types you can’t block without disrupting the business. “We see this with PDF files that include links to the malicious site that might be a spoofed login page where they could capture credentials,” he adds. Businesses can’t blindly block these file types.

Some attackers rely on “installation-as-a-service,” through which they can pay to have malware installed on a machine, or group of machines, anywhere in the world. As Cofense points out, Emotet started as a banking Trojan but gained popularity as a loader for other malware; now, its operators have transformed Emotet into a complex bot responsible for several functions.

Cyberattackers who sent malware via malicious attachments in the past year exhibited a strong preference (45%) for exploiting a Microsoft Office memory corruption vulnerability (CVE-2017-11882). In previous years, they had heavily used malicious macros, which only accounted for 22% of malware delivery tactics this past year.

CVE-2017-11882 lets attackers abuse a flaw in Microsoft Equation Editor that enables arbitrary code execution. They can chain together multiple OLE objects to create a buffer overflow, which can be used to execute arbitrary commands, often launching malicious executables. Equation Editor is an old application that lacks the security protections of more recent Microsoft programs; researchers expect the flaw to be exploited less as companies patch and move to newer apps.
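
Patching (or removing Equation Editor outright) is the real defense, but documents that embed the legacy Equation object are easy to flag heuristically. The sketch below is a rough illustration rather than a production scanner: it searches a file for the Equation Editor ProgID and for the hex-encoded form of its CLSID ({0002CE02-0000-0000-C000-000000000046}) as it is commonly seen in RTF object data, matching case-insensitively because attackers often case-mangle these strings.

  # Sketch: crude heuristic scan for documents embedding the legacy Equation
  # Editor OLE object abused by CVE-2017-11882. It only flags files worth a
  # closer look; it is no substitute for patching.
  import re
  import sys

  # ProgID of Equation Editor 3.0, frequently case-mangled in malicious RTFs.
  PROGID = re.compile(rb"equation\.3", re.IGNORECASE)
  # Hex-encoded CLSID {0002CE02-0000-0000-C000-000000000046} as it tends to
  # appear in RTF object data (GUID fields serialised little-endian).
  CLSID_HEX = re.compile(rb"02ce020000000000c000000000000046", re.IGNORECASE)


  def embeds_equation_editor(path: str) -> bool:
      """Return True if the file contains Equation Editor object indicators."""
      with open(path, "rb") as fh:
          data = fh.read()
      return bool(PROGID.search(data) or CLSID_HEX.search(data))


  if __name__ == "__main__":
      for doc in sys.argv[1:]:
          if embeds_equation_editor(doc):
              print(doc + ": embeds an Equation Editor object - inspect before opening")
          else:
              print(doc + ": no Equation Editor indicators found")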

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/perimeter/how-todays-cybercriminals-sneak-into-your-inbox-/d/d-id/1334872?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

All G Suite users to get Gmail ‘confidential’ mode

Google announced on Wednesday that on 25 June 2019, its Gmail confidential mode will be switched on by default as the feature becomes generally available.

The feature gives G Suite users who use Gmail the option to send emails with expiration dates or to revoke previously sent messages. It also prevents recipients from forwarding, copying, printing, or downloading messages. Since confidential mode will be switched on by default, admins will have to switch it off if they so choose – for example, if they’re in industries that face regulatory requirements to retain emails.

Google introduced confidential mode for personal Gmail accounts last year and made the beta available in March 2019.

The screenshot/photo caveats still apply

As with other ephemeral-messaging services, including Snapchat and ProtonMail, there’s nothing stopping recipients from doing a screen grab of a message or simply taking a photo of it.

And as we noted in April 2018, when Google first gave admins a heads-up about confidential mode, there’s a reason why the company called it “confidential” rather than “private.”

For one thing, an email sent in confidential mode isn’t encrypted end-to-end. That’s unlike ProtonMail, the end-to-end encrypted, self-destructing email service.

Into the Vault with you

For another thing, confidential emails are going to live on Google’s servers.

As Google explains on its help center, its confidential mode works with Vault, a web-based Google storage spot where organizations can retain, hold, search, and export data to support their archiving and eDiscovery needs.

When somebody sends a message in confidential mode, Gmail strips out the message body and any attachments from the recipient’s copy of the message and replaces them with a link to the content. Gmail clients make the linked content appear as if it’s part of the message, while third-party mail clients display a link in place of the content.

Vault can hold, retain, search, and export all confidential mode messages sent by users in your domain, Google says. Vault has no visibility into confidential messages’ content when it comes to messages sent to your organization from external parties, though.

To support Vault’s requirement to access confidential mode messages, Gmail attaches a copy of the confidential mode content to the recipient’s message, Google says. There are a few things to be aware of when it comes to that copy, namely:

  • It’s attached only when the message sender and recipient are in the same organization.
  • It’s only available to Vault.
  • Senders and recipients cannot access the copy from Gmail.
  • Third-party mail archiving tools cannot access the copy.
  • To delete all copies of a confidential mode message, you must delete it from the sender account and all recipients’ accounts.

How to use confidential mode

Confidential mode can be used on a desktop or through the mobile Gmail app.

Sending a confidential email

To switch it on:

  1. On your computer, go to Gmail, or on a mobile go to the Gmail app.
  2. Click Compose.
  3. In the bottom right of the window, click Turn on confidential mode.
    Tip: If you’ve already turned on confidential mode for an email, go to the bottom of the email, then click Edit.
  4. Set an expiration date and passcode. These settings impact both the message text and any attachments.
    • If you choose No SMS passcode, recipients using the Gmail app will be able to open it directly. Recipients who don’t use Gmail will get emailed a passcode.
    • If you choose SMS passcode, recipients will get a passcode by text message. Make sure you enter the recipient’s phone number, not your own.
  5. Click Save.

Revoke access to a sent email

You can also remove access early to stop a recipient from viewing the email before the expiration date. Here’s how:

  1. On your computer, open Gmail.
  2. On the left, click Sent.
  3. Open the confidential email.
  4. Click Remove access.

Receiving a confidential email

If you’re the recipient of an email sent in confidential mode:

  • You can view the message and attachments until it expires or the sender revokes access.
  • You can’t copy, paste, download, print, or forward the message or attachments.
  • You might need to enter a passcode to open the email.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/j2EUo0wJ38c/

Apple sunsets iTunes

Sayonara, music lovers: you won’t have Apple’s much-maligned, bloated iTunes to kick around anymore. Instead, you’ll have to aim your kicks in three directions, since Apple has decided to split its 18-year-old digital hub into three standalone desktop apps called Music, Podcasts and TV.

The move was announced on Monday at Apple’s Worldwide Developer Conference.

Splitting up iTunes into three desktop apps will be similar to how those services are already divided on iPhones and iPads. According to CNN, Apple is keeping iTunes as a standalone iOS app and on Windows PCs.

Content storefronts like iTunes have pulled disappearing acts on content before. Like, say, when Apple removed movies from its Canadian Store and left a miffed Canadian man purchased-movie-less.

Fear not (or rather, fear as much as normal, given the above content whisk-aways), for iTunes’ disappearance isn’t going to mean that your libraries or previous purchases are going up in smoke. They’ll be maintained in each new app on Mac computers, an Apple spokesperson told CNN.

According to Ars Technica, Apple didn’t explain what’s going to happen with iTunes for Windows. Nor did it give details about how existing users will be able to port media libraries from iTunes to the new apps. Neither did it mention books, though much of iTunes’ book and audiobook functionality has already been moved to Apple’s Books apps, according to Ars.

However, Ars followed up with Apple to get some broad answers to users’ questions about old libraries, Windows and more. One such answer:

Apple Music in macOS Catalina will import users’ existing music libraries from iTunes in their entirety, Apple says. That includes not just music purchased on iTunes, but rips from CDs, MP3s, and the like added from other sources.

This news was anticipated. Subscription services have taken over the digital music market: according to IHS Markit Research, in 2018, subscription services for music, such as Spotify, accounted for 80% of online music and video revenue, while online video subscriptions worldwide (613.3 million, a 27% jump over 2017) outnumbered cable subscriptions (556 million).

As goes the industry, so goes Apple: it’s been keen on pushing users to sign up for its Apple Music subscription service. Why wait for customers to buy songs piecemeal? Charging a monthly fee instead is a nice, reliable revenue stream.

Logical as this all is, we have to ask the big question: is our new app trio going to open up the floodgates to more extraneous, potentially unwanted material, a la U2 and that free album, Songs of Innocence, that got force-fed onto people’s devices without opt-in or say-so?

Time will tell! Though one imagines that Apple, and Bono, learned their lesson when some users (and some of us security curmudgeons) looked that gift horse in the mouth and said get off my lawn, you darn gift horse.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/G0bNi7Nxf8Y/

US visa applicants required to hand over social media info

Visa applicants to the US are now required to submit five years of social media account information.

This will give the government access to personal data we share on social media, such as photos, locations, dates of birth, dates of milestones and more.

For now, the State Department is only requesting account names. Access to those social media handles will enable the government to get at whatever data we’ve publicly shared.

No passwords required (yet)

However, the idea to request passwords has been floated in the past: In 2017, then-Homeland Security Secretary John Kelly suggested that asking for passwords was under consideration.

The US State Department already collects information on visa applicants such as previous addresses and contact information. The new policy, which went into effect on Friday, requires “most” visa applicants, including temporary visitors, to list their social media “identifiers” in a drop-down menu along with other personal information, the Hill reported.

Those social media identifiers will be used as one part of a background check that includes reviews of watchlists maintained by the US. At this point, the drop-down menu only includes the big social media platforms, though an official told the Hill that the form will soon accommodate all sites that visa applicants may use.

Visa applicants have the option of saying that they don’t use social media, but the official told the publication that lying could lead to “serious immigration consequences.”

Hill.TV quoted the official:

This is a critical step forward in establishing enhanced vetting of foreign nationals seeking entry into the United States. As we’ve seen around the world in recent years, social media can be a major forum for terrorist sentiment and activity. This will be a vital tool to screen out terrorists, public safety threats, and other dangerous individuals from gaining immigration benefits and setting foot on US soil.

“Extreme vetting” in action

The State Department published its intent to implement this policy last year. At the time, it said that it was also looking to collect telephone numbers, email addresses and international travel for the previous five years, whether the applicant has been deported or removed from any country, and whether specified family members have been involved in terrorist activities.

From the sounds of it, that’s what’s likely to come next. Asking for five years of social media information is just the start: in the future, visa applicants will be required to hand over more extensive information on their travel history, the Hill reports.

All of this stems from President Trump’s March 2017 executive order seeking to put “extreme vetting” into place. The  “Protecting The Nation From Foreign Terrorist Entry Into The United States” order imposed a 90-day travel ban, with some exceptions, on the citizens of seven predominately Muslim countries: Iraq, Syria, Iran, Sudan, Libya, Somalia and Yemen.

Invasive and ineffective?

The new vetting policy has been put into place over the scorn and ridicule of those who’ve pointed out that…

  • “Nefarious” people don’t share their cunning plans for terrorist attacks on social media (with at least one notable exception).
  • Nothing would stop evil-doers from lying about their social media presence or providing fake account names or even framing others by providing their targets’ social media handles (besides the fact that lying to the federal government is illegal and will have serious consequences, as the State Department official noted in the Hill.TV interview).
  • Rights groups and civil liberties organizations call it “highly invasive” and “ineffective”.

Hina Shamsi, the director of the American Civil Liberties Union’s National Security Project, said on Sunday that the new vetting policy is “a dangerous and problematic proposal” that “does nothing to protect security concerns but raises significant privacy concerns and First Amendment issues for citizens and immigrants.”

Research shows that this kind of monitoring has chilling effects, meaning that people are less likely to speak freely and connect with each other in online communities that are now essential to modern life.

As rights groups have pointed out in the past, cultural and linguistic barriers increase the risk that social media activity will be misconstrued.

Would you trust somebody from another culture who doesn’t speak your language to understand your jokes or your sarcasm?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_hCsh-olcD8/

GandCrab ransomware crooks to shut up shop

The authors of the GandCrab ransomware strain are shutting their ransomware-as-a-service portal, allegedly walking away with a cool $150m.

The announcement appeared on a hacker forum, and cybersecurity researcher ‘Damian’ tweeted the news on 1 June.

GandCrab, which first appeared in January 2018, operated using a ransomware-as-a-service (RaaS) model – meaning the authors aren’t the only people using it (if they use it at all). Instead, they let others launch their own campaigns with it and take a small cut of the profits.

In a message on the hacking forum, the perpetrators explained that their broader community of customers had made far more money:

For all the good things come to an end. For the year of working with us, people have earned more than $2 billion.

They said that the community earned $2.5m per week on average, adding that they personally earned over $150m per year as part of the cybercrime venture.

We successfully cashed this money and legalized it in various spheres of white business both in real life and on the internet.

GandCrab is a slick operation, and its logo, modern web interface, vanity Dark Web URL, and unusual choice of the Dash cryptocurrency for payments give it an innovative and professional veneer.

The ransomware is popular with cybercriminals and widely used, appearing in attacks as diverse as “spray and pray” malicious email campaigns and devastating targeted ransomware attacks.

The crooks behind it even showed a bit of PR savvy when they decided to “help” the beleaguered people of Syria by releasing the decryption keys for systems they’d crippled in that country for free.

They didn’t offer to compensate victims in that country for lost income though, or pay for people to do the decryption, despite their professed riches, and their conscience wasn’t pricked by indiscriminate GandCrab attacks on other targets, including hospitals, in other countries.

Signs of a conscience were also noticeably absent from the valedictory post. The malware authors used their sign off to congratulate themselves on a “well-deserved retirement” and to thumb their noses at both white hats and law enforcement agencies:

We have proven that by doing evil deeds, retribution does not come. We proved that in a year you can earn money for a lifetime.

And just in case you’d forgotten this is just a bunch of gangsters extracting money with menaces, they also had a message for victims, and it wasn’t sorry.

Victims – if you buy, now. Then your data no one will recover. Keys will be deleted.

In other words: cough up immediately.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/H_zOzHfEvSU/

Synthetic clicks and the macOS flaw Apple can’t seem to fix

What’s more embarrassing than a researcher revealing a security oversight in a company’s software?

In the case of Apple, it would be when that software, macOS 10.15 ‘Catalina’, hasn’t even shipped to users yet.

The bearer of bad news was noted researcher Patrick Wardle of Digita Security, who used last weekend’s Objective by the Sea conference in advance of macOS 10.15’s launch this week to reveal a weakness through which malicious apps could exploit ‘synthetic clicks’ – automated clicks or keystrokes made by an app in the interests of accessibility.

By hijacking this mechanism, malware could automatically generate synthetic clicks to bypass the prompts that ask the user to authorise sensitive actions, letting it install software, hijack webcams and microphones, or access Apple’s Keychain password manager, none of which would be a good thing.

Because macOS security depends on the response to such alerts, malware that can simulate these clicks on behalf of the user would have a dangerous amount of power.

In 2017 it was realised that FruitFly malware had adopted the technique as far back as 2008, as did DevilRobber in 2011 and Genieo in 2014, so the threat is more than theoretical.

The flaw

To counter this, Apple introduced a whitelist that limited access to synthetic clicks to applications approved by the user.

However, it was discovered that, for reasons of backwards compatibility, Apple had built some exceptions to this rule into the Transparency, Consent, and Control (TCC) system, including for the open source VLC media player, Adobe Dreamweaver, and the Steam games platform.

According to Wardle, the problem with the whitelist is that while it checks that an app is allowed access, it doesn’t check what that app is actually doing. If an attacker appended code to a legitimate app, the control would fail. Wardle told ZDNet:

The issue is that the verification is incomplete, so they only end up checking that the app is signed by who they think it should be (i.e. VLC, signed by VLC developer), but not the executable code or application resources.
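
The deeper validation Wardle describes as missing, namely checking that a whitelisted bundle’s executable code and resources still match its signature, can be approximated with Apple’s codesign tool. The sketch below assumes macOS with codesign on the PATH; it illustrates the idea of full-bundle verification rather than reproducing Apple’s (or Wardle’s) actual checks.

  # Sketch: ask codesign to validate an app bundle's code and resources, the
  # kind of deeper check a signer-identity-only whitelist skips. A bundle with
  # attacker code appended or resources swapped should fail a --deep --strict
  # verification even though its signing identity still looks right.
  import subprocess
  import sys


  def bundle_is_intact(app_path: str) -> bool:
      """Return True if codesign validates the bundle's code and resources."""
      result = subprocess.run(
          ["codesign", "--verify", "--deep", "--strict", app_path],
          capture_output=True,
          text=True,
      )
      if result.returncode != 0:
          print(result.stderr.strip())
      return result.returncode == 0


  if __name__ == "__main__":
      # VLC is used here only because it is one of the whitelisted apps named above.
      app = sys.argv[1] if len(sys.argv) > 1 else "/Applications/VLC.app"
      print(app + ": " + ("intact" if bundle_is_intact(app) else "modified or unsigned"))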

Running sore

Apple’s embarrassment over the latest discovery will be compounded by the fact that Wardle has been scratching away at the same weakness for years.

In 2017, Wardle revealed how macOS High Sierra’s mouse keys feature (a way of controlling the mouse pointer from the keyboard) could be abused to sneakily bypass the OS’s protection against synthetic click exploits.

Apple patched the issue but in 2018 he was back with another proof-of-concept that made possible a partial bypass of protections in macOS Mojave.

Every time Wardle discovers a weakness in macOS security, Apple patches it, after which he returns with another gotcha, timed for maximum effect to coincide with the release of a new version of the OS.

It’s uncomfortably reminiscent of another researcher, José Rodríguez, who has developed a habit of finding flaws Apple thought it had fixed in the iOS lock screen.

As with previous weaknesses in this layer, a patch will be released at some point. But it’s hard to escape the impression that, in these two areas at least, Apple’s security approach is to fix holes one at a time rather than analysing their underlying causes.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tlofJGPOGwI/

Strewth: Hackers slurp 19 years of Oz student data in uni’s second breach within a year

The Australian National University (ANU) today copped to a fresh breach in which intruders gained access to “significant amounts” of data stretching back 19 years.

The top-ranked Oz uni said it noticed about a fortnight ago that hackers had got their claws on staff, visitor and student data, including names, addresses, dates of birth, phone numbers, personal email addresses, emergency contact details, tax file numbers, payroll information, bank account details and passport details. It said the breach took place in “late 2018” – the same year it ‘fessed up to another lengthy attack.

Students will be miffed to find out that someone knows they had to retake second-year Statistics since academic records were also accessed.

The uni insisted: “The systems that store credit card details, travel information, medical records, police checks, workers’ compensation, vehicle registration numbers, and some performance records have not been affected.”

The news comes less than a year after the Canberra-based uni admitted its networks had been hit by a months-long attack, which many in the country’s media theorised had originated in China – a claim the People’s Republic strenuously denied. At the time, ANU said it had “been working in partnership with Australian government agencies for several months” to fend off the attack.

In a statement released today, the institution’s vice-chancellor, Brian Schmidt, admitted that if the uni had not made those upgrades last year in the wake of the early 2018 attacks, this breach would have gone undetected.

He said: “As you know, this is not the first time we have been targeted. Following the incident reported last year, we undertook a range of upgrades to our systems to better protect our data. Had it not been for those upgrades, we would not have detected this incident.”

Schmidt described the attacker as a “sophisticated operator” and said the uni had “no evidence that research work has been affected”.

The uni is home to the ANU Research School of Astronomy and Astrophysics and operates the country’s largest optical observatory. Among other things, it houses the SkyMapper project, which is robotically creating the “first comprehensive digital survey of the entire southern sky” and has been releasing the data set on the internet.

Boffins at the uni are still looking for human eyeballs to grok Planet 9, the theorised but undiscovered planet beyond Pluto, in images released by the project. Those interested can seek it or other objects at our solar system’s edges here.

ANU is also home to iTelescope.Net, which looks after a network of internet-connected public telescopes popular among amateur and semi-professional astronomers across the globe.

The place is ranked 24th in the QS World University Rankings, but has a strong academic reputation. According to the rankings, it has more citations per faculty member than Cambridge.

The vice-chancellor, who chummily signed off as “Brian”, said:

For the past two weeks, our staff have been working tirelessly to further strengthen our systems against secondary or opportunistic attacks. I’m now able to provide you with the details of what occurred.

We believe there was unauthorised access to significant amounts of personal staff, student and visitor data extending back 19 years.

Depending on the information you have provided to the University, this may include names, addresses, dates of birth, phone numbers, personal email addresses and emergency contact details, tax file numbers, payroll information, bank account details, and passport details. Student academic records were also accessed.

The University has taken immediate precautions to further strengthen our IT security and is working continuously to build on these precautions to reduce the risk of future intrusion.

The uni set up direct phone and email help lines and increased its “counselling resources” for those affected.

Not to let us down, the outfit said it took the breach “extremely seriously” and had “profound regret”.

As the uni’s motto, Naturam Primum Cognoscere Rerum*, attests, above all, find out the “nature of things”. Perhaps the next upgrade will help it to actually fend off an attack. ®

* Derived from the Lucretius poem “De Rerum Natura” (book III, 1072)… the point of the poem was to explain Epicurean philosophy – moderation in everything – to a Roman audience.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/04/hackers_slurp_19_years_of_aussie_student_data/

What Cyber Skills Shortage?

Employers can solve the skills gap by first recognizing that there isn’t an archetypal “cybersecurity job” in the same way that there isn’t an archetypal “automotive job.” Here’s how.

It feels like every day, there’s another article citing the “cybersecurity skills shortage” as an obstacle to filling needed security jobs for the next decade. I disagree. There isn’t a significant skills gap. There is a market mismatch. Most employers aren’t looking at the people who are actually available; they toss up their hands, credit the skills shortage, and move on. But what’s really going on?

First off, the idea of cybersecurity skills is a pretty one-dimensional view of the landscape of what the modern worker needs to bring to the table. Sometimes, it evokes the image of a black-hoodied hacker who can break applications; or maybe the security operations center (SOC) analyst watching alerts from the application security tool that monitors that application.

Even these two workers have skills that aren’t really parallel. A hacker could be seen as just a quality assurance engineer, testing the negative space of an application (what it shouldn’t do), while the SOC analyst is an operator/incident manager, looking for anomalous operations and following time-tested investigative steps to understand what’s happening. So, how did we get to a belief in an insurmountable skills gap?

I suspect we glorified the polymaths of the industry: These are the security architects who can build complex software, break applications, understand distributed systems, manage complex organizations, reason about new and novel situations on the fly, and then cogently discuss them with executives and press.

That starts our hunt:

  • Employers look for candidates from top-tier universities who have enough experience to demonstrate competence, and target recruiting efforts around those individuals.
  • We complicate this in the US with incentives from different labor policies. We encode specific requirements for a position around degrees and years of experience. Companies limit their flexibility partly to comply with the “objective tests” standard for nondiscrimination and also to support visa-eligibility for technical staff.
  • Even if a talent acquisition team will be flexible on published requirements, it may be too late for many candidates, especially diverse ones. The confidence gap suggests that we’ll dissuade more women than men, and likely minorities as well. We’re choking off our pipeline before we even get started.

Bridging the Gap
Employers can solve their skills gap by recognizing, first and foremost, that there isn’t an archetypal “cybersecurity job” in the same way there isn’t an archetypal “automotive job.” Think about cars for a moment. There are diverse jobs, from mechanics to engineers to drivers to sales to adjusters to washers to fleet managers. And probably dozens more I’m not thinking of. That’s what the cybersecurity career field looks like.

We have hackers and analysts, certainly, but we also have program managers, educators, librarians, safety engineers, software engineers, architects, sales engineers, data scientists, finance officers, marketers, people managers, journalists, and even executives.

There isn’t one cybersecurity skill set across that group, nor is there only one way into the career field. So, stop looking in only one pipeline. You can create several pipelines, and focus on developing talent, which is something you should be doing with all of your staff anyway.

Internships
Probably the most obvious place to start is your internship program. An internship program is a way to find candidates, but it isn’t the end of talent acquisition; internships are just the start. Too often, companies hire interns and then effectively abandon them as entry-level workers. Considering the resources invested in recruiting interns, post-hire programs designed to advance and accelerate their skills and careers seem prudent.

We follow up internships in Akamai’s infosec team with an extended mentorship through our Architect Studio, where our newly hired researchers get support for several years, developing the skills needed to contribute successfully as complex-system architects. Some of our staff work directly for the Studio, with assignments on projects that help them grow and develop new skills. Some staff work in other teams but collaborate in development activities alongside the Studio. The goal is to create scaffolding around high-potential junior employees, with an eye to getting them out of junior roles as quickly as they are able to develop.

Technical Reskilling
An Akamai program I’m especially pleased with is our Akamai Technical Academy. This program takes candidates who haven’t necessarily gotten the “right” degree, who entered a different career field, or who have taken time out from the workforce. It’s a six-month classroom-based program in which incoming staff learn the bedrock skills to enter a six-month placement contract with an Akamai team, after which we usually hire them into a full-time job.

For infosec jobs, we don’t run a separate technical academy. We identify candidates in the core cohorts for quality assurance engineers, program managers, or operators, who look like they’d be good fits for us (often, by hearing them ask just the right number of hard questions), and bring them into a security job.

Insertion Jobs
Sometimes, we just hire right out of other career fields. Most cybersecurity jobs aren’t entry-level positions. They’re midcareer positions, requiring skill and competence in non-security areas. All too often, we promote cybersecurity staff into these jobs, taking them away from work they might be good at, and assigning them to areas where they have less experience. A better approach is to find career fields that already have the skills you really need.

The heart of a security compliance program, for instance, is a library of documentation, so we’ve hired librarians. Our threat research is a set of publications, so we hire journalists. Our risk governance activities are wide-scale safety programs, so we hire engineers with backgrounds in safety and logistics. Then we support these folks with on-the-job training and experience in the cybersecurity essentials to succeed.

In reality, almost all hires are “insertion” jobs, because they’re coming from a different environment to yours. Surrounding all of your staff with good scaffolding to help them make the adjustment to your environment and to a new set of work duties is going to maximize the benefits for everyone. And it’s going to give you access to a wider, deeper, and more diverse talent pool.

And that’s how you close the cybersecurity skills gap.

Andy Ellis is Akamai’s chief security officer and his mission is “making the Internet suck less.” Governing security, compliance, and safety for the planetary-scale cloud platform since 2000, he has designed many of its security products. Andy has also guided Akamai’s IT … View Full Bio

Article source: https://www.darkreading.com/application-security/what-cyber-skills-shortage/a/d-id/1334848?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Devs slam Microsoft for injecting tech-support scam ads into their Windows Store apps

Application makers are crying foul after some of their programs distributed via the Windows Store popped open tech-support scam ads on users’ desktops.

A thread in Microsoft’s support forum, active since April 28, details the scandal: programmers who use Redmond’s Advertising Software Development Kit (SDK) to display ads in their apps, available via the Windows Store, to generate revenue say bad banners are being forced open in their users’ browsers.

According to the developers, netizens who downloaded their apps are being interrupted by their browsers opening up new tabs displaying an alarming message that wrongly claims their PC is infected or damaged, and are directed to a scammer-owned website that offers to fix the supposedly broken machine for cash. It even affects Xbox One owners who use software that employs the SDK, it is claimed.

It appears, therefore, that miscreants were, and still are, able to book rather intrusive and misleading ads with Microsoft to appear via apps using the SDK – and users have been complaining to the affected software developers. The devs, in turn, have gone to Microsoft to figure out why their applications are forcing open fraudulent adverts for scareware.

“This is very worrying given Microsoft tout store apps as being totally secure,” said Reg reader Dave Wade who tipped us off to the kerfuffle. “Their support team provide a temporary work around and then suggest you open a paid for support contract.”

Indeed, developers grew frustrated with Microsoft’s response, or lack thereof, over the course of the thread. A number of programmers noted that users were blaming them for the unwanted pop-ups, and that their review scores and download numbers were suffering as a result of the outbreak.

“This is killing the user experience for any app which uses Microsoft’s Advertising SDK,” said a disgruntled coder at GameFace.LLC. “How do you expect users to trust using any app on the Microsoft Store when they keep having a browser popped open with an obvious scam site?”

El Reg asked Microsoft for comment, though Redmond was only able to return a boilerplate response via a spokesperson.

“Microsoft does not send unsolicited messages to request personal or financial information, to provide technical support to fix your computer, or to send intimidating alerts with threats to take action,” the US tech giant’s non-answer answer read. “If you receive an unsolicited message or see a pop-up on your computer—don’t take the risk—just close your browser.”

In the meantime, users and developers alike remain in the dark and frustrated over the continued barrage of nuisance ads. Avoid Redmond’s Advertising SDK, it seems. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/04/scareware_ads_windows_store/