STE WILLIAMS

How bad can the new spying legislation be? Exhibit 1: it’s called the USA Liberty Act

Analysis The US Senate Judiciary Committee has unveiled its answer to a controversial spying program run by the NSA and used by the FBI to fish for crime leads.

Unsurprisingly, the proposed legislation [PDF] reauthorizes Section 702 of the Foreign Intelligence Surveillance Act (FISA) – which allows American snoops to scour communications for information on specific foreign targets.

It also addresses the biggest criticisms of the FISA spying: that it was being used to build a vast database on US citizens, despite the law specifically prohibiting it; was being abused to do a mass sweep of communications, rather than the intended targeting of individuals; and that there was no effective oversight, transparency or accountability built into the program.

But in case you were in any doubt that the new law fails to shut down the expansive – and in some cases laughable – interpretations placed on FISA by the security services, you need only review the proposed legislation’s title: the USA Liberty Act. Nothing so patriotic-sounding can be free of unpleasant compromises.

And so it is in this case. While the draft law, as it stands, requires the FBI to have “a legitimate national security purpose” before searching the database and to obtain a court order “based on probable cause” to look at the content of seized communications, it still gives the domestic law enforcement agencies the right to look at data seized on US citizens by the NSA. And agents only need supervisory authority to search for US citizens’ metadata.

Huh

That is very, very far from what FISA was intended to do: the clue being in the “F” for “Foreign” in FISA. This legislation would legitimize the highly questionable interpretation that the NSA and FBI decided to place on Section 702: that the information gathered under FISA didn’t require another step of authorization to look for American citizens’ information – something that many claim breaks the Fourth Amendment on unreasonable search.

This legislative approach lends weight to the argument pushed by the security services in the wake of other illegal spying operations: that metadata is sufficiently innocuous that it does not require legal protections. That is a conclusion that many civil liberties and privacy groups fiercely disagree with.


The “safeguards” set out in the proposed law are similar to those introduced to other spying programs: the surveillance services must keep records of their queries and submit to Congressional oversight; and the Director of National Intelligence (DNI) must report to Congress twice a year on the number of US citizens whose communications are collected, and the number of requests that identified US citizens.

Again, though, there is implicit acceptance of the snoopers’ questionable assumptions over Section 702 built into this approach. The details on US citizens are referred to as being “incidentally collected” – language that is used by the security services to justify not providing constitutional protections.

There is also precious little evidence that forcing the DNI to provide a report to Congress has a knock-on impact on the spies’ accountability or transparency. All it has resulted in so far is the DNI either outright lying to Congress, or pretending to have heard a different question to the one asked.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/05/usa_liberty_act/

Another W3C API exposing users to browser snitching

Yet another W3C API can be turned against the user, warns privacy boffin Lukasz Olejnik – this time, it’s in how browsers store and check credit card data.

As is so often the case, a feature created for convenience can be abused in implementation. To save users the tedious task of entering the 16 digits of their credit card number, four for the expiry date, and three for the CVV, the Web Payments API lets websites pull numbers stored in browsers.

Olejnik, an invited expert in the W3C’s Privacy Interest Group, writes that even without a full privacy assessment, it was easy to discover some serious vectors for misuse: fingerprinting, a frequent interest of Olejnik’s; and in Chrome, he found a way to reliably detect users in “incognito” mode, “a thing that generally should not be possible”.

Web Payments API is supported in Chrome and Edge, and is on the real-soon-now list in Firefox and WebKit.

The fingerprinting he discovered is quite specific: a site can detect which different cards the user may have stored. That’s because while the API tries to protect against enumeration attacks by rate-limiting the canMakePayment call (to once every 30 seconds), the limit is ineffectively applied:

“A website could simply use a bunch of iframes with scripts effectively running in different origins, meaning that the 30m quota is functionally irrelevant … one iframe could test for “visa”, another for “mastercard”, etc. At the end, iframes communicate test results to the parent frame.”

The result, he writes, is that iframes could capture all the payment instruments available to an individual user.
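The bypass is easier to grasp with a toy model. Below is a hedged sketch in Python – the PerOriginQuota class is a hypothetical stand-in for the browser’s per-origin rate limiter, not Chrome’s actual code – showing why a quota keyed on origin collapses when each probe comes from a different attacker-controlled iframe:

```python
import time

class PerOriginQuota:
    """Hypothetical stand-in for the browser's rate limiter:
    each origin may call canMakePayment once per 30-second window."""
    WINDOW = 30.0

    def __init__(self):
        self.last_call = {}  # origin -> timestamp of last allowed call

    def allow(self, origin, now=None):
        now = time.monotonic() if now is None else now
        last = self.last_call.get(origin)
        if last is not None and now - last < self.WINDOW:
            return False  # throttled: this origin asked too recently
        self.last_call[origin] = now
        return True

quota = PerOriginQuota()

# One origin probing three card networks in quick succession is throttled...
single_origin = [quota.allow("https://attacker.example")
                 for _ in ("visa", "mastercard", "amex")]

# ...but one iframe per attacker-controlled origin sails through,
# each frame testing a different card network and reporting back.
many_origins = [quota.allow(f"https://frame{i}.attacker.example")
                for i in range(3)]
```

Because the quota is tracked per origin, the attacker pays nothing beyond spinning up extra iframes – which is why Olejnik calls the limit functionally irrelevant.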

The second issue, incognito detection, arises because incognito mode skips a rule applied to normal attempts at payment.

The vector for abuse arose, perhaps ironically, because the API’s designers wanted to protect users from sites that might scam them by calling payments from multiple stored cards.

So the canMakePayment call can only be used once by a site requesting payment: a second call raises the exception NotAllowedError: Not allowed to check whether can make payment.

The slip-up is that when Olejnik tested this in Chrome, he found it didn’t work properly in Incognito mode. If a user had stored values for MasterCard and Visa, for example, the second call to the API returned a “true” value for both cards.

It would, he wrote, behave like that for all the cards a user stored, turning “a fingerprinting vector into an information leak!”.

Olejnik noted that he has reported the issue to the Chrome team. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/06/another_w3c_api_exposing_users_to_browser_snitching/

How Businesses Should Respond to the Ransomware Surge

Modern endpoint security tools and incident response plans will be key in the fight against ransomware.

The global rise of ransomware has businesses taking a closer look at their protective tools.

More than one-third (35%) of security pros in Dark Reading’s “The State of Ransomware” survey detected ransomware on their systems in the past year. Only 27% say modern antimalware tools are very effective in stopping ransomware; 56% think they are somewhat effective.

Half of IT practitioners believe it will be harder to prevent ransomware from infecting their systems two years from now, researchers found. This raises the question: what are security vendors doing to improve the effectiveness of their systems, and which should businesses use?

“Because ransomware is high-profile, it’s an opportunity for practitioners to be proactive and have a discussion about response and upgrading defenses,” says Mike Rothman, analyst and president at Securosis. “They go after everybody, and everybody can pay ransom.”

Advancing endpoint security

“One of the things we see businesses doing is turning to their messaging security provider first for answers and solutions,” says Rob Westervelt, research manager within IDC’s security products group. “That’s blocking it before it even gets to the end user, which ultimately is best as opposed to having the end user click a malicious attachment or malicious URL.”

When attackers bypass messaging filters and employees start clicking malicious attachments that made it into their inboxes, it becomes an endpoint security problem. While he doesn’t see many companies building new products to specifically protect against ransomware, Westervelt says there is more messaging from vendors about their ransomware capabilities. Some have begun to add new “bells and whistles” to monitor strange system behavior.

“You have to advance your endpoint protection,” says Rothman. “If you’re dealing with a system from 2013, you don’t really stand much of a chance against the attacks that are happening today.”

Most endpoint vendors, both traditional antivirus firms and disruptive startups like Cylance, can monitor for abnormal activity such as signs of files being encrypted quickly. Some tools, like Sophos’ Intercept X, have technology that can roll back encryption, Westervelt explains. Some solutions, instead of simply alerting to an attack, quarantine a system to ensure the infection doesn’t spread.

“Everyone in endpoint protection is starting to add file monitoring as a new capability in their system,” says Rothman. “Looking for anomalous file activity on the endpoint and stopping that … when folks start accessing files that haven’t been accessed in a long time, something funky is going on.”
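The “anomalous file activity” heuristic the vendors describe often boils down to entropy: freshly encrypted files look statistically random. Here is a minimal illustrative sketch in Python – the 7.5 bits-per-byte threshold is an assumption for demonstration, not any vendor’s actual setting:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 0 for repetitive data, approaching 8
    for random-looking (encrypted or compressed) data."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Illustrative cutoff: plain text rarely exceeds ~5 bits/byte,
    # while ciphertext sits close to the 8-bit maximum.
    return shannon_entropy(data) >= threshold
```

A monitor built on this idea would alarm when many files in quick succession cross the threshold – which is also where Westervelt’s false-positive warning bites, since compressed archives and media files score high too.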

Westervelt points to the growth of companies with a stronger focus on file access monitoring. Varonis, for example, focuses solely on data access: it’s not so much about looking for malware as it is about monitoring files for abnormal activity. CyberArk, another example, focuses on privileged account security. It’s not standard AV, he says, but it looks for ransomware behavior.

In addition to monitoring for anomalous file activity, Rothman also advises ensuring you have strong exploit protection and the ability to fight fileless attacks: those that don’t use the file system but store the encrypted payload in the registry.

“It’s about making sure you’re using modern defenses to deal with modern attacks,” he continues. “A lot of technology out there is not modern defense.”

The problem with additional ransomware protection is the heightened risk of false positives, Westervelt says. A system may start to flag employees who do a lot of encryption and file changes as part of their job, and block behavior that is abnormal but still valid.

“It only takes one false positive, one disruption of an important business deal to cause the CISO to lose their job,” he notes.

Preparing a response plan

Regardless of the level of your technical control, Rothman emphasizes the importance of developing a response plan. Many companies don’t have a plan, particularly midmarket organizations that pay little attention to security.

“They have to have that initial conversation about what to do if their machines get locked up,” he explains. “When your machines are mostly encrypted and showing the ‘Pay Us’ screen, that’s not the time to be figuring this stuff out.”

Rothman advises businesses to work through their response processes and what their tolerance would be for a certain set of scenarios. When those are decided, it’s time to practice.

“Practice identifies the holes and gaps in your process,” he explains. “The only way to figure out what works and what doesn’t work is to actually do it … some organizations use tabletop exercises. I can’t recommend that enough.”


Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/endpoint/how-businesses-should-respond-to-the-ransomware-surge/d/d-id/1330060?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Russian Hackers Pilfered Data from NSA Contractor’s Home Computer: Report

Classified information and hacking tools from the US National Security Agency landed in the hands of Russian cyberspies, according to a Wall Street Journal report.

Turns out the National Security Agency (NSA) may have suffered yet another data breach: in 2015, Russian state hackers stole classified cyber-attack and defense tools and information from the home computer of an NSA contractor, according to a Wall Street Journal report today.

The hack reportedly occurred via Kaspersky Lab antivirus software on the contractor’s home computer, where the AV flagged the NSA cyberspying tools and code. The breach wasn’t detected until the spring of 2016, and wasn’t known publicly until the WSJ report published today.

Just how the NSA contractor’s Kaspersky Lab software was apparently abused and exploited — or not — is under debate by experts; it could be a case of the application’s detection of the tools on the contractor’s system inadvertently landing in the wrong hands, they say, or the software could have been hijacked and hacked by the attackers during a software update, for instance.

The WSJ report meanwhile appears to shed light on what ultimately may have led to the US government’s recent ban of the Russian security vendor’s software. The Trump administration ordered all federal agencies to remove Kaspersky Lab’s products and services from their systems, citing concerns of a link between the company and the Russian government, which is already under fire for its role in meddling with the 2016 US presidential election.

The unnamed NSA contractor reportedly moved the data to his home to work after-hours, even though he was aware that removing classified information without approval is against NSA policy and potentially a criminal offense, the report said. The case is under investigation by the federal government. NSA employees and contractors have always been prohibited from using Kaspersky Lab software at work, and the NSA prior to this incident had recommended they not use it at home, either, the report said.

This marks the third case of an NSA contractor exposing or leaking classified information: the first being, of course, Edward Snowden, whose infamous theft and leak to journalists of NSA files in 2013 served as a wake-up call for the insider threat; and the second, the recent arrest of contractor Harold Martin, who had hoarded more than 50 terabytes of NSA documents for 20 years in his home and the trunk of his car.

Whether this latest NSA contractor leak leads directly to the mysterious Shadow Brokers group that since 2016 has been leaking and later offering for sale online a trove of NSA hacking tools and exploits is unclear at this point, but some security experts say this could be the long-awaited link to Shadow Brokers. “It seems to point in that direction,” John Bambenek, threat systems manager at Fidelis Cybersecurity, says of today’s report.

Meantime, just how Kaspersky Lab’s AV software fits into the case is unclear from the report. According to the WSJ, the software may have detected some of the NSA files as suspicious code, somehow cluing Russian hackers into the machine full of classified NSA information. As the report put it: “But how the antivirus system made that determination is unclear, such as whether Kaspersky technicians programmed the software to look for specific parameters that indicated NSA material. Also unclear is whether Kaspersky employees alerted the Russian government to the finding.”

Antivirus and other security software routinely submit newly detected, suspicious-looking samples to their malware databases and other threat intelligence resources, so the Russian threat actors may have either intercepted that traffic or even spotted it in another intelligence-sharing forum, security experts told Dark Reading. “The reality is they [antivirus programs] all do that,” Bambenek says.

He says he has seen classified documents posted on VirusTotal, the online malware-checking service used by researchers and even victim organizations to crowdsource malware discoveries. Threat intel-sharing is common practice among security researchers as well, he says.

“Malware systems that make use of the cloud often send your documents upstream for analysis,” explains Gary McGraw, vice president of security technology at Synopsys.

Kaspersky Lab researchers have worked closely with Interpol on cybercrime investigations, and the firm has outed multiple Russian advanced persistent threat actors, or nation-state groups, which confounds security experts analyzing the feds’ suspicions of Russian state involvement with Kaspersky Lab.

“I’ve worked with Kaspersky Lab for a long time, fighting antivirus back in the day, and they’ve always been stand-up guys who want to fight the good fight against malware actors,” says Joe Stewart, formerly the director of malware research at Secureworks and now a security researcher with Cymmetria.

One possible explanation for the NSA contractor’s machine compromise, Stewart notes, is a hack of the AV software. “Any time you’ve got a situation where software running on a machine has an update process, it can be compromised,” Stewart says.

Several major AV products, including Kaspersky Lab’s, have been outed with security vulnerabilities by researchers over the past few years.

Fidelis’ Bambenek says there’s always a chance a mole resides in any security software firm or organization. “That’s how espionage is done,” he says. He says he has no firsthand knowledge of that being the case at Kaspersky Lab, and the argument of collusion between the firm and the Russian government so far remains as “weak tea,” he says.

Other security experts see subterfuge. Dan Guido, co-founder and CEO of red-team and security research firm Trail of Bits, said via Twitter: “There are only 2 good answers: Either the Russian gov rides on KAV infrastructure globally or Kaspersky helps them do it one at a time.”

Kaspersky Lab denies any wrongdoing and shot down the WSJ report: “Kaspersky Lab has not been provided any evidence substantiating the company’s involvement in the alleged incident reported by the Wall Street Journal on October 5, 2017, and it is unfortunate that news coverage of unproven claims continue to perpetuate accusations about the company. As a private company, Kaspersky Lab does not have inappropriate ties to any government, including Russia, and the only conclusion seems to be that Kaspersky Lab is caught in the middle of a geopolitical fight,” the company said in a statement.

“The company actively detects and mitigates malware infections, regardless of the source,” and “Kaspersky Lab products adhere to the cybersecurity industry’s strict standards and have similar levels of access and privileges to the systems they protect as any other popular security vendor in the U.S. and around the world,” the company said.

Insider Problems
Bambenek says the NSA contractor moving classified agency data onto his home laptop or computer should never have happened in the first place. “The problem is the NSA is not following its own rules,” he says. “Shouldn’t there be technical controls controlling [and detecting] when top-secret stuff goes out of the NSA building? This just keeps happening there. I’m more concerned about a spy agency consistently having a problem keeping its secrets.”

There’s a fine line of what constitutes legitimate and acceptable cyber espionage. Nations spy on other nations: that’s a given. And sometimes, security software firms find themselves inadvertently in the crosshairs, experts point out. And it’s likely the NSA could be using antivirus software similarly to spy on other nations, they argue.

Even so, the US federal government’s ban on Kaspersky Lab products comes amid a backdrop of renewed distrust in the Russian government in the wake of the intelligence community’s findings of election-meddling, as well as investigations into possible collusion between the Trump campaign and Russian operatives.

Jim Christy, former director of futures exploration at the federal government’s Defense Cyber Crime Center (DC3), notes that the feds are traditionally “risk-averse,” so the ban of Kaspersky Lab software should come as no surprise.



Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/cloud/russian-hackers-pilfered-data-from-nsa-contractors-home-computer-report/d/d-id/1330056?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hackers pounce on 3 vulnerable WordPress plugins

Remember the old saying about bad things coming in threes? Flaw hunters Wordfence would probably agree with the sentiment after uncovering some nasty zero-day flaws in a trio of WordPress plugins.

Not a great start, then, but much worse is that the vulnerabilities were already being exploited when the company discovered them by chance during recent attack investigations – meaning anyone running them is vulnerable and should update immediately.

The plugins are (with fixed versions):

- A bookings plugin to help small businesses schedule appointments and manage customer contacts.
- Flickr Gallery, which integrates Flickr images but is now discontinued. This plugin has only been tested up to WordPress 3.0.5, which is over six years old. Please don’t run anything this ancient.
- A plugin offering a range of features around managing user registrations.

How long attackers have been exploiting them isn’t clear, but all three are rated “critical” and carry a rather alarming Common Vulnerability Scoring System (CVSS) score of 9.8. Any one of them could be used to plant a backdoor and take complete control of a vulnerable website.

Tracking them down required detective work so it’s a tad fortunate they were found at all:

The exploits were elusive: a malicious file seemed to appear out of nowhere, and even sites with access logs only showed a POST request to /wp-admin/admin-ajax.php at the time the file was created.

Putting a backdoor into a vulnerable site is as simple as sending the exploit in a POST request to the WordPress AJAX endpoint admin-ajax.php – or, in the case of Flickr Gallery, to the root URL – at which point it’s game over. No authentication or elevated privilege is needed.
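Since the only trace Wordfence saw was a POST to admin-ajax.php at file-creation time, one place site owners can hunt is their access logs. Below is a rough triage sketch in Python – it assumes Apache-style combined log lines, and a hit is a lead to investigate, not proof of compromise:

```python
import re

# Assumes Apache-style combined log lines; real formats vary.
REQUEST_RE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+"')

def suspicious_ajax_posts(log_lines):
    """Return log lines recording a POST to admin-ajax.php.
    Legitimate plugins use this endpoint constantly, so cross-check
    each hit against file-creation timestamps, as Wordfence did."""
    hits = []
    for line in log_lines:
        m = REQUEST_RE.search(line)
        if m and m.group("method") == "POST" \
             and m.group("path").endswith("/wp-admin/admin-ajax.php"):
            hits.append(line)
    return hits
```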

The good news is that none of the three are widely used, with a combined install count of only 21,000, tiny next to the tens of millions of sites running WordPress. Needless to say, any one of the sites running these plugins and failing to heed the warnings could pay a high price.

WordPress plugin flaws are an ongoing worry, but they’re not always a simple thing to fix.

Earlier this year, 200,000 websites were affected by malicious spam code hidden inside a plugin called Display Widgets, which was duly removed from the WordPress repository. Except that each time it was re-admitted, the problem recurred – four times in all.

In the end, the plugin was re-submitted as an older, clean version.

The incident highlights a weakness in WordPress plugin security. The core of WordPress is well maintained and supported by a diligent security team that can deploy security updates to millions of WordPress installs automatically. The plugin ecosystem, a collection of tens of thousands of pieces of third-party software that can turn your site into anything from a job site to a photo gallery, is the wild west by comparison.

In large part, your WordPress site’s security depends on the quality of the plugins you install.

Site owners running a vulnerable plugin are reliant on the plugin author to respond to problems quickly, so look for software that is actively maintained and updated regularly. When plugin updates are available, notifications will appear in your site’s admin interface in the Plugins tab and in Dashboard Updates. Log in and check often – every day if you can – or pay someone to do it for you (the same applies to other CMS software such as Drupal, Joomla or Magento).
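Keeping plugins above a known-safe version floor can also be scripted. The sketch below, in Python, is illustrative only – the plugin slugs and minimum versions in the table are placeholders, not the real fixed versions, which the Wordfence advisory lists:

```python
def parse_version(v: str) -> tuple:
    """'2.3.10' -> (2, 3, 10) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Placeholder slugs and floors -- consult the Wordfence advisory
# for the actual plugins and fixed versions.
MINIMUM_SAFE = {
    "example-bookings-plugin": "2.0.0",
    "flickr-gallery": "1.6.0",
    "example-registration-plugin": "4.0.0",
}

def vulnerable(installed: dict) -> list:
    """Slugs of installed plugins that sit below their safe floor."""
    return [
        slug
        for slug, version in installed.items()
        if slug in MINIMUM_SAFE
        and parse_version(version) < parse_version(MINIMUM_SAFE[slug])
    ]
```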

Good web hosts will keep you up to date or alert you if they think you’re running vulnerable software. Some specialist WordPress web hosting companies also keep their own allow lists of vetted plugins.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qsmtugwIhME/

Google Timeline – bug or feature? [VIDEO]

We recently published an article about Google’s Your Timeline, which for many people turned out to be the Google tracking feature they didn’t know they’d switched on.

As author Matt Boddy wrote:

Using GPS, Wi-Fi and cell tower data, Google’s Your Timeline can paint a very accurate picture of your daily life. If you’ve got it switched on, it stores every step you take and everywhere you go. And the thing is, lots of people seem to have it switched on without even realising, including me, and my favourite hats come in tinfoil.

That article provoked a wide range of comments, including some strongly for, and others vigorously against, the Timeline feature. So we figured we’d take the debate to Facebook Live, to debate the question, “Google Timeline – bug or feature?”


(You don’t need a Facebook account to watch the video, and if you do have an account you don’t need to be logged in. If you can’t hear the sound, try clicking on the speaker icon in the bottom right corner of the video player to unmute.)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FE3PELN-ymI/

Net neutrality becomes a battle of the bots

The good news: Bots will probably have little to no real influence in the current battle over net neutrality. They are almost embarrassingly obvious – not ready for the front lines of this kind of conflict.

The less good news: Bots – or rather the people behind them – tried pretty hard, and blew a fair amount of smoke in the process.

But try as they might, bots have done little to change opinions on the matter – at least one poll says a large majority of Americans, of all political persuasions, support net neutrality.

Freedman Consulting reported in July that 73% of Republicans, 80% of Democrats and 76% of Independents want to keep the Federal Communications Commission’s (FCC) 2015 Open Internet Order, which forbids internet service providers (ISPs) from discriminating against rival services or charging consumers and businesses more to use an internet “fast lane.” The aim is to prevent airline-style first-class “seats” for the rich, while everybody else rides in steerage.

But, thanks to bots, “public opinion” is less clear when it comes to about 21.8 million comments submitted to the FCC since the current chairman, former Verizon lawyer Ajit Pai, proposed rolling back significant elements of that order’s provisions.

Which is pretty much the point of bots – to obscure the opinions of actual humans. As Motherboard reported this week, data analytics company Gravwell found that only about 18% (3,863,929) of the comments submitted to the FCC website and through its API were “unique.”

The rest were likely from “automated astroturfing bots,” and overwhelmingly favored doing away with net neutrality – the opposite of the Freedman poll’s findings. However, Gravwell did find a few that supported net neutrality.
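Gravwell’s “unique” figure is essentially the output of normalising and deduplicating the comment corpus. A simplified sketch of that kind of analysis in Python – the normalisation rules here are an illustrative assumption, not Gravwell’s actual method:

```python
import re
from collections import Counter

def normalise(comment: str) -> str:
    """Collapse whitespace and case so trivially re-spaced copies match."""
    return re.sub(r"\s+", " ", comment).strip().lower()

def dedupe_stats(comments: list) -> dict:
    counts = Counter(normalise(c) for c in comments)
    return {
        "total": len(comments),
        "distinct": len(counts),
        # "unique" in Gravwell's sense: texts submitted exactly once
        "unique": sum(1 for n in counts.values() if n == 1),
        # the most-repeated text is the likeliest astroturf template
        "top_template": counts.most_common(1)[0] if counts else None,
    }
```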

According to Gravwell founder Corey Thuen, the bot-generated comments weren’t that difficult to spot. He said the comment below, which referenced the 2015 order to classify internet broadband as a “telecommunication service” under Title II, therefore establishing net neutrality, was sent 1.2 million times:

The unprecedented regulatory power the Obama Administration imposed on the internet is smothering innovation, damaging the American economy and obstructing job creation.

I urge the Federal Communications Commission to end the bureaucratic regulatory overreach of the internet known as Title II and restore the bipartisan light-touch regulatory consensus that enabled the internet to flourish for more than 20 years.

The plan currently under consideration at the FCC to repeal Obama’s Title II power grab is a positive step forward and will help to promote a truly free and open internet for everyone.

Another was sent nearly 1.1 million times in August alone.

The report’s findings are no surprise – complaints about phony comments have come from those on both sides of the debate since before the FCC comment window opened.

ZDNet reported in May 2017 that more than 128,000 identical comments had already been submitted, even though the official comment period didn’t open until 18 May 2017. Some of those whose names were on those comments told ZDNet they had not submitted them. One confessed to not even knowing what net neutrality was.

Joan Marsh, executive vice president of regulatory and state external affairs for telecom AT&T, complained in a blog post on 30 August 2017 that, of millions of “mass-produced comments,” most of them:

…appear to us to be fraudulent. Millions of comments were generated using phony email addresses. Millions of others were generated using duplicative email or physical addresses. And still others originated overseas.

She also contended that, “when only legitimate comments are considered, the large majority of commenters oppose Title II regulation of internet access.”

That claim, in turn, was loudly mocked by Ars Technica, which called it “absurd” and noted that a study by consulting firm Emprata (funded by Broadband for America – an opponent of net neutrality) found that 98.5% of comments that were individually written favored maintaining net neutrality.

At present, however feverish the debate, the chances of preserving net neutrality, at least in its current form, appear to be dubious.

Pai, the target of a “fire FCC Chairman Ajit Pai” petition from the consumer advocacy group Free Press and loud opposition from most Democrats in Congress, won reappointment to another five-year term this week, with four Democrats joining Republicans to confirm him.

He argues that net neutrality, rather than benefiting consumers, harms them by stifling innovation and competition. In a speech at the Newseum in April 2017, he contended that, “It’s basic economics. The more heavily you regulate something, the less of it you’re likely to get,” and that rolling back regulations would, “restore internet freedom.”

He told PBS that a “lighter” regulatory touch would increase competition and lead to what consumers want – a “better, faster and cheaper internet.”

That draws both scorn and outrage from net neutrality supporters, who say Pai’s proposed changes would amount to a giveaway to ISP giants like Verizon and AT&T and to cable companies. If the rollback happens, “old media wins and new media loses – and consumers will be left with the bill,” wrote Bruce Kelly in Investopedia.

And Naked Security’s Bill Camarda reported in July 2017 that Battle for the Net, a coalition of dozens of organizations, companies and content providers including giants like Twitter, Mozilla and Netflix, held an “Internet-Wide Day of Action to Save Net Neutrality” on 12 July 2017. As part of it, numerous websites asked users, “to imagine cable companies interfering with the equal delivery of their content.”

At least those are all real people arguing. Which is the way it should be.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nQrJaY6byb0/

Dumb bug of the week: Apple’s macOS reveals your encrypted drive’s password in the hint box

Video Apple on Thursday released a security patch for macOS High Sierra 10.13 to address vulnerabilities in Apple File System (APFS) volumes and its Keychain software.

Matheus Mariano, a developer with Brazil-based Leet Tech, documented the APFS flaw in a blog post a week ago, and it has since been reproduced by another programmer, Felix Schwartz.

The bug (CVE-2017-7149) undoes the protection afforded to encrypted volumes under the new Apple File System (APFS).

The problem becomes apparent when you create an encrypted APFS volume on an SSD-equipped Mac using Apple’s Disk Utility app. If you set a password hint, invoking the hint mechanism while remounting the volume displays the actual password in plaintext rather than the hint.

Here’s a video demonstrating the programming cockup:

Youtube Video

Apple acknowledged the flaw in its patch release notes: “If a hint was set in Disk Utility when creating an APFS encrypted volume, the password was stored as the hint. This was addressed by clearing hint storage if the hint was the password, and by improving the logic for storing hints.”
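Apple’s description of the fix can be illustrated with a small sketch (hypothetical Python for clarity; this is not Apple’s actual code): store a hint only if it differs from the passphrase, and clear any stored hint that turns out to equal the passphrase.

```python
# Illustrative sketch of the fix described in Apple's patch notes -- not
# Apple's code. The buggy path stored the passphrase where the hint
# belonged; the corrected logic refuses to store a "hint" that matches
# the passphrase and clears any stored hint found to equal it.

def store_hint(volume: dict, passphrase: str, hint: str) -> None:
    """Record a password hint, rejecting hints that would leak the passphrase."""
    if hint == passphrase:
        volume.pop("hint", None)  # clear hint storage rather than leak the secret
    else:
        volume["hint"] = hint

volume = {}
store_hint(volume, passphrase="s3cret", hint="s3cret")  # the buggy scenario
print(volume.get("hint"))  # → None: nothing stored, nothing leaked

store_hint(volume, passphrase="s3cret", hint="favourite pet")
print(volume.get("hint"))  # → favourite pet
```

The key design point, per Apple’s note, is twofold: the write path validates the hint up front, and the cleanup path scrubs any already-stored hint that equals the password.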

The Keychain flaw (CVE-2017-7150) was identified last week by Patrick Wardle, from infosec biz Synack. It allowed unsigned apps to access sensitive data stored in Keychain.

“It becomes clearer every day that Apple shipped #APFS way too early,” wrote Schwartz in a tweet on Thursday.

Other coders have said as much. Shortly after Apple released the High Sierra upgrade, aka macOS 10.13, in late September, Brian Lopez, an engineering manager at GitHub, mused via Twitter, “Legitimately wondering of Apple accidentally shipped a pre-release version of High Sierra. So much of it is unfinished and unpolished.”

Marco Arment, another developer, suggested Apple’s focus on iOS has hurt its quality control elsewhere. “The biggest problem with Apple putting less effort into macOS isn’t that it stagnates — it’s that they make buggier, sloppier updates,” he wrote via Twitter on Thursday.

Asked to comment, an Apple spokesperson directed The Register to its published security update notification and an accompanying knowledge base article. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/05/apple_patches_password_hint_bug_that_revealed_password/

Private, Public, or Hybrid? Finding the Right Fit in a Bug Bounty Program

How can a bug bounty not be a bug bounty? There are several reasons. Here’s why you need to understand the differences.

After nearly five years of managing bug bounty programs, it’s easy for me to lose sight of the fact that this is still a relatively new concept. As a result, there are several common questions and misconceptions about bug bounty programs. Two of the most common go hand in hand: not all bug bounty programs are public, and not all bug bounty programs are, in fact, bug bounty programs.

Let’s start with the second misconception because it’s likely the most confusing. How can a bug bounty not be a bug bounty? The simplest answer is that the term bug bounty has come to mean any type of vulnerability disclosure program, which is the correct umbrella term for the industry. A vulnerability disclosure program is any program where external researchers can submit vulnerabilities to an organization, whether there is a reward or not. A bug bounty is more specifically a program by which organizations reward these vulnerability submissions.

Disclosure programs run the gamut, providing anything from a time-bound assessment (much like a penetration test) to an ongoing one, and from a vetted private crowd to a crowd of thousands of global researchers. Equally vast are the types of targets the crowd can test, from public Web domains to not-yet-released mobile applications, and even hardware. This leads to the next point: not all bug bounty programs are public.

Organizations that are new to bug bounties, perhaps considering implementing one for the first time, often have concerns about their ability to handle submissions: they fear their application won’t stand up to the volume of testing, fear the unknown of “the crowd,” or simply don’t know how to give the crowd access to what needs testing. Although public programs are great solutions for many organizations (and we believe a public vulnerability disclosure channel is best practice), these are valid, yet addressable, concerns.

Enter Private Bug Bounty Programs
Private bug bounty programs allow organizations to harness the power of the crowd — diversity of skill and perspective at scale — in a more controlled environment. A private program includes only those researchers who have a proven track record. Those who have proven their skill and trustworthiness receive invitations to private programs. Private programs can be scoped or built around a customer’s testing needs and parameters. Need a mobile app tested? Pull from mobile experts. Need an expert in virtualization? Build a customized “crowd” to fulfill your testing needs. A private program can also meet requirements around background checking, ID verification, or even location.

Private programs are open to a select, vetted group of researchers, while public ones are open to thousands of global researchers; however, these are just two ends of a spectrum. At Bugcrowd, this summer we launched a recruitment effort for a top-secret program that offered a hybrid, allowing the organization to recruit, in a controlled way, security experts who specialized in the company’s unique attack surface. This means that while not just any researcher can “hack on” the program, anyone can apply to participate.

When To Go Public
Public bug bounty programs provide all the benefits of a private program, at scale. This means more eyes, more skill sets, more submissions. With the added benefit of the publicity these programs naturally see, public programs tend to attract more talent, not only to those programs but also to the crowd as a whole. We almost always see big surges in signups after a public program launch.

A public bug bounty program is a fantastic means to ensure continuous risk assessment, and ultimately mitigation. Yet, effective as bug bounties are for most organizations, they are not a one-size-fits-all endeavor. In many cases, organizations choose to run private, on-demand, and ongoing programs simultaneously. There are also plenty of private-to-public stories, where the organization takes a crawl-walk-run approach, slowly building its muscle for receiving and remediating vulnerability submissions to ensure maximum effectiveness when it launches its public program to the full crowd.

This also highlights the value of program management. Most organizations — regardless of size — become quickly overwhelmed by the process of starting and running a program: defining scope, defining disclosure inputs, identifying program security owners, establishing a vulnerability management program, and even determining time-to-fix agreements within that program. And this doesn’t even address how to establish attractive payout ranges or set up an efficient triage and validation process — much less attract a solid crowd of researchers to actively participate. Program management can help ensure the right type of program, at the right time, and can work with your organization to ensure the success of the program over time.

Regardless of company size, a vulnerability disclosure program is a good idea. Given the increase in security incidents and the new but quickly growing scrutiny of organizations’ security programs, it’s unsurprising that these programs are becoming a best practice. Whether you want a public bug bounty program with high rewards, access to a diverse group of skilled researchers in a controlled environment, or anything in between, there is a vulnerability disclosure program to suit your needs.


Jason is the head of trust and security at Bugcrowd. Jason works with clients and security researchers to create high value, sustainable, and impactful bug bounty programs. He also works with Bugcrowd to improve the security industry’s relations with researchers. Jason’s … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/private-public-or-hybrid-finding-the-right-fit-in-a-bug-bounty-program/a/d-id/1330037?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

10 Steps for Writing a Secure Mobile App

Best practices to avoid the dangers of developing vulnerability-ridden apps.

Image Source: Pedrosek via Shutterstock

More than 4 million mobile apps are currently in production, but only 29% on average are tested for bugs, and nearly a third of these contain significant vulnerabilities, according to a recent Ponemon Institute survey.

Enterprises, meanwhile, are expected to accelerate their mobile app development in the coming months, according to a recent Gartner survey. On average, enterprises deployed eight mobile apps from the start of the year, with nearly nine more on tap or planned through June, the survey found.

“Developers are less careful when developing apps for internal use because they want to develop it fast, so it can achieve some purpose,” says Vivien Raoul, chief technical officer and co-founder of Pradeo, which recently published the Mobile Application Security Guide.

Whether enterprises develop mobile apps for internal use or to help customers use their services, they face consequences if an app is insecure. Organizations whose apps are customer-facing also risk being hit with a civil complaint if those apps aren’t up to snuff on security.

The Federal Trade Commission, for example, has been slapping companies with civil lawsuits over the way enterprises have handled the security of their mobile app development efforts. Enterprises that have felt the FTC’s wrath include Upromise, Credit Karma, and Fandango.

Here are key steps for creating a secure enterprise mobile app:


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s … View Full Bio

Article source: https://www.darkreading.com/endpoint/10-steps-for-writing-a-secure-mobile-app-/d/d-id/1330040?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple