
Facebook Rolls Out ‘Data Abuse Bounty’ Program

The social media giant also got hit with a lawsuit the day before unveiling its new reward program.

All eyes are on Facebook as the company wades its way through a sticky controversy centered on users’ privacy. As CEO Mark Zuckerberg testified before Congress this week regarding the Cambridge Analytica scandal, the social media giant rolled out a “Data Abuse Bounty” program to catch applications on the platform inappropriately using personal data.

Meanwhile, Facebook and Cambridge Analytica, along with SCL Group Limited and Global Science Research Limited (GSR), were hit on Monday, April 9, with a class-action lawsuit filed by lawyers in the US and the UK who accuse the defendants of misusing data belonging to 71.6 million Facebook users. The suit also names Steve Bannon, former chief executive of Donald Trump's presidential campaign and former White House chief strategist, and Aleksandr Kogan, GSR founding director and Cambridge University neuroscientist.

The lawsuit claims Cambridge Analytica, SCL Group, and GSR collected users’ personal data to develop campaigns for the purpose of influencing the 2016 US presidential election and British EU referendum. Facebook, they say, should be held accountable for not taking the proper steps to secure users’ information.

Cambridge Analytica reportedly collected this data through a personality quiz created by Kogan as a Facebook app. About 270,000 Facebook users submitted their data through the app; however, the app's design enabled Cambridge Analytica to also collect the information of those participants' friends, bringing the total number of affected users from roughly 270,000 to more than 71 million.

This data, reportedly used to build profiles of Facebook users, includes public profile information, names, home and email addresses, page likes, hometown, birthday, and political and religious affiliations.

“Facebook utterly failed in its duty and promise to secure the personal information of millions of its users, and, when aware that this … information was aimed against its owners, it failed to take appropriate action,” says co-lead counsel Robert Ruyak, The Guardian reports.

Data Abuse Bounty Program

Facebook has made a series of moves to better protect users' data. The company acknowledges that data belonging to most of its 2 billion users could have been accessed without their permission, and that the data of 87 million people was taken by Cambridge Analytica. Changes affect Facebook's Events API, Groups API, Call and Text History, App Controls, and Login.

One of its new privacy-focused initiatives is the Data Abuse Bounty Program, which will reward people who report application developers misusing people's information. The project was inspired by Facebook's existing bug bounty program, used to address security flaws, and Facebook first hinted at launching such an initiative last month.

This bounty program, the first of its kind, will reward those with firsthand knowledge and proof of instances in which an app on the Facebook platform collects and transfers users’ data to another party to be sold, stolen, or used for scams or political influence, Facebook explains.

Marten Mickos, CEO of HackerOne, says “it makes perfect sense” for Facebook to seek outside help in testing and vetting apps that have access to consumer data. This will help it achieve results sooner, he says, but Facebook has to make sure it has the right steps in place.

“Like any bounty program, for Facebook to be successful they must offer clear guidance to researchers, prioritize the incoming reports and necessary fixes, and offer hackers competitive recognition for their contributions,” he explains.

As with the bug bounty program, the value of each award will depend on the impact of each report. There is no maximum, Facebook says, but it has awarded as much as $40,000 for high-impact bug reports in the past.

All legitimate reports will be reviewed and receive a response "as quickly as possible" when a credible threat to user data is identified. If abuse is confirmed, the app will be shut down and, if necessary, appropriate legal action will be taken against the company buying or selling the data. The person who reported the issue will be paid, and those affected will be alerted.

“Facebook has lost ground on many fronts, and they need to try to regain that lost ground,” says Mickos, though he points to the company’s willingness to listen. “There are many things Facebook needs to do, and this initiative is a good one.”


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/endpoint/facebook-rolls-out-data-abuse-bounty-program-/d/d-id/1331520?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

3 critical Flash vulnerabilities patched. Update now!

In news that can surely only be a surprise to people who’ve learned to use a computer since the middle of March 2018, or who’ve been trapped in their own fridge for the last decade… last Tuesday was Patch Tuesday, there’s a Critical Flash vulnerability and, if you’re still using Flash, it’s time to reexamine your attitude to risk and reward (and while you’re doing that, update to the latest version).

Did I say a critical vulnerability? I meant three.

Adobe Bulletin APSB18-08 lists six security issues fixed in the latest release, version 29.0.0.140: three RCE (remote code execution) vulnerabilities rated Critical and three information-disclosure vulnerabilities rated Important.

Updates for all platforms have been given a priority of 2, which means that to Adobe’s knowledge there are currently no known exploits and none are expected imminently.

Flash plug-ins for Google Chrome on all platforms, or for Microsoft Edge and Internet Explorer 11 on Windows 10 and 8.1, will update themselves automatically.

Everyone else should download the latest version:

Adobe recommends users of the Adobe Flash Player Desktop Runtime for Windows, Macintosh and Linux update to Adobe Flash Player 29.0.0.140 via the update mechanism within the product [1] or by visiting the Adobe Flash Player Download Center.

The good news is that, in this case, Adobe and the independent researchers who found the holes in its product are one step ahead of the bad guys this month (provided you install the update).

The bad news is that the rate at which critical, remotely exploitable flaws are found in a product that barely changes shows no signs of slowing, even after all these years.

So, if you find yourself downloading the latest version, ask yourself what you’re planning to use it for and whether you really need it.

Why? Because cybercriminals love that you run Flash.

Over the years its many remotely exploitable flaws have been a reliable source of joy for them – giving them a bunch of ways to reach through your browser and persuade your computer to run malware.

Millions of users have uninstalled Flash completely, Steve Jobs ensured that iPhone and iPad users have never had it, browsers are burying it as deeply as they can, and even Adobe has called time on it.

Cybercriminals still love it though, and they want you to love it too, or at least tolerate it enough to keep it hanging around because if past performance is indicative of future results – a 0-day is coming.

If you’re determined to keep it, I’ll see you here again in a month.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nttZGuXC7-M/

Hack Back: An Eye for an Eye Could Make You Blind

Attackers have had almost zero consequences or cost for stealing data from innocent victims. But what if we could hack their wallets, not their systems?

As Gandhi once said, “An eye for an eye will only make the whole world blind.” The same could be said about using “hack back” technology for vengeful purposes, such as security defenders who respond to attackers with the intent to harm their systems. What would happen if we let corporations take cyber justice into their own hands? Critics fear it will make the Internet less safe and unintended harm will be directed at innocent bystanders. But should we live at the mercy of attackers who have more control over our data than we do? Or is it possible to hack back in an ethical and safe way?

Legislation has been proposed in Congress that would make it legal for folks to defend themselves in an attack by hacking back. Even if the language of the legislation is inherently ambiguous, the intent is clear: change the asymmetric cyberwar to at least provide equal footing to the defenders. Attackers have always had the high ground. It’s time to change that.

It is understandable that the concept of hacking back has been met with loud opposition by some academics, security professionals, and policy analysts, claiming that it’s the worst idea in cybersecurity ever. (That’s certainly debatable; purely signature-based antivirus is perhaps worse.) They believe attribution of the true attacker is just not solvable and could lead to mistaken identities or hacking the wrong person. I disagree with these knee-jerk reactions, but that also depends on the definition of hacking back. When there are many sides to an argument, it’s important to make sure we’re all talking about the same thing.

How to Define Hacking Back
Hacking back is one of the best-kept secrets by some defenders and clearly runs afoul of the Computer Fraud and Abuse Act (CFAA). It is illegal for a defender to probe a remote source IP implicated in an attack on them and exploit any found vulnerabilities to implant code in the abusive machine, even if the defender seeks to recover or destroy stolen data. The cost to the defender is very high, especially if the target of their revenge turns out to be an innocent bystander. Under CFAA, the penalties can be quite stiff.

For these and other reasons, the Active Cyber Defense Certainty Act (ACDC) seeks to limit or entirely eliminate the liability of the corporation that seeks to defend itself and recover its own lost data by retaliatory strikes against the perpetrator. But being certain of the true source of an attack — true attribution — remains elusive and misdirected revenge could do far more harm, even if it is legal. There must be a safer way to legitimately hack back to recover or destroy stolen data.

Target the Attackers’ Knowledge, Not Their Systems
Attackers have had almost zero consequences or costs for stealing data from innocent victims. What if we could hack their wallets, but not their systems? The goal of hacking back should be to confound and confuse them, especially attackers who have the primary goal of data exfiltration for monetary gain. Make them pay a price for stealing data from an innocent victim. Cost should now be part of the game.

But how do we do that without causing damage to an innocent bystander who served merely as a stepping stone for the true attack hiding in the shadows? Unmitigated (and vengeful) hacking back plays directly into the hands of the attacker who executes an old school reflector attack, for example. How might we reach past the stepping stones and serve up their just rewards to the true attacker?

One way is by feeding attackers with unbounded, exfiltrated bogus data. This strategy not only makes them think twice about whether they were snookered, but they now have the expense of figuring out what of their quarry actually has any value to them. Of course, the same may be true of nation-state actors; they, too, should not operate freely any longer, even if their goal is nonmonetary.

Deception in Depth
Deception security is a growing marketplace, and it's an obvious choice for safely hacking back (hackbacking?) with outcomes that favor the defender. But successful deployment of deception security requires strategic placement and replenishment of deceptive data throughout the operational networks of the defended enterprise, making it very hard for the attacker to tell what is real and what isn't. Sophisticated attackers are adept at spotting the "tells" found in honeynets, especially those that lack realistic data and data flows. But if the deceptive data and decoy-document generation is automated and architected well, it will be nearly impossible for the attacker to tell whether the data is real.

For this to work, deceptive materials must be believable, noninterfering with normal operations, conspicuous to the attacker, and plentiful to keep the attackers well fed and deeply frustrated. These guidelines for successful decoy data deployment within operational networks are achievable and could one day become part of any modern security architecture.
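
To make "plentiful, believable decoys" a little more concrete, here is a minimal sketch, not any vendor's product or the author's own tooling, of how automated decoy-record generation with an embedded tracking token might look. Every field, name, and helper below is invented for illustration.

```typescript
// Minimal illustration of automated decoy-record generation.
// All field names and the token format are hypothetical; the idea is to
// seed bogus-but-plausible records throughout a network and be able to
// recognize them later if they turn up in exfiltrated data.
import { randomBytes, createHmac } from "crypto";

interface DecoyRecord {
  customerId: string;
  name: string;
  email: string;
  cardNumber: string;  // fake, but format-plausible
  beaconToken: string; // lets the defender recognize a stolen decoy later
}

const FIRST = ["Avery", "Jordan", "Riley", "Morgan", "Quinn"];
const LAST = ["Hale", "Mercer", "Vance", "Ortiz", "Lindqvist"];

function pick<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

// 16 fake digits; no attempt at a valid checksum, just plausible shape.
function fakeCardNumber(): string {
  let digits = "4";
  for (let i = 0; i < 15; i++) digits += Math.floor(Math.random() * 10);
  return digits;
}

// An HMAC over the record ID lets the defender later prove a leaked
// record was one of its decoys without keeping a giant lookup table.
function beacon(id: string, secret: Buffer): string {
  return createHmac("sha256", secret).update(id).digest("hex").slice(0, 16);
}

export function makeDecoys(count: number, secret: Buffer): DecoyRecord[] {
  const records: DecoyRecord[] = [];
  for (let i = 0; i < count; i++) {
    const first = pick(FIRST);
    const last = pick(LAST);
    const customerId = randomBytes(6).toString("hex");
    records.push({
      customerId,
      name: `${first} ${last}`,
      email: `${first}.${last}@example.com`.toLowerCase(),
      cardNumber: fakeCardNumber(),
      beaconToken: beacon(customerId, secret),
    });
  }
  return records;
}

// Usage sketch: seed a few thousand decoys alongside real data stores.
// const decoys = makeDecoys(5000, randomBytes(32));
```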

Deception and decoy data is clearly a knowledge attack, and it seems to me the best choice for safely hacking back. A data-deception strategy works by feeding the attacker a false sense of accomplishment while saddling them with the real cost of determining whether what they stole is genuine or bogus.

Revenge may best be served cold, but defenders can bask in the warmth of knowing that their hack-back method, serving tons of decoys, caused the attacker as much frustration and anger as defenders felt when their networks were pierced and their corporate data was stolen, as reported in the headlines. A knowledge attack is a safer alternative that no one can complain about from a judicial or legal perspective, and certainly no one will go blind to the fact that the defender now has the high ground.


Dr. Salvatore Stolfo is the founder and CTO of Allure Security. As a professor of artificial intelligence at Columbia University since 1979, Dr. Stolfo has spent a career figuring out how people think and how to make computers and systems think like people. Dr. Stolfo has …

Article source: https://www.darkreading.com/perimeter/hack-back-an-eye-for-an-eye-could-make-you-blind/a/d-id/1331432?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Carbon Black Files IPO, Plans to Raise $100M

The endpoint security firm filed a registration statement with the Securities and Exchange Commission on April 9.

Carbon Black is pursuing an initial public offering, the company announced this week. The endpoint security firm plans to list its common stock on the NASDAQ Global Select Market under “CBLK.”

In a statement on the news, Carbon Black reports it has publicly filed a registration statement on Form S-1 with the US Securities and Exchange Commission, but the statement has not yet become effective. Carbon Black says it has not determined the amount of shares and price range; however, its SEC filing indicates the company plans to raise $100 million.

Carbon Black was founded in 2002 under the name Bit9. The company acquired Carbon Black in 2014 and later adopted its name. It focuses on multiple facets of endpoint security, including application control, endpoint detection and response, and next-generation antivirus.

Read more details here.


Dark Reading's Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/endpoint/carbon-black-files-ipo-plans-to-raise-$100m/d/d-id/1331506?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Stats on the Cybersecurity Skills Shortage: How Bad is it, Really?

Is it just a problem of too few security professionals, or are there other reasons enterprises struggle to build infosec teams?

Image Source: Adobe Stock (Yong Hian Lim)


While plenty of CISOs today find ways to successfully build out effective cybersecurity teams, most industry pundits agree that the process is a bear. One of the biggest complaints is that there just aren’t enough experienced, talented security professionals to fill the roles available – but there is talent for the taking if organizations know where to look for it. Nevertheless, the numbers support the fact that market constraints on security brainpower are a very real factor. Here’s what the most recent data shows.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/stats-on-the-cybersecurity-skills-shortage-how-bad-is-it-really/d/d-id/1331504?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Palo Alto Networks Buys Secdo for Endpoint Detection

The acquisition is intended to ramp up Palo Alto’s endpoint detection capabilities with new tech and talent.

Palo Alto Networks has announced plans to acquire Secdo, an Israel-based company that combines automation with endpoint detection and response (EDR). The deal is expected to close during Palo Alto’s fiscal third quarter.

Secdo's approach to data collection goes beyond traditional EDR methods, which collect only general event data, by gathering more detailed information to detect and respond to attacks. Its team of engineers will join Palo Alto as part of the acquisition.

Palo Alto will use Secdo’s technology to build on its EDR capabilities and add data collection and visualization to both its Traps endpoint protection tool and Application Framework. Once Secdo is integrated with Traps and the Palo Alto Networks platform, additional data will feed into the logging service and improve precision for applications running in the Palo Alto Networks Application Framework, the company reports.

Terms of the transaction were not disclosed. Read more details here.


Dark Reading's Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/palo-alto-networks-buys-secdo-for-endpoint-detection/d/d-id/1331512?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Congress grills Zuckerberg, day one: How does this online stuff work?

Yikes, Facebook CEO Mark Zuckerberg said in prepared remarks for a rare joint hearing of the Senate Judiciary and Commerce Committees on Tuesday and Wednesday: malefactors have used reverse-lookup “to link people’s public Facebook information to a phone number”!

Quelle surprise, according to Zuckerberg’s prepared remarks: Facebook only discovered the incidents a few weeks ago, they claim, and immediately shut down the phone number/email lookup feature that let it happen.

Zuckerberg’s remarks:

When we found out about the abuse, we shut this feature down.

And thus, to borrow the Daily Beast’s phrasing, Zuckerberg gaslighted Congress before the hearings even started.

On Tuesday, senators were ready, though, to grill the virgin-to-Congressional-grilling CEO about that "Well, shucks, we just found out" bit. Sen. Dianne Feinstein was the first to jump in with the fact that Facebook learned about Cambridge Analytica's (CA's) misuse of data in 2015 but didn't take significant steps to address it until the past few weeks.

Zuckerberg’s response, reiterated many times during five hours of testimony: We goofed. CA told us it deleted the data. We believed them. We shouldn’t have. It won’t happen again.

Sen. Chuck Grassley asked the CEO if Facebook has ever conducted audits to ensure deletion of inappropriately transferred data (it seemed to have an audit allergy, at least during whistleblower Sandy Parakilas’s tenure), and if so, how many times?

My people will get back to you on that, Zuckerberg said… Many times, to many questions.

But with regards to app developers’ handling of user data, Facebook will do better, he promised: It will take a more proactive approach to vetting how app developers handle user data, will do spot checks, and will boost the number of audits.

It was the email/phone lookup feature that let malicious actors scrape the public profile information of "most people on Facebook," by the company's own admission. Data-analytics firm CA – one of the multiple rocket thrusters that pushed Zuckerberg into getting this call from Congress – separately harvested the profile information of some 87 million users, who were subjected to data collection without their permission.

In response to CA (and Russia, and bot, and fake news) outrage, Tuesday’s testimony was the same litany of apologies and pledges to do better that Facebook’s been singing since its founding, 14 years ago. That’s 14 years of moving fast and breaking things, including any notion that it might choose to protect users from its customers. Wired has called it the “14-year apology tour.”

The more things change, the more things stay the same. Wired:

In 2003, one year before Facebook was founded, a website called Facemash began nonconsensually scraping pictures of students at Harvard from the school’s intranet and asking users to rate their hotness. Obviously, it caused an outcry. The website’s developer quickly proffered an apology. ‘I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread and its consequences thereafter,’ wrote a young Mark Zuckerberg. ‘I definitely see how my intentions could be seen in the wrong light.’

Tuesday’s testimony was more of the same, on the topics of CA and other Facebook app developers’ use and abuse of Facebook users’ data, on the topic of how Facebook could possibly have been unaware of what Russian actors were up to when using the platform to tinker with the 2016 US presidential election, on Russian bots spreading discord and fake news.

It wasn’t so much a grilling. It was more of a golden toasting. Much of this had to do with the fact that some senators proved themselves to be fairly clueless about the intricacies of technology and online business models.

An example: Sen. Bill Nelson rambled on for a bit about posting about dark chocolate and suddenly having ads for dark chocolate pop up on Facebook. Could it be that Facebook might, as COO Sheryl Sandberg suggested on the Today show, charge people to not see ads about dark chocolate?!

The idea of being charged for Facebook’s “free” services really must have resonated. An exchange between Sen. Orrin Hatch and Zuckerberg:

Hatch: ‘How do you sustain a business model in which users don’t pay for your service?’
Zuckerberg: ‘Senator, we run ads.’

Not all senators proved out of their depth. As CNN notes, Sen. Lindsey Graham was “smart and informed.” The same goes for Sen. Brian Schatz, who nailed Zuckerberg down on what it means when Facebook claims that every user “owns” his or her own information. Sen. Chris Coons highlighted the problems inherent in Facebook’s ad targeting: What if a diet pill manufacturer was able to target teenagers struggling with bulimia or anorexia?

But Zuckerberg stuck to a strict script. He likely made his coaches proud. He had, in fact, been coached like a politician getting ready for a televised debate.

According to the New York Times, Zuck’s been undergoing “a crash course in humility and charm,” including mock interrogations from his staff and outside consultants.

More takeaways from Tuesday’s testimony:

Facebook is open to the “right” regulation.

Sen. Maggie Hassan: Will you commit to working with Congress to develop ways of protecting constituents, even if it means laws that adjust your business model?

Zuck: Yes. Our position is not that regulation is wrong. [Facebook just wants to make sure it’s the “right” regulation.]

Cambridge University professor Aleksandr Kogan shared user data with other firms besides CA.

Sen. Tammy Baldwin asked whether Kogan sold the data to anyone besides Cambridge Analytica.
Zuck: Yes, he did.

He mentioned Eunoia as one of the companies but said there may be others.

Not banning CA in 2015 was “a mistake.”

Zuck corrected an earlier statement: CA was, actually, an advertiser in 2015, so Facebook could have banned the firm when it first learned of its data scraping. Zuck says not doing so was a “mistake”.

Where does Facebook go from here? As New Yorker writer Anna Wiener noted in a roundtable discussion, it’s in a bind:

To ‘fix’ Facebook would require a decision on Facebook’s part about whom the company serves. It’s now in the unenviable (if totally self-inflicted) position of protecting its users from its customers.

Well, we may not know how Facebook is going to figure that one out, but we know where it’s going today: back to Congress for more of the same.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CgYHRyA5KFU/

Steve Wozniak explains why he deactivated his Facebook account

As his 5,000 Facebook friends are about to find out, Apple co-founder Steve Wozniak has well and truly left the building.

When it comes to Facebook, most celebrities tip-toe out the back door without saying much. But Wozniak is not most celebrities, and he sent USA Today an email explaining his recent decision to deactivate his account.

Given the recent fuss about Facebook’s privacy behaviour, most of it is not hard to second guess:

Users provide every detail of their life to Facebook and… Facebook makes a lot of advertising money off this. The profits are all based on the user’s info, but the users get none of the profits back.

Which had become a thinly-gilded cage:

I was surprised to see how many categories for ads and how many advertisers I had to get rid of, one at a time. I did not feel that this is what people want done to them. Ads and spam are bad things these days and there are no controls over them. Or transparency.

This compared unfavourably with another big tech company close to Wozniak’s heart:

Apple makes its money off of good products, not off of you. As they say, with Facebook, you are the product.

This echoes criticism of Facebook by Apple's CEO Tim Cook, who told reporters a few days ago that his company could do what Facebook does if it wanted to. However:

We’ve elected not to do that… We’re not going to traffic in your personal life. Privacy to us is a human right, a civil liberty.

It’s not clear how many Facebook users have left since the Cambridge Analytica scandal became public on 16 March, although #deletefacebook gained considerable traction, trending on Twitter in the following days.

Ironically, the first tech figure to endorse #deletefacebook was WhatsApp co-founder Brian Acton, who recently quit the company he sold to Facebook in 2014.

That high-profile desertion was noticed by Tesla and SpaceX CEO Elon Musk, who offered a dead-pan response on Twitter.

When it was quickly pointed out to Musk that both of his companies had Facebook pages, he had them deactivated.

Wozniak, meanwhile, told USA Today that he wouldn’t miss his 5,000 Facebook friends because he didn’t think he knew many of them to start with.

And there was one final anti-climactic twist – he would only deactivate his account, not fully delete it.

The reason he gave was that removing himself from Facebook forever would have meant giving up his “stevewoz” screen name, which someone else would have been free to use.

So Facebook still has Wozniak’s data, just as it still has yours if you’ve only deactivated your account (or if you’re still an active member).

If you want to fully delete your own Facebook account, or just want to swot up on app settings, read our article on how to protect your Facebook data.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Gk137HKNLZY/

Back to the future! 1990s Windows File Manager! NOW OPEN SOURCE!

You know you want to.

Actually, you know you DON’T want to – I certainly didn’t.

But you will anyway – I did.

Microsoft has released the File Manager program from Windows 3, which dates back to 1990.

When I say “released”, I mean “set free”, and that’s free in the threefold sense of speech, beer and download.

Yes, the venerable WinFile application is now open source software!

To kick off with an admission: I’ve never got on with single-pane file managers – from WinFile to the latest Mac Finder, I’ve always shoved them to one side in favour of two-panel viewers.

Why view one directory at a time when you so often want to view two, either to move files from A to B (or in the Windows world, probably from D: to C:), or to compare old and new versions of stuff?

As a result, I’ve always had a copy of Midnight Commander to hand on Mac and Linux boxen, as well as Servant Salamander back when I used Windows as a matter of routine. (I chose that last word very carefully to avoid giving the impression that it was a matter of choice.)

In truth, I never much liked Windows 3, and when I used it, I didn’t like WinFile at all.

WinFile made tasks that were somewhat complicated but perfectly reliable at the DOS prompt into tasks that were dead easy but liable to go weirdly wrong when moving clunky icons between two separate on-screen windows.

But time is a great healer.

Let’s be fair

WinFile was written for Windows, so it’s hardly surprising that it was written to use windows…

…and, to be fair, by opening two WinFile windows inside the main window, you could use it as a two-panel viewer anyway.

In other words: WinFile wasn’t actually that bad, and seen through the rose-tinted glasses of computer history, you’ll eventually realise, like me, that you want to bring it back to life.

Doing so was a lot easier than I thought.

First, I downloaded Visual Studio Community 2017 and did a basic, default install.

OK, there was about 1.3GB to download, and it took up more than 6GB when installed – but it was still 1,000,000 times easier (not to mention infinity times cheaper, given that it’s free) than setting up Microsoft’s developer tools and SDKs (software development kits) in the 1990s.

Second, I downloaded the WinFile source code from GitHub – I chose the code tagged original_plus to get the most authentic old-school experience.

Third, after unzipping the source, I opened the file winfile-original_plus\src\Winfile.vcxproj.

Fourth, I chose the build options Release - x64 and hit Ctrl-Shift-B (Build Solution) to build the app.

(You can skip step #4 – I tried it to see what would happen, but the 64-bit native build failed dismally with a cascade of errors.)

Fifth, I switched to Release - Win32 for a 32-bit executable instead, and did another build.

Sixth, well, there isn’t a sixth, because the build succeeded, leaving me with a 264KB program called WinFile.exe, ready to run.

There you have it

And there you have it: because you can.

There’s simply no other reason you need.

Which is just as well, because there is no other reason.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mDBpqWILuyc/

No password? No worries! Two new standards aim to make logins an API experience

A pair of authentication standards published this week have received endorsement from Mozilla, Microsoft and Google: the WebAuthn API, and the FIDO Alliance’s Client-to-Authenticator Protocol.

The aim of WebAuthn and CTAP is to offer an authentication primitive that doesn’t rely on server-stored passwords, since a user’s fingerprint or even their unlock pattern is safer for both user and Web site owner.

Just before the WebAuthn API wrapped up after more than two years’ work, the World Wide Web Consortium (W3C) last month asked developers to start work on their implementations.

In typically-opaque language, the W3C said WebAuthn is “an API enabling the creation and use of strong, attested, scoped, public key-based credentials by web applications, for the purpose of strongly authenticating users.”

WebAuthn sees a user agent store public key credentials. The API is designed so that access to those credentials is handled in a way that preserves user privacy.

For example, a user is authenticated against their credentials (like fingerprint) entirely on their client device: WebAuthn tells the Web application the user is authenticated, but doesn’t send the credentials up to the server.
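
To make that flow concrete, here is a minimal sketch of what browser-side registration might look like with the WebAuthn API. The relying-party name, user details, and the /webauthn/register endpoint are illustrative assumptions, not taken from the standard or from any vendor above; a real site would fetch the challenge and creation options from its own server.

```typescript
// Minimal sketch of WebAuthn credential registration in the browser.
// The relying party name, user details and the /webauthn/register
// endpoint are hypothetical; in practice the server issues the
// challenge and options, and the challenge is used only once.
async function registerCredential(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: "Example Corp" },                           // the relying party (the website)
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)),       // opaque user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    // Ask for an ECDSA P-256 key (COSE alg -7), with RSA (-257) as a fallback.
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },
      { type: "public-key", alg: -257 },
    ],
    authenticatorSelection: { userVerification: "preferred" },
    timeout: 60000,
    attestation: "none",
  };

  // The browser talks to the authenticator (platform TPM/SE, or a
  // USB/NFC/Bluetooth key via CTAP); only the new public key and the
  // attestation leave the device, never the biometric or PIN.
  const credential = (await navigator.credentials.create({
    publicKey,
  })) as PublicKeyCredential;

  const response = credential.response as AuthenticatorAttestationResponse;

  // Send the public key credential to the server for verification and storage.
  await fetch("/webauthn/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: credential.id,
      clientDataJSON: Array.from(new Uint8Array(response.clientDataJSON)),
      attestationObject: Array.from(new Uint8Array(response.attestationObject)),
    }),
  });
}
```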

Credential protection is the job of "compliant authenticators" such as a trusted applet, TPMs (trusted platform modules) or SEs (secure elements) in the user's environment. External elements like USB, Bluetooth, and NFC devices can also store credentials.

As the W3C explains in its document, the user agent (such as, for example, a phone) should let users store logins under multiple identities in a WebAuthn-compliant implementation.

In welcoming the completion of the standard, the FIDO Alliance notes that the WebAuthn API is part of its FIDO2 project, which WebAuthn and CTAP together make up.

FIDO’s associated CTAP project sets down the detail of external authenticator behaviour (the Bluetooth, NFC and USB devices).

It covers the application protocol between the authenticator and the client, and the bindings of the protocol to different transport protocols (so, for example, the application developer doesn’t have to write communications code for USB and Bluetooth from scratch).

The standardisation effort is also an important part of FIDO’s goal of getting rid of passwords, since Web applications get a standard way to interact with biometric authentication in the same way as they would interact with a security key – and without passing the credentials upwards to the Web application.
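
Here is a companion sketch of the login (assertion) side, under the same caveats: the endpoint and credential handling are hypothetical assumptions. The point is that only the signed assertion travels to the web application; the fingerprint, PIN, or security-key tap stays on the device.

```typescript
// Minimal sketch of a WebAuthn login (assertion). The credential ID and
// the /webauthn/login endpoint are hypothetical; in practice the server
// supplies the challenge and the list of allowed credentials.
async function signIn(credentialId: Uint8Array): Promise<void> {
  const publicKey: PublicKeyCredentialRequestOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    allowCredentials: [{ type: "public-key", id: credentialId }],
    userVerification: "preferred",
    timeout: 60000,
  };

  // The user verifies locally (fingerprint, PIN, security-key tap);
  // the authenticator signs the challenge with a private key that
  // never leaves the device.
  const assertion = (await navigator.credentials.get({
    publicKey,
  })) as PublicKeyCredential;

  const response = assertion.response as AuthenticatorAssertionResponse;

  // Only the signature and metadata go to the server, which verifies
  // them against the public key stored at registration time.
  await fetch("/webauthn/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: assertion.id,
      authenticatorData: Array.from(new Uint8Array(response.authenticatorData)),
      clientDataJSON: Array.from(new Uint8Array(response.clientDataJSON)),
      signature: Array.from(new Uint8Array(response.signature)),
    }),
  });
}
```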

As the FIDO announcement stated: “User credentials and biometric templates never leave the user’s device and are never stored on servers”. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/11/fido_takes_a_bite_out_of_passwords_with_two_authentication_standards/