STE WILLIAMS

New BankBot Version Avoids Detection in Google Play — Again

Mobile banking Trojan BankBot uses a unique payload downloading technique to skip past Google Play Protect.

BankBot’s newest version ducked detection in Google Play by downloading its payload from an external source, according to a report by researchers at Avast, SfyLabs, and ESET, which made the discovery.

Once installed, BankBot will wait for a user to launch a legitimate banking app on his or her device and then overlay a copycat version of the app. It will not only steal users’ bank credentials as they log in to the fake app, but will also intercept victims’ text messages, including mobile transaction authentication numbers (TANs), says Lukas Stefanko, a malware researcher at ESET.

“It will allow them to carry out bank transfers on a user’s behalf,” warns Stefanko. Banks will often rely on text messages as a form of two-factor authentication.

Attack Path
The authors of the latest BankBot version managed to get past Google Play’s security vetting process by submitting a bogus app without the actual payload packed within the app, Stefanko explains.

The victim downloads a Trojanized flashlight app, which even has working flashlight functionality, and the malicious payload is then fetched in the background from a malicious link.

The malicious payload waits two hours after it’s dropped before asking the victim to install it, says Stefanko; granting that request gives the cybercriminals administrator rights on the device.

After the user executes one of the targeted financial apps, such as Wells Fargo, Chase, or any of the other institutions on BankBot’s hit list, a fake overlay that mimics the original screen is placed on top of the legitimate app, says Nikolaos Chrysaidos, head of mobile threat intelligence and security at Avast. He adds that more advanced users may be able to detect the bogus overlay, given they are not identical to the original banking app interface, but other users may not notice the difference.

BankBot’s flexibility in the payload it delivers makes it unique, the security researchers say.

“Using the same payload delivery mechanism, the actors could drop whatever malware, spyware, banker [Trojans] they want into the device,” Chrysaidos warns. “CISOs should at least be proactive and use an AV solution on their company’s Android devices.”

The security researchers suspect BankBot’s authors are based in Ukraine, Belarus, and Russia, because the malware is conspicuously inactive in those regions. This, they believe, indicates the actors are keeping a low profile with local law enforcement, the report states.

Google was notified of the latest BankBot version on Nov. 17, and the Internet giant removed it from Google Play on the same day, Chrysaidos says. To date, all of the reported BankBot variants have been removed from Google Play, but the actors still appear active, so it is likely that another run will be made in the future to upload newer versions of BankBot, the researchers note.

Old vs. New
BankBot, which ESET initially discovered at the start of this year, had another version emerge in September, Stefanko says.

The droppers in the September version were considered far more sophisticated than this newest version, the report states. The malicious payloads could use Google’s Accessibility Service to enable the installation of apps from unknown sources. But in the fall, Google halted use of this Accessibility Service feature for everyone except those who are blind.

“Bad actors removing this functionality could make their malware a bit more stealthy from discovery, as something that uses the Accessibility Service could be very quickly detected as suspicious,” Chrysaidos says. “On the other hand, it makes the malware less powerful.”


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s … View Full Bio

Article source: https://www.darkreading.com/mobile/new-bankbot-version-avoids-detection-in-google-play----again/d/d-id/1330499?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How one man could have deleted any image on Facebook

We’ve written about insecure direct object references before.

Here’s another one that could have given a bug-hunter called Pouya Darabi the ability to remove other people’s images from Facebook.

Fortunately for the world at large, Darabi told Facebook, who quickly fixed the bug and paid him a $10,000 bug bounty.

Insecure direct object references on websites are where you figure out a way to take a web request that lets you access an item that belongs to you, such as a video, article or image…

…and then deliberately modify the data in the request so that it references an object that belongs to someone else, but in such a way that the server authorises the request anyway, thus implicitly authorising you to access the other person’s data.

In this way, you trick the server into giving you access to something that would usually be blocked or invisible.

As Naked Security’s Mark Stockley very neatly put it in 2016 when describing a long-standing flaw in how domain names were administered in American Samoa (.AS):

Insecure direct object reference[s are] a type of flaw that allows [you] to access or change things that aren’t under [your] control by tweaking things that are.

For example, imagine that there’s an image you can’t access, on a server you want to hack, that’s published via a URL like this:

https://example.net/photos/7746594545.jpg

--- HTTP request generated: ---

GET /photos/7746594545.jpg HTTP/1.1
Host: example.net

Now imagine that after you log in to your own account, you can edit your own private images with a special URL, combined with a session cookie, like this:

https://example.net/api/edit/?image=4857394574.jpg

--- HTTP request generated: ---

GET /api/edit/?image=4857394574.jpg HTTP/1.1
Host: example.net
Cookie: authtoken=HRCALAGJEOWRGTMW

In this made-up example, the authtoken is a session cookie that tells the server that it’s you, and that you’ve already authenticated.

Imagine that the server validates only your authtoken, and doesn’t check the specific image 4857394574 against your account to make sure you really are allowed to edit it.

You may be able to tweak and replay this request with the original, prohibited image filename in it, like this:

https://example.net/api/edit/?image=7746594545.jpg

--- HTTP request generated: ---

GET /api/edit/?image=7746594545.jpg HTTP/1.1
Host: example.net
Cookie: authtoken=HRCALAGJEOWRGTMW

In other words, in this hypothetical example, you end up authenticated to edit all image files simply by virtue of having the right to edit some of them.
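The fix is a one-line ownership check that this hypothetical server never makes. Here’s a minimal sketch in Python, reusing the made-up image IDs from the example above (the user names and ownership table are invented for illustration):

```python
# Hypothetical ownership table: which account owns which image.
OWNERS = {"4857394574.jpg": "alice", "7746594545.jpg": "bob"}

def edit_image_insecure(session_user, image):
    # Vulnerable: only checks that *some* valid session exists.
    if session_user is None:
        return "401 Unauthorized"
    return "200 OK"  # happily edits whichever image was named

def edit_image_secure(session_user, image):
    # Fixed: also checks that the session's user owns the named object.
    if session_user is None:
        return "401 Unauthorized"
    if OWNERS.get(image) != session_user:
        return "403 Forbidden"
    return "200 OK"

# alice replays her edit request with bob's image ID:
print(edit_image_insecure("alice", "7746594545.jpg"))  # 200 OK - the IDOR
print(edit_image_secure("alice", "7746594545.jpg"))    # 403 Forbidden
```

The secure version still lets alice edit her own image; it only refuses requests that name objects she doesn’t own.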

That’s a bit like checking into a hotel, getting a key that opens your allocated room, and then stumbling across the fact that it opens all the other rooms on your floor due to a key encoding error.

Typically, this sort of flaw happens when software is tested to make sure it passes tests that it’s supposed to pass, but isn’t tested to make sure it doesn’t pass when it’s supposed to fail.
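A concrete way to catch this class of bug is to pair every positive test with a negative one. A tiny sketch (the access-check function here is invented for illustration):

```python
# A hypothetical access check, and the two kinds of test it needs.
def can_edit(owner, requester):
    return owner == requester

# Positive test: it passes when it is supposed to pass...
assert can_edit("alice", "alice") is True

# ...and the often-skipped negative test: it fails when it is supposed to fail.
assert can_edit("alice", "bob") is False
```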

The Facebook flaw

Darabi noticed that when he created a Facebook poll with an image attached, he could modify the outgoing HTTP request to refer to other people’s images, not merely his own, by rewriting some of the fields in the relevant HTTP form.

The poll would then show up with someone else’s image in it.

This sort of image substitution isn’t a problem if the substituted image is meant to be public anyway, so this doesn’t feel like much of a bug to start with…

…but when Darabi deleted the poll, which he was allowed to do because he created it, Facebook helpfully deleted the images attached to it, apparently assuming that his authentication to delete the poll extended to the image objects referenced in the poll.

Thus, insecure direct object reference.

What to do?

If you’re a Facebook user, you don’t need to do anything.

Thanks to Darabi’s bug report (sweetened for him by that $10,000 payout), this vulnerability has already been patched, so you can no longer rig up a poll that removes other people’s images.

If you’re a programmer, remember to test everything.

Sometimes, “failing soft”, where faulty code causes security to be reduced, is appropriate, such as automatically unlocking the fire escape doors if your security software crashes or the electrical power fails.

At other times, you want to “fail hard”, or “fail closed”, such as not accepting any authentication passwords if you think some of them have been compromised.

In particular, if there are conditions in your software that the developer assures you “cannot happen”, assume not only that they can but also that they surely will, and test accordingly…
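As a sketch of failing closed, a hypothetical login check might treat any unexpected condition as a denial; the user store and “hash” below are invented stand-ins:

```python
# Hypothetical login check that fails closed: any unexpected error denies
# access instead of granting it.
STORED = {"alice": "hash-of-secret"}

def fake_hash(password):
    return "hash-of-" + password   # stand-in for a real password hash

def login(user, password):
    try:
        expected = STORED[user]    # a lookup that "cannot fail"... until it does
        return fake_hash(password) == expected
    except Exception:
        return False               # fail closed: deny on any surprise

print(login("alice", "secret"))      # True
print(login("mallory", "anything"))  # False: unknown user is denied, no crash
```

The point is the `except` branch: the “cannot happen” case of an unknown user ends in a refusal, not an exception or an accidental pass.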


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ElzzATygxs4/


The end of net neutrality draws near

Is “Goodbye Net Neutrality” going to mean “hello” to Comcast throttling BitTorrent and other file-sharing sites again?

That’s one predicted scenario surrounding next month’s expected reversal, by the Republican-majority Federal Communications Commission (FCC), of the Net Neutrality regulations that took effect in June 2015 under President Obama.

Nilay Patel, writing in The Verge, noted this past week that FCC Chairman Ajit Pai thinks it was a mistake for the FCC to prevent Internet Service Provider (ISP) Comcast from blocking (or significantly slowing) traffic from the file sharing service BitTorrent in 2008.

Indeed, within a 188-page “declaratory ruling, report and order” titled “Restoring Internet Freedom [PDF],” Pai noted that he’s not the only one who thinks that – a federal appeals court threw out an FCC order that stopped Comcast from throttling BitTorrent content, a move that Comcast called “managing” its network.

In 2010, the U.S. Court of Appeals for the D.C. Circuit rejected the Commission’s action, holding that the Commission had not justified its action as a valid exercise of ancillary authority.

That, of course, is just one contentious issue in what has become a political firestorm over the Trump administration’s effort to eliminate the FCC regulation of the internet that took effect in June 2015 at the urging of President Obama.

But it is also a prime illustration of the fundamentals of the arguments on both sides, with voting on the rollback set for 14 December, and a national day of protest set for a week earlier, on 7 December.

For Net Neutrality advocates, that kind of government regulation is the only way to ensure that everybody gets equal access to the internet – the sites they want, at the same speed as everybody else, and with all sites competing on a level playing field. It includes the expectation that there will be no “fast lanes” for the rich/elite and “slow lanes” for everybody else, and that ISPs won’t be able to block access to content, applications or websites that subscribers want.

The fear is that if Net Neutrality is revoked, Comcast and other ISPs will be free to go back to choking certain content and sites.

As Save the Internet puts it:

Without Net Neutrality, cable and phone companies could carve the internet into fast and slow lanes. An ISP could slow down its competitors’ content or block political opinions it disagreed with. ISPs could charge extra fees to the few content companies that could afford to pay for preferential treatment – relegating everyone else to a slower tier of service.

In reply, Pai insists that what he is proposing will be much better for average consumers. In a speech last spring at the Newseum in Washington, DC, he repeatedly emphasized the bipartisan support of the pre-2015 internet. Pai hailed the internet as the “greatest free-market success story in history,” thanks to:

…a landmark decision made by President Clinton and a Republican Congress in the Telecommunications Act of 1996. In that legislation, they decided on a bipartisan basis that it was the policy of the United States “to preserve the vibrant and competitive free market that presently exists for the internet… unfettered by Federal or State regulation.”

And he said Net Neutrality, which classified broadband as a “Title II telecommunications service” instead of a “Title I information service,” created the kind of “heavy handed” regulation that was designed in the 1930s “for the Ma Bell monopoly,” and has stifled broadband investment and innovation.

Among our nation’s 12 largest internet service providers, domestic broadband capital expenditures decreased by 5.6 per cent, or $3.6 billion, between 2014 and 2016, the first two years of the Title II era… the first time that such investment has declined outside of a recession in the internet era.
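Taken at face value, the two figures quoted above pin down the baseline spending they describe (a quick back-of-the-envelope check):

```python
# The quoted 5.6% decline, said to equal $3.6bn, implies this baseline
# level of capital expenditure among those 12 providers:
decline_fraction = 0.056
decline_dollars = 3.6e9
baseline = decline_dollars / decline_fraction
print(round(baseline / 1e9, 1))  # ~64.3 (in $bn)
```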

Pai contended that when investment in broadband declines, it is “low-income rural and urban neighborhoods” that suffer the most, since they generate the most marginal returns on investment.

Beyond that, back in 2007 when the Associated Press did its own investigation of Comcast slowing or blocking peer-to-peer (P2P) file sharing, CBS reported that the argument from Comcast and other ISPs for what they called “traffic shaping” was that a relatively small number of subscribers who were intense users of file-sharing services like BitTorrent, eDonkey and Gnutella, were hogging 50% to 90% of all internet traffic, slowing it down for everybody else.

In 2009, Comcast agreed to pay $16m to settle a class action lawsuit over the throttling of P2P connections. But, as Ars Technica reported at the time, that amounted to all of about $16 to those who submitted a valid claim for damages.

Pai insists that “transparency rules” under his proposed regulation will require ISPs to disclose any disparities in how they treat their customers, and that subscribers who think they are being treated unfairly can complain to the Federal Trade Commission (FTC) under antitrust and consumer protection laws.

But, as Patel pointed out, in today’s broadband market, “51% of Americans only have one choice of broadband provider.”

The rollback of Net Neutrality “is not how the internet should work,” he wrote, closing with the exhortation: “Call Congress.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qWVtU-HCD_4/


Barracuda gobbled up by private equity sharks

Private equity biz Thoma Bravo is buying slow-growth Barracuda Networks for $1.6bn in cash.

Barracuda is a $400m-run-rate business based in Campbell, California, USA, that sells data protection and security products. It has been making a move to flogging subscriptions as its customers move away from appliances, preferring to consume services. It’s been making just two to four per cent profits on $80m-$95m quarterly revenues for seven quarters now, after four quarters of losses.

Earlier this month it bought Sonian and its cloud archiving products for an undisclosed sum, which we thought may be around $100m.

The Thoma Bravo price represents a premium of 22.5 per cent on Barracuda’s ten-day average stock price prior to November 27, 2017, of $22.49. Barracuda’s board is unanimously in favor of the deal, and the biz will now operate as a private company. No major changes to its product set have been mentioned.

CEO BJ Jenkins said he thought the proposed transaction would provide an opportunity to accelerate Barracuda’s growth, without saying how, though he must have a good idea. He did say: “I expect that our employees, customers, and partners will benefit from this partnership,” and this at least doesn’t mention any restructuring.

William Blair analyst Jason Ader said: “Thoma Bravo appears to have paid a fair price—about 17.5 times enterprise value to free cash flow on our calendar 2018 estimates—and we consider there to be a low likelihood of another bidder emerging.

“For Barracuda, we believe the Thoma Bravo acquisition is probably the best scenario for shareholders, given skepticism regarding the company’s ability to transition the business from on-premises appliances to cloud-based solutions and recent margin challenges.

“While the frequency of ransomware attacks has been creating demand tailwinds for both Barracuda’s email security and backup products, we believe the company also faced some challenges from the transition to cloud and feel this takeout price is fair.”
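Ader’s multiple can be sanity-checked against the deal price. Treating the $1.6bn purchase price as the enterprise value (an approximation that ignores net cash and debt) gives the implied free cash flow:

```python
# Back-of-the-envelope: free cash flow implied by a 17.5x EV/FCF multiple
# at the $1.6bn deal price (assumes deal price ~ enterprise value).
enterprise_value = 1.6e9
ev_to_fcf = 17.5
implied_fcf = enterprise_value / ev_to_fcf
print(round(implied_fcf / 1e6, 1))  # ~91.4 (in $m)
```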

Usually private equity buys a company so that it can make changes away from the intense glare of public company reporting requirements and shareholder concerns. This doesn’t have to mean severe cost-pruning or a CEO change, but it can often mean product and product line overhauls. Sometimes there can be a joining together of components from two or more companies to make a more viable one.

Riverbed went into Thoma Bravo in April 2015 and is still there. Also Thoma Bravo bought Blue Coat and its security products for $1.3bn in 2011 and sold it for $2.4bn to Bain Capital-led funds in 2015. Bain then sold it to Symantec for $4.65bn in 2016.

The proposed transaction is expected to close before Barracuda’s fiscal year end of February 28, 2018, subject to the usual approvals. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/27/barracuda_private_equity/


That $10,000 Facebook bug: Photos shafted, addicts screwed by polls

A security researcher found a way to delete any picture on Facebook, irrespective of whether it’s public or private, by cunning use of polls.

Pouya Darabi was digging around in the software used by Facebook users to set up quick opinion polls on their profile pages. When creating these informal surveys, the social media addicts can select photos to appear alongside the questions, and the ID codes for these pictures are embedded in the HTML form submitted to Facebook’s servers.

You can see where this is going. By jiggering around with the parameters in the request, Darabi found he could attach any image by changing the ID numbers. This allowed him to preview pictures uploaded by strangers and add them to a poll; when he deleted that poll, the attached images were permanently deleted from the social network as well.

Let’s get fiddling … The user-controlled photo ID numbers in the poll HTML form

The vulnerability is not quite as trivial as it appears: image ID numbers are not entirely sequential, so a miscreant would have to feel their way through in the dark until they hit a valid image. That makes it difficult to exploit in a targeted way; for causing general mischief, however, it would be perfect. Discovering and reporting the security blunder earned Darabi a $10,000 award from Facebook’s bug bounty program.
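The odds involved can be sketched with a quick calculation; the size of the ID space and the number of valid images below are invented figures, purely for illustration:

```python
# Invented figures: suppose 10-digit IDs (10^10 possibilities) of which
# one in ten corresponds to a real image.
id_space = 10 ** 10
valid_images = 10 ** 9

# Chance a single random guess hits *some* valid image: fine for mischief.
print(valid_images / id_space)   # 0.1

# Chance it hits one *specific* target image: hopeless for targeted attacks.
print(1 / id_space)              # 1e-10
```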

The researcher’s writeup, published on Saturday, has some encouraging news as to how quickly Facebook fixed the issue. Darabi alerted the website’s security team, along with a proof of concept, on November 3, and within 12 hours Facebook had triaged the problem and rolled out a full fix two days later.

Facebook added the poll feature earlier this month, so it’s likely there are other flaws waiting to be found and dealt with. Given the money Facebook is offering, it’s time to get digging and see. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/27/facebook_flaw_kills_any_picture/

Cyber Forensics: The Next Frontier in Cybersecurity

We can now recover evidence from the RAM on a cellphone, even if the account is locked, and use it to prosecute a case.

Every day at Georgia Tech‘s College of Engineering, my lab helps to solve real crimes through cyber forensics, the application of investigation and analysis techniques to gather and preserve evidence from a computing device that can be presented in a court of law. My research has large-scale crime-solving implications, and my goal is to figure out how we can collect as much evidence as possible from any device involved in the crime to help put away the criminal.

Since I arrived at Georgia Tech, my lab has been hard at work to create forensic techniques that help investigators solve human crimes, in addition to tackling malware and cyber attacks. If someone robs a bank and drops his phone at the scene of the crime, we can mine that digital device for evidence that will help prosecute the case.

One of the primary focuses of my research is memory image forensics, the process of recovering evidence from the RAM (random access memory) of a device. I recently developed a cyber-forensic technique called RetroScope to recover encrypted information on a device, even if the user has locked his or her accounts. RetroScope leverages a copy of the memory (RAM data) from the device and recreates information such as texts or emails. An investigator can see entire sequences of app screens that were previously accessed by the user.
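
A much simpler cousin of that idea is the classic first step in memory forensics: carving printable strings out of a raw RAM dump. RetroScope goes far beyond this, reconstructing whole app screens, but the sketch below (with an invented sample dump and a hypothetical carve_strings helper) shows why data lingering in RAM is recoverable at all:

```python
import re

def carve_strings(ram_dump: bytes, min_len: int = 6):
    """Return every run of at least min_len printable ASCII bytes in the dump."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, ram_dump)]

# Simulated RAM dump: the message text survives in memory as plaintext even
# if the app encrypted it at rest, surrounded by non-printable noise.
dump = b"\x00\x01gibberish\xff" + b"meet at the bank 9pm" + b"\x00\x00"
print(carve_strings(dump))      # -> ['gibberish', 'meet at the bank 9pm']
```

Real tools layer structure on top of this raw carving, parsing app data structures out of the dump rather than just printable runs, which is what lets RetroScope re-render the screens a suspect actually saw.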

Terrorists are known to use an application called Telegram that is extremely secure and encrypts everything on the phone. With RetroScope, the data on the phone is recreated and made available to law enforcement. An investigator can see exactly what the suspect was communicating before or during the crime. Any data left on the memory of the device can be extracted and used as evidence.

Source: Georgia Tech

In a recent case, cyber forensics was used at a restaurant where patrons’ credit card information was being stolen. A forensic investigator was called in, but he couldn’t crack the case. With more customers being hacked, the restaurant was finally sued, and management called in a more advanced forensic analyst to look over its systems. That analyst realized there was malware on the restaurant’s point-of-sale system, exporting credit card information with each swipe. The hacker was leveraging volatile RAM (i.e., the system’s short-term memory) to hide the malware, and the first investigator had missed it.
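
One common way a memory-aware investigator hunts for a RAM scraper of this kind is to sweep a memory dump for Track 2 magnetic-stripe patterns, since that is exactly what the malware harvests between the swipe and encryption. A hedged sketch (the regex and the sample dump are invented for illustration, not taken from the case above):

```python
import re

# Track 2 layout: ";" + primary account number + "=" + expiry/service code
# + discretionary data + "?". POS RAM scrapers scan process memory for
# exactly this pattern, so investigators can scan for it too.
TRACK2 = re.compile(rb";(\d{13,19})=(\d{4})\d*\?")

def find_card_data(mem: bytes):
    """Return any card numbers found in Track 2 format within a memory dump."""
    return [m.group(1).decode() for m in TRACK2.finditer(mem)]

# Simulated dump containing one harvested (test) card number amid noise.
dump = b"\x00noise;4111111111111111=2512101000000000?\x00more"
print(find_card_data(dump))     # -> ['4111111111111111']
```

A disk-only examination would never see this: the stolen data and the scraper's working buffers live in volatile RAM, which is precisely why the first investigator came up empty.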

The first investigator was only considering the static files stored on the disk of the computer. At the time, the forensic investigator wasn’t considering volatile RAM as a hiding place for malware. From research like mine, investigators now know that a device’s RAM is a viable place to harbor malware. You have to look everywhere in these investigations, leaving no stone unturned. My lab and I are continuing to pioneer the investigation of volatile RAM and the power of memory forensics in cases such as this.

At present, investigating crimes that involve digital devices as evidence is done in a very ad hoc manner, with much digital evidence being left behind. We need to design more holistic cyber-forensic techniques that take into account the entire digital system, and not just a single piece of evidence that investigators happen to find. This requires a paradigm shift in the way people think about cyber forensics. It’s no longer just a tool to be used in a larger investigation; it’s actually the driver of the investigation itself.

Dr. Brendan Saltaformaggio is an Assistant Professor in the School of Electrical and Computer Engineering at Georgia Tech, with a courtesy appointment in the School of Computer Science. His research interests lie in computer systems security, cyber forensics, and the vetting …

Article source: https://www.darkreading.com/threat-intelligence/cyber-forensics-the-next-frontier-in-cybersecurity-/a/d-id/1330465?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple