STE WILLIAMS

Texas Instruments flicks Armis’ Bluetooth chip vuln off its shoulder

Texas Instruments has rather feebly slapped down infosec researchers’ findings on a so-called Bleedingbit Bluetooth Low Energy vulnerability after a more detailed explanation of the chipset’s weakness emerged.


At Black Hat London last week, Ben Seri and Dor Zusman from research house Armis went into full detail about their November discovery of how to pwn TI-made Bluetooth Low Energy (BLE) chips.

The two affected chips – CC2640 and CC2650 – are used in several models of Cisco and Aruba wireless APs. What gave Armis a way in was the method of updating the chip’s firmware, which consisted of uploading firmware over an unencrypted connection, though the upload was authenticated.

Seri and Zusman went into precise detail during their Black Hat presentation on how they triggered the memory corruption problem which they said allowed them to pwn the chip, and thus the router it is built into.

The pair said that when a BLE chip broadcasts advertising packets (for other devices to know it is there and connect to it), those packets’ headers contain a field marked “length”. This field is “supposed to be 6 bits” long according to the deprecated Bluetooth 4.2 spec. Bluetooth 5.0, introduced two years ago, extended that to 8 bits.

TI’s affected chips, said Seri, happily handle both Bluetooth 4.2 and 5.0 advertising PDUs – and will process 4.2-spec messages by checking incoming packets for length while ignoring the “missing” final 2 bits of the length field. Those two bits were reserved for future use (RFU) in the 4.2 spec, later being incorporated as a full part of the length field in Bluetooth 5.0. Having reverse-engineered the chip’s firmware, Armis figured out how to bypass that length check and fool the chip into processing over-length 4.2-spec messages, reading and acting on the RFU bits.
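To make the mismatch concrete, here is a minimal sketch of the class of bug Armis described. It is purely illustrative: the function and mask names are invented, and this is not TI's actual firmware logic, just a demonstration of how masking the two formerly reserved bits during the length check, while later code honours all eight bits, lets a packet claim more data than the check accounted for.

```python
# Illustrative only: a parser that validates a 6-bit (4.2-spec) length
# while other code paths treat the same byte as an 8-bit (5.0-spec)
# length will under-count the payload an attacker actually sent.

ADV_LEN_MASK_42 = 0x3F  # Bluetooth 4.2: length occupies the low 6 bits
ADV_LEN_MASK_50 = 0xFF  # Bluetooth 5.0: all 8 bits are length

def parsed_length(header_byte, spec_42):
    """Return the payload length a parser would trust for this header byte."""
    mask = ADV_LEN_MASK_42 if spec_42 else ADV_LEN_MASK_50
    return header_byte & mask

# An attacker sets the two formerly-reserved high bits (0xC0) on top of
# the maximum 6-bit length (0x3F):
header = 0x3F | 0xC0
assert parsed_length(header, spec_42=True) == 63    # what the 4.2 check sees
assert parsed_length(header, spec_42=False) == 255  # what 5.0-aware code copies
```

The gap between 63 and 255 bytes is the kind of discrepancy that turns a length check into a memory-corruption primitive.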

Once they had done that, they said they turned on each of the “extra” bits in the length field until they found a combination that crashed the chip.

After finding their way past a secondary code validation method that halted their initial attempts to trigger a buffer overflow (with Zusman admitting that their final “form of exploitation [has a] 50 per cent success rate”), Armis’ researchers figured out that the chip “has no address space layout randomisation” and so “sprayed packets [at it] that hold [our] desired ICall_ values,” triggering a memory overflow. From there they were able to customise their payload until the chip’s memory was running in a loop that they controlled.

Zusman told the Black Hat audience:

“First we set up some shell code environment, then we stop the GAP task; we install our backdoor; then we link the data entry that has triggered the vuln to itself, making sure we [wouldn’t] crash in the next trigger, then we restore 2 variables, then we unhook ourselves from the Cisco gateway because we don’t want to run again. And that’s it basically: we have exploited the AP with only 32 bytes.”

Full technical details are available on Armis’ website, with its Black Hat presentation available on the BH site.

We patched, we warned – TI

Texas Instruments was not particularly happy about Armis’ published findings when El Reg asked the company to comment, stating, among other things, that “the potential security vulnerability that [Armis] are exploiting was addressed with previous software updates”.

Prior to being contacted by Armis, TI identified a potential stability issue with certain older versions of the BLE-STACK when used in a scanning mode, and we addressed this issue earlier this year with software updates. We’ve informed Armis that the potential security vulnerability that they are exploiting was addressed with previous software updates. If our customers have not already updated their software with the latest versions available, we encourage them to do so.

TI added: “Additionally, the over-the-air firmware download (OAD) Profile feature mentioned in Armis’ report is not intended or marketed to be a comprehensive security solution, as noted on TI.com [it pointed to the relevant URL here]. We encourage our customers to use security-enabled features when designing security-related systems.”

Cisco told us in a statement:

“Cisco is aware of the third-party software vulnerability in the Bluetooth Low Energy (BLE) Stack on select chips that affects multiple vendors. Cisco PSIRT issued a security advisory to provide relevant detail about the issue, noting which Cisco products may be affected and subsequently may require customer attention. Fixed software is available for all affected Cisco products.”

Firmware updates for all three of the TI chips are available from its website, though one should first look for updates from the vendor of any affected device.

For the CC2640 (non-R2), BLE-STACK version 2.2.2 is on TI’s website.

Folk wanting to “update to SimpleLink CC2640R2F SDK version 1.30.00.25 (BLE-STACK 3.0.1) or later” should go here.

And finally, for the CC1350, you can find Simplelink CC13x0 SDK version 2.30.00.20 (BLE-STACK 2.3.4), or later, here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/11/bleedingbit_bluetooth_texas_instruments_armis_cisco/

Lenovo tells Asia-Pacific staff: Work lappy with your unencrypted data on it has been nicked

Exclusive A corporate-issued laptop lifted from a Lenovo employee in Singapore contained a cornucopia of unencrypted payroll data on staff based in the Asia Pacific region, The Register can exclusively reveal.

Details of the massive screw-up reached us from Lenovo staffers, who are simply bewildered at the monumental mistake. Lenovo has sent letters of shame to its employees confessing the security snafu.

“We are writing to notify you that Lenovo has learned that one of our Singapore employees recently had the work laptop stolen on 10 September 2018,” the letter from Lenovo HR and IT Security, dated 21 November, stated.

“Unfortunately, this laptop contained payroll information, including employee name, monthly salary amounts and bank account numbers for Asia Pacific employees and was not encrypted.”

Lenovo employs more than 54,000 staff worldwide (PDF), the bulk of whom are in China.

The letter stated there is currently “no indication” that the sensitive employee data has been “used or compromised”, and Lenovo said it is working with local police to “recover the stolen device”.

In a nod to concerns that will have arisen from this lapse in security, Lenovo is “reviewing the work practices and control in this location to ensure similar incidents do not occur”.

On hand with more wonderfully practical advice, after the stable doors were left swinging open, Lenovo told staff: “As a precaution, we recommend that all employees monitor bank accounts for any unusual activities. Be especially vigilant for possible phishing attacks and be sure to notify your financial institution right away if you notice any unusual transactions.”

The letter concluded on a high note. “Lenovo takes the security of employee information very seriously. And while there is no indication any data has been compromised, please let us know if you have any questions.”

The staff likely do. One told us the incident was “extremely concerning” but “somehow not surprising in any way. How on Earth did they let this data exist on a laptop that was not encrypted?”

The Register has asked Lenovo to comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/11/lenovo_asia_pacific_staff_data_unencrypted_work_laptop_stolen/

Nice phone account you have there – shame if something were to happen to it. Samsung fixes ID

A recently patched set of flaws in Samsung’s mobile site left users open to account theft.

Bug-hunter Artem Moskowsky said the flaws he discovered, a since-patched trio of cross-site request forgery (CSRF) bugs, would have potentially allowed attackers to reset user passwords and take over accounts.

Moskowsky told The Register that the vulnerabilities were due to the way the Samsung.com account page handled security questions. When the user forgets their password, they can answer the security question to reset it.

Normally, the Samsung.com web application would check the “referer” header to make sure data requests only come from sites that are supposed to have access.

In this case, however, those checks are not properly run and any site can get that information. This would let the attacker snoop on user profiles, change information (such as user name), or even disable two-factor authentication and steal accounts by changing passwords.
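The guard that was missing can be sketched in a few lines. This is a hypothetical illustration, not Samsung's code: the allowed hostnames are invented, and real deployments usually pair a Referer (or Origin) check with anti-CSRF tokens rather than relying on the header alone.

```python
# Hedged sketch: reject state-changing requests whose Referer does not
# point at an allowed origin. Hostnames below are illustrative.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"account.samsung.com", "www.samsung.com"}  # assumption

def referer_allowed(referer_header):
    """Return True only if the Referer names a trusted origin."""
    if not referer_header:      # missing Referer: safest to refuse
        return False
    host = urlparse(referer_header).hostname
    return host in ALLOWED_HOSTS

assert referer_allowed("https://account.samsung.com/profile") is True
assert referer_allowed("https://evil.example/csrf") is False
assert referer_allowed(None) is False
```

When this kind of check is skipped, any page the victim visits can fire authenticated requests at the account endpoints, which is exactly the CSRF scenario Moskowsky describes.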

“Due to the vulnerabilities it was possible to hack any account on account.samsung.com if the user goes to my page,” Moskowsky explained.

“The hacker could get access to all the Samsung user services, private user information, to the cloud.”


In one proof of concept, the researcher showed how an attack site could use the CSRF flaw to change the target’s Samsung.com security question to one of the attacker’s choosing. Armed with the new security question and its answer, the attacker would then use the “reset password” function to steal the target’s Samsung account.

It turned out the situation was even worse than the researcher initially thought. Thinking there were only two CSRF vulnerabilities on the site, Moskowsky went to report the issue directly to Samsung – something that was also done through the Samsung.com website. While reporting the issue, he noticed a third bug, the one that would allow him to forcibly change security questions and answers.

“I first discovered two vulnerabilities. But then when I logged in to security.samsungmobile.com to check my report, I was redirected to the personal information editing page,” Moskowsky explained.

“This page didn’t look like a similar page on account.samsung.com. There was an additional ‘secret question’ field on it.”

In total, three bugs were found and were rated medium, high, and critical, respectively. Moskowsky earned a payout of $13,300 for the find, a tidy sum but well short of the $20,000 he pocketed for spotting a major bug in Steam back in October.

Samsung did not respond to a request for comment on the matter. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/10/samsung_patches_accountstealing_hole/

Did you know that iOS ad clicks cost more than Android? These scammers did

An enterprising malware writer has been masquerading infected Android devices as Apple gear in order to make a few extra bucks.

Researchers with SophosLabs say the Andr/Clickr-ad malware takes advantage of the demand for ads that reach iPhone owners, as advertisers believe Apple fanbois are more willing to splash their cash.

The malware, which Sophos spotted on the Google Play store, infects Android devices and uses the bots to generate fake clicks on websites and earn the malware writers a payout from advertisers. Sophos estimates the malware, hidden within a flashlight app and some games, was downloaded more than two million times.

Because ads that reach Apple devices bring higher payouts for site owners, the Clickr-ad malware takes the additional step of telling the infected Android devices to present themselves as iPhones when making the fraudulent clicks.


“What sets Clickr-ad apart from previous examples is its sophisticated attempt to pass off much of the traffic the apps generate as coming from a range of Apple models such as the iPhone 5, 6 and 8,” Sophos said of the malware.

“It does this by forging the User-Agent device and app identity fields in the HTTP request. However, it is careful not to overdo the technique by allowing a portion of the traffic to use identities from a wide selection of Android models too.”
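The traffic-mixing trick Sophos describes can be sketched as follows. Everything here is an assumption for illustration: the user-agent strings are generic examples and the roughly 80/20 split is a guess, since Sophos only says "a portion" of the traffic kept Android identities.

```python
# Sketch of mostly-iPhone click traffic with a minority of Android
# identities mixed in to make the pattern less obvious. Strings and the
# 80/20 ratio are illustrative, not taken from the malware.
import random

IPHONE_UAS = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X)",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 11_4 like Mac OS X)",
]
ANDROID_UAS = [
    "Mozilla/5.0 (Linux; Android 8.0; SM-G950F)",
]

def pick_user_agent(rng):
    # ~80% of requests claim to be iPhones, the rest stay Android.
    pool = IPHONE_UAS if rng.random() < 0.8 else ANDROID_UAS
    return rng.choice(pool)

rng = random.Random(0)
sample = [pick_user_agent(rng) for _ in range(1000)]
iphone_share = sum("iPhone" in ua for ua in sample) / len(sample)
assert 0.7 < iphone_share < 0.9
```

From the ad network's side, each forged request looks like an ordinary impression from a high-value Apple device.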

Sophos said that, while the malware was taken down from the Play store in late November, its command and control servers are still online, and infected devices are still generating the bogus ad clicks. Users will have to manually uninstall the infected apps in order to stop the clickfraud.

“Simply force-closing the app won’t do the trick because it can restart itself after three minutes – a full uninstall is needed,” Sophos explained,

“An extra precaution would be to conduct a full factory reset after ensuring all data has been synchronised to Google’s cloud.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/10/android_ios_clickfraud/

Latest Google+ flaw leads Chocolate Factory to shut down site early

Google says it will be speeding up the dismantling of its Google+ social network following the discovery of a new security bug that affected 52.5 million users.

The Chocolate Factory maintains that it has no evidence that the vulnerability, which was found in the API for Google+, was ever actively exploited. According to Google’s G Suite VP of product management David Thacker, over a six-day period in November developers would have been able to access profile information that users had not made public.

Google said the vulnerability shows up when the user allows an app to connect with their Google+ profile. Rather than only see information the user had opted to share, the application would have been able to see all data about the user.
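The class of bug being described, an API returning the full profile instead of only the opted-in fields, can be sketched like this. The field names and functions are invented for illustration; Google has not published the faulty code.

```python
# Hedged sketch of a scope-filtering bug: the correct path returns only
# the fields a user agreed to share, the buggy path skips the filter.

PROFILE = {
    "name": "Alex",
    "email": "alex@example.com",   # not shared with apps
    "occupation": "engineer",      # not shared with apps
}
SHARED_FIELDS = {"name"}

def profile_for_app(profile, shared_fields):
    """Correct behaviour: expose only opted-in fields."""
    return {k: v for k, v in profile.items() if k in shared_fields}

def profile_for_app_buggy(profile, shared_fields):
    """The bug class: the filter is skipped and everything leaks."""
    return dict(profile)

assert profile_for_app(PROFILE, SHARED_FIELDS) == {"name": "Alex"}
assert "email" in profile_for_app_buggy(PROFILE, SHARED_FIELDS)
```

The difference between the two functions is the whole vulnerability: one line of missing filtering exposes every non-public field to any connected app.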

“We discovered this bug as part of our standard and ongoing testing procedures and fixed it within a week of it being introduced,” Thacker said.

“No third party compromised our systems, and we have no evidence that the app developers that inadvertently had this access for six days were aware of it or misused it in any way.”


Still, while Thacker insists there is no evidence the bug was ever exploited, Google did say the exposure is serious enough to warrant moving the timeline for sunsetting Google+ forward by several months.

“With the discovery of this new bug, we have decided to expedite the shut-down of all Google+ APIs; this will occur within the next 90 days,” Thacker told users.

“In addition, we have also decided to accelerate the sunsetting of consumer Google+ from August 2019 to April 2019. While we recognize there are implications for developers, we want to ensure the protection of our users.”

At this point, Google may be happy just to be rid of its ill-fated social network after years of trying in vain to push Google+ as a viable alternative to Facebook. Following a previous leak, Google said it would be killing off Google+. After this latest security foul-up, the Mountain View ads giant no doubt will be glad to see the end of the service once and for all. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/11/google_hacked_again/

Massive botnet chews through 20,000 WordPress sites

WordPress users are facing another security worry following the discovery of a massive botnet. Attackers have infected 20,000 WordPress sites by brute-forcing administrator usernames and passwords. They are then using those sites to infect even more WordPress installations.

The botnet, which WordPress security company Wordfence discovered last week, infects sites using a feature known as XML-RPC. This is an interface that lets one piece of software make requests to another by sending it remote procedure calls (RPCs) written in the extensible markup language (XML).

Legitimate blogging programs use this feature to send blog content for WordPress sites to format and publish. Attackers can also use it to try multiple passwords and then manipulate a site if they gain access.

The attackers wrote a script that would launch an XML-RPC-based brute force attack, automatically generating a range of usernames and passwords in the hope that one of them will work and give them access to a privileged account. At that point, they can use that account to infect that site with the botnet software.

The password-building mechanism takes lists of usernames along with lists of common passwords and uses simple algorithms to create new password combinations from the usernames. So it might try the username ‘alice’ with passwords like alice123, alice2018, and so on. It might not be very effective on a single site, but when used across many sites, the attackers’ chances of success increase, says Wordfence.
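The username-derived guessing Wordfence describes can be sketched in a few lines. The exact wordlists and mutation rules the attackers used are not public; the suffixes below are illustrative stand-ins for the "common passwords" lists mentioned above.

```python
# Sketch of username-based password candidate generation: combine each
# username with common suffixes and simple capitalisation variants.

COMMON_SUFFIXES = ["123", "1234", "2018", "2017", "password"]  # illustrative

def candidates_for(username):
    """Yield password guesses derived from a single username."""
    yield username
    for suffix in COMMON_SUFFIXES:
        yield username + suffix
        yield username.capitalize() + suffix

guesses = list(candidates_for("alice"))
assert "alice123" in guesses
assert "alice2018" in guesses
```

Against one site such a short list rarely lands, but multiplied across thousands of sites and usernames, as Wordfence notes, the odds tilt toward the attacker.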

Like any botnet, infected sites take instruction from the bot herders via a command and control (C2) server. In this case, however, the C2 infrastructure is relatively sophisticated. The attackers send their instructions to infected sites from one of four C2 servers that communicate via proxy servers, chosen from a large Russian list. Three of the C2 servers are hosted by HostSailor, which cybersecurity journalist Brian Krebs has reported on in the past.

While the C2 servers presented a login screen, Wordfence found that they did not, in fact, require authentication and it was able to view details of the infected slave machines, along with the proxy lists used to access them.

How can site owners protect themselves from this kind of brute force attack? Companies like Wordfence use anti-brute force techniques to restrict the number of login attempts and lock out attackers altogether after too many incorrect passwords.

Attackers might try to get around these techniques by switching proxies and/or user agent strings with each request, but these products can also stop them by blocking access from known malicious or infected sources using real-time blacklists.
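A minimal sketch of the lockout idea, not Wordfence's implementation, which also factors in real-time blacklists and user-agent switching, might look like this. The threshold of five failures is an arbitrary illustrative choice.

```python
# Hedged sketch of anti-brute-force throttling: after too many failed
# logins from one source, refuse further attempts from it.
from collections import defaultdict

MAX_FAILURES = 5  # illustrative threshold

class LoginThrottle:
    def __init__(self):
        self.failures = defaultdict(int)

    def allowed(self, source_ip):
        """Is this source still permitted to attempt a login?"""
        return self.failures[source_ip] < MAX_FAILURES

    def record_failure(self, source_ip):
        self.failures[source_ip] += 1

throttle = LoginThrottle()
for _ in range(5):
    throttle.record_failure("203.0.113.9")
assert throttle.allowed("203.0.113.9") is False
assert throttle.allowed("198.51.100.7") is True
```

A real deployment would expire failure counts over time and key on more than the source IP, precisely because attackers rotate proxies as described above.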

These are all great extra layers of security to have, but a simpler and equally effective way to stop someone brute-forcing your account is to use a strong password and keep it safe in a password manager. Using a weird username that you won’t find on a regular list of names doesn’t hurt, either. It makes user credentials almost impossible to guess. Complement this with multi-factor authentication, preferably using a dedicated mobile authentication app rather than an SMS authorization, for extra protection.

Another effective security measure is to restrict administrative account access to specific IP addresses or to clients with specific digital certificates. It is also good practice to keep your WordPress installation as up to date as possible, too.

At present, the attackers seem to be in botnet-building mode, doing little more than growing the number of WordPress sites under their control. What will they do with these infected sites? It is difficult to tell, but a past Wordfence survey suggests that sending spam, hosting phishing pages, and launching malicious redirects are among the most popular WordPress attacks.

If you have a WordPress site, it would be worth checking your audit logs for any suspicious activity, checking your password security, and turning on multi-factor authentication (MFA or 2FA).

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bxLYtIYmZQs/

Privacy, security fears about ID cards? UK.gov’s digital bod has one simple solution: ‘Get over it’

Digital minister Margot James reckons Brits need to “get over” their concerns about privacy and cyber security and let the government assign them with ID cards.


The UK has historically railed against the idea of national identity cards, and previous attempts have failed over concerns about state surveillance and the creation of mass biometric databases.

But James has apparently pooh-poohed these concerns, brushing them aside in an interview with The Telegraph by saying people should “get over” privacy and security concerns associated with ID cards.

“I think there are advantages of a universally acclaimed digital ID system which nowhere in the world has yet,” she is reported to have said. “There is a great prize to be won once the technology and the public’s confidence are reconciled.”

She touted the government’s role in the development of digital identities – but pointed to work on Verify (of all things) in an apparent bid to demonstrate UK.gov’s expertise.

For those in need of a reminder, that’s one of the albatrosses around the Government Digital Service’s neck. The digital identity system, launched in 2011, has been struggling with low user take-up and internal Whitehall battles for years.

In July, a government watchdog downgraded its chances of success and in October the government opted to cut public support for the system, offering it up to the private sector.


James isn’t the only member of the UK’s leading party with their sights on national identity. In September, Amber Rudd – now secretary of state at Verify’s beleaguered Department for Work and Pensions – called for a state-backed system based on NHS numbers.

At the time, she also shrugged off the public’s possible concerns, arguing that people were perfectly happy handing over their data to tech giants, so they should give it to the government.

However, the government has yet to make any concrete policy proposals about a national ID system – and it can be sure critics won’t dismiss concerns about security and surveillance as easily as its ministers have. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/10/margot_james_id_cards/

Microsoft calls for laws on facial recognition, issues principles

In a year in which facial recognition has made massive strides to invade personal privacy and settle in as a favored tool for government surveillance, Microsoft isn’t just open to government regulation; it’s asking for it.

On Thursday, in a speech at the Brookings Institution, Microsoft President Brad Smith warned about facial recognition technology spreading “in ways that exacerbate societal issues.” Never mind any dents to profits, he said, we need legislation before the situation gets more dystopian than it already is.

We don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

We must ensure that the year 2024 doesn’t look like a page from the novel 1984.

Smith said that Microsoft, after much pondering, has decided to adopt six principles to manage the risks and potential for abuse that come along with facial recognition: fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance. He said that Microsoft will publish a document this week with suggestions on implementing the principles.

The good, the bad, and the intrusive

It’s not as if facial recognition is being used to solely create worlds of ubiquitous surveillance, in which you’re shamed for jaywalking, you’re publicly humiliated for your financial troubles, or law enforcement uses it to surveil crowds that are overwhelmingly composed of innocent people.

Smith pointed to uses of facial recognition that, unlike those applications, are not, in fact, leading us all to Orwellian, Black Mirror-esque dystopia. He pointed to cases of missing children being reunited with their families, for example. One such is the story of a child with Down’s syndrome who’d wandered away from his father and was reunited after being missing for four years. That happy ending was thanks to Microsoft’s Photo Missing Children (PhotoMC) technology.

Microsoft isn’t the only vendor using facial recognition to do good. Other vendors make tools to find missing children: Smith pointed to nearly 3,000 missing children having been traced in four days when the New Delhi police did a trial of facial recognition technology in April.

On the other side of the coin, there are those Orwellian aspects of the technology.

For one thing, it’s well-documented that automated facial recognition (AFR) is an inherently racist technology. One reason is that black faces are over-represented in face databases to begin with, at least in the US: according to a study from Georgetown University’s Center for Privacy and Technology, in certain states, black Americans are arrested at up to three times the rate suggested by their share of the population. A demographic’s over-representation in the database means that whatever error rate accrues to a facial recognition technology will be multiplied for that demographic.

Beyond that over-representation, facial recognition algorithms themselves have been found to be less accurate at identifying black faces.

During a scathing US House oversight committee hearing on the FBI’s use of the technology in 2017, it emerged that 80% of the people in the FBI database don’t have any sort of arrest record. Yet the system’s recognition algorithm inaccurately identifies them during criminal searches 15% of the time, with black women most often being misidentified.
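Some back-of-envelope arithmetic shows the scale those percentages imply. Only the 80 per cent and 15 per cent rates come from the hearing; the database size here is a hypothetical round number for illustration, and it treats the 15 per cent misidentification rate as applying uniformly, which is a simplification.

```python
# Rough arithmetic on the quoted rates. Database size is hypothetical.
database_size = 30_000_000

no_arrest_record = 0.80 * database_size   # people with no arrest record
misidentified = 0.15 * no_arrest_record   # wrongly flagged in searches

assert int(no_arrest_record) == 24_000_000
assert int(misidentified) == 3_600_000
```

Even under generous assumptions, millions of people with no criminal history end up exposed to false matches.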

That’s a lot of people wrongly identified as persons of interest to law enforcement.

In spite of that, law enforcement across the world adores facial recognition. In recent weeks, it’s emerged that the Secret Service plans to test facial recognition around the White House. That’s according to Department of Homeland Security (DHS) documents uncovered by the American Civil Liberties Union (ACLU).

While it’s important to protect the physical security of the president and the White House, the ACLU points out, this is also “opening the door to the mass, suspicionless scrutiny of Americans on public sidewalks,” as the cameras will “include images of individuals passing by on public streets and parks adjacent to the White House Complex.”

The ACLU knows first-hand how prone to error the technology is: it’s tested Amazon Rekognition, the company’s facial recognition technology, which is used by police in Orlando, Florida, and found that it falsely matched 28 members of Congress with mugshots.

To address bias, Smith said that we need legislation that would require companies to provide documentation about what their technology can and can’t do – in plain English that customers and consumers can understand. He also said that new laws should require third-party testing to check for accuracy and unfair bias in facial recognition services and suggested that companies could make an API available for this purpose.

He also said that laws should require humans to weigh in on facial recognition conclusions in “high-stakes scenarios,” including “where decisions may create a risk of bodily or emotional harm to a consumer, where there may be implications on human or fundamental rights, or where a consumer’s personal freedom or privacy may be impinged.”

Other things that new legislation should do, Smith said:

  • Ensure that it’s not used for unlawful discrimination. As it is, human rights activists say that China is using its facial-recognition systems to track members of persecuted minorities, including Uighur Muslims, Protestant Christians and Tibetan Buddhists.
  • Require that people know when they’re surveilled and that they give consent. Entities that use facial recognition to identify consumers should place a conspicuous notice that “clearly conveys that these services are being used.”
  • Limit ongoing government surveillance of specified individuals. To protect democratic freedoms, individuals should only be surveilled in public spaces under court order or in cases of “imminent danger or risk of death or serious physical injury to a person.”

We’re still in the infancy of this new technology, Smith said. Microsoft plans to formally launch its principles and a supporting framework before the end of March 2019, but in the meantime, it doesn’t even know all the questions, let alone the answers.

Hopefully, this will put us all on the road to getting there, Smith said:

We believe that taking a principled approach will provide valuable experience that will enable us to learn faster. As we do, we’re committed to sharing what we learn, perhaps most especially with our customers through new material and training resources that will enable them to adopt facial recognition in a manner that gives their stakeholders and the public the confidence they deserve.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xcjrp0mv4Gs/

Microsoft’s gutting Edge and stuffing it with Chromium

Microsoft on Thursday announced that it’s going to spend the next year or so gutting its Edge browser and filling it with Chromium: the same open-source web rendering engine that powers Google’s Chrome browser (Chrome is Chromium with some Google extras), Opera, Vivaldi, Yandex, Brave and others.

This is an extraordinary step: some say it points to open source having won the browser wars, for better or worse. Better for web compatibility, says Microsoft, worse for a monoculture where if one thing breaks, a whole lot of other things break.

Terrible for any browser that’s trying to succeed outside of the near-total control of our online lives that Google already enjoys, Mozilla says. The open-source foundation regularly points out that Firefox is the only independent browser that isn’t tied to a profit-driven company, including Google with Chrome, Apple with Safari, and Microsoft with Edge.

Back in the day, Internet Explorer – the predecessor to Edge – not only ruled the browser roost; its stranglehold precipitated an epic antitrust case accusing Microsoft of abusing its monopoly position over Windows. But that was then, and this is now, and Explorer’s replacement, Edge, has a tiny share of the browser marketplace.

The way Microsoft sees it, with its embrace of Chromium, we’ll get the combined heft of Microsoft and Google contributing to the codebase, which could make it more secure. Unfortunately, it’s also going to create even more of a monoculture. If something goes wrong, it goes wrong in a lot of places. A flaw in the HTML or JavaScript engines becomes a flaw in Chrome, Chromium, Edge, Opera and others.

Microsoft’s Joe Belfiore, corporate vice president of Windows, paints the prospect of a Chromium-based Edge as a compatibility heaven for users and corporate IT, and a headache-reducing improvement for developers:

Ultimately, we want to make the web experience better for many different audiences. People using Microsoft Edge (and potentially other browsers) will experience improved compatibility with all web sites, while getting the best-possible battery life and hardware integration on all kinds of Windows devices.

Web developers will have a less-fragmented web platform to test their sites against, ensuring that there are fewer problems and increased satisfaction for users of their sites; and because we’ll continue to provide the Microsoft Edge service-driven understanding of legacy IE-only sites, Corporate IT will have improved compatibility for both old and new web apps in the browser that comes with Windows.

Why now?

Edge had about 4% of the browser market as of August. Chrome had about 67%. In effect, Chrome is the new IE.

In order to change that, Microsoft needs to supply a browser that businesses can run across all versions of Windows. That’s one thing that the Chromium move will accomplish: a Chromium-based Edge won’t be exclusive to Windows 10, and can instead be run on Windows 7, Windows 8.1 and even on macOS.

It also means that Microsoft will be able to update the browser more frequently, though whether that happens monthly isn’t clear. What is clear: updates will no longer be tied to every major Windows 10 update.

The Verge reports that Microsoft has been mulling this move for at least a year, pushed by consumers and businesses who want to see the company improve web compatibility. Edge has been getting better, but even small compatibility issues have made it a rocky road for users.

We don’t yet know the technical details of how this is going down. But according to Mary Jo Foley over at ZDNet’s All About Microsoft, it’s looking like Microsoft will swap out its EdgeHTML rendering engine for Chromium’s Blink engine, and possibly its Chakra JavaScript engine for Chromium’s V8.

Will this improve Edge’s teensy market share? That remains to be seen. But as far as Mozilla CEO Chris Beard is concerned, this is goodbye to EdgeHTML and hello to even more control over online life being handed to Google. Browser engines may be invisible to users, he wrote in a blog post on Thursday, but they hold great sway over what can be done online and what developers prioritize:

The ‘browser engines’ – Chromium from Google and Gecko Quantum from Mozilla – are ‘inside baseball’ pieces of software that actually determine a great deal of what each of us can do online. They determine core capabilities such as which content we as consumers can see, how secure we are when we watch content, and how much control we have over what websites and services can do to us. Microsoft’s decision gives Google more ability to single-handedly decide what possibilities are available to each one of us.

Readers, what do you think of this move: can we expect better security with Google and Microsoft both contributing to the Chromium codebase? Will that be worth Google tightening its grip on our online lives? Please do chime in below.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cc8_qBs5PDQ/

Android click fraud apps mimic Apple iPhones to boost revenue

SophosLabs has uncovered an unusual click fraud campaign in which malicious Android apps masquerade as being hosted on Apple devices to earn extra rewards.

Advertising click fraud, where a malicious app or process bombards websites with bogus traffic to earn advertising revenue, is a rapidly growing form of cybercrime on mobile and can be hard to spot.

This may go some way to explaining why Google’s Play store failed to detect the malicious design embedded inside a total of 22 apps, which kicked off their click fraud campaign in June this year.

Named Andr/Clickr-ad by researchers, the malicious apps were downloaded a total of two million times, with one app, Sparkle Flashlight, accounting for half of that total.

It’s the second time that SophosLabs has discovered malicious ad fraud apps on Google Play, after noticing the separate Andr/Guerilla-D ad fraud campaign lurking inside 25 apps in March and April.

Fake Apple traffic

What sets Clickr-ad apart from previous examples is its sophisticated attempt to pass off much of the traffic the apps generate as coming from a range of Apple models such as the iPhone 5, 6 and 8.

It does this by forging the device and app identity fields in the HTTP request’s User-Agent header. However, it is careful not to overdo the technique, allowing a portion of the traffic to use identities from a wide selection of Android models too.
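As a hypothetical illustration (not the actual malware code), the spoofing described above amounts to little more than rotating the User-Agent string, weighted towards Apple identities. The strings and the 80/20 split below are assumptions made for the sketch:

```python
import random

# Hypothetical sketch of Clickr-ad's reported User-Agent forgery: most ad
# requests claim to come from an iPhone, while a minority keep plausible
# Android identities so the traffic mix doesn't look suspiciously uniform.
# The strings and the 80/20 split are assumptions for illustration only.
APPLE_UAS = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 11_4 like Mac OS X) "
    "AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15F79",
]
ANDROID_UAS = [
    "Mozilla/5.0 (Linux; Android 7.0; SM-G930F) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/59.0.3071.125 Mobile Safari/537.36",
]

def forged_headers(rng=random):
    """Return HTTP headers with a spoofed device identity, favouring Apple."""
    pool = APPLE_UAS if rng.random() < 0.8 else ANDROID_UAS
    return {"User-Agent": rng.choice(pool)}
```

A fraud filter that looks only at User-Agent strings would see a believable mix of devices, which is part of why forged traffic like this is hard to spot.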

The Apple fakery is about making money – advertisers pay more for traffic that appears to come from Apple devices than from the larger volume of more socially diverse Android ones.

The difference is probably only small fractions of a penny but for a business built on click volume, those fractions add up over time.
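As a back-of-envelope sketch of that arithmetic, with per-click rates and click volume invented purely for illustration:

```python
# Back-of-envelope sketch: even a tiny per-click premium for "Apple" traffic
# adds up at click-fraud volumes. Both rates and the click count below are
# invented for illustration; real ad-network payouts vary widely.
android_rate = 0.0010       # assumed revenue per faked Android click (USD)
apple_rate = 0.0015         # assumed rate when the click looks like an iPhone
clicks_per_day = 1_000_000  # assumed fleet-wide click volume

extra_per_day = clicks_per_day * (apple_rate - android_rate)
print(f"extra revenue per day from the Apple disguise: ${extra_per_day:,.2f}")
```

The point is not the invented rates but the linear scaling with volume: any premium per click, however tiny, multiplies across millions of clicks.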

The ad fraud boom

Ad fraud malware must constantly update itself to remain useful to its makers.

To maximise revenue, Clickr-ad’s command and control (C2) changes the ad profile every 80 seconds and downloads new SDK modules every 10 minutes.

One example captured by the researchers shows the malware posing as a Samsung Galaxy S7 to abuse Twitter’s ad network.

As one might expect from click fraud, a primary concern is stealth: Clickr-ad must hide what it’s doing from the Android user. Writes SophosLabs’ researcher Chen Yu:

Malicious ad calls are made in a hidden browser window, inside of which the app simulates a user interaction with the advertisement.

What to do?

The effect of this kind of app is to drain the device’s battery, generate data traffic users might be charged for, and generally bog down the device by constantly clicking on ads.

Because there is nothing to stop the malware’s creators from installing other malware on devices, SophosLabs decided to classify it as malicious rather than merely unwanted.

The apps were removed from the Play store in the week of 25 November but because their C2 infrastructure remains in place it’s likely they will continue clicking away until they are removed by device owners.

Simply force-closing the app won’t do the trick because it can restart itself after three minutes – a full uninstall is needed.

An extra precaution would be to conduct a full factory reset after ensuring all data has been synchronised to Google’s cloud.

To reduce the possibility of a return, we recommend running mobile anti-malware, such as the free Sophos Mobile Security for Android.

Conclusions

Number one, although it’s mostly safer to download apps from the Play store than anywhere else, it doesn’t guarantee that what you just installed isn’t malicious.

Number two, mobile click fraud isn’t going to go away; indeed, it will likely continue to grow as a problem. It’s simply too lucrative, and Google clearly isn’t on top of the problem despite numerous initiatives to tighten app checking. What’s more, the beauty of Clickr-ad is that it’s a whole platform for ad fraud which could be deployed inside other apparently innocent apps.

Finally, while SophosLabs researchers haven’t detected this malware in Apple’s App Store, it should be noted that iOS apps from the same developers were found there, minus the click fraud functions.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/85FUcNEnZWA/