
Snapchat source code leaked on GitHub – but no one knows why

What just befell a “small” piece of Snapchat’s source code, and should users be concerned?

Things took a turn for the worse earlier this week when Twitter users got wind that Snap had filed a takedown request under the Digital Millennium Copyright Act (DMCA) on 2 August 2018 in response to a portion of its precious code being posted on GitHub.

Asking GitHub to remove commercially sensitive source code isn’t surprising in the least, although some claimed they detected a note of mild panic in the language used. In answer to the question identifying which copyrighted work had been infringed, Snap’s employee replied in all caps:

SNAPCHAT SOURCE CODE. IT WAS LEAKED AND A USER HAS PUT IT IN THIS GITHUB REPO. THERE IS NO URL TO POINT TO BECAUSE SNAP INC. DOESN’T PUBLISH IT PUBLICLY.

Given the situation, to most observers this will sound perfectly reasonable. The company followed up by confirming to Motherboard that a “small amount” of the source code for its iOS app had leaked in May during an update:

We discovered that some of this code had been posted online and it has been subsequently removed.

However, the company made two further claims that are open to question, the first being that the company was:

Able to identify the mistake and rectify it immediately.

This sounds reassuring and yet clearly someone managed to grab the code and post it to GitHub (not to mention the possibility that the code sat on GitHub for two months before this was noticed).

A second issue is the claim that the leak:

Did not compromise our application and had no impact on our community.

That might be a bit complacent, given Twitter posts suggesting the source code spread beyond GitHub in the days before it was taken down.

Even a small piece of source code floating around the public domain raises the chances of a vulnerability being found at some point.

A Twitter user claiming to be the person who posted the code to GitHub later said they had tried to contact the company about the original leak, but no response was forthcoming.

Given that Snapchat’s publisher, Snap Inc, runs a bug bounty program through the third-party HackerOne platform, this is a little surprising – or perhaps source code leaks don’t qualify for the bounty the leaker was angling for.

At least Snap can console itself that it’s not alone. Earlier this year, Apple found itself in a similar pickle after someone posted the source code for Apple’s iBoot bootloader to GitHub, which resulted in a similar DMCA takedown request.

In both cases, the user base has been left to wonder how it is that big, well-resourced companies keep inadvertently exposing their most valuable software assets to anyone with the wherewithal to capitalise on an old-fashioned mistake.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/J-ZVXCip4cY/

Profit-strapped Symantec pulls employee share scheme

Symantec is cancelling an Employee Share Purchase (ESP) programme, angering some workers in the process.

Last week Symantec revealed plans to slash 8 per cent of its workforce (1,000 heads) in response to disappointing enterprise sales. The firm has also cancelled a discounted share purchase worker-loyalty programme as an additional cost-saving measure.

The ditching of the ESP has further knocked morale, according to an affected insider who got in touch with The Register.

Our tipster suggested the move is part of a Machiavellian plan to encourage its top workers to leave in order to reduce redundancy payments [Ed: shouldn’t Symantec be encouraging its top performers to stay?].

Symantec today [Tuesday] canceled the employee stock purchase plan when employees were about to buy in at a 52-week low. This will save the company money it had budgeted to sell the stock to employees at the agreed upon discount.

It is anticipated the top performers in the company will be fed up and will move on to other employers. That will reduce the headcount by the expected amount and Symantec will not have to pay any severance packages for involuntary separations.

Symantec’s Employee Stock Purchase Plan lets workers buy stock at a 15 per cent discount every six months, using up to 10 per cent of their gross salary.

In normal circumstances this is free money to subscribed workers, equating to around 1.5 per cent of their salary. Symantec’s declining share price makes it less attractive but even so the suspension of the scheme has evidently upset some. It reportedly runs alongside a bonus programme, offering wage boosts of between 5 and 30 per cent.
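
That ~1.5 per cent figure is simply the discount applied to the maximum contribution. A rough back-of-the-envelope sketch, using hypothetical numbers rather than anything from Symantec’s plan documents:

    # Back-of-the-envelope value of the ESPP benefit described above.
    # All figures are illustrative assumptions, not Symantec's actual plan maths.
    salary = 100_000                   # hypothetical gross annual salary
    contribution = 0.10 * salary       # up to 10 per cent of gross salary
    discount = 0.15                    # shares bought at a 15 per cent discount

    benefit = discount * contribution  # value of the discount to the worker
    print(benefit)                     # 1500.0
    print(benefit / salary)            # 0.015 -> roughly 1.5 per cent of salary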

We say in “normal circumstances” because everything is far from normal at Big Yellow right now.

Symantec is in the middle of an ongoing internal investigation into its accounting practices and executive commentary on historical financial results. The audit has meant that Symantec has not filed its annual report on Form 10-K for fiscal year 2018.

“The Company’s financial results and guidance may be subject to change based on the outcome of the Audit Committee investigation,” Symantec said as part of a statement on its results.

“At this time, the Company does not anticipate a material adverse impact on its historical financial statements for the third quarter of fiscal year 2018 and prior. As noted above, our fourth quarter of fiscal year 2018 and subsequent periods remain open periods from an accounting perspective, subject to adjustment for material updates.”

The Register understands that Symantec won’t be issuing shares to workers until it sorts through its books and files its outstanding 10-K returns.

The company has not sent us a publicly usable statement following our calls for comment yesterday evening. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/08/symantec_share_scheme_supended/

Japanese dark-web drug dealers are so polite, they’ll offer ‘a refund’ if you’re not satisfied

The concept of the “dark web” in Asia is way different to what peeps in Europe and the Americas are used to.

This is according to researchers at New York computer security firm IntSights, which today outlined a number of quirks unique to Asian countries in the way underground sites, and those of questionable legality, operate.

IntSights director of threat research Itay Kozuch told The Register that various countries in the region will use hidden Tor services in different ways and for different purposes compared to folks in the EU and US.

In Japan, for example, the dark web has a decidedly more civil tone than in other countries. Users in Japan will often use Tor-hidden sites for benign things such as blogs or BBS communities.

Even when illegal activity is taking place, it is conducted with good manners. Kozuch noted that drug dealers on the Japanese dark web will offer their customers free samples, and give refunds to those who are not satisfied.

“We contacted a few vendors and they said it is correct, if you’re not happy with the drugs you bought you can get a refund,” Kozuch said. “Even during the criminal acts they’re offering services, and it is important to them you would have a good experience.”

In China, meanwhile, there’s relatively little of a “dark” internet to speak of. Kozuch said that many of the activities seen on hidden services in other parts of the world operate on the “clear” internet in China, where the sheer size of the population – an estimated 772 million internet users – makes it difficult for authorities to combat illegal activity.

“The Chinese are really not afraid to operate within their clear web,” Kozuch told us. “The police and intelligence services are there, and from time to time we hear about arrests, but the fact is there are many hundreds of thousands of websites.”

In addition to being widespread in number, nefarious goods and services on the Chinese web can also be had at bargain prices. The IntSights team, we’re told, found that everything from narcotics to distributed denial-of-service operators and exploits was offered at far lower prices compared with similar fare touted in Western cyber-souks.

你会说汉语吗? (Do you speak Chinese?)

Don’t, however, expect to be able to take advantage of any of those low prices. Kozuch noted that most vendors within China do not do business with international punters, and even fewer will so much as respond to anyone who does not communicate in Chinese.

This can be due to things as simple as the language barrier, but just as often the seller’s fear of government retribution will keep them from dealing outside of their own borders.

“It is one thing to do business with other Chinese,” Kozuch explained, “but if you are caught doing business with an American, you have a serious problem.”

In other parts of Asia, hacktivism dominates dark-web activity. In politically tumultuous places such as Thailand, activist groups such as Anonymous still operate prominently. Visitors to those underground sites will often seek out copies of database leaks and dumps of intelligence files, as well as hacking tools and guides that can be used in campaigns.

Back in China, hacktivism also has a prominent place on the dark web, though in many cases hacker groups will actually be acting on behalf of the state.

“Many of the groups in China are considered nationalistic and they have a strong sense of pride,” said Kozuch. “If a country, according to their belief, misbehaves against Chinese people… they will act without any government orders or prompting.”

A full report on IntSights’ findings, if you’re interested in the details, can be downloaded here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/08/intsights_asia_dark_web_report/

Breaking Down the PROPagate Code Injection Attack

What makes PROPagate unique is that it uses Windows APIs to take advantage of the way Windows subclasses its window events.

Attackers have a new way to sneak malicious code into benign processes. It is called PROPagate, and it is a stealthy code injection technique that is now being used in a growing number of attacks.

Recent campaigns such as Smoke Loader, used to install Monero Miner software, have utilized PROPagate in place of other injection methods. Smoke Loader has been known to use multiple injection methods to hide its stagers in the memory space of Explorer.exe. These techniques include, but are not limited to, SetWindowLong and CreateProcessInternalW. The newest versions implement PROPagate as a method to vary signatures and bypass detection.

So why are criminal campaigns using PROPagate?

The primary reason is stealth. Similar to other code injection methods, PROPagate inserts malicious code into a legitimate running process in order to make detection difficult, since no abnormal processes are being executed. It then invokes that inserted code to run the clandestine attack.

PROPagate enumerates the windows on the system running with the same privileges as the user executing it, looking for those that use the SetWindowSubclass API. It then inserts the shellcode it wishes to execute into the memory of the target process, and registers a new property through the SetPropA function that, when invoked, will pass execution to that shellcode. The shellcode lies dormant until a window event occurs, at which point the benign process executes it.
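
As a purely defensive illustration of that first enumeration step, the short Windows-only sketch below lists top-level windows carrying the “UxSubclassInfo” property that SetWindowSubclass is commonly reported to leave behind – the kind of windows PROPagate hunts for. It is an assumption-laden sketch of the reconnaissance stage only (real samples also walk child windows), not an implementation of the injection:

    # Windows-only: list top-level windows that appear to be subclassed via
    # SetWindowSubclass, i.e. that carry the "UxSubclassInfo" window property.
    # Reconnaissance/monitoring only -- no memory is written, nothing injected.
    import ctypes
    import ctypes.wintypes as wt

    user32 = ctypes.windll.user32
    user32.GetPropW.argtypes = [wt.HWND, wt.LPCWSTR]
    user32.GetPropW.restype = wt.HANDLE          # keep the full handle value
    EnumWindowsProc = ctypes.WINFUNCTYPE(wt.BOOL, wt.HWND, wt.LPARAM)

    candidates = []

    def _on_window(hwnd, _lparam):
        if user32.GetPropW(hwnd, "UxSubclassInfo"):   # subclassed window found
            length = user32.GetWindowTextLengthW(hwnd)
            buf = ctypes.create_unicode_buffer(length + 1)
            user32.GetWindowTextW(hwnd, buf, length + 1)
            candidates.append((hwnd, buf.value))
        return True                                   # keep enumerating

    user32.EnumWindows(EnumWindowsProc(_on_window), 0)
    for hwnd, title in candidates:
        print(hex(hwnd), title or "<untitled>")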

What makes PROPagate unique is that it uses Windows APIs that are available on all Windows systems, taking advantage of the way Windows subclasses its window events. SetProp is used to modify the property value so that the injected malicious code is called when its event is triggered.

Attributes: PROPagate’s primary benefit is its ability to hide the attacker’s activity.
The code injected into the running process is harder for incident responders to detect. One way attackers can use this, as shown in the Smoke Loader example, is to inject a payload into a benign process, such as Explorer.exe, and use that process to download and install the intended malware. The download and installation will originate from the Explorer process ID, which could be overlooked by sandboxes and researchers. Another option for attackers is to create a backdoor into the system by opening a connection to a command-and-control server. Additionally, an attacker could launch the malware through any known persistence mechanism, inject the shellcode into benign processes, then have the malware delete the persistence mechanism and the file from disk. When the malware receives a shutdown event, it simply restores the persistence mechanism and writes itself back to disk.

Weaknesses: PROPagate has two important limitations for the attacker.
First, it does not facilitate Remote Code Execution (RCE), so in order to utilize it, the attacker must already be on the system. Second, it is restricted to injecting only into processes with equal or lesser privileges, so PROPagate cannot be used to escalate privileges.

Here is a brief overview of how PROPagate is launched:

  1. Enumerate the windows of running processes to find one using the SetWindowSubclass API.
  2. Open the enumerated process.
  3. Copy an existing property from the process.
  4. Create two memory regions in the benign process.
  5. Modify the property to point to a memory region just created.
  6. Copy shellcode into one of the memory regions created in the benign process and copy the modified property into the other memory region.
  7. Use the API command SetProp to load the modified property into the benign process.
  8. Trigger the payload by issuing an event to the benign process, such as Terminate Window.
  9. Reset the property list to its original value.
  10. Clean up and exit the malicious code.

In the Wild: A Brief Overview of a Current PROPagate Attack Campaign
Smoke Loader (VirusShare has 28 variants)

MD5 = 0cfcc4737bb1b07bc3563144b297f873

  • Preliminary review did not show signs of the SetProp or SetWindowSubclass APIs.
  • Exploded with Cuckoo Sandbox. Flagged as malicious but did not flag on the PROPagate injection method. There are multiple injections happening in this sample.
  • Injection method uses CreateProcessInternalW.

MD5 = a080729856d6c06d69780e70a7298004

  • Preliminary review did not show signs of the SetProp or SetWindowSubclass.
  • Uses the SetWindowLong injection method, not PROPagate.

Detection

Figure 1: Cuckoo analysis summary for proof of concept.

At the time of this writing, the detection of PROPagate is not built into Cuckoo Sandbox (see Figure 1, above). When the author tested for Cuckoo detection with a proof of concept, it was flagged for creating read-write-executable memory but not for injecting code into Explorer.exe or triggering an event.

Figure 2: Cuckoo signature alerts to read-write-execute memory creation.

Cuckoo flagged the sample with a very low score and did not alert that a new process was created from Explorer.exe. The proof-of-concept shellcode was not configured to exit cleanly and crashed the Explorer.exe process. Cuckoo does not look for the usage of SetProp or SetPropA.

PROPagate is an effective method for stealthy code injection, particularly through its ability to launch in valid processes. However, its capabilities are limited — the attacker must already be on the system to launch PROPagate because it does not facilitate RCE, and the attacker can only execute under the same user privileges. To detect this attack, it is important to add monitoring of the SetProp/SetWindowSubclass APIs.


Scott specializes in malware research/reverse engineering and incident response. He’s developed malware detection tools for a multinational conglomerate, analyzed APT attacks and tool kits, and consulted for the defense, manufacturing and financial industries. He’s spent over …

Article source: https://www.darkreading.com/breaking-down-the-propagate-code-injection-attack/a/d-id/1332473?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Manufacturing Industry Experiencing Higher Incidence of Cyberattacks

New report reveals the natural consequences of ignoring the attendant risks of industrial IoT and Industry 4.0.

The rapid convergence of enterprise IT and operational technology networks in manufacturing organizations has definitely caught the eyes of cyberattackers. According to a new report out today, manufacturing companies have started experiencing elevated rates of cyber reconnaissance and lateral movement from attackers taking advantage of the growing connectivity within the industry. 

Developed by threat hunting firm Vectra, the “2018 Spotlight Report on Manufacturing” features data from a broader study of hundreds of enterprises across eight other industries. It shows that even though organizations in retail, financial services, and healthcare industries are more likely to experience reportable breaches involving personally identifiable information, manufacturing organizations outpace them in other areas of risk. 

For example, the manufacturing industry is subject to a higher-than-usual volume of malicious internal behaviors, which points to attackers likely already having found footholds inside these networks. In particular, during the first half of 2018 manufacturing firms had the highest level of reconnaissance activity per 10,000 machines of any industry. This kind of behavior typically shows that attackers are mapping out the network looking for critical assets. Similarly, manufacturing was in the top three industries most impacted by malicious lateral movement across its networks.

All of these metrics indicate a heightened level of risk to manufacturing’s bread-and-butter: uninterrupted operations and well-guarded intellectual property. According to the “2018 Verizon Data Breach Industry Report,” 47% of breaches in manufacturing are motivated by cyber espionage. 

Experts chalk up the increased risk to the industry’s mass deployment of industrial Internet of Things (IoT) devices and the shift to what some tech pundits call Industry 4.0. As analysts at McKinsey, Deloitte, and others explain, we’re in the middle of the fourth industrial revolution. The first started with steam-powered machines. The second came with the advent of electricity. The third occurred with the first programmable controllers. And now the fourth is occurring with increased connectivity, automation, and data-driven adaptivity of operational systems across manufacturing plants. Industry 4.0 delivers ubiquitous production and control to the business, but it also increases the risk of disruption by cyberattackers if automated and connected systems aren’t sufficiently protected.

Unfortunately, the industry’s paradigms around protecting systems haven’t caught up with the changing realities of its attack surface. For example, the Vectra report explains how manufacturers traditionally used customized and proprietary protocols for connecting systems on the factory floor. That in and of itself kept the bar of entry for cybercriminals pretty high. But that trend is changing as more IoT devices adopt standardized protocols.

“The conversion from proprietary protocols to standard protocols makes it easier to infiltrate networks to spy, spread, and steal,” the report states. 

Additionally, manufacturers tend not to implement strong security access controls on certain systems for fear of interrupting the flow of lean production lines. All of this is adding up to heightened levels of risk.

“The interconnectedness of Industry 4.0-driven operations has created a massive attack surface for cybercriminals to exploit,” says Chris Morales, head of security analytics at Vectra.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/risk/manufacturing-industry-experiencing-higher-incidence-of-cyberattacks/d/d-id/1332515?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hey, you know what a popular medical record system doesn’t need? 23 security vulnerabilities

Fresh light has been shed on a batch of security vulnerabilities discovered in the widely used OpenEMR medical records storage system.

A team of researchers at Project Insecurity discovered and reported the flaws, which were patched last month by the OpenEMR developers in version 5.0.1.4. With the fixes now having been out for several weeks, the infosec crew on Tuesday publicly emitted full details of the critical security bugs, with a disclosure [PDF] so long it has its own table of contents.

Any medical provider that has yet to update to the latest version of the open-source OpenEMR software is well advised to do so now, before some miscreant exploits the holes to nab sensitive records.

Among the list of bugs found by Project Insecurity are four remote code execution flaws; nine SQL injection vulnerabilities; arbitrary read, write and deletion bugs; three information disclosure flaws; a cross-site request forgery allowing for remote code execution; deep breath; an unrestricted file upload hole; a patient portal authentication bypass flaw; and administrative actions that can be performed simply by guessing a URL path.

Delicious source

Perhaps what is most impressive is that the Project Insecurity gang – Brian Hyde, Cody Zacharias, Corben Leo, Daley Bee, Dominik Penner, Manny Mand, and Matthew Telfer – said all of the bugs were discovered by a team of seven researchers poring over source code without the use of any automated testing tools.

“We set up our OpenEMR testing lab on a Debian LAMP server with the latest source code downloaded from GitHub,” the Insecurity team explained.

“The vulnerabilities disclosed in this report were found by manually reviewing the source code and modifying requests with Burp Suite Community Edition, no automated scanners or source code analysis tools were used.”

In disclosing the flaws, Insecurity’s researchers make a number of recommendations to the OpenEMR community to avoid the introduction of further vulnerabilities, including the use of parameterized database queries in PHP scripts (to prevent SQL injection) and limiting uploads only to non-executable image files (to patch the arbitrary file upload-and-run hole).
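
The parameterized-query advice is easy to illustrate. OpenEMR itself is written in PHP, but the principle is language-agnostic; here is a minimal sketch in Python with sqlite3 showing why binding parameters defeats a classic injection string:

    # Minimal illustration of parameterized queries (Python/sqlite3 used here
    # for brevity; OpenEMR is PHP, where prepared statements play the same role).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO patients (name) VALUES ('Alice')")

    user_input = "Alice' OR '1'='1"   # classic injection attempt

    # Unsafe: building the query by string concatenation would let the input
    # rewrite the SQL itself. With a bound parameter it can only ever be data.
    rows = conn.execute(
        "SELECT id, name FROM patients WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)   # [] -- the hostile string matches no patient name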

Other bugs, such as the remote code execution and cross-site request forgery flaws, will require developers to get up to speed on, and implement, best practices for writing secure code.

“Obviously, if a malicious user were to convince an administrator to click a certain link, that malicious user could successfully pop a shell on their target,” the researchers noted. “Nearly all of OpenEMR’s administrative actions are vulnerable to CSRF one way or another.”

OpenEMR bills itself as “the most popular open source electronic health records and medical practice management solution.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/07/openemr_vulnerabilities/

Expect API Breaches to Accelerate

APIs provide the digital glue that binds apps, cloud resources, app services and data together – and they’re increasingly an application security threat.

Last year the category of underprotected APIs cracked the OWASP Top 10 list for the first time. The breach trends since then are starting to prove that inclusion was pretty prescient. In 2018 alone we’ve seen at least half a dozen high-profile data breaches and security exposures caused by poor API security. And that doesn’t even include incidents last year at T-Mobile, Instagram, and McDonald’s that together exposed sensitive data about millions of their users.

This week the latest API security incident to make waves struck Salesforce, which reported to customers that a bug in an API in its Marketing Cloud service potentially exposed customer data. The flaw could have caused API calls to retrieve or write data from one customer’s account to another’s, the company stated.

This is a different verse of the same song we continue to hear about the growing trend of API insecurity. Just last month, for example, researchers announced that mobile payment app Venmo has been exposing details about hundreds of millions of transactions through a poorly secured API. And this spring an egregiously insecure Panera Bread API exposed details about mobile users in a major way. In that case, as many as 37 million records, including customer names, email addresses, physical addresses, birthdays, and the last four digits of credit cards — all in plain text — were exposed through a searchable API that required no authentication to access.

This is an issue that cuts across all company sizes and industries. Last month, for example, HIMSS released a report showing that exploitation of API flaws has become a major concern for healthcare organizations. And a study earlier this year by Imperva showed that more than two-thirds of organizations expose APIs to the public in order to enable partners and external developers to tap into their software platforms and app ecosystems. Unfortunately, more than three in four organizations report they treat API security differently than Web app security — indicating that API security readiness lags behind other aspects of application security.

That Imperva study also shows how prevalent API use is becoming within most organizations: The typical organization now manages an average of 363 APIs. This can be chalked up to a growing trend in the development world toward microservices, where most modern applications are no longer monolithic pieces of software but are instead composed of smaller components that can be reused, mixed, and matched across an entire application portfolio.

In addition, whole application ecosystems depend on open connectivity to share data and make users’ lives easier through better integrations. APIs are what’s used to help all of these components play nicely together and to get applications seamlessly sharing data among themselves. Indeed, 61% of organizations say API integration is critical to their business strategy.

“In 2018, it is expected that you need APIs to do business in this digital age,” writes Kin Lane, who’s known as the API evangelist at Cloud Elements. “The companies, organizations, institutions, and government agencies who are just beginning to invest in their API infrastructure are quickly realizing how far behind they are when it comes to the efficient delivery of data and content to Web and mobile applications, as well as the ability to work with Internet-connected devices, and take advantage of the benefits of machine learning and artificial intelligence.”

But as businesses jump on the API development trend, they’ll need to keep in mind that the more APIs grow in importance to them, the more they will grow in importance to attackers. According to Gartner, by 2022 API abuses will be the attack vector most responsible for data breaches within enterprise Web applications.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/application-security/expect-api-breaches-to-accelerate/d/d-id/1332504?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Even ‘Regular Cybercriminals’ Are After ICS Networks

A Cybereason honeypot project shows that ordinary cybercriminals are also targeting weakly secured environments.

Contrary to what some might perceive, state-backed groups and advanced persistent threat (APT) actors are not the only adversaries targeting industrial control system (ICS) environments.

A recent honeypot project conducted by security firm Cybereason suggests that ICS operators need to be just as concerned about ordinary, moderately skilled cybercriminals looking to take advantage of weakly secured environments.

“The biggest takeaway is that the threat landscape extends beyond well-resourced nation-state actors to criminals that are more mistake-prone and looking to disrupt networks for a payday,” says Ross Rustici, senior director of intelligence services at Cybereason. “The project shows that regular cybercriminals are interested in critical infrastructure, [too].”  

Cybereason’s honeypot emulated the power transmission substation of a major electricity provider. The environment consisted of an IT side, an operational technology (OT) component, and human-machine interface (HMI) management systems. As is customary in such environments, the IT and OT networks in Cybereason’s honeypot were segmented and equipped with security controls that are commonly used by ICS operators.

To lure potential attackers to its honeypot, Cybereason used bait such as Internet-connected servers with weak passwords and remote access services such as RDP and SSH enabled. But the security firm did not do anything else besides that to promote the honeypot.

Even so, just two days after the honeypot was launched a threat actor broke into it and installed a toolset designed to allow an attacker and a victim to use the same access credentials to log into a machine via Remote Desktop Protocol (RDP). The toolset, commonly found on compromised systems advertised on xDedic, a Russian-language cybercrime market, suggested that the threat actor planned to sell access to Cybereason’s honeypot to others.

The threat actor also created additional user accounts on the honeypot in another indication that the servers were being prepared for sale to other criminals. “The backdoors would allow the asset’s new owner to access the honeypot even if the administrator passwords were changed,” Cybereason said in a blog describing the results of its honeypot project.

Cybereason deliberately set up the honeypot with relatively weak controls so it would take little for the attacker to break into it by brute-forcing the RDP, Rustici says. The skill level to prepare the server for sale was also fairly rudimentary and could have been accomplished by a high-level script kiddie.

Slightly more than a week after the initial break-in, Cybereason researchers observed another threat actor connecting to the honeypot via one of the backdoor user accounts. In this instance, the attacker was focused solely on gaining access to the OT environment. The threat actor’s scanning activities and lateral movement within the honeypot environment were focused on finding a way to access the HMI and OT environments.

The threat actor showed no interest in activities such as using the honeypot for cryptomining, launching DDoS attacks, or any of the other activities typically associated with people who buy and sell access to compromised networks.

The adversary’s movements in the honeypot suggested a high degree of familiarity with ICS networks and the security controls in them, Cybereason said. At the same time, the attackers, unlike more sophisticated adversaries, also raised several red flags that suggested a certain level of amateurishness on their part.

“The way they operated makes us think this group was a mid- to high-level cybercrime group,” Rustici says. “Based on their capabilities, it is likely they were either trophy hunting to improve their reputation or looking for a ransom payday.”

The data from the honeypot project shows attackers have a new way of sourcing ICS assets, Cybereason noted. Rather than select, target, and attack a victim on their own, adversaries can simply buy access to an already compromised network.

The threat group that purchased access to the honeypot also lived entirely off the land for lateral movement and for scanning for systems with access to HMI and OT systems, Rustici says. “They never uploaded a tool to the network,” he noted.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/even-regular-cybercriminals-are-after-ics-networks/d/d-id/1332505?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Funnily enough, no, infosec bods aren’t mad keen on W. Virginia’s vote-by-phone-app plan

The US state of West Virginia plans to allow some of its citizens to vote in this year’s midterm elections via a smartphone app – and its seemingly lax security is freaking out infosec experts.

Voters living overseas, including military personnel and their spouses, will, in theory, be able to install and use the Voatz mobile application to submit their ballots electronically over the internet. Voatz, founded in 2014, is a Boston-based startup that specializes in “mobile focused election voting and citizen engagement.” It recently nabbed $2.2m in a seed funding round, and its software is available for Android and iOS.

West Virginia officials conducted a pilot project using Voatz with a handful of overseas voters in its primary elections earlier this year, and with that having been judged a success, they now want to expand the program for the November midterm elections.

According to state bureaucrats, Voatz uses a combination of blockchain ledgers and biometrics: a scan of the photo on your government ID has to match a selfie taken by your phone before a ballot can be cast, and the data is stored in a blockchain held on distributed backend servers. That’s supposed to stop miscreants from voting as someone else, voting multiple times, tampering with tallies, and so on.

Voters will still have the option to send in paper ballots and, judging by this week’s response from the infosec community, that may be a good idea.

Criticisms

Security experts are not convinced the startup’s system will be secure enough to ensure nobody can mess with the submitted election results, especially with Russian and other hackers taking a keen interest in America’s democratic processes.

UK-based computer security bod Kevin Beaumont outlined on Monday a list of red flags that he spotted.

We’re told the Voatz website needs patching: it is powered by an out-of-date version of the Apache web server on a box with an out-of-date SSH service and PHP installation. It also apparently exposes NTP, POP3, PHP3, and a 2009-era edition of Plesk to the internet. The site’s database, hosted on Azure, has a remote administration panel exposed on port 8080 with no HTTPS protection, according to Beaumont.

This does not inspire confidence that Voatz can keep miscreants out of its servers, and prevent them from potentially meddling with election results.

Some of the Voatz source code also appears to have ended up on GitHub complete with Yodlee account login credentials and the keys to one of the upstart’s MongoDB databases. Yodlee is used to identify voters’ identities via their bank account details.

An earlier attempt to use Voatz in a Utah county election allegedly went awry, and officials had to fall back to paper ballots, according to the app’s reviews.

Beaumont also argued that the security audits carried out on Voatz by external outfits were not particularly thorough, and one of the listed auditors has apparently claimed it hasn’t had anything to do with the biz. Meanwhile, Unix systems administrator David Gerard has picked apart Voatz’s use of a private blockchain, which is basically not much more than a single-user append-only database.

Voatz told The Register the source code on GitHub is no longer used, and the MongoDB database in question was handled years ago by an intern. “Those are from a summer project which an intern worked on as a test project two-plus years ago,” a spokesperson for Voatz said. “It doesn’t have anything to do with our system deployed currently.”

Voatz also disputed claims its systems are vulnerable and untested, adding that its use of a blockchain ledger is legit. It has popped more information about its West Virginia project on its website, here and here.

“After authentication, the Voatz app encrypts the voter’s identity, ties the phone to the voter via their fingerprint, and then deletes all identifying information (photo, identity record),” the biz explained.

“This process ensures that any identifying information is not stored. Once authenticated, voters can vote on mobile ballots they receive from their jurisdiction. If, for any reason, the voter falls off the voter registration rolls, the jurisdiction will no longer send a mobile ballot and the voter must restart the process of registration and authentication.


“The Voatz app is built with security measures embedded in qualified smartphones and employs blockchain technology to ensure that, once submitted, votes are verified and immutably stored on multiple, geographically diverse verifying servers. Before going into the pilot, Voatz submitted the smartphone voting app to an independent security firm for review. Beyond the pilot, the Voatz voting app undergoes frequent rigorous ‘red-team’ testing by independent, qualified third parties.”

One big worry is the upstart’s staggering reliance on its blockchain backend.

The idea is that election officials send a token to each voter, which is credited to the ledger in their smartphone app.

Then, when a vote is submitted over the internet from the application, Voatz’s servers verify the action, and if all is well, the token is debited from the voter’s ledger and credited to the selected candidate’s ledger. Finally, you total up all the tokens, and the winning candidate is the one with the most.

This verification of the submitted votes, and assignment of tokens, relies on the backend servers being clean and not compromised by hackers – otherwise ballot tokens could end up in the wrong ledgers, and Voatz is not particularly open about how its system works under the hood.
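
To make the trust problem concrete, here is a toy model of the token flow as just described. It is a hypothetical sketch for illustration only – Voatz has not published its implementation – and its whole point is that the tally is only as trustworthy as the server holding the ledger:

    # Toy model of the vote-token flow described above. Purely illustrative --
    # Voatz's actual design is not public. Note that nothing here stops a
    # compromised server from moving tokens however it likes.
    from collections import Counter

    class TokenLedger:
        def __init__(self, registered_voters):
            # one token per registered voter, issued by election officials
            self.voter_tokens = {v: 1 for v in registered_voters}
            self.candidate_tokens = Counter()

        def cast_vote(self, voter, candidate):
            if self.voter_tokens.get(voter, 0) < 1:
                raise ValueError("no token: not registered, or already voted")
            self.voter_tokens[voter] -= 1          # debit the voter's ledger
            self.candidate_tokens[candidate] += 1  # credit the candidate's ledger

        def tally(self):
            return self.candidate_tokens.most_common()

    ledger = TokenLedger(["alice", "bob", "carol"])
    ledger.cast_vote("alice", "candidate_a")
    ledger.cast_vote("bob", "candidate_b")
    ledger.cast_vote("carol", "candidate_a")
    print(ledger.tally())   # [('candidate_a', 2), ('candidate_b', 1)]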

Joseph Lorenzo Hall, chief technologist at the Center for Democracy and Technology in Washington, DC, summed it up to CNN: “Mobile voting is a horrific idea. It’s internet voting on people’s horribly secured devices, over our horrible networks, to servers that are very difficult to secure without a physical paper record of the vote.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/07/voatz_west_virginia_voting_app/

Fortnite ditches Google Play – will it undermine Android security?

Well, Google, that’s what you get for having an open platform that makes it easy to install apps on Android phones: Epic Games has tucked its Fortnite game under its arm and leaped out of the Google Play walled garden, saying “Basta!” to that 30% “store tax” on all sales.

…and evidently not being able to do the same to Apple, with its identical 30% App Store cut, given that Apple, unlike Google, doesn’t allow iOS users to download apps that aren’t first approved by its internal review processes and distributed through its proprietary marketplace.

On Friday, Epic Games CEO Tim Sweeney confirmed the rumor about its Play Store exit to The Verge. Besides ditching the Play Store, Sweeney said that Epic would do the same thing for the iOS release of Fortnite, if it were possible. It’s not: Apple’s ecosystem is fully locked down, meaning Epic has no choice but to use the iTunes App Store, same as with the console platforms.

In an email, Sweeney said that Epic had two motivations: first, the game maker’s after a more direct relationship with customers. It doesn’t need Google Play for that, given that players can get Fortnite on PC through its own Epic Games Launcher. Similarly, Epic has chosen to bypass Steam – a video games distribution platform that offers digital rights management (DRM), matchmaking servers, video streaming, social networking services, game installation and automatic updating – and just use its own launcher and account system instead.

Sweeney:

Epic wants to have a direct relationship with our customers on all platforms where that’s possible. The great thing about the Internet and the digital revolution is that this is possible, now that physical storefronts and middlemen distributors are no longer required.

The second motivation: money. Epic is tired of Google Play’s 30% bite out of sales, and it doesn’t think it’s worth it. Here’s what Sweeney had to say about it in an interview with Eurogamer:

It’s a high cost in a world where game developers’ 70% must cover all the cost of developing, operating, and supporting their games. And it’s disproportionate to the cost of the services these stores perform, such as payment processing, download bandwidth, and customer service.

Sweeney told Eurogamer that Epic is “grateful” for everything Google’s done, but now it’s time to shift gears on how Android fits into the landscape. He didn’t sound particularly worried about Google reacting badly to the news, saying that Epic’s looking forward to “continued collaboration” with the company:

Google built Android as an open platform on top of the Linux kernel and other open-source software, and Android 8.0 “Oreo” greatly advances the security and user-friendliness of Android as an open platform. Epic Games, as the developer of Fortnite and the Unreal Engine, is grateful for Google’s work and looks forward to continued collaboration with Google to further Android as an open, console-quality mobile gaming platform.

Fortnite is free to play. In Fortnite Battle Royale, 100 players land on an island to look for weapons, build defenses, and fight in ever-smaller spaces to be the last one standing. Epic makes its money off the stuff players buy to make it out alive and/or to make them pretty: costumes, dance moves and the like. Indeed, Epic makes a whole lot of money out of that stuff: In April, Fortnite generated nearly $300m across mobile, console and PC platforms, according to digital game sales tracker SuperData Research.

Because Fortnite is so wildly popular – it hit 125m players in June – Epic likely isn’t worried that its customers won’t follow it straight out of the Play Store, even if it means doing what many young players have never done: namely, install the game themselves, without the help of an app store like Google’s or Apple’s.

That process is known as “sideloading.” It’s not terribly complex, and Epic’s proprietary installer will presumably make it even less so. But while it’s not particularly complex, neither is it particularly safe.

Besides making beaucoup bucks off apps, there’s a reason that Google tries to push everyone through the Play Store (and a reason why Apple won’t give up control). Like it or not, the Play Store is a walled garden that keeps out malware.

Granted, Google doesn’t keep out all malware, but it shuts out an awful lot. It’s hard to get malware into the Play Store, and the vast majority of Android malware is on other markets or websites.

Will Epic’s decision to ditch the “sales tax” of the Play Store – along with its protection – be a rallying cry for other apps to do the same? And will it encourage Android users to look elsewhere?

If Fortnite is even halfway successful, it might, and doing that risks undermining one of the simplest, most useful and easiest to remember pieces of security advice we can offer: stick to Google Play.

Even before the announcement there were plenty of scammers out there trying to take advantage of the anticipation around Fortnite for Android and willing to jump on any lambs that have wandered out of the walled garden.

We’ve seen things like a fake Fortnite app that leads to malware, and fake free invites to supposedly get in on the release that turned out to be from windbag fraudsters looking for profit or for pumped-up Twitter follows/likes/retweets/comments.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XKJiLYSo8FA/