93% of Cloud Applications Aren’t Enterprise-Ready

The average business uses 1,181 cloud services, and most don’t meet all recommended security requirements, Netskope says.

Think your company’s cloud usage is secure? Think again. Data shows the average business uses 1,181 cloud services, and nearly all of them — 92.7% — are not enterprise-ready.

This data comes from Netskope, which discovered trends around cloud service adoption and usage by analyzing anonymized data from its Netskope Active Platform. The number of cloud services ranges from a few hundred in smaller organizations to more than 3,000 in large enterprises.

To determine whether an app was “enterprise-ready,” analysts used parameters from the Cloud Security Alliance’s Cloud Controls Matrix. They researched more than 40 parameters from each cloud service, including business continuity, data security, access control, privacy, and auditing, and used these to rank services as low, medium, high, or excellent.
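As a rough illustration of that kind of rubric (the function, thresholds, and equal weighting here are our own invention; Netskope's actual scoring methodology is not public), a ranking step might look like:

```javascript
// Hypothetical sketch of a CCM-style readiness ranking. Each of the 40+
// parameters is assumed to be scored between 0 and 1; the cutoffs below
// are illustrative, not Netskope's.
function readinessRank(scores) {
  const avg = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  if (avg >= 0.9) return "excellent";
  if (avg >= 0.75) return "high";
  if (avg >= 0.5) return "medium";
  return "low";
}

console.log(readinessRank([0.9, 0.8, 0.95])); // → "high"
```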

Human resources and marketing departments are major drivers of cloud adoption. The average count of HR apps across organizations is 139, the highest yet for any given department. “It just keeps rising,” says Jervis Hui, senior security strategist at Netskope. “This is the highest average we’ve seen in the course of the four to five years Netskope has been doing this report.”

Researchers are seeing a broad transition from traditionally on-premises HR services to cloud-based apps like Workday, SuccessFactors, and Ultimate Software. Most of these new apps contain sensitive data but aren’t always sanctioned by IT, putting the data at risk.

“A lot of these HR apps and marketing apps have a lot of customer information and marketing information that counts as personal data under GDPR,” says Hui. “And a lot of them are shadow IT; they’re not necessarily brought in or vetted by the IT organization.” (The EU’s General Data Protection Regulation takes effect on May 25.)

When creating policies and access controls to secure information, teams should start with HR and marketing apps, the researchers reported. Many popular apps in these categories contain personal data and require data loss prevention software and access controls to ensure that data is handled in compliance with regulations such as GDPR.

Analysts compiled a list of top cloud services, which mostly consist of storage and collaboration tools and include popular offerings like Outlook, Office 365, Gmail, Facebook, Skype, Google Drive, SharePoint, Microsoft Power BI, iCloud, Twitter, LinkedIn, Box, and Salesforce.

These are common in the enterprise and most are sanctioned; however, even vetted apps can be connected to dangerous ones, Hui points out. Some workflow apps are less popular but contain sensitive data — for example, virtual signature tools that handle important files.

“Those are the apps you really want to look at,” he notes. Admins can put security controls on Microsoft services and Box, for example, to prevent sharing sensitive files with non-vetted apps.

Data indicates the majority of malware detections are generic, with threats like Flash exploits and worms making up 41.6% of the total. Backdoors made up 33.6% of malware detections, followed by Microsoft Office macros (8.6%), adware (4%), and PDF exploits (3.2%), with threats like ransomware, Mac malware, JavaScript, and mobile malware falling behind. Bitcoin and other cryptocurrency malware made up only 0.4% of the total, but that number is rising rapidly, says Hui.

Businesses will need to get a handle on data visibility ahead of GDPR this year.

“Looking at the data … the big thing in terms of compliance is looking at which apps are in use right now in our organization and seeing what kind of big controls you need to put in place,” says Hui. “Companies need visibility into which apps are being used and place control over them.”

When you find applications putting data at risk, determine which groups of employees are using those apps and how many people are using them. How are they being used? Where is data flowing? Are they accessing those applications on unmanaged devices?

If the app is dangerous and not used often, one option is to block it completely and not let anyone use it. If it’s a common app and personally identifiable information is flowing into it, start coaching people away from the app. Have a sanctioned, alternate app ready for a similar service and say “This app is not compliant; please use this service instead.”
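That triage logic (block the rarely used risky app; coach users of the popular one toward a sanctioned alternative) can be sketched as a simple policy function. The field names and the ten-user threshold are illustrative assumptions, not anything from the Netskope report:

```javascript
// Toy triage rule: block risky apps with few users; coach users of
// widely used apps that handle personal data; otherwise just monitor.
function remediationAction(app) {
  if (app.risky && app.activeUsers < 10) return "block";
  if (app.activeUsers >= 10 && app.handlesPII) return "coach-to-sanctioned-alternative";
  return "monitor";
}

console.log(remediationAction({ risky: true, activeUsers: 3, handlesPII: false })); // "block"
```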

Black Hat Asia returns to Singapore with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier solutions and service providers in the Business Hall. Click for information on the conference and to register.

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/cloud/93--of-cloud-applications-arent-enterprise-ready/d/d-id/1331125?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

NPM update changes critical Linux filesystem permissions, breaks everything

You’ve probably heard of JavaScript.

It was invented as a programming language for use in web pages, one that would run not on your web server but inside every web visitor’s browser.

Unlike traditional program downloads, with a pop-up dialog and an “Are you sure” button, JavaScript is shovelled straight into your browser and, by default, runs automatically without asking.

Website features we take for granted these days – pull-down menus, pop-up forms, animated image transitions, clickable icons, and much more – are all achieved thanks to browser-side JavaScript.

Of course, this means that even though JavaScript is a fully-featured, general-purpose language, it can’t be used for just any old purpose while you are browsing.

The power of JavaScript is carefully limited by the browser, to reduce the rather obvious risks of running program code that arrived in a web page from “out there somewhere”.

For example, JavaScript programs in your browser generally can’t reach out into other tabs, can’t start or stop other programs, can’t access files on your hard disk, can’t read the registry, can’t scan the network, can’t sniff around in memory.

But JavaScript became so well-known and widely-loved – as coding languages go, it is clean, expressive and powerful – that it has turned into a server-side programming language, too.

Thanks to a project called Node.js, JavaScript has now joined popular server-side languages such as PHP, Perl, Python, Ruby and Java (which is unrelated to JavaScript, by the way) as a coding platform for building complex systems.

Unlike browser JavaScript, Node.js is augmented by add-on toolkits to do just about anything you can think of: manage processes, run servers, read local files and databases, control the network, perform cryptographic calculations, transcode images and videos, recognise faces, you name it.

The careful restrictions imposed on JavaScript inside the browser are unnecessary – indeed, are a hindrance – when you’re writing a full-blown application using Node.js.

Of course, all these add-on capabilities come with a price: complexity.

For example, let’s say you want to use Node.js to program face recognition into your website’s login system, and you decide to use the ready-made library called facenet, a “deep convolutional network designed by Google, trained to solve face verification, recognition and clustering problem with efficiently at scale.” (Let’s hope the code has fewer errors per line than that sentence.)

Well, facenet itself needs a bunch of additional add-ons, namely: @types/ndarray, argparse, blessed, blessed-contrib, brolog, canvas, chinese-whispers, glob, mkdirp, printf, python-bridge, rimraf, tar, and update-notifier.

So far, so good, but these dependencies have their own needs: chinese-whispers, for example, needs jsnetworkx, knuth-shuffle and numjs.

And so it goes: jsnetworkx needs babel-runtime, lodash, through and tiny-sprintf; and babel-runtime needs regenerator-runtime.

The good news is that when you adopt Node.js for programming, you also end up with NPM, the Node Package Manager, which sorts out all these dependencies for you.

Automatically. From all over the internet.

The bad news, of course, is that when you adopt Node.js for programming, you also end up with NPM, the Node Package Manager, which sorts out all these dependencies for you.

Automatically. From all over the internet.

Simply put, you can write a five-line JavaScript program that is elegant in its simplicity, but only if your Node Package Manager drags in tens or even hundreds of thousands of lines of other people’s software.
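To make the contrast concrete, here is a hypothetical manifest for the face-recognition example above: one declared dependency on your side of the ledger, with all the transitive packages left to NPM. (The version number is our guess; in recent versions of npm, `npm ls --all` in a real project prints the full installed tree.)

```javascript
// A minimal sketch: the program's own manifest declares one direct
// dependency; everything else arrives transitively via NPM.
const manifest = JSON.parse(`{
  "name": "face-login-demo",
  "version": "1.0.0",
  "dependencies": { "facenet": "^0.9.0" }
}`);

const direct = Object.keys(manifest.dependencies || {});
console.log(direct); // direct dependencies only: [ 'facenet' ]
```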

Automatically. From all over the internet.

And keeps it updated, automatically, from all over the internet.

Simply put, your graceful five-line JavaScript program, behind the scenes, is a hodge-podge of directories and files awash with other people’s code that you couldn’t easily keep track of yourself even if you wanted to, all of it kept afloat automatically by a package management toolkit that is about as complex as the auto-updating system built into the operating system itself, yet not integrated with it or even built in collaboration with it.

(If the previous paragraph seems rather long and breathless, please treat that as a metaphor.)

Perhaps unsurprisingly, then, a recent update to the Node Package Manager introduced a bug that caused it to interfere with the operating system, by incorrectly changing the file permissions of a raft of important system directories that should have been left well alone.

In other words, you had to give NPM operating system superpowers to keep your Node.js world in order; while doing so, NPM mis-used these superpowers to throw your operating system into disarray by locking the system itself out of numerous mission-critical files.

I found that a selection of directories in / were owned by a non-root user after running sudo npm and many binaries in /usr/bin stopped working as their permissions were changed. People experiencing this bug will likely have to fully reinstall their system due to this update.

What to do?

  1. Keep backups that make a meaningful rollback easy. NPM has caused reliability disasters before and, given its vaguely anarchical nature, will cause them again.
  2. Get the latest version of NPM as soon as you can. Fortunately, your operating system will probably take care of that, not NPM itself.
  3. Don’t auto-update production servers. Prove the latest update in testing first.
  4. Remember that simple software can be immensely complex. Keep that in mind when making time for testing (see 3).
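One mechanical aid to point 3 is to check that a manifest pins exact versions rather than floating ranges that invite silent auto-updates. A quick lint of our own devising (the range syntax it flags — `^`, `~`, wildcards, `latest` — is standard npm semver notation):

```javascript
// Return the names of dependencies whose version ranges float
// (and will therefore pick up new releases automatically).
function floatingDeps(deps) {
  return Object.entries(deps)
    .filter(([, range]) => /[\^~*x>]|latest/.test(range))
    .map(([name]) => name);
}

console.log(floatingDeps({ lodash: "4.17.21", express: "^4.16.0" })); // [ 'express' ]
```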

There’s an adage that’s been applied to many “breakthroughs” in software engineering in the past few decades:

I’ve got a programming problem! I know, I’ll solve it using X. Oh, dear… now I’ve got two problems, one of them being X.

Technologies like Node.js and NPM are both a blessing (because they let you do complex things quickly), and a curse (because they solve the problem of “quickly”, not the problem of “complex”).

Plan accordingly, because those who cannot remember the past are condemned to repeat it.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Xtuo4LYWp28/

Leveraging Security to Enable Your Business

When done right, security doesn’t have to be the barrier to employee productivity that many have come to expect. Here’s how.

Wouldn’t it be great if everyone were trustworthy? No bad guys trying to break in and steal your cyber assets, and everyone is able to do their jobs unobstructed and without fear of negative consequences? That’s when businesses succeed, costs go down, productivity skyrockets, and everyone is happy.

Unfortunately, this is not the world we live in. With both external cyberattacks and insider threats on the rise, companies must protect themselves from threats in their own backyard and the far-reaching corners of the cyber world. Because the risks are so high, many businesses have employed security processes and systems that encroach further and further into the business, hindering daily productivity and causing mass frustration among employees. In the most extreme cases, security has become employee enemy No. 1.

But security doesn’t have to be the barrier many have come to expect and can actually help enable a business — when done right. Let’s explore a few common instances of security getting in the way of productivity and possible solutions to turn security into an ally of business objectives.

Scenario 1: Access Control
Too often, organizations’ knee-jerk reaction to bolstering security is to strengthen user authentication requirements. Often, this approach results in multiple passwords to remember (and forget), obstacles that get in the way of required access, and obstructive — but well-intentioned — technologies.

For example, I’m aware of a large company that required users to log in to two separate VPNs, both fronted by separate multifactor authentication (MFA) solutions, in order to remotely access basic systems. Understandably, most users end up avoiding the 10-minute login time and the unreliability of the VPN connections, and default to calling IT only when they absolutely require access.

So, how can we turn that obstacle into a business enabler?

The first step is to look into more modern technologies, such as a reverse proxy, which can overcome the cumbersome nature of multiple VPNs and ensure quick, seamless, and secure access from anywhere, on any device. With this approach, there is no need to repeatedly require MFA once a user has “passed the test” of proving who they are.

Businesses can also leverage adaptive authentication technology, which automatically adjusts authentication requirements relative to the risk of the request. For example, an initial login may require MFA, but subsequent logins by the same user, from the same device, in the same day would not. If, however, the request suddenly comes from an unknown device, there could be something fishy going on. With adaptive authentication, the rules for an MFA requirement for specific risky login instances can be preset and automatically enforced.
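Such a rule can be expressed in a few lines. This sketch is purely illustrative — the field names and the once-a-day window are assumptions, not any vendor's product logic:

```javascript
// Illustrative adaptive-authentication rule: require MFA for a first
// login or an unknown device; skip it for repeat logins from the same
// known device on the same day.
function requiresMfa(request, lastLogin) {
  if (!lastLogin) return true;                              // first login
  if (request.deviceId !== lastLogin.deviceId) return true; // unknown device
  const sameDay = request.date === lastLogin.date;
  return !sameDay;                                          // re-prompt daily
}

console.log(requiresMfa(
  { deviceId: "laptop-1", date: "2018-02-26" },
  { deviceId: "laptop-1", date: "2018-02-26" }
)); // false — same device, same day
```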

The result: the default stance of obstruction and denial is replaced with enablement and efficiency. The business is the beneficiary.

Scenario 2: Privileged Accounts
The prime targets for many bad actors are the privileged accounts that provide the “keys to the kingdom.” With this super-user access, bad guys can get to virtually any data, files, and systems they want, cover their tracks, and act with anonymity. Businesses typically address this threat in one of two ways: they either pretend there is no risk and continue sharing credentials, or they lock away all privileged credentials and issue them under the strictest controls. One is incredibly risky; the other is equally inefficient. Both prevent businesses from truly realizing their objectives.

A multifaceted approach to privileged access management (PAM) can provide proper security measures while also ensuring that permissions are available when needed, thus facilitating business agility. What this means is that privileged account rights are issued on a “least privilege” model, whereby each user is issued only the permissions necessary to do their job. “Full” administrative permissions are locked away in a digital vault complete with automated issuance workflows and approvals, audits of tasks performed, and automatic password change requirements. This practice eliminates the cumbersome manual processes often associated with PAM and establishes individual accountability.

It is also important to find and remediate instances of users with permissions that exceed their role, their peer group, or industry norms. By ensuring that each user has the correct rights, everyone can do their jobs, and the chances of abuse and misuse are greatly reduced.
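The “exceeds their role” check boils down to a set difference between granted rights and a role baseline. A minimal sketch, with role and right names invented for illustration:

```javascript
// Report rights a user holds beyond their role's baseline entitlements.
function excessRights(userRights, roleBaseline) {
  const allowed = new Set(roleBaseline);
  return userRights.filter((r) => !allowed.has(r));
}

console.log(excessRights(
  ["read-crm", "export-payroll", "admin-db"],
  ["read-crm"]
)); // rights beyond the role: [ 'export-payroll', 'admin-db' ]
```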

Scenario 3: Provisioning and Deprovisioning
How long does it take for your average new user to be fully provisioned? Research conducted by the Aberdeen Group in 2013 – and still valid today – found that it takes at least a day and a half. Many organizations lag far behind that, reporting days or weeks before full access is granted. Nothing stands in the way of achieving business objectives like provisioning delays. And, on the flip side, nothing causes more security concerns than delays in deprovisioning.

The same research indicated that it takes half a day on average to fully deprovision a user. But again, many organizations fall significantly behind the curve on that matter — and that doesn’t even take into account instances of faulty provisioning in which rights are inappropriate due to IT copying ungoverned sets of permissions.

Delays and errors tend to be the result of a lack of communication between IT and line-of-business employees. IT knows how to provision and deprovision but lacks the context behind access requirements and what a user actually needs to perform his or her role. In addition, with the diversity of the modern enterprise, provisioning actions often require multiple IT teams, many disparate tools, and an abundance of manual processes that leave users inactive in the meantime.

The solution to this problem from both an efficiency and security standpoint is to unify provisioning across the entire enterprise, basing access on business roles that can be enforced enterprise-wide, and placing the power in the hands of the line-of-business rather than IT. For organizations that have taken this approach, full provisioning is close to instantaneous and incidents of misprovisioning are nearly nonexistent.

Business Roadblock or Business Driver?
We’ve hit a tipping point. We can either continue to obstruct business for the sake of security, or we can change the way we do things and shift security from business roadblock to business driver. The low-hanging fruit of business-enabling security include adaptive approaches to access control, a holistic strategy for privileged access management, and a unified and business-driven program of provisioning and deprovisioning.

Jackson Shaw is vice president of product management for One Identity, the identity access management (IAM) business of Quest Software. Prior to Quest, Jackson was an integral member of Microsoft’s IAM product management team within the Windows server marketing group at … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/leveraging-security-to-enable-your-business/a/d-id/1331096?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Visa: EMV Cards Drove 70% Decline in Fraud

Merchants who adopted chip technology saw a sharp decline in counterfeit fraud between 2015 and 2017, Visa reports.

Merchants in the United States who adopted EMV chip cards saw a 70% decline in counterfeit fraud between Dec. 2015 and Sept. 2017, according to new data from Visa.

The payment card company began shifting to chip cards in 2011 to reduce counterfeit fraud, the most common type in the US at the time. More than 2.7 million merchant locations now accept chip cards, which marks a 578% increase from the 392,000 accepting them in Sept. 2015. Nearly 60% of US storefronts take chip cards.

User adoption has increased as well. In Sept. 2015, there were 159 million Visa chip cards in the US. By Dec. 2017 that number reached 481 million, and 67% of Visa credit and debit cards are chip-enabled. Nearly all (96%) of US payments in December were done with EMV chip cards.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/visa-emv-cards-drove-70--decline-in-fraud-/d/d-id/1331119?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

10 Can’t-Miss Talks at Black Hat Asia

With threats featuring everything from nation-states to sleep states, the sessions taking place from March 20-23 in Singapore are relevant to security experts around the world.

(Image: Black Hat)

Mobile and platform security are popular topics for next month’s Black Hat Asia conference in Singapore, where industry experts will meet from March 20-23 to learn about newly discovered exploits and the tools and techniques to defend against them.

Lidia Giuliano, independent security professional and member of the Black Hat Asia Regional Review Board, notes she was impressed by the diversity of this year’s submissions. Session topics cover mobile, cryptography, IoT, exploit development, malware, policy, network defense, data forensics and incident response, reverse engineering, Web application security, the security development lifecycle, hardware, and platform security, among others.

Much of this year’s research will dig into mobile threats, particularly on the Android operating system. “People have their whole lives on their mobile phones,” Giuliano explains. “It’s a window into their lives and that puts people in a really vulnerable position.”

Here, we put the spotlight on Black Hat Asia talks that are expected to deliver groundbreaking and useful information for security pros. If you’re planning to attend, dig out your schedules and let us know what you’re excited to see.

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/10-cant-miss-talks-at-black-hat-asia/d/d-id/1331111?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘OMG’: New Mirai Variant Converts IoT Devices into Proxy Servers

The new malware also can turn bots into DDoS attack machines, says Fortinet.

Numerous versions of the Mirai IoT bot malware have surfaced since the creators of the original code – one of whom is a former Rutgers University student – first released it in Sept. 2016.

The latest iteration of Mirai is dubbed “OMG,” and turns infected IoT devices into proxy servers while also retaining the original malware’s DDoS attack capabilities.

Security researchers at Fortinet recently encountered the new Mirai variant, and say the modification likely provides the malware authors another way to generate money from the code. Satori, another IoT bot malware based on Mirai code, was discovered in December and is designed for mining cryptocurrencies rather than launching DDoS attacks.

“One way to earn money with proxy servers is to sell the access to these servers to other cybercriminals,” Fortinet said in a blog post this week. Proxies give cybercriminals a way to remain anonymous when carrying out malicious activity like cyber theft, or breaking into systems.

“Adversaries could also spread multiple attacks through a single source. They could get around some types of IP blocking and filtering,” as well, according to a Fortinet spokesperson.

OMG uses an open source tool called 3proxy as its proxy server. For the proxy to work properly, OMG includes two strings containing commands for adding and removing certain firewall rules so as to allow traffic on two random ports, Fortinet said. OMG also packs most of the functionality of the original Mirai malware, including the ability to look for open ports, kill any processes related to Telnet, HTTP, and SSH, and spread via telnet brute-force logins.
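On the defensive side, one way to spot that symptom — a sketch of our own, not Fortinet's detection logic — is to scan `iptables-save` output for ACCEPT rules on unexpected high ports, the kind of rule the report says OMG inserts for its two random proxy ports:

```javascript
// Scan iptables-save output for ACCEPT rules on high ports that are
// not on an expected allowlist — a possible sign of injected rules.
function suspiciousAcceptPorts(iptablesSave, allowlist = [22, 80, 443]) {
  const hits = [];
  for (const line of iptablesSave.split("\n")) {
    const m = line.match(/--dport (\d+) -j ACCEPT/);
    if (m) {
      const port = Number(m[1]);
      if (port > 1024 && !allowlist.includes(port)) hits.push(port);
    }
  }
  return hits;
}

const sample = [
  "-A INPUT -p tcp --dport 22 -j ACCEPT",
  "-A INPUT -p tcp --dport 50423 -j ACCEPT",
].join("\n");
console.log(suspiciousAcceptPorts(sample)); // [ 50423 ]
```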

When installed on a vulnerable IoT device, OMG initiates a connection to a command-and-control (C&C) server and identifies the system as a new bot. Based on the data message, the C&C server then instructs the bot malware whether to use the infected IoT device as a proxy server or for DDoS attacks – or to terminate the connection.

According to Fortinet, OMG is the first Mirai variant that incorporates both the original DDoS functionality as well as the ability to set up proxy servers on IoT devices. 

“The simplest and most effective uses of a proxy server are to cover the origins of an attack, reconnaissance activity, or for simply re-routing a user’s search for information to sites controlled by someone pushing a specific agenda,” says Gabriel Gumbs, vice president of product strategy at STEALTHbits Technologies.

IoT bots can also be used in disinformation campaigns, he says.

“It is now known that foreign adversaries used stolen US identities to post information on social media,” Gumbs says. In the same manner, “a compromised IoT device on a home network, such as a NEST or Samsung Smart fridge, could be modified to post messages that would appear to originate from a legitimate user’s location, using their identity.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/omg-new-mirai-variant-converts-iot-devices-into-proxy-servers/d/d-id/1331122?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hacker claims spyware maker Retina-X has been breached, again

Some hackers seem to have a problem with spyware.

Perhaps they don’t like the idea of “[living] in a world where younger generations grow up without privacy,” as one hacker told Motherboard after he allegedly hoovered out a spyware company’s servers… for the second time in about a year.

Last year, the hacker turned vigilante. He broke into servers belonging to Retina-X Studios, a Florida-based company that sells spyware products to keep tabs on kids and employees (the “legal” targets of covert surveillance).

He said he had found the key and credentials he needed to start the attack inside the Android version of the company’s Teenshield app.

Having gained access, he claims to have taken customer account logins, as well as data from the devices of people monitored by a Retina-X product called PhoneSheriff: private photos, messages, alleged GPS locations and more. He didn’t post any of it online, he says, though he did claim to have wiped some of the servers he’d been rooting around in.

Retina-X confirmed this first breach, classifying it as a “fairly sophisticated” attack while also minimizing it as “a weakness in a decompiled and decrypted version of a now-discontinued product.”

The same hacker now alleges that he’s returned to haunt Retina-X despite it taking “steps to enhance our data security measures”. Motherboard reports that Retina-X disagrees:

Friday morning, after the hacker told us he had deleted much of Retina-X’s data, the company again said it had not been hacked. But Motherboard confirmed that the hacker does indeed have access to its servers.

The publication says it verified the breach by downloading the PhoneSheriff app on to an Android phone and then using the phone to take photos of their shoes. This is what the hacker messaged editorial staff moments later:

I have 2 photos of shoes.

Retina-X isn’t the first spyware maker to have been breached.

FinFisher, which specializes in government spyware infamous for being used against dissidents, was hacked in August 2014, while notorious keylogger/stalkerware maker FlexiSpy was ransacked in April 2017.

Regardless of how spyware marketing has been smoothed over – as in, “hey guys, let’s drop the references to cheating spouses and emphasize the legality of spying on kids and employees” – the fact remains that covert surveillance tools are popular with people spying on unwitting partners.

Unsurprising, given their feature set.

Spyware apps like FlexiSpy – which can log keystrokes and tap into mics, calls, stored photos, text messages, email and even encrypted messages from apps such as WhatsApp – are cited time and again by survivors of domestic violence.

The majority of abusers train such tools on their victims’ whereabouts, communications and activities. A 2014 NPR investigation found that 75% of 70 surveyed domestic violence shelters in the US had encountered victims whose abusers had used eavesdropping apps. Another 85% of surveyed shelters reported that they’d helped victims whose abusers used GPS to track them.

But none of that is justification for hacking companies that are operating legally. The hackers broke the law, and they didn’t help the victims of spyware in the process. They actually could have made things much worse for those victims by telling others how they did it, putting out blueprints and encouraging them to do the same.

For all we know, the hackers weren’t themselves all that benevolent and might well have lied about what they did or didn’t do. Even if their motives were pure, who’s to say what mistakes they might have made, or what they might have tripped over, while roaming around in somebody else’s network?

And will the next hacker through the door be so restrained as to refrain from publishing victims’ personal information?

There are good reasons why unauthorized access to computers, and destruction of data, are illegal – regardless of how distasteful we may find the data that was destroyed.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MgREiQGn39k/

5 signs you may be talking to a bot

The social media platform Twitter has a bot problem. Twitter has been under increased scrutiny lately for hosting hundreds of thousands of accounts that seem to be legitimately owned by real people, but in fact are just “bots,” or automated accounts, that are created en masse to flood the platform – usually to espouse political beliefs.

There are plenty of allegations that the English-speaking Twitter world was flooded with bots from nation-states like Russia to support Brexit in the UK and to promote or denigrate presidential candidates in the United States. Just this week, many high-profile, right-wing Twitter users have noted that their accounts have been frozen and their follower counts have plummeted in what they have termed #TwitterLockOut, though other Twitter users have argued that this was a long-overdue purge of fake accounts.

These bots are easy to deploy and effective at plastering propaganda to influence discussion and divide populations, and they’re not a small presence: One study last year estimated that bots make up nearly 15% of Twitter users in total – about 30 million – double Twitter’s own estimate of bots on their platform.

Bots are by no means limited to supporting American right-wingers on Twitter; they are becoming an issue on all major social media platforms, especially Facebook and Instagram, across almost all countries and languages. If you’re on social media at all, it’s worth asking yourself: Can you tell when you’re talking to a bot?

Even if you’re smarter than the average bear, it’s not always easy to tell the bot accounts from real ones. (Bot creators are getting better by the day.) Here are five signs to watch for:

  • If the account claims to represent a major politician or celebrity, check that it isn’t an impersonator. Twitter bestows a blue “verified” checkmark on accounts that have been proven to be owned by whom they claim to be. That said, the check doesn’t exist for all official accounts, so this method isn’t foolproof. Still, when possible, look for the blue check.
  • Any account that has a generic blank user profile photo (previously it was the Twitter “egg”) and a username that is a noun followed by a bunch of random numbers is very likely a bot.
  • Even a supposedly genuine-looking profile photo can be deceptive. Many bots pull photos from public social media profiles or even stock imagery to give their profiles an authentic veneer. Try doing a reverse Google Image search on a profile photo you suspect might not be real – chances are it belongs to someone with a completely different name.
  • One of the latest ploys Twitter bots use is generating biographies (the descriptive text underneath your name) with random nouns and descriptors to make the profile look somewhat genuine. If the biography looks disjointed and doesn’t make much sense – e.g. the profile photo is of a young girl in a bikini, and the profile says “grandmother of 5, devoted husband,” that’s a big red flag.
  • Does this user engage with people in conversation in a meaningful way, or does it just spit out statements, hashtags and links without any real interaction with other users? Yes, more sophisticated bots can have something resembling a back-and-forth conversation, but most of the basic ones flooding Twitter are rather spammy and one-note – don’t expect a meaningful response if you ever tweet at them.
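Taken together, signs like these can be folded into a rough heuristic score. The sketch below is purely illustrative – the rules, weights, and example accounts are invented for this article, not drawn from any real bot-detection service:

```python
import re

def bot_score(username, bio, has_default_photo, replies, posts):
    """Crude bot-likelihood score; higher means more bot-like.
    All rules and weights here are invented for illustration."""
    score = 0
    # A word followed by a long run of digits, e.g. "patriot48203916"
    if re.fullmatch(r"[A-Za-z]+\d{5,}", username):
        score += 2
    # Generic blank user profile photo
    if has_default_photo:
        score += 2
    # Missing or empty biography
    if not bio.strip():
        score += 1
    # Broadcasts statements but almost never replies to anyone
    if posts > 0 and replies / posts < 0.05:
        score += 2
    return score

print(bot_score("freedomeagle4820417", "", True, replies=3, posts=500))   # 7
print(bot_score("alice_smith", "Mom of two, marathon runner", False,
                replies=50, posts=100))                                   # 0
```

A real classifier would weigh far more signals (posting cadence, account age, network structure), but even a toy score like this shows why the spammiest accounts are easy to flag while the sophisticated ones are not.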

There are also tools and websites that claim to track bot activity on Twitter and say they can even check if an account is a bot for you. These tools can be handy to confirm suspicions, but keep in mind that any tool is ultimately an extension of its creator – a bot checker tool could be completely reputable and trustworthy, or it may have its own political agenda.

In the end, trust your gut if something feels off with the account you’re talking to, and if you feel so inclined, report any suspicious accounts or bots to the social media platform to help keep interactions online genuine and as bot-free as possible.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ttxf9Ro31P0/

Rancher sues Feds for sneaking a spy camera on to his land

Texas has constructed a wall along its 1,200-mile border with Mexico.

Not Donald Trump’s wall, mind you: rather, the Department of Public Safety has created a surveillance wall, made up of hundreds of cheap video cameras, giving border sheriffs live video coverage so they can keep an eye out for smugglers and traffickers of drugs and humans. The department calls it Operation Drawbridge, and it consists of low-cost, off-the-shelf wildlife cameras with motion detection and low-light capability.

24×7 monitoring: it’s a steal at these prices, the department says:

At approximately $300 per camera, they provide a high-tech capability at a low-tech cost.

Maybe so, but it might cost the state a bit more than that, depending on the outcome of a lawsuit filed by a rancher who found what he and his lawyers suspect is one of the Operation Drawbridge cameras strapped about 8 feet up a mesquite tree on his property.

Ricardo Palacios, a 74-year-old rancher and attorney, didn’t know what it was or how it got there, so he took it down.

In short order, he got some phone calls. Customs and Border Protection (CBP) officials said it was theirs, and they wanted it back. The Texas Rangers called, too: they said it was theirs, and they also demanded its return.

No dice, Palacios told them all, so they threatened to arrest him.

Oh yeah? Well, how about this instead: Palacios turned the tables and quickly sued both agencies before they could prosecute him, accusing them of trespassing on his land and of violating his constitutional rights. His lawyer, Raul Casso, called the agents’ behavior “creepy” in the complaint and said it smacked of 1984:

Plaintiffs maintain that there is something creepy and un-American about such clandestine, surreptitious, 1984-style behavior on the part of Defendants – officers of the law.

Just like that camera and the agents who stuck it in that tree, the federal lawsuit is treading on contentious territory, raising questions about what limits there are to the government’s power to conduct surveillance in the name of border security on private property, without the landowner’s permission.

Or, in the case of Palacios, with the landowner’s repeated orders, over the course of years, to get the hell off his land.

Beyond just this one surveillance camera, there’s a history of confrontation between Palacios, his sons and the CBP. As Palacios described it in his complaint, the troubles began in 2010 at a CBP checkpoint 29 miles north of Laredo, Texas.

His sons were heading home to their ranch, which is about 6 miles north of the checkpoint. When the CBP asked Richard D. Palacios Jr. where he lived, he refused to tell them. He was sent to a secondary inspection, at which point about 10 government agents allegedly grabbed him and body-slammed him to the ground, according to the complaint, also roughing him up after he was taken to a detention cell at the checkpoint station.

After 90 minutes in detention, he was released.

In the years that followed, Palacios Sr. claims that government agents continually trespassed on his land, shone lights in the window of his son’s house, opened gates, lied about having been granted a key to do so, trained night-scope cameras toward their houses and land, and told him that they could do whatever they liked when Palacios tried to order them off his property.

Could they? Well, that depends.

Palacios’ ranch is situated north of the Mexican border. Exactly how far north is of utmost importance when it comes to what rights you have in the US, where the border zone is commonly called a “Constitution-free zone”. According to federal law, agents don’t need warrants to search private property that’s located within 25 miles of a border, “for the purpose of patrolling the border to prevent the illegal entry of aliens into the United States.”

If Palacios’ ranch were within that range, he wouldn’t have a leg to stand on, given that it would be within the we-don’t-need-no-stinkin’-warrants zone. But when Palacios used the “My Map” facility at the Texas General Land Office to measure the distance, he found that his property is about 27.5 miles away from the nearest bend in the Rio Grande (the southern border) as the bird flies.

Those 1.5 miles are a razor-thin margin in which to demand that government agents require a warrant or probable cause to conduct a search. But as far as Palacios and his lawyers are concerned, that’s enough for them to hang onto that surveillance camera. That, and his constitutional right to privacy and Fourth Amendment protection against unreasonable search.
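The “as the bird flies” measurement that everything hinges on is a great-circle distance. A minimal sketch of that check, using the haversine formula with made-up coordinates standing in for the ranch and the nearest bend in the Rio Grande (not Palacios’ actual property):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in statute miles."""
    r = 3958.8  # mean Earth radius in miles
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

ZONE_MILES = 25  # the warrantless-search zone described above

# Hypothetical points: a ranch north of Laredo vs. a bend in the Rio Grande
ranch = (27.95, -99.50)
river = (27.55, -99.50)

d = haversine_miles(*ranch, *river)
print(f"{d:.1f} miles from the border; inside the 25-mile zone: {d <= ZONE_MILES}")
```

With these placeholder coordinates the point falls at roughly 27.6 miles, just outside the zone – the same razor-thin sort of margin the lawsuit turns on.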

His lawyers have the camera and are trying to request that it be formally introduced as evidence. They believe that the camera Palacios took down from the mesquite tree is part of Operation Drawbridge and that it’s one of some 4,000 such cameras arrayed along the border. They’re hoping the lawsuit can get the feds to stop barging in on Palacios’ yard and planting surveillance gadgets as they wish.

Ars Technica quoted David Almaraz, another of Palacios’s attorneys:

Our lawsuit is that we want a federal judge to tell the border patrol and the feds to not go on [Palacios’] property without permission or probable cause. And if you all are going to keep doing that, you’re going to have to pay for it. It’s called the right to be left alone. That’s what the Fourth Amendment is all about.

Both the CBP and the Texas Department of Public Safety have declined to comment on the lawsuit. As Ars Technica reports, Texas officials have claimed qualified immunity: a legal doctrine that protects law enforcement officials.

No hearings have been scheduled yet.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LqnZxVPdhFc/

Bitcoin exchange founder charged with covering up hack

It’s one thing to launch cryptocurrency businesses with programming weaknesses that lead to them getting hacked and hoovered.

But lying to the Securities and Exchange Commission (SEC) about it during sworn testimony, as you try to cover up the fact that all the Bitcoins have gone bye-bye?

Oh, dear – that’s a coin that will buy you nothing but trouble.

And those are the charges facing Jon Montroll, 37, of Saginaw, Texas, the operator of a now-defunct cryptocurrency investment platform who’s been charged with lying to cover up the fact that hackers made off with more than 6,000 of his customers’ Bitcoins.

Montroll, also known as “Ukyo,” was charged with two counts of perjury and one count of obstruction of justice in a complaint unsealed by the office of Manhattan US Attorney Geoffrey Berman on Wednesday. The SEC also filed a lawsuit accusing Montroll of violating securities laws.

According to prosecutors, before Bitcoin shot up to nearly $20,000 in December, Montroll operated two online services: BitFunder.com facilitated buying and trading of virtual shares of businesses listed on its platform, while WeExchange Australia Pty. Ltd. served as a Bitcoin depository and currency exchange.

The Bitcoins belonging to users of both businesses were held in one common account.

During 2013, one or more hackers exploited a vulnerability in BitFunder and credited their accounts with profits they never earned. They withdrew around 6,000 Bitcoins between 28 and 31 July 2013, which would now be worth more than $60m, according to prosecutors’ estimate. (For what it’s worth, according to the Coinbase exchange, Bitcoin was worth $10,165 and falling when I checked this morning, which puts the estimate of disappeared Bitcoin at a current value of $60.9m.)
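The back-of-the-envelope valuation works out like this (using the approximate figures quoted above):

```python
coins_stolen = 6_000      # "around 6,000" BTC, per prosecutors
btc_price_usd = 10_165    # the Coinbase spot price quoted above

value = coins_stolen * btc_price_usd
print(f"${value:,}")  # $60,990,000 -- i.e. the ~$60.9m figure in the article
```

Since the coin count is only approximate, the dollar figure is too; at Bitcoin’s December peak near $20,000 the same haul would have been worth roughly twice as much.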

Any way you slice it, the hackers got away with more Bitcoin than Montroll could replace to cover what he owed the platform’s users.

But that’s not how Montroll framed the hack in sworn testimony to the SEC in November 2013. According to the complaint, Montroll denied that the hack was successful:

When [the hackers] went to withdraw, the system stopped them because the amount was obviously causing issues with the system.

[The software issue] was corrected immediately, whenever the system started having the problems, and I caught on to what was happening I’d say within a few hours.

Then he came up with evidence to back up his claim: Montroll reportedly gave the SEC investigators a screenshot purportedly documenting, among other things, the total number of bitcoins available to BitFunder users in the WeExchange Wallet as of 13 October 2013. This supposed balance statement showed that there were 6,679.78 BTC on hand as of that date. Prosecutors say that in his sworn testimony, Montroll explained that it represented…

The collective pool of BTC held for users on BitFunder – users who transfer bitcoins to BitFunder, this is the total amount that’s being held by BitFunder of those users.

Well, that would have been nice. Unfortunately, the balance statement was cooked, prosecutors allege. Investigators looked at the digital evidence – including chat logs and transaction data – that showed that the balance statement was “a misleading fabrication.”

They noted that three days into the hackers bleeding off the Bitcoin, Montroll was in an Internet Relay Chat (IRC) session with an unnamed person – “Person-1” – looking for help to track down the stolen coins. That didn’t work, so he allegedly transferred some of his own bitcoins into WeExchange to cover up the hole that had been ripped into the balance.

But still, the hacking and hoovering continued. Prosecutors say that by the date and time shown on the fabricated balance statement, WeExchange held thousands of bitcoins fewer than it claimed – just 40.

In subsequent testimony, when confronted with that evidence, Montroll allegedly lied to SEC staff again. He admitted that he’d cooked up the balance statement, but he claimed that he only discovered the success of the hack after the SEC asked him about it during his first day of testimony.

As for the chat with Person-1? Don’t know anything about it, Montroll allegedly claimed.

Maximum penalties are rarely handed down, but for what it’s worth, the perjury charges each carry a maximum penalty of five years in prison, while the obstruction of justice charge carries a maximum penalty of 20 years.

In a separate, parallel lawsuit, the SEC charged Montroll with operating an unregistered securities exchange; defrauding users; and making false and misleading statements, including failing to disclose the hacking attack. He also sold unregistered securities that purported to be investments in the exchange and misappropriated funds from those investors, the SEC said.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BepSA_R9aDQ/