
As Primaries Loom, Election Security Efforts Behind Schedule

While federal agencies lag on vulnerability assessments and security clearance requests, the bipartisan Defending Digital Democracy Project releases three new resources to help state and local election agencies with cybersecurity and incident response.

With primaries for the 2018 elections beginning March 6, efforts to harden state and local election systems are being hindered by federal sluggishness and “wariness of federal meddling,” the Associated Press reports.

One of state and local election officials’ main complaints, according to the AP report, is their struggle to obtain federal security clearances, which would enable greater information sharing in the event of a security threat or incident. Fewer than half of the officials who have requested federal clearances have received them so far, according to the AP, including the state elections board executive director in Illinois, one of two states where voter registration databases were breached in 2016.

Another key concern: vulnerability assessments of state and local election systems. The US Department of Homeland Security offered to conduct these assessments – but only 14 state and three local agencies took DHS up on the offer, and only five of those assessments have been completed. DHS says all will be completed by mid-April, according to the AP.

Election officials did, however, receive new guidance Thursday from the bipartisan group that recently released cybersecurity guidance for election campaign managers. The Defending Digital Democracy Project (D3P) at Harvard Kennedy School’s Belfer Center for Science and International Affairs – co-chaired by the former campaign managers for Mitt Romney and Hillary Clinton and the former Defense Department chief of staff during the Obama Administration – published “The State and Local Election Cybersecurity Playbook,” “The Election Cyber Incident Communications Coordination Guide,” and “The Election Incident Communications Plan Template.”

D3P’s recommendations cover paper trails, audit practices, multi-factor authentication, access controls, log management, vendor agreements, end user training, incident response, and communications plans, in addition to details about the specific threats affecting voting systems, from the hardware to registration databases.

For more information, see the Associated Press and D3P.  


Article source: https://www.darkreading.com/vulnerabilities---threats/as-primaries-loom-election-security-efforts-behind-schedule/d/d-id/1331056?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

AI and Machine Learning: Breaking Down Buzzwords

Security experts break down two of today’s trendiest technologies: what they mean and where you need them.

Artificial intelligence and machine learning are marketed as security game-changers. You see them all the time on products and services, promising to catch threats and strengthen defenses.

“One of the problems in our industry is people tend to throw around buzzwords in the hopes of differentiating their marketing,” says Jon Oltsik, senior principal analyst at ESG. “But all it does is confuse the market.”

The two technologies have legitimate purpose and potential but are used so often and so interchangeably that it can be hard to make sense of them. What’s the difference between them? Where should they be used? And how should you shop for them to get real value?

AI vs. ML: What They Really Mean

Machine learning is a segment of artificial intelligence, explains Roselle Safran, president at Rosint Labs. Artificial intelligence, a more general concept that has been around for decades, describes machines that think like people. One of its many applications is machine learning, in which a machine looks at tons of data and, from that data, learns what something is.

“There are products that have some pretty impressive capabilities but they’re not necessarily capabilities where the product is learning,” she explains. “It’s often useful from a marketing perspective to slap the label on because then it checks the buzzwords off.”

Machine learning has two components, Safran continues. One is a massive volume of training data; the other is a “feedback loop” that informs decisions. Depending on the product, a machine learning system will look at volumes of data to determine whether its decisions are correct.
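To make those two components concrete, here is a minimal, purely illustrative Python sketch using scikit-learn. The features, labels, and retraining cadence are all hypothetical, invented for demonstration only:

```python
# Illustrative sketch of the two components Safran describes:
# labeled training data plus a feedback loop that folds analyst
# verdicts back into the model. All names and data are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# 1) Training data: feature vectors for past alerts, labeled by
#    analysts as malicious (1) or benign (0).
X_train = [[0.9, 120, 1], [0.1, 3, 0], [0.8, 95, 1], [0.2, 7, 0]]
y_train = [1, 0, 1, 0]

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# 2) Feedback loop: when an analyst confirms or rejects a new alert,
#    append the verdict and retrain, so the model learns whether its
#    decisions were correct.
new_alert, analyst_verdict = [0.85, 110, 1], 1
X_train.append(new_alert)
y_train.append(analyst_verdict)
model.fit(X_train, y_train)  # retrain on the enlarged corpus
```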

“Large organizations are starting to experiment with artificial intelligence and machine learning,” says Oltsik. “They don’t have a deep understanding of the concepts and models, but nor do they want to,” he says. “What they care about is how effective it is, and does it improve their existing technology and processes.”

However, he continues, security leaders should know enough to determine where these technologies can be applied and how to choose one system over another.

Where They Fit in Your Security Strategy

“There are a couple of unique problem sets in security that are right for machine learning, and right for different kinds of solutions,” explains Ryan LaSalle, global managing director for Growth Strategy at Accenture Security. He describes security as a “graph problem” because it’s a way of storing lots of data and everything is relationship-driven.

People have a hard time visualizing in a graph, he continues, but machines excel because they thrive on large volumes of data. The key is to pick out scenarios where the machine has an advantage over people; for example, observing human behaviors and detecting anomalies.

“User behavior analysis is a big one that went from traditional analytics to more machine learning-driven,” he says.
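As a rough illustration of what machine learning-driven user behavior analysis looks like in practice, here is a minimal Python sketch using scikit-learn’s IsolationForest. The features and threshold are invented for demonstration and not tied to any product mentioned here:

```python
# A minimal sketch of ML-driven user behavior analysis: flag
# sessions that deviate from a user's historical pattern.
from sklearn.ensemble import IsolationForest

# Historical behavior: [login_hour, MB_downloaded] per session.
normal_sessions = [[9, 40], [10, 55], [9, 35], [11, 60], [10, 50]]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_sessions)

# A 3 a.m. session pulling 5 GB stands out from the baseline.
print(detector.predict([[3, 5000]]))  # -1 means anomalous
print(detector.predict([[10, 45]]))   # 1 means in line with history
```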

Machines can view employee behaviors across multiple points in the environment and automate access privileges, something “most enterprises are terrible at,” he adds. Business managers often “rubber-stamp” employee access requests rather than scrutinizing who should be able to access what.

In the near term, most security teams will use machine learning for detection and response, though there are protective applications as well, says Safran. In the future there will be applications on the strategic and architectural levels, but we’re not there yet.

“For now I see most of the activity is going to be operationally focused and tactical in nature: detecting malware, phishing attacks, detecting unusual behavior that could be indicative of insider threats,” she explains. When a system detects a threat that needs to be investigated, a machine can help by providing next steps for the response process.

Machines Won’t Replace (Most of) Your Colleagues

There are several misconceptions about artificial intelligence and machine learning, says Oltsik. One of them is the idea that machines will eventually be a substitute for humans.

“Across the board, at this point we’re very, very far from a situation where machines are going to do all the work and the security team can go home,” Safran says. “All of the machine learning apps for the next few years will focus on enhancing the work of the security team and making their operations more efficient and effective.”

However, machines can do the same work as tier-one analysts, freeing up limited security talent to focus on more advanced work, she points out. Most tier-one tasks involve information gathering and technical duties. These are decisions that can be calculated. Security teams can leverage machine learning to automate “busywork” and train up their employees.

How to Shop Securely

“You need to look at the threat scenarios your business cares about,” says LaSalle. Test the outcomes of the system and compare where you are today with what you’re trying to achieve.

Oltsik also points to performance as something to keep in mind. If an artificial intelligence tool collects data on-site and puts it in the cloud for processing, it will cause latency. What kind of impact will that have on your organization?

Data reporting is another factor to consider, he continues. “All this data interpretation and analysis is only as useful as it comes back and provides information to a human. In the history of technology there have been good reports and bad reports, good visualization and bad.”

Safran recommends asking the vendor about the training data they use. If there is no training data, chances are the tool doesn’t actually have machine learning capabilities. If there is training data, you need to know whether it’s specific to your business or to the vendor’s whole customer base.

You also need to understand how the model works, as well as the feedback loop informing it.

“It’s a challenging question for many organizations, but having insight into how it works under the hood gives a better perspective on what it’s capable of doing and how it could be a benefit,” she explains.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/threat-intelligence/ai-and-machine-learning-breaking-down-buzzwords/d/d-id/1331057?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Would you allow Facebook into your home?

If you believe some of the more speculative stories on the internet right now, this question won’t be hypothetical for long.

A number of stories are circulating that later this year Facebook will announce the Portal, its camera-enabled first foray into the world of home smart devices, akin to Amazon Echo and Google Home.

Of course, this being a device from Facebook, it’s going to leverage its huge library of knowledge about all its users, and what those users look like. After all, Facebook has been using facial recognition technology to scan photos uploaded to its service for years to match those faces to its users.

The rumored Facebook Portal device would take advantage of Facebook’s massive database of knowledge about its users, their behavior, and their faces for everything from verifying identities to detecting moods for targeted advertisements to gleaning trends about user emotional health over time.

The rumored Portal is still firmly in the realm of Silicon Valley whispers – though we’ll find out at the F8 developer conference in May whether it’s real or not – but it raises larger questions about welcoming smart devices into our homes.

Of course there are certainly those of us who have smart speakers in our houses and have made peace with an always-on device helping us with tasks and listening to the minutiae of our lives. But will we feel as comfortable with a face-savvy camera in our home, especially one that’s tied to a service as ubiquitous as Facebook?

If the idea makes you a little queasy, it would be advisable to keep these kinds of camera-based devices out of your home in the future.

In the meantime, you can take this as an opportunity to review the facial recognition settings in your Facebook profile. Facebook even helpfully reminded me to do so recently, rather out of the blue, which made me suspect that perhaps they’re planning something new with their face recognition technology – but that’s only a guess on my part.

How to check your facial recognition settings in Facebook

(Note: If you’re in Canada or the EU, facial recognition isn’t currently available to you on Facebook)

From the app:

  1. Tap Privacy shortcuts
  2. Then More Settings
  3. Then Face Recognition.

From a web browser:

  1. Click the down arrow menu button at the top right (to the right of the question mark) and then click Settings.
  2. In the left-most menu that loads on the next screen, you should now see Face Recognition as an option.
  3. There’s only one option in the Face Recognition menu: Do you want to have this feature enabled – Yes or No.

Facebook helpfully notes in its help documentation about this feature that “you can turn the [facial recognition] setting on or off at any time, which will also apply to any features we add later.” Presumably, this includes any devices it might make that could use this feature.

Even if you have the settings turned off on your Facebook profile – and even if you don’t *have* a Facebook account – as long as you’ve appeared in a photo that’s been uploaded, it’s entirely possible for Facebook to one day make a face-recognized profile of you.

And it’s not just Facebook: facial recognition tech is ubiquitous and already well-loved by advertisers and law enforcement alike. There may not be much we can do to avoid our likenesses being scanned and made into face-based profiles on a distant database somewhere as cameras proliferate in society – but for as long as we have any say over this tech, it’s good to exercise that control.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fF-P_ZhdrVc/

Equifax Names New CISO

Former Home Depot CISO takes the reins in the wake of Equifax’s massive data breach and fallout.

Equifax has hired Jamil Farshchi as its new chief information security officer (CISO), filling the slot vacated by its former CISO, who retired last year in the wake of revelations of the company’s massive data breach.

Farshchi is the former CISO of The Home Depot, a position he took after the retailer suffered a data breach of its own in 2014. According to Equifax, Farshchi will head the company’s information security program transformation and will report to the CEO. His resume includes serving as the first global CISO at Time Warner, vice president of global information security at Visa, and senior positions at Los Alamos National Laboratory and NASA.

“Jamil has a reputation for helping enterprises rebuild and fortify information security programs. His expertise in risk intelligence and cybersecurity combined with his intimate knowledge of industry best practices will allow us to design and deploy a best-in-class, global security strategy to re-establish ourselves as a trusted leader,” Paulino do Rego Barros, Jr., interim chief executive officer at Equifax, said in a statement.

“Equifax is a company with tremendous potential, and I am confident that we will transform our security program into one of the most advanced and recognized globally,” Farshchi said in a statement. “I am grateful for this new challenge and am looking forward to enabling the business with new insights, a fresh perspective, and a multi-dimensional way of thinking about global data stewardship and information security.”

Read more here.


Article source: https://www.darkreading.com/informationweek-home/equifax-names-new-ciso/d/d-id/1331045?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fake News: Could the Next Major Cyberattack Cause a Cyberwar?

In the way it undercuts trust, fake news is a form of cyberattack. Governments must work to stop it.

Fake news — we’ve all heard about it, but sometimes we struggle to grasp the extent of its impact.

With more people moving online, social media becoming the go-to news source, and a good chunk of what is posted there being fake, the reader must determine whether information is true or not. When people believe everything they read, the world becomes an unpredictable place.

In the past, we could easily choose which news source to follow and have a high level of confidence in its accuracy. Today, however, with news arriving via social media feeds, trusted and fake news sources are merged together — and the consumer must decide whether or not to believe the news. With no clear indication of the truth or the source of news on social media, many countries, democracies, and nation-states will struggle with transparency and could become politically unstable. It takes only one fake news story within a trustworthy source to devalue an entire news feed, forcing us to question what is real and what is not.

To put it bluntly, fake news is a form of cyberattack and will only grow significantly in 2018 and beyond.

Attribution, Transparency & Response
Fake information has become a major disruption to our way of life, filling our news feeds to influence our actions in an attempt to change the outcome of important decisions, including elections. Rather than focusing on the important needs of citizens — such as taxes, health, and education — many governments are now embroiled in trust and transparency challenges caused by the continuous disruption from cyberattacks. We have seen the governments in both the US and UK increasing focus and attention on recent cyber incidents with little to no transparency.   

Many recent cyber incidents have involved the theft of huge amounts of personal and sensitive information that is then used to pursue and influence our nation’s decision-making. Some notable cyber incidents — for example, breaches at Yahoo, Ashley Madison, and Equifax — exposed sensitive data that could be used via news feeds to trigger emotions and reactions. When a cyberattack from another nation-state tries to influence our way of life, our society, or our government, should this be considered an act of war?

Large troll factories and botnet farms are using our stolen personal information to guarantee that our news feeds are filled with fake information that attracts readers to respond and participate, creating a growing trend that encircles friends and family. This could start from a machine-controlled bot that wants you to share malicious information, influence your friends’ decisions, and distrust your own government, creating divisions rather than giving you true information.   

National Ownership & International Cooperation
It’s clear that cyberattacks are crossing country borders and disrupting our way of life, without nation-states taking responsibility. We hear about cybercriminal groups that are behind many of the major cyber incidents in recent years, including data breaches, ransomware, and the targeting of government agencies’ classified information. Companies and governments have linked these cybercriminal groups to nation-states; for example, both FireEye and Symantec have accused North Korea of being behind the WannaCry ransomware, though they haven’t revealed concrete evidence and North Korea has denied involvement. Without clear cooperation and transparency, this problem will grow, with increasing numbers of cyberattacks on critical infrastructure, political affiliations, financial institutions, and communications.

To prevent a major catastrophe from occurring, governments and nation-states need to work together on cyber attribution with full cooperation and transparency, holding each other responsible for the actions of criminal organizations operating from within their borders. At the recent World Economic Forum, it was announced that a new Global Centre for Cybersecurity will be launched. This should focus on establishing cooperation between governments so that attribution is possible in the future; if a cybercrime has been committed, the governments involved should work together, similar to the way Interpol works today. It is equally important that governments do not provide a safe haven for cybercriminals to carry out such attacks, especially when those attacks are carried out for financial and political gain and with extreme aggression.

It is time for governments to act and protect democracy and our way of life. 


Joseph Carson has more than 25 years’ experience in enterprise security, is the author of Privileged Account Management for Dummies and Cybersecurity for Dummies, and is a cybersecurity professional and ethical hacker. Joseph is a cybersecurity adviser to several governments, …

Article source: https://www.darkreading.com/vulnerabilities---threats/fake-news-could-the-next-major-cyberattack-cause-a-cyberwar-/a/d-id/1331001?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Can Android for Work Redefine Enterprise Mobile Security?

Google’s new mobility management framework makes great strides in addressing security and device management concerns while offering diverse deployment options. Here are the pros and cons.

Google’s new enterprise mobility management framework Android for Work (AfW) allows employers to effectively manage and mobilize all devices used in the workplace, even when employees use their own devices. Business-owned or single-use devices for specific employees and customers can incorporate IT controls to improve security and end-user functionality.

A dedicated Android enterprise mobility management (EMM) solution with an integrated AfW offering is especially critical for the modern enterprise, considering the rising adoption rates of Android devices, as verified by recent IDC research.

While AfW is still evolving into a stable, widely supported ecosystem, it makes great strides in addressing security and device management concerns while offering diverse deployment options and global availability. The program, developed by Google, allows businesses and employees to use Android-based devices for customized, work-specific purposes. Through integration with an EMM provider such as Codeproof or AirWatch, Android for Work lets IT managers and employees enable tailored capabilities and restrictions in the way data, apps, and devices can be used for work. AfW also brings a range of privacy and productivity features to the device, which may belong to the employee or be provisioned by the employer.

Prior to the AfW service offering, there was no standardized set of mobile device management (MDM) APIs in the core Android operating system. Several OEMs developed their own APIs to enable remote management of their devices. Popular examples include the Samsung Knox enterprise mobility management APIs, built on top of the Google Android operating system, and LG GATE, LG’s own set of EMM APIs. As a result, each EMM provider needed to work individually with each OEM and manage devices through the OEM’s EMM stack. For IT and security managers, this meant an ever-increasing number of management portals to handle, at increasing licensing costs and with low effectiveness.

Google’s Android for Work fills this gap by eliminating the OEM-specific API dependency and offering the same set of APIs in the core operating system. These APIs are available for EMM providers to control and manage Android devices across all OEMs.

Provisioning Methods
IT managers can use AfW with EMM services to provision and enroll devices for employees in several ways, including:

  • NFC: Enables quick and easy configurations onto new devices by simply tapping them together.
  • EMM Tokens: Using codes provided by IT, end users can install specific apps or EMM agents onto their devices from a remote location.
  • QR codes: Enables a device to be enrolled from the setup wizard by scanning an image, without any hands-on support from IT (a sample QR payload is sketched after this list).
  • Zero-Touch Enrollment (for corporate-owned devices): Enables simplified, large-scale deployments with support for multiple device manufacturers without any manual setup, allowing end users to use their preconfigured device out-of-the-box. This includes limited support for enforced management apps for certain device and OS versions.
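To give a flavor of what QR-code enrollment involves under the hood, here is a sketch in Python of the kind of JSON payload an EMM vendor might encode into the QR image. The extras shown follow Android’s documented provisioning keys, but the component name, download URL, and checksum are placeholders, not a real EMM agent:

```python
# Sketch of the JSON payload encoded into an AfW enrollment QR code.
# The keys follow Android's device provisioning extras; the values
# are hypothetical placeholders, not a real EMM agent.
import json

qr_payload = {
    "android.app.extra.PROVISIONING_DEVICE_ADMIN_COMPONENT_NAME":
        "com.example.emm/.AdminReceiver",
    "android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_DOWNLOAD_LOCATION":
        "https://emm.example.com/agent.apk",
    "android.app.extra.PROVISIONING_DEVICE_ADMIN_SIGNATURE_CHECKSUM":
        "base64url-encoded-checksum-goes-here",
}

# The setup wizard reads this JSON from the QR image, then downloads,
# verifies, and installs the EMM agent before completing setup.
print(json.dumps(qr_payload, indent=2))
```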

Device Modes and User Profiles

  • Business Only: This device mode is available for corporate devices and offers MDM functionality for individual users, who may be employees, contractors, or other partners. This allows organizations to maintain and configure the same device with unique configurations for every different device owner.
  • Personally Enabled: This capability is enabled on employee-owned BYOD devices that are connected to the corporate network and enrolled via the EMM. With the Profile Owner mode enabled, the employer or IT manager only gets to access certain work-related data, apps, and features on the employee BYOD device.
  • Single Use: This mode of operation focuses on the purpose of device functionality as opposed to the end user. As such, IT managers can establish an operating mode with certain features turned on and the rest blocked. These options are useful when the device is used to perform a specific purpose, no matter who gets to use it.

BYOD Challenges & Downside Risks
Striking a happy medium between user satisfaction and enterprise security is key to success in the modern mobility landscape. Single-phone corporate environments benefited from standardized security policies and unified interfaces, but BYOD support increases security risks because it fundamentally changes the nature of the architecture. App- or device-specific vulnerabilities may circumvent an existing security policy, and the more devices (and variety of devices) that are networked together, the greater the risk.

Limiting device interaction reduces risk but hamstrings employees who are accustomed to the flexibility provided by BYOD solutions. This may lead to worsening habits and policies as employees opt for unsafe workarounds that further expose enterprise networks to malicious or vulnerable apps. All of the above increases the difficulty of managing the mobile environment, since your IT department loses a layer of control over user-owned devices. There are also hidden costs to BYOD programs, including spikes in data usage (especially for employees who travel) and increased support costs for a wider variety of devices and apps.


Satish Shetty is CEO and founder of Codeproof Technologies, an enterprise mobile security software company. Shetty has more than 20 years of security and enterprise software development experience. A recognized leader in the mobile device management space, Shetty also has …

Article source: https://www.darkreading.com/vulnerabilities---threats/can-android-for-work-redefine-enterprise-mobile-security-/a/d-id/1331041?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Lazarus Group Attacks Banks, Bitcoin Users in New Campaign

A new Lazarus Group cyberattack campaign combines spear-phishing techniques with a cryptocurrency scanner designed to hunt for Bitcoin wallets.

The Lazarus Group has been discovered behind a new cyberattack campaign, dubbed HaoBao, that targets banks and Bitcoin users via spear-phishing lures delivering a new cryptocurrency scanner that hunts for Bitcoin wallets.

The attack campaign uses spear-phishing emails impersonating job recruiters, a tactic previously seen from the group – widely believed by researchers to operate out of North Korea – last year. From April through October 2017, researchers at McAfee Advanced Threat Research (ATR) saw Lazarus Group using job descriptions to target a range of organizations in English and Korean, gain access to their environments, and then steal sensitive data or money.

In January 2018, researchers detected the start of a new campaign when they found a malicious document disguised as a job recruitment notice for a business development executive position in Hong Kong. More malicious files with the same “Windows User” author appeared from January 16 to 24.

While the fake job recruitment messages are similar to those seen last year, the implants in this campaign have never been previously seen in the wild or used in previous Lazarus Group attacks, says Ryan Sherstobitoff, McAfee senior analyst of malware campaigns.

When a victim selects “enable content,” the malicious document launches one of two payloads on the system via a Visual Basic macro. The first is a lightweight cryptocurrency scanner, which gathers data on processes and users, then scans for a registry key and Bitcoin wallet. Attackers can observe traffic sent to the C&C server to determine whether or not a machine uses Bitcoin.

“It’s being more focused and filtering out targets of interest,” Sherstobitoff explains. “They’re getting more aggressive with Bitcoin stealing, and more aggressive in the way they target.”

If the malware detects that a machine has a Bitcoin wallet, it deploys and installs another payload. The secondary payload is a long-term implant intended to gain persistence on the machine. While researchers weren’t able to observe this directly, Sherstobitoff says this second stage probably has capabilities to steal private keys or siphon funds from the victim’s Bitcoin wallet.

Shrinking Footprints

Sherstobitoff says this campaign demonstrates Lazarus Group is moving toward smaller implants as opposed to large files and installations it used in the past. HaoBao loads directly into memory to scan information, making forensics recovery more difficult.

“They’re going fileless, and going for reduced implants, and reducing the footprint overall on the machine and cleaning up more quickly,” he says of their change in tactics. Moving forward, he expects they’ll continue to shrink their presence so it’s as small as needed to be successful.

Antivirus tools would detect this implant but overall, discovery is difficult because the attack is fairly targeted, says Sherstobitoff. “If your AV products aren’t totally up-to-date, there’s high potential these things can go for a number of days or months without being seen.”

Further, attackers may use data from the initial scan to tailor their secondary payload. For example, he continues, if the scanner determines a system is running a certain type of antivirus software, attackers could craft the second stage to evade detection.

There have been signs of increased activity from Lazarus Group, which researchers believe is based in North Korea. US-CERT today published an advisory stating that DHS and the FBI have detected Trojan malware variants HARDRAIN and BADCALL used by the North Korean government. The US government refers to North Korean cyber activity as Hidden Cobra, another term for the group.

Going for Gold

Previous Lazarus Group campaigns from 2017 focused on both money and data theft. This time, the focus seems to be exclusively on Bitcoin. HaoBao is used to target financial organizations and institutions that use or trade Bitcoin, and which might therefore hold wallets, Sherstobitoff says.

He points out that only two organizations were used as lures in the spear-phishing emails in this attack. The job descriptions used in the fraudulent emails are legitimate positions taken from real career sites. While the targets are unknown, all are related to cryptocurrency.

Cryptocurrency is a growing target for attackers because it’s highly anonymous, and there aren’t many regulations that make catching crypto theft easy. Many attackers have found it’s more lucrative to steal compute power for mining cryptocurrencies than to steal data, says RedLock CEO Varun Badhwar. Cryptojacking attacks on businesses often go unnoticed.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/vulnerabilities---threats/lazarus-group-attacks-banks-bitcoin-users-in-new-campaign/d/d-id/1331053?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Beware the ‘celebrities’ offering you free cryptocoins on Twitter

Consider @Eilon_Musk, @ElonMuski, @EloonMusk, @Elonn_Musk, @Alon_Musk, @DoonaldTrump65, and @justtinsun_tron: what a generous clutch of almost-celebrities!

All have been popping up on Twitter within the past few weeks, all of them bearing handles that are passingly close to those of legitimately famous people like Elon Musk, Donald Trump, Justin Sun, other tech CEOs, or other big names in cryptocurrency – and all of them claiming that they’re showering cryptocurrency onto the first comers.

All you have to do to receive it is first send some cryptocoin to an online wallet (please don’t!), and you’ll get double – triple! – quadruple! – decuple! – your money back (fat chance!).

In one sample of these scammers’ come-ons, the scammer ripped off a picture of Justin Sun, founder of the Tron Foundation. (TRON is a blockchain-based, open-source global digital entertainment protocol.) As this particular scam shows, not only are the scammers ripping off well-known people’s photos and typosquatting their handles; they’re also plopping their scam come-ons down in the prime real estate of the comment section of their targeted celebrities’ posts.

That’s what @justtinsun_tron did in the scam above, and it’s what @DoonaldTrump65 did with his own scam, which showed up in the comments on a #nationalprayerbreakfast tweet from President Trump.

The @DoonaldTrump65 account, which has since been suspended, on Thursday replied to @realDonaldTrump’s tweet with an offer to donate 250 Ethereum to the ETH community – “Because I’m the best President ever!”

BuzzFeed News did a “cursory search” of Twitter that uncovered 27 fake accounts promoting “dubious bitcoin or ethereum ‘investments,’” including ten masquerading as Musk and three pretending to be Donald Trump. BuzzFeed also found that there are large automated botnets doing the scut work behind the scenes.

Twitter told BuzzFeed News that it is trying to stamp out the scams:

We’re aware of this form of manipulation and are proactively implementing a number of signals to prevent these types of accounts from engaging with others in a deceptive manner.

Twitter may well be stamping out these accounts, but it’s a game of Whack-A-Mole. “New accounts, including three more posing as Trump, popped up Friday morning and are still active,” BuzzFeed reported on Friday. I found the @JusttinSun_tron account to be active as of Tuesday morning.

The scams might seem laughably easy to dismiss, but it doesn’t take many fooled donors to nicely fatten an online wallet. Partly, that’s because botnets automatically flood comments with fake replies. Given a similar-looking handle and an identical avatar, the scammer’s tweets look like they’re part of a legitimate thread, instead of being separate tweets from separate accounts. The scam tweet is then amplified as bots retweet it or reply with yet more bogus tweets saying that the cryptocurrency come-on is for real and actually works.
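For the curious, the “similar-looking handle” trick is easy to quantify. Here is a small, purely illustrative Python sketch that scores how close a handle sits to a real celebrity’s handle; the threshold and handle list are invented for demonstration:

```python
# Illustrative check for typosquatted handles: measure how close a
# handle is to a known celebrity handle. Threshold and handles are
# hypothetical, for demonstration only.
from difflib import SequenceMatcher

REAL = "elonmusk"
suspects = ["eilon_musk", "elonmuski", "eloonmusk", "elonn_musk",
            "alon_musk", "gardenersworld"]

for handle in suspects:
    similarity = SequenceMatcher(None, REAL,
                                 handle.replace("_", "")).ratio()
    flag = "suspicious" if similarity > 0.85 else "ok"
    print(f"{handle}: {similarity:.2f} {flag}")
```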

Josh Emerson, a self-proclaimed Twitter bot hunter, had as of Thursday tracked over 1,200 scammer accounts amplifying fake Elon Musk tweets touting the cryptocurrency scheme.

BuzzFeed News quoted Emerson:

Obviously the protections in place for automated account creation are not working.

Beware the bitcoin bots – they’re after your cryptocoin, they’re bot-breeding like mad, and they’re racing like rabbits to outpace Twitter’s ability to catch up.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PGS_JEc4SZQ/

Facebook’s privacy settings are illegal, says court

Facebook tucks default privacy settings away where you have to go dig for them – not exactly what you’d consider a way to get informed consent, the Berlin Regional Court in Germany has decided. And what’s up with that real-name policy that doesn’t allow users to be anonymous?

Illegal, illegal, illegal: that’s what the court has decreed on those and five of Facebook’s terms of service.

According to a judgment (PDF; in German) handed down by the Berlin court in mid-January and publicly revealed on Monday, Facebook collects and uses personal data without providing enough information to users to constitute meaningful consent. The Guardian reports that the case against Facebook was brought by the federation of German consumer organizations (VZBV), which argued that Facebook force-opts users by default into features it shouldn’t.

The VZBV’s press release quotes the group’s legal officer, Heiko Dünkel:

Facebook hides data protection-unfriendly presets in its privacy center, without sufficiently informing [users] during registration. That’s not enough for informed consent.

According to Germany’s Federal Data Protection Act, companies can only collect and use personal data with the consent of those affected. How can users give informed consent if they don’t know what’s going on?

They can’t, the VZBV said:

In order for them to make informed choices, providers must provide clear and understandable information about the nature, extent and purpose of the use of the data.

The VZBV pointed out these shortcomings in Facebook’s privacy settings:

  • Location service for mobile phones is activated by default. This reveals locations of people who use chat.
  • Search engines get a link to the participants’ activity history by default, making it easy for anybody online to stumble across things like profiles and account photos.

In all, the VZBV complained about five of Facebook’s privacy presets. The Berlin judges agreed with the privacy group about all of them: the presets are “ineffective,” the VZBV said, and there’s no guarantee that a user would even take note of their existence.

The Berlin Regional Court also declared eight clauses in Facebook’s terms of service to be invalid, including terms that allow Facebook to transmit data to the US and use personal data such as usernames and profiles for commercial purposes.

The court also ruled Facebook’s authentic name policy illegal. That policy once required users to go by their “real names” on the platform, but after a plethora of stories of how people have been harmed by the real-name policy, Facebook revised it in 2015 to permit whatever names users go by in real life… as long as that name doesn’t include expletives; titles; special characters; words, phrases or characters from multiple languages; or anything offensive or suggestive.

That’s not good enough, said the Berlin court: The current name policy is illegal because it disallows anonymity.

Dünkel:

Providers of online services must also allow users to participate anonymously, for example [by] using a pseudonym.

Facebook told The Guardian that it plans to appeal the decision, but that it’s “working hard to ensure that our guidelines are clear and easy to understand, and that the services offered by Facebook are in full accordance with the law.”

A week after the Berlin court ruled against Facebook, the company said it would be making significant changes to its privacy settings in preparation for the European Union’s sweeping new General Data Protection Regulation (GDPR), considered by many as the biggest overhaul of personal data privacy rules since the internet was born.

Chief Operating Officer Sheryl Sandberg said last month that the plan was for Facebook to make it easier for users to manage their own data:

We’re rolling out a new privacy center globally that will put the core privacy settings for Facebook in one place and make it much easier for people to manage their data.

Sandberg said that the creation of this “privacy center” was prompted by the requirements of the GDPR: a regulation that requires any company that does business in the EU to take specific steps to more securely collect, store and use personal information. The aim of the GDPR is to give Europeans more control over their information and how companies use it.

Facebook’s actually been trying to give people more transparency and control for a while, Sandberg said at the time. Of course, there’s nothing like the prospect of mammoth fines to speed the plough. From The Guardian’s coverage of Sandberg’s remarks:

…companies found to be in breach of GDPR face a maximum penalty of 4% of global annual turnover or €20m (£17.77m), whichever is greater. In Facebook’s case, based on a total revenue of $27.6bn in 2016, the maximum possible fine would be $1.1bn.
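For anyone who wants to check The Guardian’s arithmetic, the calculation is a one-liner; the figures below are taken straight from the quoted passage:

```python
# Working through The Guardian's GDPR arithmetic: the maximum
# penalty is the greater of 4% of global annual turnover or EUR 20m.
revenue_2016_usd = 27.6e9  # Facebook's total 2016 revenue, from the quote
flat_cap_eur = 20e6        # EUR 20m (~GBP 17.77m)

pct_fine_usd = 0.04 * revenue_2016_usd
print(f"4% of turnover: ${pct_fine_usd / 1e9:.2f}bn")  # $1.10bn

# The 4% figure dwarfs the EUR 20m cap at any plausible exchange
# rate, so the maximum possible fine is the ~$1.1bn cited above.
```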


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/965hj2_XTeA/

Did the NSA really use Twitter to send coded messages to a Russian?

On June 20 last year, the official Twitter account for the US National Security Agency (NSA) issued the following innocent-looking tweet:

Samuel Morse patented the telegraph 177 years ago. Did you know you can still send telegrams? Faster than post & pay only if it’s delivered.

On August 17, the same theme was taken up again:

The 1st telegraph communications exchange occurred between Queen Victoria and President Buchanan in 1858.

At the time, only a handful of people responded to either message. The tweets might have rested in obscurity indefinitely had the New York Times and The Intercept not alleged last weekend that the messages had an extraordinary purpose unconnected to remarking on telegraphic history. Explains The Intercept:

Each tweet looked completely benign but was in fact a message to the Russians.

As part of a sequence of 12, the tweets are now claimed to be a coded back-channel used to communicate with a Russian who was negotiating to sell to the NSA a set of cyberweapons stolen from it in 2016 by a group calling itself The Shadow Brokers.

These tools were leaked to the world and used by cybercriminals to launch attacks, such as May 2017’s WannaCry ransomware attack (later blamed by the US on North Korea).

Assuming the latest account stands up, it suggests that as recently as a few months ago, the NSA was still keen to find out precisely how much was lost in the incident and was willing to pay for the privilege.

But surely sending coded messages on a public system is a strange way to communicate something this sensitive?

There might be two reasons for an agency like the NSA to use Twitter.

The first is that a verified Twitter account appears to be a valued stamp of authenticity. The ‘Russian’ apparently needed something to verify whom he was talking to, and an official Twitter handle, it seems, will do.

Less obviously, using coded tweets is a convenient way to hide in plain sight. The two parties could have used direct messaging (DM) but this would have logged the connection they had to one another (i.e. from one Twitter account to another).

Ironically, in 2014 Twitter was said to be working on a way of making DMs encrypted end-to-end but backed down, possibly because the company didn’t want to antagonise a US government already unhappy with the spread of hard-to-crack encryption. That would have made the channel more secure for the NSA too.

The idea of encoding messages using a public system – or going a stage further and actually hiding them in its communication – has been around for a while even if documented examples are rare.

In 2010, Georgia Tech researchers proposed a steganography tool called Collage that would use Twitter posts and Flickr images to hide messages from government censors.

That same year, the possibilities of the technique were demonstrated by reports that a Russian spy ring had used it to hide messages in 100 or more public website images.
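To make the technique concrete, here is a toy Python sketch of least-significant-bit steganography, the classic way a message is tucked into image data. It is purely illustrative and not modeled on Collage or any tool mentioned above:

```python
# Toy least-significant-bit (LSB) steganography: hide a message in
# the lowest bit of each "pixel" byte. Real tools are far more
# elaborate; this only illustrates the principle.

def hide(pixels: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return out

def reveal(pixels: bytearray, length: int) -> bytes:
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

# 'Image' data: in practice these bytes would be RGB pixel values.
cover = bytearray(range(256))
stego = hide(cover, b"meet at 9")
print(reveal(stego, 9))  # b'meet at 9'
# The carrier looks almost unchanged: each byte differs by at most 1.
```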

This week, the world was reminded that there is more to communication than what is said or written. The NSA’s tweets will never seem mundane again.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PVkHPpLq4Ig/