STE WILLIAMS

Microsoft ‘kills’ passwords, throws up threat manager, and APIs Graph Security

Ignite Microsoft is beefing up the security in its cloud services lineup with a handful of unveilings today at this year’s Ignite conference.

The Redmond giant says the offerings are part of an aim to secure both its own web services and the partner ecosystems that have popped up around them.

Passwords out to pasture

Among the big declarations from security VP Rob Lefferts was that Microsoft was marking “the end of the era of passwords.”

This will be done by extending the Microsoft Authenticator multi-factor phone app to Azure Active Directory (AD). Any network that uses Azure AD to authenticate people will now be able to give those users the option of using Authenticator to sign in via a PIN, fingerprint, or face scan on their iOS or Android device.

In other words, you can log into your account if you physically have your phone with the Authenticator app and can supply the right extra factor – a PIN, fingerprint, or face scan. Provide both, and access to your account is granted via Azure AD.
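
To picture the flow, here is a minimal Python sketch of a phone-based, password-free challenge-response login. It is illustrative only: the function names, the in-memory stores, and the symmetric HMAC scheme are our assumptions, not Microsoft’s actual Authenticator/Azure AD protocol (real deployments typically use asymmetric keys held in secure hardware).

```python
# Conceptual sketch of phone-based passwordless sign-in. Illustrative
# only -- not Microsoft's actual Authenticator/Azure AD protocol.
import hashlib
import hmac
import secrets

SERVER_KEYS: dict[str, bytes] = {}   # server-side store of device keys
PENDING: dict[str, bytes] = {}       # outstanding sign-in challenges

def enroll_device(user_id: str) -> bytes:
    """At enrollment the phone generates a key; the server keeps a copy.
    (Real deployments use asymmetric keys in secure hardware.)"""
    device_key = secrets.token_bytes(32)
    SERVER_KEYS[user_id] = device_key
    return device_key

def begin_sign_in(user_id: str) -> bytes:
    """Server issues a one-time challenge and pushes it to the phone."""
    nonce = secrets.token_bytes(16)
    PENDING[user_id] = nonce
    return nonce

def phone_approve(device_key: bytes, nonce: bytes, local_unlock_ok: bool):
    """The phone signs the challenge only after a local PIN/biometric check."""
    if not local_unlock_ok:          # PIN, fingerprint, or face scan failed
        return None
    return hmac.new(device_key, nonce, hashlib.sha256).digest()

def verify(user_id: str, response: bytes) -> bool:
    """Server checks the signed challenge; no password ever travels."""
    expected = hmac.new(SERVER_KEYS[user_id], PENDING.pop(user_id),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Enroll, challenge, approve on-device, verify server-side:
key = enroll_device("alice")
challenge = begin_sign_in("alice")
assert verify("alice", phone_approve(key, challenge, local_unlock_ok=True))
```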

“Using a multi-factor sign-in method, you can reduce compromise by 99.9 percent, and you can make the user experience simpler by eliminating passwords,” Lefferts declared.

“No company lets enterprises eliminate more passwords than Microsoft.”

Threat Protection set to watch over Microsoft 365, Secure Score rates Azure

Companies opting for the Microsoft 365 subscription bundle will now be able to use a new monitoring tool, Microsoft Threat Protection, to track and manage all of the security features and reports generated by the suite’s various online and offline components.

Microsoft says the feature will allow admins to have a single screen where they can view reports from emails, Office applications and documents, Windows endpoints and managed infrastructure.

“This will let analysts save thousands of hours as they automate the more mundane security tasks,” Lefferts declared.


Azure, meanwhile, will get new security reporting in the form of Secure Score, a service Microsoft says will give admins updates on what security policies are in effect at their company and where possible weak spots remain.

The idea of Secure Score is to check adherence to best practices, such as securing admin accounts with multi-factor authentication and implementing two-factor auth for regular user accounts.

In addition to management for on-prem networks, Secure Score will take into account Azure instances, where score reports and rundowns will be shown in the Azure Security Center.
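
To make the report-card idea concrete, here is a minimal sketch of how a weighted “secure score” can be computed. The checks and point values are invented for illustration; Microsoft’s actual scoring model is more elaborate.

```python
# Minimal sketch of a "secure score"-style report card. The checks and
# weights are invented for illustration, not Microsoft's actual model.
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    weight: int      # how many points the control is worth
    passed: bool     # whether the tenant currently satisfies it

def secure_score(checks: list[Check]) -> tuple[int, int, list[str]]:
    """Return (earned points, possible points, recommended next steps)."""
    earned = sum(c.weight for c in checks if c.passed)
    possible = sum(c.weight for c in checks)
    todo = [c.name for c in checks if not c.passed]
    return earned, possible, todo

checks = [
    Check("MFA enabled for admin accounts", weight=50, passed=True),
    Check("MFA enabled for all users", weight=30, passed=False),
    Check("No stale guest accounts", weight=10, passed=True),
]
earned, possible, todo = secure_score(checks)
print(f"Secure score: {earned}/{possible}; recommended next: {todo}")
```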

Confidential Compute for Azure clouds

For companies wanting to better isolate their cloud instances on Azure, Microsoft said it will be rolling out a new hardware-based service in the Azure DC series. The service will let customers opt to have their instances run on Intel SGX hardware, so that code and data are processed encrypted inside a secure enclave on the underlying machine.
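
For the mechanics, here is a conceptual simulation of the enclave trust boundary in Python. This is emphatically not real SGX (enclave code is built in C/C++ against an SDK such as Intel’s); the sketch, which assumes the third-party cryptography package, only illustrates the idea that the untrusted host handles opaque ciphertext while plaintext exists only inside the protected region.

```python
# Conceptual simulation of the enclave trust boundary -- NOT real SGX.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

class SimulatedEnclave:
    """Stands in for the protected region: key material and plaintext
    never leave the methods of this class."""
    def __init__(self) -> None:
        # In real SGX the key would be provisioned via remote attestation.
        self._key = Fernet(Fernet.generate_key())

    def seal(self, plaintext: bytes) -> bytes:
        return self._key.encrypt(plaintext)

    def process_sealed(self, ciphertext: bytes) -> bytes:
        # Decrypt, compute, re-encrypt -- all "inside" the enclave.
        data = self._key.decrypt(ciphertext)
        result = data.upper()            # stand-in computation
        return self._key.encrypt(result)

enclave = SimulatedEnclave()
sealed = enclave.seal(b"customer record")
# The host (and the cloud operator) only ever handles these opaque blobs:
sealed_result = enclave.process_sealed(sealed)
print(sealed_result)                     # ciphertext, unreadable outside
```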

Microsoft is also pushing the Information Protection SDK into general availability and adding new labeling options that will allow developers to build Microsoft’s content protections for sensitive data and files into their own code. With the new options, Redmond adds support for Office apps and PDF documents.

Graph Security API lands

Also targeting Microsoft’s dev community is the general availability release of the Graph Security API. The tool allows developers to plug their code into the Graph Security service and access things like its alert service, company Graph analysis, and scripts for configuring and managing the security settings for multiple products.
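
As a sketch of what that plugging-in can look like, the snippet below pulls alerts from the documented /security/alerts endpoint using Python and requests. Token acquisition through Azure AD OAuth is assumed to have happened already, and the filter values are illustrative.

```python
# Minimal sketch of pulling alerts from the Microsoft Graph Security API.
# Assumes a valid Azure AD access token has already been acquired.
import requests

GRAPH_ALERTS = "https://graph.microsoft.com/v1.0/security/alerts"

def fetch_high_severity_alerts(access_token: str) -> list[dict]:
    """Return up to 25 high-severity alerts from the unified alerts feed."""
    resp = requests.get(
        GRAPH_ALERTS,
        headers={"Authorization": f"Bearer {access_token}"},
        params={"$filter": "severity eq 'high'", "$top": 25},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

# Usage (token acquisition out of scope here):
# for alert in fetch_high_severity_alerts(token):
#     print(alert["title"], alert["severity"], alert["createdDateTime"])
```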

The idea, says Lefferts, is to make it easy for both customers and security vendors to share their threat intel and manage best practices and data analysis on malware and network attacks.

“[Graph Security] helps our partners work with us and each other to give you better threat detection and faster incident response,” he said.

“It connects a broad heterogeneous ecosystem of security solutions via a standard interface to help integrate security alerts, unlock contextual information, and simplify security automation.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/24/microsoft_kills_passwords/

In Quiet Change, Google Now Automatically Logging Users Into Chrome

The change is a complete departure from Google’s previous practice of keeping sign-in for Chrome separate from sign-ins to any Google service.

In a change with potentially worrisome privacy implications, Google has begun to automatically log users into Chrome whenever they use the browser to sign into Gmail or any other Google service.

The change, introduced quietly with the new Chrome 69 release earlier this month, is a complete departure from Google’s previous practice of keeping sign-in for Chrome separate from sign-ins to any Google service. Previously, Gmail users concerned about Google collecting their browsing data could use Chrome without necessarily being signed into the browser.

But starting with Chrome 69, the only way users can do that now is to not be logged into any Google service at all. Signing into a Google account will automatically sign them into Chrome. Signing out of Chrome will automatically sign users out of their other Google accounts.

In a blog post on Sunday, Matthew Green, a security researcher at Johns Hopkins University, blasted the change as having enormous implications for user privacy and trust. “The change makes a hash out of Google’s own privacy policies for Chrome,” Green noted.

The privacy policies, up until this week at least, made it very clear that when users run Chrome in “basic browser mode,” their data is stored locally, and that when they are signed into Chrome, their browsing data is shipped to Google. The implication up to now has been clear, Green said: “If you want privacy, don’t sign in. But what happens if your browser decides to switch you from one mode to the other, all on its own?”

Until Green’s post on Sunday, few even knew about Google’s update. The only indication of the change is that the user’s Google profile picture or icon now appears in the right-hand corner of the browser window when they are logged into a Google account.

In a Twitter thread responding to Green’s blog post, Google software engineer Adrienne Felt late Sunday night insisted that though Google is now automatically signing people in to Chrome, that does not mean the company is automatically uploading their browsing data as well.

In order for that to happen, users have to take the additional step of turning on a “Chrome Sync” feature after they are signed in, she said. By syncing, users can access their Chrome browsing histories across all devices. But Sync does not happen automatically when people get signed into Chrome, according to Felt.

She added that Google is updating its privacy notices “ASAP” to better clarify the implications of its automatic sign-in update for Chrome.

The new feature that automatically signs people into Chrome is called “identity consistency between browser and cookie jar.”  

The only reason that Google has added the feature is to prevent confusion among people sharing devices, Felt said in tweets that echoed comments made by two Chrome developers to Green. “In the past, people would sometimes sign out of the content area and think that meant they were no longer signed into Chrome, which could cause problems on a shared device,” Felt said.

For example, an individual using a computer where another user had previously signed into Chrome could end up having cookies from their browsing sessions uploaded to the originally signed-in user’s account, Green said. However, this becomes a potential issue only for users who sign in to Chrome in the first place, he noted.

The problem that the update is supposed to address does not impact users who chose not to log in to Chrome at all. If the problem has to do with signed-in users, it makes little sense for Google to make a change that forces unsigned users to become signed-in users, he said.

Troublingly, Google’s new menu for users signing into Chrome is also so vague that people could easily end up granting consent to sync their browsing data when they, in fact, did not intend to do so, Green said. Where previously users had to put in the effort of entering their credentials to sign in to Chrome, they can now end up consenting to data upload “with a single accidental click.”

Google also has not made clear what data exactly it will upload when a previously logged-out user logs in to Chrome and turns on the Sync feature. It’s not clear whether in this case Google will upload all of the data that was previously stored locally on the user’s device, Green noted.

Dark Reading has observed an equally confusing message when a user signs out of Gmail these days. The message notes that the user is signed out and that syncing is paused, and then adds:  “Your bookmarks, history, passwords, and more are no longer being synced to your account but will remain on this device. Sign in to start syncing again.”

In her tweets, Felt noted that Chrome data is not uploaded without a user specifically consenting to syncing it. So it is not clear what “bookmarks” and “history” Google is referring to when it says syncing has been paused. Google did not respond to Dark Reading’s request for clarification about what the Gmail message means. In response to a request for comment on Green’s concerns, Google pointed to Felt’s Twitter stream.


Article source: https://www.darkreading.com/vulnerabilities---threats/in-quiet-change-google-now-automatically-logging-users-into-chrome/d/d-id/1332882?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fault-Tolerant Method Used for Security Purposes in New Framework

A young company has a new patent for using fault tolerance techniques to protect against malware infection in applications.

How do you really know that an application has not been compromised? A newly patented technology is based on the premise that because you know precisely what every thread and API call is supposed to do, any divergence is a sign of trouble.

Fault tolerance has long used multiple identical instances of an application to ensure that the application can continue to function even if its hosting server goes offline. Young startup Virtual Software Systems (VS2) has been granted a patent for using that concept as a way of continuously validating application integrity.

Mario Troiana, head of development for VS2, says the company’s fault-tolerant framework is called the Intentional Computing Environment (ICE). “ICE is a framework made up of several mechanisms. Collectively, they instantiate multiple replicas of an application with different processes running on different virtual machines,” he says.

“It then enforces determinism of each thread of the application, so every time an API call is reached, the threads are compared to make sure they’re going to the same destination in a state space,” he says.

According to the company, “ICE detects and inhibits unintended application behavior caused by unpredictable events including hardware failures, malicious activity, and countless other faults.” That means if there is a point of application behavior that deviates for any reason from what is expected, ICE throws an exception and halts its execution.

There are a number of components to ICE, each handling or responding to a different aspect of application or data behavior, but the entire suite is based on the idea that application behavior is deterministic — that every part of an application will respond in a predictable, known way to any input.

Troiana is quick to point out two critical aspects of the way that ICE works. First, when software is developed, standard API calls must be replaced with ICE calls. This allows the protection software to work when the code is in production. It also means that this is not a solution applicable to off-the-shelf third-party software where the customer has no access to source code.
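
The comparison step is easy to sketch. Everything below is hypothetical – VS2’s real framework and interfaces are not public – but it shows the core notion: replicas report a digest of where they are in state space at each intercepted call, and any disagreement halts execution.

```python
# Hypothetical illustration of the replica-comparison idea; not VS2's
# actual ICE framework. N replicas run in lockstep; at every intercepted
# API-call boundary their state digests are compared, and any divergence
# (fault or tampering) raises an exception and halts execution.
import hashlib

class DivergenceError(RuntimeError):
    """Raised when replicas disagree -- fault or tampering suspected."""

def state_digest(call_name: str, args: tuple, local_state: bytes) -> str:
    blob = repr((call_name, args)).encode() + local_state
    return hashlib.sha256(blob).hexdigest()

class Replica:
    def __init__(self, state: bytes, args: tuple = ()):
        self.state, self.args = state, args
    def perform(self, call_name: str):
        return f"{call_name} executed"

def ice_call(call_name: str, replicas: list) -> object:
    """Stand-in for an 'ICE call' wrapping a standard API call: proceed
    only if every replica reports the same point in state space."""
    digests = {state_digest(call_name, r.args, r.state) for r in replicas}
    if len(digests) != 1:
        raise DivergenceError(f"replicas diverged at {call_name}")
    return replicas[0].perform(call_name)   # safe: all replicas agree

# Three replicas, one silently corrupted (e.g. by injected code):
replicas = [Replica(b"ok"), Replica(b"ok"), Replica(b"TAMPERED")]
try:
    ice_call("write_record", replicas)
except DivergenceError as e:
    print("halted:", e)   # halted: replicas diverged at write_record
```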

And VS2 doesn’t promote this as a complete, comprehensive security solution: the company sees it as complementary to other components in a total security architecture. 

ICE is currently available as a feature-complete technology assessment release for beta customers.


Article source: https://www.darkreading.com/application-security/fault-tolerant-method-use-for-security-purposes-in-new-framework/d/d-id/1332883?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Baddies just need one email account with clout to unleash phishing hell

A single account compromise at an unnamed “major university” in the UK led to a large-scale phishing attack against third parties, according to data protection outfit Barracuda Networks.

With one account in their pocket, the attackers used it to compromise a modest number of further accounts at the same institution, which were then turned into relays for the phishing flood, usually pushing invitations to links on web services such as OneDrive or DocuSign.

The incident contains a curious irony: third parties seem to have recognised the malicious campaign before the infected organisation, or at least before it reacted to block it.

What’s hard to argue with is the simple mathematics of email compromise – one account can generate thousands of new phishing emails that are suddenly more likely to beat the recipient’s filters because they come from a high-reputation domain.
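
A back-of-envelope version of that arithmetic, with rates invented purely for illustration:

```python
# Back-of-envelope fan-out of a single account takeover. All rates below
# are invented for illustration; the point is the fan-out, not the numbers.
compromised_accounts = 1
mails_per_account = 2000        # a single mailbox can push thousands
delivery_rate = 0.9             # high-reputation domain beats most filters
click_rate = 0.04               # share of recipients who take the bait

phish_delivered = compromised_accounts * mails_per_account * delivery_rate
new_victims = phish_delivered * click_rate
print(f"{phish_delivered:.0f} delivered, ~{new_victims:.0f} new compromises")
# -> 1800 delivered, ~72 new compromises, each a fresh relay
```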

Being picked wasn’t even flattering, according to Asaf Cidon, Barracuda’s vice president of email security. “This university was simply being used as a platform for a phishing campaign against other companies. Universities are good targets for email compromise because they have a lot of email accounts and a lot are dormant,” Cidon told The Reg.

A new Barracuda study has suggested the university’s experience is not unusual, with a third of a random sample of 50 organisations questioned admitting they’d suffered one or more email compromises in the three months to the end of June.

Compromised

Cidon said companies only know accounts have been compromised when someone in IT notices the resulting email blast, or an employee reports it to them. The average number of accounts compromised was a surprisingly low three – almost certainly an underestimate, Cidon believed.

It would be a mistake to assume these attacks are highly targeted: only 6 per cent of the compromised employees fell into that category. Far from targeting executives – a tactic for the spooks and nation-staters – anyone will do. What matters most is the domain reputation of the compromised organisation.

And it’s almost as if the compromised organisations have resigned themselves to living with it due to a lack of easy solutions. “There’s a spectrum of reactions. Some of the customers we’ve spoken to are in panic mode with the IT team pulling their hair out. At the other end, there are some companies who are not worried about it.”

There’s an obvious need to start monitoring internal email traffic. For anyone who’s built their messaging security around the idea of perimeter filtering, that’ll sound like a radical upgrade. Since the beginning of email time, the bad stuff has been on the outside, kept out by a gateway model that no longer works. Now vendors turn up with the new wheeze of watching the traffic sent within an organisation’s own domain.
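
In its simplest form, such internal monitoring can be a baseline-and-spike check on per-account send volume, as in this illustrative sketch (thresholds and data shapes invented; commercial products use far richer signals):

```python
# Illustrative internal-email monitor: flag any account whose outbound
# internal volume spikes far above its own historical baseline.
from collections import Counter
from statistics import mean

def flag_suspect_senders(
    todays_counts: Counter,           # sender -> messages sent today
    baselines: dict[str, list[int]],  # sender -> daily counts, past weeks
    spike_factor: float = 10.0,
    floor: int = 50,
) -> list[str]:
    suspects = []
    for sender, sent_today in todays_counts.items():
        typical = mean(baselines.get(sender, [0])) or 1.0  # avoid div by 0
        if sent_today >= floor and sent_today / typical >= spike_factor:
            suspects.append(sender)
    return suspects

today = Counter({"alice": 12, "bob": 900, "dormant-acct": 400})
history = {"alice": [10, 14, 9], "bob": [15, 12, 18], "dormant-acct": [0, 0, 0]}
print(flag_suspect_senders(today, history))   # ['bob', 'dormant-acct']
```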

For now, the only alternative is layers of unpopular and expensive authentication to protect accounts, or signing up for Office 365 or G Suite’s cloud AI email security, which has started making big if untested claims about its ability to block and claw back phishing emails. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/24/unsophisticated_email_takeovers/

Hacking Back: Simply a Bad Idea

While the concept may sound appealing, it’s rife with drawbacks and dangers.

As the topic of hacking back continues to resurface among elected officials, those of us in the cybersecurity community are scratching our heads over why this concept refuses to die. After digging deeper, one can see that there are many misperceptions regarding what the terms “hacking back” and “active cyber defense” (ACD) actually mean. General frustration and misinformation are driving the interest, but the mixing of definitions is fueling confusion.

Let’s start with the Active Cyber Defense Certainty (ACDC) Act, which was introduced to Congress by Rep. Tom Graves (R-Ga.) and Rep. Kyrsten Sinema (D-Ariz.) in March 2017 (updated in October 2017) as legislation that would give companies the right to hack back after a “persistent unauthorized intrusion.” The bill’s name itself generates confusion.

Hacking follows a very methodical approach, often referred to as the “kill chain,” which describes a hacker’s actions from initial compromise through escalation and lateral movement to exfiltration. Hacking back would require a reverse traversal of the kill chain (with heavy modification) in order to understand the attack, the adversary, and attribution. The danger with uninformed companies hacking back is that they typically won’t have well-defined strategies or methods for their actions. There are also many questions about what hacking back would actually achieve: companies could likely gain some threat intelligence, but they are far less likely to recover their data.

ACD is a specific approach for gathering intel (internal-external), assessing risks and threats, developing a tech plan to lure and misdirect in-network attackers, and implementing a consistent operational strategy. Fundamentally, ACD consists of cyber intelligence, deterrence, and defense-in-depth strategies to create a comprehensive, proactive defensive posture.

Recently, the issue of hacking back resurfaced at the federal level in a Judiciary Committee hearing, when Sen. Sheldon Whitehouse (D-R.I.) weighed the merits of hacking back during a broader discussion on more open disclosure of digital threats and breaches as well as security stress testing. Whitehouse advocated for creating a more transparent process for private companies to seek guidelines for hacking back, noting, “We ought to think hard about how and when to license hack-back authority so capable, responsible private-sector actors can deter foreign aggression.”

While promoting a forum for discussion is smart, doing anything that advances this idea — like setting guidelines for hacking back — is dangerous. Organizations must know how to establish a proactive cyber defense, but counter-hacking raises many practical, ethical, and legal concerns that most are ill-equipped to address. It also creates the risk of unintended consequences with adverse effects.

Although the concept of an “eye for an eye” may seem appealing to some, Whitehouse is correct that we must carefully consider the implications and consequences of what might happen if hacking back is allowed. We should also seek to understand and embrace new technologies that empower a proactive defense and new compliance standards for legacy and innovative technologies that present high security risks.

What if Hacking Back Became Legal in the US?
If hacking back were to become legal, an organization would first have to accurately identify its adversary, pinpoint the location of its stolen information, and retrieve it without causing other harm. It would also need to notify the FBI before taking action.

The next steps are challenging because:

1. It’s difficult to prove an attacker “persistently” attacked a network.

  • Attribution is extremely challenging, with traffic or commands that may appear to come from one source but originate elsewhere.
  • Adversaries often hide behind third-party “zombie” computers they’ve compromised to orchestrate their attacks. Feasibly, the original victim could now unwittingly become the attacker.

2. The bill only legalizes hacking back against attackers within the US. Attacks involving individuals from other countries would be subject to local laws.

3. A private organization’s counter-hacking may interfere with investigations or activities by government agencies. Reliable mechanisms for cross-coordination do not exist in the US and are even less established when applied internationally.

How Would It Work?
A typical hack back starts with probing the cybercriminal’s infrastructure for weaknesses to prepare for retrieval or retaliation, followed by remotely breaking into the attacker’s servers and wiping any stolen data, or disabling the attacker’s malware before it delivers a payload.

Counter-hacking requires specialized skills and is generally deemed unwise by cybersecurity experts for two reasons:

1. Most organizations lack the skill set, basic tools, and defensive posture to conduct a precision hack back and cannot handle the potentially escalated wrath of an aggravated attacker — especially one that may have sizable resources to retaliate.

2. A far less complex and contentious approach is available: organizations can fortify their defenses, implement strategies for detecting threats quickly, and add proactive measures that reduce risk and increase the effort an adversary needs to complete an attack.

A Better Approach
Organizations should pick the best perimeter defenses their budgets and resources can afford, understanding that with human error and advanced targeting, nothing can be 100% bulletproof. Companies shouldn’t stay passive; they should leverage their home-field advantage by adding proactive security measures and focusing on the threat models that affect the organization the most. They should also adopt active defense tools, such as deception, that change an attack’s asymmetry, gather threat intelligence, and turn the attacker’s own behavior into material for strengthening defenses. Consider this a chess match in which traps are used to manipulate adversaries and trick them into making mistakes, ultimately leading to their loss.

While organizations should have the support, rights, and permission to defend themselves from a cyberattack, conducting a retaliatory attack isn’t the answer. Legislation will also not solve the problems of cyberattacks. Instead, adoption of proactive cyber operations will drastically minimize the impact of a breach, and ultimately eliminate the need to retaliate. As a wise person once said, “the best revenge is massive success.”


Article source: https://www.darkreading.com/threat-intelligence/hacking-back-simply-a-bad-idea/a/d-id/1332856?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Scan4you’ Operator Gets 14-Year Sentence

The counter antivirus service, which was shut down in 2016, caused a total loss amount of $20.5 billion, according to the DoJ.

Ruslans Bondars, operator of the “Scan4you” counter antivirus (CAV) service, has been sentenced to 14 years in prison for his role in helping hackers evade antivirus software, the Department of Justice announced this week.

Scan4you was one of the Dark Web’s largest CAV services before it was shut down in 2016. Cyberattackers could use it to determine whether the malicious software they created would be detected by antivirus software. Between 2009 and 2012 alone, at least 30,000 people used the illicit service to test their malware.

Bondars, a 38-year-old Latvian resident and citizen of the former USSR, was convicted by a federal jury in Virginia on May 16, 2018. After a five-day trial, he was found guilty of one count of conspiracy to violate the Computer Fraud and Abuse Act, one count of conspiracy to commit wire fraud, and one count of computer intrusion with intent to cause damage, plus aiding and abetting.

“Ruslans Bondars helped malware developers attack American businesses,” said Assistant Attorney General Brian Benczkowski. One of those businesses was Target, which was hit with a breach in 2013 that compromised more than 40 million credit cards and nearly 70 million email addresses. Another Scan4you customer used the service to create the Citadel Trojan, which infected over 11 million devices.

In issuing its sentence, the court found a total loss amount of $20.5 billion. On top of Bondars’ prison sentence, the judge ordered three years of supervised release.

Read more details here.


Article source: https://www.darkreading.com/threat-intelligence/scan4yyou-operator-gets-14-year-sentence/d/d-id/1332874?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple


Microsoft Deletes Passwords for Azure Active Directory Applications

At Ignite 2018, security took center stage as Microsoft rolled out new security services and promised an end to passwords for online apps.

It’s looking like a password-less future for Microsoft, which will soon give users the option to eliminate passwords for applications by using Azure Active Directory (AD) for authentication.

This was one of many security announcements coming from Microsoft Ignite 2018, taking place this week in Orlando. In addition to password-free authentication, the company is rolling out its new Threat Protection Service and offering Azure Confidential Computing in preview.

Microsoft already lets Azure AD-connected apps authenticate via Microsoft Authenticator, an app it launched in 2016 to combine passwords with one-time codes for two-step verification. Now, users can avoid the password option entirely: the app serves as one form of verification, and a biometric authenticator (fingerprint or facial scan) or PIN serves as the second.

Rob Lefferts, Microsoft’s corporate vice president of security, says the move to password-less authentication for Azure AD applications marks “a critical milestone” for both companies and their employees, who are targeted with increasingly subtle and complex phishing attacks.

“Social engineering is a play for end users,” he said in an interview with Dark Reading. “So many of the threats and attacks we see on a day-to-day basis are designed to trick users into giving away their credentials.”

Most users don’t employ strong passwords, so multi-factor authentication is becoming mainstream as companies buckle down on security: SMS and email codes, hardware tokens, and authenticator applications are all widely accepted forms of MFA.

Microsoft is acting on the growing notion that the best password is no password at all, instead swapping in different forms of authentication to improve security.

“It’s not just about more security; it’s also about making end users feel more effective,” Lefferts says. An easier MFA experience gives attackers fewer opportunities to trick people into doing the wrong thing. Employees signing in to an Azure AD-connected application will be redirected to the Authenticator mobile app, where they can authenticate with a biometric factor, he explains.

To gauge the effectiveness of their security policies, businesses can use the newly expanded Microsoft Secure Score, which acts as an “enterprise-class dynamic report card” for security, Lefferts wrote in a blog post on today’s news. A company’s score serves as a simple metric of how well they’re doing security-wise, he notes.

Secure Score already covered features in Office 365 and Windows; now, it will cover all of Microsoft 365 and hybrid cloud workloads in Azure Security Center. Scores are evaluated by integrating signals from Azure AD, Enterprise Mobility and Security, and other services, and bringing the data into one place. Factors are weighted differently based on importance, says Lefferts. At the top of the list are known good practices like enabling MFA for all users.

Microsoft Threat Protection, another service announced today, is designed to detect, investigate, and remediate threats across endpoints, email, documents, identity, and infrastructure in the Microsoft 365 admin console. The idea is to pull together a unified vision of the “cacophony of alerts” that security operations teams handle daily, says Lefferts, and make it easier to spot anomalies they need to investigate.

Threat Protection’s dashboard organizes data on active incidents and the users and devices at greatest risk to security threats. The information is organized according to which threats are most imminent, and problems are sorted into “resolved incidents” as admins address them.

Azure Confidential Computing, a platform that lets developers build cloud applications and store data in a trusted execution environment, is now available in preview. With data in an enclave, the platform ensures data and operations can’t be viewed from the outside. If an attacker tries to manipulate the code, Azure denies the operations and disables the environment. An Early Access program for the service went live in September 2017.

“As organizations move to cloud computing, one of the most important things we can do is make sure data is protected in all stages of its lifecycle,” says Lefferts. This includes data at rest and in transit, both of which are protected by Azure Confidential Computing.


Article source: https://www.darkreading.com/cloud/microsoft-deletes-passwords-for-azure-active-directory-applications/d/d-id/1332880?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

iTunes is assigning you a ‘trust score’ based on emails and phone calls

Apple plans to use “abstracted” summaries of our phone calls and emails to assign users a trust score as a way to combat fraud.

It quietly slipped the change into the iTunes Store terms and privacy disclosures last Monday, at the same time it released iOS 12, tvOS 12, and watchOS 5.

(Speaking of which, please do remember that you need to turn some of the iOS 12 security enhancements on.)

According to Venture Beat, which first spotted the news about the trust scores, you can find the new provision in the iTunes Store Privacy windows of iOS and tvOS devices.

It reads:

To help identify and prevent fraud, information about how you use your device, including the approximate number of phone calls or emails you send and receive, will be used to compute a device trust score when you attempt a purchase.

The submissions are designed so Apple cannot learn the real values on your device. The scores are stored for a fixed time on our servers.

Initially, Apple didn’t give much by way of context or clarity. Venture Beat, for one, was puzzled over the notion of using phone calls and emails to assign trust in the case of Apple TV, given that the devices don’t make calls or send emails.

Apple didn’t specify how recording and tracking the number of calls or emails coming from a user’s iPhone, iPad, or iPod Touch would help it to verify a device’s identity better than would its unique device identifier, be it hardcoded serial number or advertising identifier, or, in the case of iPhones and cellular iPads, the codes on SIM cards.

Meanwhile, on social media, people’s minds leapt to a particularly chilly episode of Black Mirror: “Nosedive,” in which people rate each other during every interaction, bumping each other’s scores up or sending them into social hell, where nobody stops to help you when you’re stranded on the side of a highway, given that anybody’s retinas will show that you’re a sub-4.0 low-life.

But Apple’s move isn’t all that nefarious. It’s got good cause to keep trying new ways to combat fraud, given the steady drumbeat of iTunes customers getting ripped off.

In June, Apple Singapore was looking into a rash of iTunes fraud, with dozens of customers getting billed for iTunes purchases they never made.

On Wednesday, an Apple spokesperson clarified the trust score, telling Venture Beat that the only data it’s going to receive after crunching our calls and email will be a numeric score, computed on-device, using the company’s “standard privacy abstracting techniques,” and retained only for a limited period, without any way to work backward from the score to user behavior.

No calls, no emails, nor any other extrapolations of the data will be shared with Apple, the spokesperson said. Content of calls and emails won’t leave your device, won’t go to Apple, and won’t go to the cloud, as in, “somebody else’s computer.”

If someone else tries to use your account, their trust score won’t match yours, so it will be one more tool in Apple’s arsenal to suss out when somebody’s trying to rip off both you and Apple. It’s also designed to reduce false positives in fraud detection.
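
Apple has not published its algorithm, but a speculative sketch shows what “privacy abstracting” could look like: bucket the counts coarsely, then run them through a keyed one-way hash so only a stable, non-invertible score leaves the device. Every name and constant below is an assumption.

```python
# Speculative sketch of "privacy abstracting" -- NOT Apple's algorithm.
# Coarse bucketing hides exact counts; a keyed one-way hash makes the
# resulting score stable per device but unrecoverable by the server.
import hashlib
import hmac

BUCKETS = (0, 1, 5, 20, 100, 500)   # coarse ranges hide exact counts

def bucketize(count: int) -> int:
    """Map an exact count to a coarse bucket index."""
    return max(i for i, b in enumerate(BUCKETS) if count >= b)

def device_trust_score(calls: int, emails: int, device_secret: bytes) -> str:
    """Computed on-device; only this digest would leave the phone.
    Same usage pattern -> same score; exact counts are unrecoverable."""
    summary = bytes([bucketize(calls), bucketize(emails)])
    return hmac.new(device_secret, summary, hashlib.sha256).hexdigest()[:16]

secret = b"per-device-key"           # hypothetical on-device secret
mine = device_trust_score(calls=37, emails=210, device_secret=secret)
fraudster = device_trust_score(calls=0, emails=2, device_secret=secret)
print(mine != fraudster)             # different usage -> different score
```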

Apple’s trying to stay a step ahead, and that’s a good thing: Keeping up with iTunes fraud is a cat-and-mouse game, and the company’s got to keep trying new ways to fight the crooks.

We’re assuming this won’t turn into a Black Mirror episode, but if it does, we’ll be sure to let you know!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CAHfHu1lWk4/

Police accidentally tweet bookmarks that reveal surveilled groups

The Massachusetts State Police (MSP) accidentally spilled some of its opsec onto Twitter on Tuesday night, uploading a screenshot that revealed browser bookmarks which included links to a collection of Boston’s left-wing organizations that the staties are keeping an eye on.

The tweeted screenshot showed that the MSP bookmarked activist groups, including MAAPB (Mass Action Against Police Brutality), COMBAT (the Coalition to Organize and Mobilize Boston Against Trump), and Resistance Calendar.

On Wednesday, MSP put out a statement about the bookmarks, saying that police have…

…a responsibility to know about all large public gatherings of any type and by any group, regardless of their purpose and position, for public safety reasons. We do not collect information about—nor, frankly, do we care about—any group’s beliefs or opinions.

In this case, as the Twitter responses show, the leak has riled people who are distrustful of police surveillance and its purportedly unbiased nature. But the leaked bookmarks would have been embarrassing no matter what they showed.

It’s embarrassing for the simple fact that it’s sloppy data handling, and it led to exposure of information that clearly wasn’t meant to be publicly shared: otherwise, one imagines, the MSP wouldn’t have felt the need to delete the revealing tweet.

Of course, the MSP is far from the only organization that’s let slip data not necessarily meant for public consumption.

The most recent example came in January, when, during a false alarm about an incoming ballistic missile, an Associated Press photo taken within headquarters at the Hawaii Emergency Management Agency (HI-EMA) showed a yellow sticky note, bearing a password and stuck to a computer screen – plain to see for one and all, including, obviously, a press photographer who’d go on to disseminate it worldwide.

Then too, there was Luiz Dorea, head of security at the 2014 World Cup. There was a lovely photo taken of Dorea in the state-of-the-art security center for the games, with its giant video wall and staff hard at work, and the Wi-Fi SSID and password showing up loud and proud on the big screen behind him… Right underneath the secret internal email address used to communicate with a Brazilian government agency.

This is the kind of thing that you need royalty to weigh in on, clearly. Specifically, Prince William. He should know: he has experience with credentials posted in the background. It happened when he was a search and rescue helicopter pilot for the Royal Air Force (RAF) and journalists did a “day in the life of” feature in 2012.

If the prince is busy, maybe we could send over Owen Smith, the UK Labour Party politician. He might have some good advice: in September 2016, login details for his campaign’s phone bank were tweeted out to thousands with yet another “helloooooooo, what’s that in the background?” photo.

The lesson here is drop-dead simple when it comes to passwords: Don’t write down passwords in public places. Don’t put them on sticky notes. Don’t write them on white boards.

Swap “password” for “any information showing up on your desktop that you don’t want the entire Twitter universe to see”, and you can guess what the lesson in this case is: crop that screen grab before you drop it.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HwCJtSyEQ74/