STE WILLIAMS

Desktop Telegram users showing off not only their silly selfies but also their IP addresses

Telegram has paid out a €2,000 bounty to a researcher who uncovered a vulnerability that caused the messaging app to expose users’ IP addresses. The programming blunder has been fixed in the latest version.

Dhiraj Mishra took credit for the discovery and reporting of CVE-2018-17780, a vulnerability in the Windows and tdesktop (GitHub) versions of Telegram that, under specific settings, would allow a user to view the IP address of anyone they call.

Mishra told The Register the flaw stems from Telegram’s default settings to allow some users to place peer-to-peer calls. When P2P calls are made, the Telegram log file on the caller’s machine shows the IP address of the person being called.

On certain versions of Telegram (such as iOS and Android) users can turn off the logging by disabling the P2P option in the privacy settings under the “Calls” menu. Disabling peer-to-peer will force all calls to be routed through Telegram’s own servers, obscuring the IP addresses of both parties.


This option, however, was not given to the desktop Windows and tdesktop builds. Because of this, users who took calls on their desktop machines were susceptible to having their IP address logged, not something you generally want in a secure communications platform.

“Telegram is supposedly a secure messaging application, but it forces clients to only use P2P connection while initiating a call, however this setting can also be changed from ‘Settings > Privacy and security > Calls > peer-to-peer’ to other available options,” Mishra said.

“The tdesktop and telegram for windows breaks this trust by leaking public/private IP address of end user and there was no such option available yet for setting “P2P nobody” in tdesktop and Telegram for Windows.”

Mishra told The Register that he reported the flaw, along with a proof of concept, to Telegram. The bug has since been patched, and Mishra took home a tidy €2,000 bounty.

Those running the desktop versions of Telegram will want to make sure they have the latest release installed, which sports a fix for the vulnerability, and, if they want to prevent all IP address logging, disable P2P calling. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/10/01/telegram_bug_ip_addresses/

Boffin: Dump hardware number generators for encryption and instead look within

Hardware-based random number generators (HWRNGs) for encryption could be superseded after a Philippines-based researcher found that side-channel measurement of the timing of CPU operations provides enough entropy to seed crypto systems with the necessary randomness.

In a paper presented on Saturday at the International Conference on Innovative Research in Science, Technology and Management (ICIRSTM) in Singapore, JV Roig, consulting director and software developer at Asia Pacific College (APC) in the Philippines, says that HWRNGs represent a natural target for subversion by national intelligence agencies due to their black-box nature.

Were a HWRNG to be designed to produce predictable (non-random) numbers, the resulting cryptography would be weak – an outcome that numerous law enforcement agencies have sought or outright demanded.

Whether or not these devices have actually been compromised isn’t the issue, Roig said in an email to The Register. “HWRNGs are, by nature, black boxes, unauditable, and untrustworthy, so they’re out,” he said.

The solution within

Roig’s paper, “Stronger Cryptography For Every Device, Everywhere: A Side-Channel-Based Approach to Collecting Virtually Unlimited Entropy In Any CPU,” claims that because no CPU has identical performance characteristics, true randomness is readily available.

“CPU execution time variance is the way forward, for all types of devices, from servers to IOT/embedded/appliances: run a trivial benchmark, time it, repeat,” said Roig. “The accumulated timing info becomes your entropy, the source of your randomness.”

He likens these measurements to flipping a coin multiple times to get enough bits of entropy, with each benchmark run counting as a flip. He calls the technique SideRand, and provides sample code written in C:

#include <stdio.h>
#include <time.h>

int main() {
  int i = 0;
  int j = 0;
  int samples = 256;
  int scale = 5000000;
  unsigned int val1 = 2585566630u; /* unsigned: this literal overflows a signed int */
  unsigned int val2 = 576722363u;
  volatile unsigned int total = 0; /* volatile so the benchmark loop isn't optimized away */
  double times[samples];

  for (i = 0; i < samples; i++)
    {
      clock_t begin = clock();
      /* trivial benchmark: repeat a simple addition, then time it */
      for (j = 0; j < scale; j++)
      {
          total = val1 + val2;
      }
      clock_t end = clock();
      double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
      times[i] = time_spent;
      printf("%f\n", times[i]);
    }
    return 0;
}

This code, straightforward enough to be easily auditable, should be suitable for older systems with microsecond-level clock precision. It accesses the system clock() function and collects timing information in an array.

The result is 256 timing value samples, which represent enough collected entropy to seed a cryptographically secure pseudo-random number generator (CSPRNG). The paper includes a variant algorithm for more modern systems capable of nanosecond-level precision.

Digital fingerprints

CPUs, Roig’s paper explains, contain millions or billions of transistors, which have enough variation that no two chips perform identically. Chip designers may try to minimize transistor variances through guardbanding, but the situation has been getting worse over time, as noted last year in a paper by researchers at Lawrence Livermore National Laboratory.

Faced with these differences, chipmakers may resort to CPU binning – designating chips from the same batch with different characteristics as different product lines, so they don’t have to toss units that fall short of spec.

Roig argues that the persistence of chip imperfections means timing measurements will be viable for the foreseeable future.

“Until we reach this level of technology, which does not seem to be on the horizon, and CPUs somehow revert back to having non-dynamic performance scaling features, the proposed side channel-based heuristic is likely to remain a good candidate for ubiquitous secure random number generation across all our CPU-powered devices,” his paper says.

Timing measurements, Roig contends, can close the boot-time entropy hole identified by Nadia Heninger and colleagues in 2012 and are simple enough to deter government agencies from trying to backdoor OS RNG seeding.

“This is how we can make sure every device, everywhere, has stronger cryptography,” said Roig. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/10/01/hardware_random_numbers/

‘Short, Brutal Lives’: Life Expectancy for Malicious Domains

Using a cooling-off period for domain names can help catch those registered by known bad actors.

Domain Name System (DNS) pioneer Paul Vixie for more than three years has been calling for a “cooling off” period for newly created Internet domain names as a way to deter cybercrime and other abuses. Domain names registered and spun up in less than a minute only encourage and breed malicious activity, he argues, and placing them in a holding pattern for a few minutes or hours can help vet them and catch any registered by known spammers and other bad actors.

Vixie — who is founder and CEO of threat intelligence firm Farsight Security — and his team have now taken an up-close look at the life cycle of new Internet domains, and their findings shine new light on the lifespan of malicious and suspicious domains. “Most of them die young, and most of them die after living short, brutal lives,” he says of newly created domains.

Over a six-month period, Vixie and his team conducted a longitudinal study of 23.8 million domains under 936 top-level domains from their creation. They found that in the first seven days, 9.3% of new domains died; among those, the median lifespan was four hours and 16 minutes.

The cause of death for 6.7% of those new domains was blacklisting, and most of them were blocked within an hour of their birth. DNS registrars and hosting providers, meanwhile, deleted or revoked malicious domains three days or more after their creation. Interestingly, new generic top-level domains (gTLDs) suffered three times as many rapid deaths as traditional ones such as .com, .net, .org, and .edu.

Vixie’s team found 12 new gTLDs in which more domains died in their first week of life than survived it. “I was not shocked to see them as poster children of the short-lifetime effect,” he says. “I don’t know if they are more abusable or not,” but it’s possible the registries that snapped them up to sell aren’t getting as much business as they expected. “They’re under a good deal of financial pressure,” he says, so some may be less choosy over to whom they sell their available domains.

The Internet’s biggest TLD, .com, had just 2% of its new domains blacklisted and 3.6% deleted by registrars.

The new research, which Vixie will present on October 5 at the VirusBulletin International Conference in Montreal, underscores how a secure DNS policy is needed both for registrars that issue domains and for enterprises that register new domains, he says. Putting new domains on ice for hours, days, or a week is the best approach to ensuring there’s no malicious intent or ties. Enterprises, too, get the benefit of ensuring their new domains aren’t incorrectly blacklisted, for example.

“All new domains should go into a penalty box — good or bad — until they’ve had a chance to live long enough,” he says. Vixie’s full report will be released on Friday.

Related Content:

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/short-brutal-lives--life-expectancy-for-malicious-domains-/d/d-id/1332945?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to have that difficult “stay safe online” conversation with your kids

It’s crucial to arm kids with knowledge of how to protect themselves and their information online, not only in the moment, but also for the future – a concept many kids may not really care about or even grasp.

If you’re looking for the best way to start a conversation with your children about online safety as they start using the internet with greater independence, below are some tips to help them (and you!) keep themselves and their information protected.

1. Does it pass the grandmother test?

It can be easy to get swept up in the moment, and suddenly, without realizing it, you’ve said or done something you regret and can’t take back. It’s even worse on the internet, as that thing you’ve said or done lives online forever – yes, even if you think you’ve deleted it.

Think for a moment before you post something, and remember that once it’s online it’s out there for everyone to see. If you wouldn’t be comfortable with your grandmother, a teacher, or future employer reading that post, perhaps it shouldn’t go online in the first place.

2. Who are you talking to?

You can’t always be sure of who you’re talking to online, and you definitely can’t be sure of who’s watching or reading.

If an unexpected message pops up from someone you know, be careful. It might be someone pretending to be that person.

3. Protect your information

Whether you’re talking to someone or using an app or a service, it’s crucial to protect your personal information (your full name, your birthdate, or where you go to school), and your location (like where you live, or where you frequently hang out with your friends).

If someone or something is asking for your details, ask yourself why. Who are they, and why do they want this information? What do they want to do with it? If something feels off about the website or app that’s asking, trust your gut instinct and stop what you’re doing.

4. Don’t be lazy with passwords

It might seem like the easy thing to do – less typing and remembering, right? – but using the same password on every service and app is a really bad idea.

Sites and services get hacked pretty frequently, and hackers will often post a big data dump of all the email addresses and passwords they gather during that hack. Then they take those email addresses and passwords and try them out on other sites and apps, and sadly it often works.

So if you use the same password on a harmless free gaming app and on a social media account, and that harmless app gets hacked, you may find yourself locked out of your social media account the next day, because your profile has been hacked too.

The solution is really easy: Use unique, strong passwords on every site and app you use.

You can use your browser or mobile device’s built-in password manager, or a third-party manager to do this. Any of these password managers will do two important things: Generate a strong password (one that a hacker couldn’t guess on their own easily), and remember it for you.

5. Use 2FA on your accounts to keep hackers out

For the accounts that are really important to you, taking an extra step to keep them out of a hacker’s hands is really worth doing.

A lot of services, like email, social media, and games, offer what’s called multifactor or two-factor authentication. This is an additional measure of security to add to your account that goes beyond passwords. Sometimes the multifactor authentication comes in the form of a numerical code the service texts to you; in other cases the service will help you set up multifactor authentication with a third-party authenticator (like Google Authenticator).

Other services may have their own authenticator app or key generator they will ask you to use – if a service offers multifactor authentication, they’ll walk you through how to set it up and use it.

6. Think before you download

You don’t want to do anything that might make your phone stop working properly, or that could put it under someone else’s control. Download apps or browser extensions only from trustworthy sources; otherwise they could allow someone to take control of your device, steal your information, compromise your accounts – and even demand ransom money to release control of the device and its contents back to you.

7. Check permissions on apps

Take a good look at any permissions the app asks for – does it really need all those permissions? Ask why it needs all that access if it seems excessive, and if you can’t find out why, it might be best to remove it.

8. Don’t share accounts with friends

This one might sound like a no-brainer, but don’t share your passwords with friends either. If your friend gets hacked, then you’re locked out too! (Or if you and your friend have a fight, they might change that shared password in a moment of anger.)

If your friend wants to use the same app or service you’re using, they should get their own account that’s under their control.

9. Remember to log out!

If you’re at a public computer or using some other kind of shared device, like at a library, store, or a lab, remember to log out of any accounts you log in to! (Unless you really want other folks at the Apple Store reading your email.)

These tips are just a part of the ongoing conversation you should be having with your kids. Yes, parental controls exist to set limits on screen time, app access, and even transactions as you feel appropriate, and these can be very useful.

However, they are not foolproof – and one day, like training wheels, they have to come off. That’s why it’s vital that, when that day comes, your kids are well equipped with the knowledge to allow them to safely take control.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mkaw1BHHmro/

The Right Diagnosis: A Cybersecurity Perspective

A healthy body and a healthy security organization have a lot more in common than most people think.

As someone who is battling a chronic medical condition, I understand the importance of the right diagnosis. The right diagnosis along with modern medicine and the right attitude have helped me successfully battle multiple sclerosis for nearly a decade. Most people who meet me in person have no idea that I have MS, and I intend to keep it that way for a very long time.

So, why am I telling you this? And further, what do diagnosing and battling MS have to do with information security? I’d argue that we can learn an important lesson from my experiences: that just as the right diagnosis and the right treatment can go a long way toward treating medical issues, they also go a long way toward treating security problems.

No security program is perfect, but some need more attention than others. What are the checkpoints that will help organizations understand where their security programs are ailing, how to make the right diagnosis, and begin the proper treatment? Let me share a few of my thoughts.

Check brain function: Just as the brain controls how the body functions, the leadership of a security organization controls how that organization functions. When looking to evaluate and understand where a security program stands, one of the first diagnostics should be focused on leadership. Do security leaders have a clear vision? Do they have a solid strategy? Are they focused on the right goals and priorities? Do they have the right plan to make their strategy a reality? Do they have the ear of the executives, the board, and other stakeholders? Are they building the right team? These and other questions can help a security organization check its brain function and diagnose where it may be ailing.

Check the heartbeat: Security operations could be considered the central function of a security program, analogous to its heartbeat. Just as a healthy, regular heartbeat is critical to the health of the body, a healthy security operations program is critical to the health of a security organization. Is the security operations team properly trained? Do team members’ tools support their mission? Do team members populate their work queue with reliable, high-fidelity, practical alerts? Do they detect and respond to incidents in a timely and efficient manner? Do they have the right processes and procedures in place?

Check blood flow: Security needs to make its way throughout the organization just as blood needs to make its way throughout the body. This requires the right message, practical guidance, and the proper relationships. When any of these are lacking, the security organization will have a difficult time working with the business to improve its security posture.

Check breathing function: Just as breathing brings oxygen to the body, fresh ideas and innovation bring oxygen to the security organization. When a security program stagnates and becomes stale, it begins to lose effectiveness. Risks and threats change with time. Attackers become more creative and sophisticated. Technologies change. Detection methods become outdated. All of this results in the security organization becoming increasingly unaware of what it needs to be concerned about. The relevance of the information on which it relies becomes diluted. Without innovation to breathe new life into the security program, returns will diminish. Increasingly less risk will be mitigated.

Check muscle function: Just as the muscles move different parts of the body and implement the will of the brain, the incident response function implements the will of the security team. When an event or incident occurs, incident response is the muscle that brings the organization back to an acceptable place from a risk perspective. Ensuring that the incident response function is healthy is directly correlated with ensuring that the security program is healthy and properly able to mitigate risk. Does the incident response team have the visibility required to properly monitor the enterprise? Does it have the people, processes, and technology to ensure success? Do team members have the required relationships within the organization to properly mitigate and remediate incidents that occur?

Check the extremities: Healthy extremities are an important part of a healthy body. In security, customers, vendors, partners, and other stakeholders are the extremities. It’s easy to get caught up in the nearly endless list of internal security tasks awaiting the average security team. But considering the security of customers, vendors, partners, and other stakeholders is also an important part of a mature security program. Without considering the health of its extremities, the security organization will miss a number of ways in which risk can be introduced into the enterprise.

Get a second opinion: Sometimes even the most skilled medical professionals make the wrong diagnosis. Similarly, in security, sometimes even the most skilled security professionals make the wrong diagnosis. To ensure the right one, it can be helpful to work with a trusted colleague, a group of colleagues, or a partner. Don’t just trust one diagnosis, particularly if it’s your own. Take the time to get a second opinion.

Be patient: The right treatment based upon the right diagnosis may take time to have an effect. It’s important to give a new or modified approach time before giving up on it. Designing meaningful metrics allows a security organization to continually assess its progress against its goals and priorities. This gives the security organization much needed data points for evaluating whether or not a given approach is on track to produce the desired results.

Check the diagnosis: Risks and threats develop and evolve over time. The environment within the enterprise changes continually. Technology changes constantly. These and other changes mean that a diagnosis that was right some time ago may no longer be the right diagnosis. It’s important for a security organization to continually evaluate the circumstances and conditions it finds itself in and verify that a given diagnosis is still the correct one.

Related Content:

 


Josh (Twitter: @ananalytical) is an experienced information security leader with broad experience building and running Security Operations Centers (SOCs). Josh is currently co-founder and chief product officer at IDRRA and also serves as security advisor to ExtraHop. Prior to … View Full Bio

Article source: https://www.darkreading.com/operations/the-right-diagnosis-a-cybersecurity-perspective/a/d-id/1332913?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

California Enacts First-in-Nation IoT Security Law

The new law requires some form of authentication for most connected devices.

The nation’s first IoT security act was just signed into law in California. The law isn’t just about the IoT, but billions of small connected devices will have to add critical features if they’re sold in the state after Jan. 1, 2020.

SB-327 is broad legislation that applies, with some exceptions, to “…any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.” Those devices will be required to have basic security capabilities installed — though precisely what those might be is not spelled out in the legislation.

Instead, the law requires steps that are “appropriate” to the device and the information it collects, protecting each from “…unauthorized access, destruction, use, modification, or disclosure.” Specifically, if a device has provisions for unique authentication of device and/or users, it is considered to be in compliance with the law.

The exceptions to the requirement are those devices that fall under federal laws or regulations, including medical devices.

For more, read here.

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/iot/california-enacts-first-in-nation-iot-security-law/d/d-id/1332934?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Employees Share Average of 6 Passwords With Co-Workers

Password sharing and reuse are still widespread, but multifactor authentication is on the rise, new study shows.

An employee on average shares six passwords with his or her co-workers, and half of employees reuse passwords across work and personal accounts.

But there is a bit of good news: 45% of businesses are using multifactor authentication (MFA), up from 24.5% last year, according to a study by password manager LastPass of 43,000 organizations that use its service. Some 63% of organizations that employ MFA are in the US.

Even some smaller companies are employing MFA: 41% of those doing so have 25 or fewer employees, the study found. Meanwhile, 3% of companies with 10,000 or more employees do.

Read the full report here

 



Article source: https://www.darkreading.com/threat-intelligence/employees-share-average-of-6-passwords-with-co-workers/d/d-id/1332933?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

October Events at Dark Reading You Can’t Miss

Cybersecurity Month at Dark Reading is packed with educational webinars, from data breach response to small business security.

Wednesday, Oct. 3, 1 p.m. Eastern: The Real Impact of a Data Security Breach

With Cindi Carter, deputy CISO of Blue Cross Blue Shield of Kansas City, and Joshua Bartolomie, director of research and development for Cofense. 

If only a data breach were orderly: neat stacks of breach notification letters, tidy invoices requesting manageable fines, straightforward forensic investigations with no surprises. Breaches are rarely so cooperative, though. In this webinar, learn about some of the lesser-known impacts major incidents can have both on a business and on its security team, and learn how to better prepare for them. Register now.

 

Thursday, Oct. 11, 2 p.m. Eastern: Cybersecurity for Small- to Medium-Sized Businesses: 10 Steps for Success

With John Pironti, president of IP Architects LLC, and Perry Carpenter, chief evangelist and strategy officer for KnowBe4. 

Small and medium-sized businesses (SMBs) are learning the hard way that they are indeed prime targets for cyber attackers. But many enterprise security tools and practices don’t work for SMBs, which have neither the budget nor the skills to operate their own IT security department. In this webinar, learn tips for securing the smaller enterprise, and for implementing simple, affordable tools and best practices that make sense for the resource-limited SMB (and maybe even resource-limited large enterprises). Register now.

 

Wednesday, Oct. 17, 1 p.m. Eastern: Effective Cyber Risk Assessment

With John Pironti, president of IP Architects LLC and Matt Wilson, director of network engineering for Neustar

You might have been told you need to “support the business” and adopt a “risk-based approach” to security, but how do you do that? How do you assess constantly shifting risks in an accurate, data-centric way that relates to your business specifically? In this Dark Reading webinar, get started with the how-tos of risk assessment, including third-party risk and other hidden sources of cyber risk that might not be on your radar. Learn how to identify misspending and direct your security dollars to where they’re needed most. Register now.

 

Tuesday, Oct. 30: The Next-Generation Security Operations Center

With Roselle Safran, president at Rosint Labs and entrepreneur in residence at Lytical Ventures, and Adam Vincent, CEO of ThreatConnect

In the past, the IT security department focused most of its efforts on building and managing a secure “perimeter” and spent most of its time managing passwords and access control lists. Today, however, the security operations center (SOC) has become a place not only for building a strong defense against the latest attacks, but also for collecting threat intelligence and analyzing and responding to new threats that may evade traditional defenses. In this Dark Reading webinar, learn how to implement the latest threat intelligence technology, tools, and practices in SOC operations – and how to prepare your organization for the next major security incident, even if your enterprise doesn’t have a SOC in place.

 


Article source: https://www.darkreading.com/attacks-breaches/october-events-at-dark-reading-you-cant-miss/d/d-id/1332941?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Monero fixes major ‘burning bug’ flaw, preventing mass devaluation

The developers of Monero (XMR) call it the “burning bug” and they might never have done anything about it if an anonymous user hadn’t posed an awkward hypothetical question on the cryptocurrency’s subreddit last week.

What happens if I spend from a specific stealth address and then someone sends more to it? Are the funds inaccessible as the key image has already been used?

The query must have sounded naïve until the developers realised that the apparent non-expert had just confirmed a major flaw in wallets used to transact the controversial cryptocurrency, reportedly the world’s tenth most popular.

Funnily enough, it appears the same issue was brought up last year, when it met with a sort of “why would anyone do that?” response.

The TL;DR is that a software patch was this week issued to exchanges on top of the v0.12.3.0 release branch as a source code pull request, which presumably they’ll apply promptly assuming they’re on the mailing list and know about it.

As for the burning bug itself, this presents an interesting problem created by the use of stealth wallet addresses, an anonymity concept used across the cryptocurrency world but which has become especially important to privacy-sensitive Monero users.

These are used by recipients of currency (merchants or exchanges) so that anyone sending them funds must create their own one-time address, veiling the transaction from everyone on the blockchain except the parties involved.

It’s not a million miles away from a PO box – you know who is sending you mail but your neighbours never see the postman deliver anything.

In the world of cryptocurrencies, however, how this is done can have big implications. An attacker exploiting the weakness could in theory send 1,000 XMR to the same stealth address, each transaction forged to carry the same supposedly unique key image. Normally the blockchain would warn about the 999 duplicate keys, but in this case it wouldn’t notice, because of the way transactions are handled with stealth addresses.

Explained Monero’s developers:

The attacker then sells his XMR for BTC [Bitcoins] and lastly withdraws this BTC. The result of the hacker’s action(s) is that the exchange is left with 999 unspendable / burnt outputs of 1 XMR.

In fact, the attacker wouldn’t be able to use the extra Bitcoins either because they would be logged as double spend, which would still leave the exchange nursing big losses for every such batch of fraudulent transactions.

Therefore, a determined attacker could burn the funds of an organization’s wallet whilst merely losing network transaction fees. They, however, do not accrue direct monetary gains.
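The mechanics of the burn can be sketched in a few lines. This is a hypothetical simplification, not Monero’s actual wallet code: the point is simply that only one output per key image can ever be spent, so every forged duplicate arriving at the exchange’s wallet is dead on arrival.

```python
def receive_outputs(outputs):
    """Simulate a wallet scanning incoming outputs.

    outputs: list of (key_image, amount) pairs.
    Returns (spendable_amounts, burnt_amounts): only the first output
    seen for any given key image can ever be spent; duplicates are burnt.
    """
    seen_key_images = set()
    spendable, burnt = [], []
    for key_image, amount in outputs:
        if key_image in seen_key_images:
            burnt.append(amount)        # duplicate key image: unspendable
        else:
            seen_key_images.add(key_image)
            spendable.append(amount)
    return spendable, burnt

# 1,000 deposits of 1 XMR, all forged to share one key image:
spendable, burnt = receive_outputs([("ki_forged", 1)] * 1000)
# the wallet can spend only the first; the other 999 XMR are burnt
```

A patched wallet would flag the duplicates at deposit time instead of crediting them, which is essentially what this week’s fix does for exchanges.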

No direct gains, then, but possibly indirect gains made by damaging exchanges or gaining from changes in the value of Monero arising from an attack.

What this says about Monero’s codebase is hard to assess, although sceptics will point to a sequence of problems this year, including another critical flaw from only a few weeks back.

Conclude its developers:

We, as the Monero community, should seek means to get more eyes on the code and especially new pull requests.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pbvehlV3uFg/

You gave your number to Facebook for security and it used it for ads

What happens to the mobile numbers Facebook users add to their accounts to enable SMS two-factor authentication (2FA)?

If you assume the answer is nothing beyond their described purpose, prepare for a bit of a surprise courtesy of a study by researchers from Northeastern University and Princeton University, backed by plenty of dissatisfied commentary from the privacy community and tech press.

Facebook, the researchers found, has been adding these numbers to the other data it uses to target people with advertising.

It is already known that Facebook lets advertisers upload their own data – including email addresses and telephone numbers – which is matched to the same data on user accounts. As the researchers explain:

Facebook then creates an audience consisting of the matched users and allows the advertiser to target this specific audience.
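The matching step typically works on hashed PII rather than raw values: both the advertiser and the platform normalise each email address or phone number and hash it, then compare hashes. Facebook’s actual pipeline is proprietary, so the function and data below are illustrative only, but they show the general shape of hashed custom-audience matching.

```python
import hashlib

def normalize_and_hash(value: str) -> str:
    # Both sides lowercase/trim the value, then hash it, so raw PII
    # is never compared directly.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical platform-side index: hash of each user's PII -> user id
user_index = {
    normalize_and_hash("alice@example.com"): 1,
    normalize_and_hash("+15551234567"): 2,
}

# Hypothetical advertiser upload (note the differing case/whitespace)
uploaded = [" Alice@Example.com ", "bob@example.com"]
audience = [user_index[h] for h in map(normalize_and_hash, uploaded)
            if h in user_index]
# audience contains only the matched user: [1]
```

Crucially for this story, nothing in such a scheme distinguishes a number a user supplied for 2FA from one they supplied for any other reason: once it’s in the index, it matches.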

What’s never been clear, however, is which pieces of personally identifiable information (PII) from its various services (including Instagram and WhatsApp) are used in ad targeting, because it’s not easy to relate a specific piece of data from one context directly to the ads that show up.

The study offers a fascinating methodology for inferring this, in the process discovering that any data will do the trick, including numbers added as part of 2FA (or to receive login security alerts) but not used elsewhere.

An article in Gizmodo – which worked with the researchers – calls this data “shadow contact information,” perhaps deliberately echoing recent controversy surrounding Facebook’s shadow profiles used to gather data on internet users who come into contact with its sites without having accounts.

Facebook doesn’t clearly state that it does this anywhere, but seems to have admitted as much by telling another news site that if users were that bothered they could:

Opt out of this ad-based repurposing of their security digits by not using phone number based 2FA.

It is outrageous that Facebook is asking people to turn off SMS-based 2FA simply because they don’t like the fact that it is using that telephone number to target them with advertising.

Facebook uses advertising to make money from what is a free service – it harvests PII to target advertising, and arguably anyone bothered by this shouldn’t be on Facebook.

However, Facebook should draw the line at using information provided for security reasons in ad targeting, at least if it won’t let users restrict how that information is used.

Another way

The good news is that, however convenient SMS-based authentication might seem, it’s not secure enough anyway, and Facebook users would be better off migrating to alternatives such as an authenticator app, or even the Facebook app’s own Code Generator function.

This solution not only bypasses the whole issue of phone numbers being used in ways people aren’t happy with, but improves their security. What’s not to like?
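Authenticator apps sidestep the phone number entirely because they generate time-based one-time passwords (TOTP, RFC 6238) locally from a shared secret. A minimal sketch using only the Python standard library (the secret here is the RFC’s published test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, interval: int = 30, now=None) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET))  # current 6-digit code, no SMS involved
```

Because the code is derived from the secret and the clock alone, no phone number ever changes hands – which is exactly the property at issue here.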


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HCGqhRpvt5M/