
Questions linger after ISP blocks TeamViewer over fraud fears

Last Wednesday, for no apparent reason, the TeamViewer remote desktop application stopped working on the network of one of the UK’s largest ISPs, TalkTalk.

TeamViewer is popular with remote support professionals and power users alike, so support forums soon filled with complaints from perplexed users, who noticed that access was possible over 4G and some TalkTalk business connections but not over home broadband.

Complaints such as the following:

No access in North Yorkshire with TalkTalk – nightmare for work. If they can’t fix this within the day will have to cancel as I need this connection for my livelihood (sic). Terrible.

By Thursday, journalists had dragged the truth out of the company – it had “blocked a number of applications including TeamViewer” – which led to a joint statement confirming this on TeamViewer’s website:

TeamViewer and TalkTalk are in extensive talks to find a comprehensive joint solution to better address this scamming issue.

We now know (as some suspected at the time) that the block was connected to abuse of TeamViewer by criminals based in India who had been using it as part of a tech support scam targeting TalkTalk customers.

The BBC reported on this two days before the block, including the disturbing claim that the criminals had been able to quote stolen customer account data to make scam calls sound more convincing.

On Thursday, TalkTalk turned off the block and TeamViewer started working again. Still puzzled, we decided to probe deeper.

Pulling the plug on an application without warning is, as far as we know, almost unheard of for a UK ISP, so one might assume that this happened because the company believed it was an emergency situation.

But why block an application without informing customers until a day later? Forum comments suggest that even TalkTalk’s own telephone support staff were unaware of the TeamViewer block at first. And what changed on Thursday to allow it to be unblocked?

TalkTalk told Naked Security:

Like all ISPs we constantly monitor our network and testing regimes in order to protect our customers from any potential and known risks.

That’s hardly elucidating. TeamViewer, meanwhile, told us that it had raised the issue of the block with TalkTalk as soon as it heard of it and took the view that filtering one application missed the point that criminals could abuse numerous others too.

TeamViewer, in other words, was not at fault for what happened – and if the mere possibility of abuse justified a block, “you could go ahead and block email,” TeamViewer’s Axel Schmidt told Naked Security, pointedly.

Both companies alluded to improving security without giving detail. We’ll refrain from mentioning one or two possibilities for security reasons, but an obvious mitigation would be for TalkTalk to temporarily filter application traffic from Indian IP addresses – a short-term solution at best. Presumably, TeamViewer is also combing its user base for fraudulent accounts.

Tech support scams are far from new; it’s where this is going that worries us. Scams that hijack remote desktop tools look like an ever-expanding front for fraud that, even without stolen customer data, is hard to counter. TeamViewer and TalkTalk will not be the last victims, nor India the last host. The industry – and customers – should take this threat very seriously.

Defence is about following simple rules:

  • Never allow a cold caller to install anything on your computer – hang up.
  • Never respond to web pop-ups suggesting you call a support line.
  • Be aware that fraudsters are now using stolen data to make their calls sound more convincing – no cold caller is trustworthy, period.
  • When encountering scams, complain through official channels such as Action Fraud in the UK or the FBI in the US.
  • Above all, spread the word.

And the industry:

  • Start communicating – when blocking an application, tell customers ASAP.
  • Work pre-emptively with remote desktop providers.
  • Customer intel is vital – don’t ignore complaints.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oyWGRr0kXLk/

Data-matching: what happens when firms join the dots about you?

You may not have heard of data matching, but I guarantee you’ve heard of the companies that do it. Data matching is where a company takes internally held data, matches it with publicly available data, analyses it, and then uses it to raise money, target people, or further whatever its business goals happen to be.

Recently, Uber was caught out for using a program it called Greyball to dodge law-enforcement officials in cities where the service was being rolled out. Essentially Greyball carried out a data-matching process to figure out whether users were government officials or not.

The New York Times explains in more detail how it did this. Uber employees cross-matched usernames with social media profiles, employed “geofencing” around government offices to identify potential officers, and assessed whether credit card information was tied to an institution. And it apparently worked quite effectively: Uber managed to evade law enforcement in several US cities.
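Geofencing itself is straightforward: test whether a reported location falls within a set radius of a point of interest. Here is a minimal sketch in Python – the office coordinates and radius are invented for illustration, not taken from the Times’ reporting:

    import math

    # Hypothetical office location and alert radius.
    GOVERNMENT_OFFICES = [("city_hall", 45.5155, -122.6793)]
    GEOFENCE_RADIUS_M = 300  # flag anything within 300 metres

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in metres."""
        r = 6371000.0  # mean Earth radius
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2 +
             math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def inside_geofence(lat, lon):
        """Return the name of any office whose fence contains the point."""
        for name, olat, olon in GOVERNMENT_OFFICES:
            if haversine_m(lat, lon, olat, olon) <= GEOFENCE_RADIUS_M:
                return name
        return None

    # A sign-up from just outside the office would be flagged.
    print(inside_geofence(45.5157, -122.6790))  # -> city_hall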

Even charities have been caught red-handed using data matching to further their fundraising goals. Late last year the UK’s data protection authority, the Information Commissioner’s Office (ICO), fined the RSPCA and the British Heart Foundation for breaching the Data Protection Act. The ICO found the charities had targeted new donors through data matching, traded personal data with other charities, and screened donors using their data – all without consent.

Most important, perhaps, is the question of using data matching to further political aims. I have written about the limitations of Cambridge Analytica – which has been credited with winning Donald Trump the White House and helping Leave.eu win the UK’s referendum on leaving the European Union – and its approach to Facebook data, but never touched on the potential of data matching.

Outside of using Facebook’s tools, there are of course many ways in which the data can be extracted and matched to data that’s held elsewhere. And it doesn’t require a stretch of the imagination to think that Cambridge would have access to donor lists or voter lists, whose data could be extrapolated to create an extended pool of potential supporters.

How ethical and legal is this?

It’s extremely tricky to legislate for this sort of behaviour and even harder to enforce it. After all, if people leave breadcrumbs of their identity around the internet and these pieces of data can be matched together, what’s the harm? Most of the data is freely available and not obtained by nefarious means.

The problem is volume – there is so much personal data spread around online that matching it up can give companies insights into your life that you never really expected them to have. What’s more, these users generally haven’t explicitly consented to companies using data-matching methods to mine the internet for more information about them.

Data matching is limited in some ways because most of us don’t have unique names; there tend to be at least a handful of other people sharing them. But what happens if someone trying to match my details to public data instead gets another Sophie Warnes, who might have a different social media presence from mine, or a different job?

This happened to a friend of mine in a PR stunt gone wrong – she was sent a “dossier” on herself, which detailed where she lived and assumed the person living with her was her partner. Only… the woman it described was someone else in the same city with the same name. It’s a bit… Orwellian.
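It’s easy to see how that happens. Here is a toy record-linkage pass in Python, using only the standard library’s difflib and entirely invented records, in which a naive matcher cannot distinguish two people with the same name and city:

    from difflib import SequenceMatcher

    # Invented records: an internal file and two "public" profiles.
    internal = {"name": "Sophie Warnes", "city": "London"}
    public_profiles = [
        {"name": "Sophie Warnes", "city": "London", "job": "PR manager"},
        {"name": "Sophie Warnes", "city": "London", "job": "journalist"},
    ]

    def similarity(a, b):
        """Crude string similarity in the range 0..1."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Naive matcher: accept the first profile that scores highly enough.
    for profile in public_profiles:
        score = (similarity(internal["name"], profile["name"]) +
                 similarity(internal["city"], profile["city"])) / 2
        if score > 0.9:
            print("Matched:", profile)
            break  # may well be the wrong woman

Both candidates score identically, so the matcher simply takes whichever record comes first – exactly the dossier mix-up described above.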

Minimising the risks

Many of us assume that the data we give away to companies is protected, precisely because the Data Protection Act specifically stipulates that companies can only use the data in certain ways. In the US, there are also federal laws around privacy that cover this topic in some ways – indeed, Spotify, Spokeo, and several other US websites have been sued over the way they tracked users on the web.

In the case of Facebook, lessening the risk of being targeted or used as part of a mass data-gathering campaign is relatively easy. Make sure your account is locked down and that as much as possible is set to private. Don’t “Like” any fan pages unless absolutely necessary. Don’t do quizzes, no matter how fun they are. Install the Chrome extension DataSelfie to audit your Facebook usage and see exactly how much information you’re giving away.

When it comes to installing new services or opening new accounts, make sure you know how your data will be used. Specifically, avoid checking the box that mentions sharing data with third parties. You don’t want your data being sold on by these companies so that you can be harassed by something you didn’t explicitly sign up for.

The biggest problem is the wealth of “publicly available data” that companies can get their hands on, and this is much harder to counter. Many sites aggregate this data, and some use it together with social media accounts, email accounts, and anything else they can find.

The thing is, while this data was always publicly available – records are created from the electoral roll and other public records – it has always been tucked away in physical locations like town halls or libraries. Technically publicly available, yes – but not available at the click of a button. Until recently.

What’s more, these aggregators make it pretty difficult for you to opt out, and while they must be transparent in order to comply with the law, they are hardly extensively advertising the fact that you can remove your personal data.

In the US, removing yourself from sites like BeenVerified, Spokeo, etc, is a lot harder. In fact, you often need to sign up with them and give them more of your personal data (copies of ID cards, etc) in order to be removed.

And then there’s the problem of them re-accessing your data and putting you back on. As a ZDNet article about removing yourself from US search websites puts it: “Getting your data off once is not enough because the sites buy data and aggregate more info continually, making it likely that if you don’t take precautions, you’ll be put back in.”

While you can get your data removed from these sites, it might take a while, and chances are the data will reappear as the sites re-acquire it. This is why you need to tackle it at source.

There are two versions of the UK’s electoral register. The main one is used only for elections, for preventing fraud, and for checking credit or financial applications. The other – the open, or edited, register – is available for sale and can be used for marketing purposes. It’s the latter you can leave: contact the local council where you registered to vote and tell them you want to be taken off the open/edited register.

TL;DR: how to protect yourself online

  • Always check the terms and conditions of anything you sign, and opt out of giving third-party companies your data
  • Request data harvesting companies remove your details – you usually need to fill out a form
  • Ask to be taken off the edited/open electoral register (if in the UK)
  • Log out of Facebook when browsing the web elsewhere to prevent companies tracking you
  • Use Incognito mode on Chrome
  • Use a VPN (Virtual Private Network) service when accessing the internet outside trusted networks
  • Don’t do quizzes on Facebook or external sites – you have no idea what they’re using the data for, or what it could be used for in future
  • Keep social media accounts as separate and un-linked as far as possible
  • Use names online that differ from the one on your electoral roll

What next?

In the UK, the ICO is now investigating Cambridge Analytica, citing “concerns about Cambridge Analytica’s reported use of personal data”.

The company says that it doesn’t have access to Facebook data, and that the information discussed “relates to a research project”. The ICO hopes to publish its findings on the case later this year, but it will be an interesting one to watch as this raises serious questions about these data-matching tactics, which are used by marketing professionals worldwide.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Dyx8WbSvomk/

Brit infosec’s greatest threat? Thug malware holding nation’s devices to ransom – report

A joint report on cybercrime from the National Crime Agency and the newly formed National Cyber Security Centre unsurprisingly names ransomware as the top internet menace.

The report notes that ransomware is a “significant and growing” risk, with file-encrypting malware now posing a threat to a range of kit well beyond PCs: smartphones, connected devices, wearables and even TVs are all at risk. Distributed Denial of Service (DDoS) attacks are also becoming more aggressive.

David Mount, director, security consulting EMEA at Micro Focus, said: “As this report demonstrates, the IoT is ushering in a new era in security terms. It’s positive that issues like ransomware and IoT security are now part of the national conversation, but we still have a long way to go to encourage connected tech companies to build security into IoT products from the start. All too often device vendors prioritise usability and customer experience over security, and that is putting consumers and businesses at risk. Quite simply, IoT security can no longer be treated as an afterthought.”

Malcolm Murphy, technology director Western Europe at Infoblox, added: “Ransomware was a dominating trend in cyber-crime in 2016 and is only set to increase, with its commoditisation through cyber-crime toolkits allowing even the most novice criminal to deploy it.”

“Many Internet of Things manufacturers may be contributing to this rise by not prioritising security when building their devices [for example] many are being produced with predictable passwords that cannot easily be changed.”

He added: “Too many electronics firms want to make their IoT device as cheap as possible. Security is expensive and paying developers to write secure code might mean a gadget is late to market and costly. Ultimately though, insecure products will lead to greater attacks.”

The cyber threat to UK businesses report can be found here (pdf). A press release summary is here. The report’s release on Tuesday coincides with the opening of the CyberUK 2017 conference in Liverpool, hosted by the National Cyber Security Centre (NCSC) five months after the organisation’s launch. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/14/cyber_security_agencies_ransomware_warning/

Cybercriminals getting as good as nation state spies – report

The European energy sector is being targeted by advanced threat actors seeking proprietary information to advance the capabilities of domestic companies, according to FireEye Mandiant.

The latest annual report by FireEye’s incident response arm further warns that cyber threat groups are also targeting European industrial control systems for potentially disruptive or destructive operations.

The capability of cybercriminals is starting to rival that of nation state spies, according to FireEye Mandiant.

While nation-states continue to set a high bar for sophisticated cyber attacks, some financial threat actors have caught up to the point where we no longer see the line separating the two. Financial attackers have improved their tactics, techniques and procedures (TTPs) to the point where they have become difficult to detect, and challenging to investigate and remediate.

Enterprises in general are getting a little better at detecting breaches quickly, but responses still run into weeks and months rather than days.

EMEA dwell time (time to detection) has decreased significantly over the last year: FireEye reports an average of 106 days, compared with 469 days in 2015. “This is still much greater – nearly a month longer – than the global average of 79.5 days,” FireEye Mandiant adds. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/14/fireeye_mandiant_breach_report/

Manchester’s Munee Hut, Eyebrow Cottage and fresh hot spam from Belize

A Manchester-based firm that hired an outfit in the Central American country of Belize to send around 64,000 spam texts promoting loans has been fined £20,000.

Munee Hut operated out of Eyebrow Cottage, a Grade II listed building in the south Manchester town of Sale.

An Information Commissioner’s Office (ICO) investigation, prompted by hundreds of complaints from the public about the spam texts, revealed Munee Hut had obtained the personal details used to send the messages from a variety of loan and prize draw websites. None of these sites indicated the data would be used for sending marketing text messages from Munee Hut.

Marketeers are legally obliged to obtain specific consent from people confirming they are willing to receive marketing text messages from, or on behalf of, their firm. Munee Hut failed to gain this consent.

Steve Eckersley, ICO Head of Enforcement, said: “Paying an overseas company to send text messages for you is not a get-out for failing to comply with the law. Munee Hut should have taken responsibility for ensuring that proper and specific consent to send the messages had been obtained.”

In addition to the £20,000 fine, Munee Hut has also been issued with a legal notice compelling it to desist from sending unlawful text messages. Failure to comply could result in court action. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/14/loans_text_spammer_fined/

Black Hat Asia 2017: CISOs Must Get Proactive about the Internet of Things

These four steps will help reduce the risk from looming IoT attacks

Already busy protecting IT environments upended by cloud and mobility, CISOs now must deal with another security and compliance game changer: the Internet of Things. IoT opens up a new universe of attack opportunities for hackers to:

  • Steal confidential information
  • Tamper with “things” and cause real-world harm
  • Distribute malware
  • Hijack computing capacity and network bandwidth for DDoS attacks

With IoT, the number of connected devices that transmit sensitive data and can be remotely managed – and hacked – has skyrocketed, thanks to previously offline “things” that weren’t designed to be protected from hackers, such as toys, appliances, door locks, industrial machines, building equipment, vehicles, medical devices and security cameras.

While IoT yields many benefits for businesses, governments and consumers, its security has been a glaring afterthought, and CISOs are justifiably alarmed. According to Gartner, more than 25% of identified attacks in enterprises will involve IoT by 2020, drawn from an estimated 8.4 billion connected “things” in use.

CISOs got a nasty wake-up call last October, when hackers infected 100,000 IoT devices with Mirai malware and used the botnet for a DDoS attack against DNS provider Dyn, crippling major websites. Many see the Dyn incident as the first of many nightmare scenarios in which attackers alter the thermostat in a data center, damaging expensive equipment; disable the brakes on vehicles, causing accidents; or tamper with medicine pumps in hospitals, harming patients.

Here are four proactive steps CISOs can take to help reduce the risk from potential IoT attacks:

Step 1. Identify IoT initiatives in your organization, understand their business goals, and get involved by:

  • Inventorying new IoT network endpoints
  • Planning for IT resources IoT systems will need, such as storage, bandwidth and middleware
  • Determining the physical security endpoints should have
  • Establishing the monitoring and alerting required to detect atypical endpoint behavior (see the sketch after this list)
  • Drafting policies governing IoT systems’ secure usage, management and configuration
  • Communicating IoT systems’ InfoSec, compliance and physical risks to business managers, IT leaders, CxOs and board members
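As a minimal sketch of that monitoring idea – the traffic figures, window and threshold below are assumptions, not a production detector – an endpoint’s outbound traffic can be compared against a rolling baseline, with anything several standard deviations out raising an alert:

    import statistics
    from collections import deque

    WINDOW = 24          # hours of history kept per endpoint (assumed)
    THRESHOLD_SIGMA = 3  # deviations that count as "atypical" (assumed)

    class EndpointMonitor:
        """Rolling per-endpoint baseline of hourly outbound bytes."""

        def __init__(self):
            self.history = {}  # endpoint id -> deque of recent samples

        def observe(self, endpoint, outbound_bytes):
            hist = self.history.setdefault(endpoint, deque(maxlen=WINDOW))
            alert = None
            if len(hist) >= 6:  # need a few samples before judging
                mean = statistics.mean(hist)
                stdev = statistics.pstdev(hist) or 1.0
                if abs(outbound_bytes - mean) > THRESHOLD_SIGMA * stdev:
                    alert = (f"{endpoint}: {outbound_bytes} bytes/h, "
                             f"baseline {mean:.0f} +/- {stdev:.0f}")
            hist.append(outbound_bytes)
            return alert

    mon = EndpointMonitor()
    for sample in [900, 1100, 950, 1000, 1050, 980, 990, 250000]:
        alert = mon.observe("camera-17", sample)
        if alert:
            print("ALERT:", alert)  # fires on the 250,000-byte spike

Real deployments would feed this from flow logs or an IDS rather than a hard-coded list, but the shape of the problem is the same: establish a baseline, then alert on deviation.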

Step 2. Poll service providers, partners, contractors and other third parties about their use of potentially insecure IoT systems that may endanger systems or data you’ve given them access to.

Step 3. Do due diligence on IoT system vendors by testing their products’ security and getting answers to questions like:

  • Can products be scanned, monitored and patched to fix vulnerabilities?
  • Are they baking security into product design?
  • Do their systems use secure hardware and software components?
  • Does product development have expertise on InfoSec areas like secure application development and data protection?

Step 4. Shine the harsh light of regulatory and policy compliance on your organization’s IoT plans, to determine:

  • Which data will be captured and transmitted by IoT endpoints?
  • What is the business risk of that data getting breached?
  • What regulations apply to IoT systems?

Article source: http://www.darkreading.com/strategic-cio/it-strategy/black-hat-asia-2017--cisos-must-get-proactive-about-the-internet-of-things/d/d-id/1328372?_mc=RSS_DR_EDT

Canada Takes Tax Site Offline After Apache Struts Attacks

Hackers exploit vulnerability in Apache Struts 2 software of Statistics Canada but no damage done.

A newly discovered vulnerability in the Apache Struts 2 framework has forced the Canadian government to take websites offline, including Statistics Canada’s site and the federal tax-filing service, Reuters reports. The sites came under attack from hackers but were shut down before any damage could be done.

The security bug in Apache Struts 2 – software used mostly on the websites of governments, banks, and retailers – was reported last week, after the Apache Software Foundation released an update to fix the vulnerability. Users of the software around the world spent the weekend patching the bug, which was reportedly being exploited in the wild.
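The fix for the bug (CVE-2017-5638) was to upgrade to Struts 2.3.32 or 2.5.10.1, the patched releases. As a rough triage aid – a sketch, not a substitute for a proper vulnerability scanner, and the deployment path is an assumption – a script can walk a server’s webapps directory and flag any pre-fix struts2-core jars:

    import os
    import re

    # CVE-2017-5638 was fixed in these releases.
    FIXED = {(2, 3): (2, 3, 32), (2, 5): (2, 5, 10, 1)}
    JAR_RE = re.compile(r"struts2-core-(\d+(?:\.\d+)+)\.jar$")

    def parse(version):
        return tuple(int(p) for p in version.split("."))

    def check_tree(root):
        """Walk a deployment tree and flag pre-fix struts2-core jars."""
        for dirpath, _, files in os.walk(root):
            for name in files:
                m = JAR_RE.match(name)
                if not m:
                    continue
                ver = parse(m.group(1))
                fixed = FIXED.get(ver[:2])
                if fixed and ver < fixed:
                    print("VULNERABLE:", os.path.join(dirpath, name),
                          "- upgrade to", ".".join(map(str, fixed)))

    check_tree("/opt/tomcat/webapps")  # path is an assumption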

Canadian government official John Glowacki said that other countries “are actually having greater problems with this specific vulnerability.”

“This vulnerability is super easy to exploit,” says Chris Wysopal of security software firm Veracode. “You just point it to the web server and put in the command that you want to run.”

Read full story here.


Article source: http://www.darkreading.com/vulnerabilities---threats/canada-takes-tax-site-offline-after-apache-struts-attacks/d/d-id/1328394?_mc=RSS_DR_EDT

The Industrial Revolution of Application Security

DevOps is driving big changes in the industry, but a cultural shift is needed.

The industrial revolution marks a significant time period in our history because it was one of the first “disruptions” that led to advances in productivity and innovation. The most important inventions were the machines that automated work done manually with human capital and various tools. It is responsible for the cotton gin, the steam engine, the telegraph, new chemical manufacturing and iron production processes, and the rise of the factory system.

There are many parallels to the industrial revolution in the technology sector, including the advent and growth of the Internet, the migration to cloud computing, and mobile devices as an endpoint. One of the main forces driving this technological revolution is the adoption of development and operations (DevOps) culture. DevOps is all about collaboration and communication. Its core tenets are culture, automation, measurement, and sharing.


The first step is to break down the walls between teams, building a culture where individuals are encouraged to work with other teams and step outside the traditional channels of the waterfall model. Automation brings productivity gains, higher accuracy, and consistency. Measurement is crucial in DevOps for continuous improvement—data and results need to be readily available, transparent, and accessible to all. The fourth tenet—the sharing of best practices, discoveries, etc.—includes sharing both inside an organization between teams and departments but also with other organizations and companies from the community to best drive innovation.

Unfortunately, cybersecurity, specifically code and application security, hasn’t kept pace with this rapid progress. Far too many solutions have been vertically focused on the how instead of horizontally focused on the why. Much like how the railroad provided the platform to support numerous aspects of the industrial revolution, there needs to be a convergence of disparate tools and human capital initiatives onto a common platform that seamlessly integrates code and application security analysis and vulnerability testing without requiring developer intervention. That assertion was validated for me by walking the floor at the RSA Conference in February. There are simply too many vendors using the same messaging relying on FUD (fear, uncertainty, and doubt).

Barriers to Success
Before the industrial revolution, there were several barriers to innovation and advancement. There is a clear parallel to the current state of application security. The first barrier is the vast landscape of tools and point solutions, which all tend to be vertically focused on specific areas and capabilities. This makes it a serious challenge to scale out both human capital (security engineers) and complete coverage of code repositories and application catalogs.

Another barrier is that the security team is typically not integrated into the software development life cycle. This leaves the security team acting as gatekeeper to application update delivery, or as police after the delivery. These two barriers often create a contentious relationship between the DevOps and security operations (SecOps) teams, instead of the collaborative, sharing culture that is inherent to DevOps. A further barrier is the serious cybersecurity skills gap: the nonprofit Center for Cyber Safety and Education estimates there will be a shortage of 1.8 million information security workers by 2022. Without security talent, we can’t expect to further our innovation and security resiliency.

Risk, to me, is a four-letter word. I believe that there is too much focus and emphasis on mitigating risk, which is primarily a defensive stance, versus “playing offense” and managing and monitoring risk as an “elastic asset.” My contrarian view of application security is that we, as an industry, need to start playing offense in a continuous manner instead of passive defensive approaches performed on a weekly/monthly/quarterly/annual basis. For starters, we need to incorporate application scanning way earlier in the software development life cycle. Security can’t be an afterthought. Attackers at all levels are scanning applications and infrastructure for the smallest vulnerability on a continuous basis so we need to act accordingly. If we hope to move the security and resiliency needle at all, we need to adopt the same automated and continuous approach.
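In practice, playing offense can start with something as simple as a gate in the build pipeline. Here is a hedged sketch in Python that runs the open-source Bandit scanner over a repository and fails the build on any high-severity finding – the path and policy are assumptions, Bandit must be installed, and any static-analysis tool could be swapped in:

    import json
    import subprocess
    import sys

    def scan(path="src"):
        """Run Bandit over the tree and return its JSON report."""
        # -r: recurse, -f json: machine-readable output. Bandit exits
        # non-zero when it finds issues, so parse stdout instead of
        # trusting the return code.
        proc = subprocess.run(["bandit", "-r", path, "-f", "json"],
                              capture_output=True, text=True)
        return json.loads(proc.stdout)

    def gate(report, fail_on=("HIGH",)):
        """Fail the pipeline if any finding meets the severity policy."""
        bad = [r for r in report.get("results", [])
               if r.get("issue_severity") in fail_on]
        for r in bad:
            print(f"{r['filename']}:{r['line_number']}: {r['issue_text']}")
        return 1 if bad else 0

    if __name__ == "__main__":
        sys.exit(gate(scan()))

Wired into CI so it runs on every commit, a check like this moves scanning from a quarterly event to a continuous one – exactly the shift argued for above.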

I firmly believe that social and cultural changes—a key driver of the industrial revolution—will power the shift that needs to happen in the application security sector to positively disrupt our overall security resiliency, leading to an industrial revolution of application security. The “base of the stack” is the cultural change and mental shift to the culture of DevOps, which then drives the culture of DevSecOps.

Going forward, we collectively need to focus on the end game – the why – rather than fixating on individual tools that address only some segments of the DevOps security challenge. The industrial revolution of application security is ours for the taking, and we’re so close! We just need our common platform “railroad,” widespread trust in the DevSecOps approach, and an eye on the prize (focusing on why, not how).


Mike D. Kail is Chief Innovation Officer at Cybric. Prior to Cybric, Mike was Yahoo’s chief information officer and senior vice president of infrastructure, where he led the IT and global data center functions for the company. Prior to joining Yahoo, Mike served as vice …

Article source: http://www.darkreading.com/application-security/the-industrial-revolution-of-application-security/a/d-id/1328373?_mc=RSS_DR_EDT

Awareness Training Can Help Quell Ransomware Attacks

53 percent of organizations fall victim to ransomware despite multiple technological defenses, but the right awareness training brings that infection rate down significantly, a KnowBe4 study finds.

A recent survey by KnowBe4 on the prevention of ransomware attacks reveals that deploying antivirus is not enough to ward them off; a “human firewall” is also needed. The 2017 Endpoint Protection Ransomware Effectiveness Report says that regular training and phishing-attack testing of employees is necessary to counter ransomware, and that over the last year such training brought the success rate of these attacks down to 21 percent.

KnowBe4, a provider of security awareness training, found that despite security solutions, 53 percent of organizations have still been a victim of ransomware.

Stu Sjouwerman of KnowBe4 says: “Our research findings are fascinating as they illustrate that most companies are in an arms race to deploy endpoint solutions, such as antivirus protection, but their focus on this investment is leaving massive gaps that can be manipulated. The bottom line: even with antivirus, ransomware is going to get in.”

The company says that any given ransomware attack will, on average, impact six endpoints and two servers, not just one machine. An attack, it discovered, sets back the victim by 12 hours of user downtime and 12 hours of technology investment.


Article source: http://www.darkreading.com/operations/awareness-training-can-help-quell-ransomware-attacks/d/d-id/1328395?_mc=RSS_DR_EDT

7 Things You Need to Know about Bayesian Spam Filtering


Knowing how spam filters work can clarify how some messages get through, and how your own emails can avoid being caught.

Bayesian spam filtering is based on Bayes’ rule, a statistical theorem that gives you the probability of an event. Bayesian filtering is used to give you the probability that a certain email is spam.

1. The Name
It’s named after the statistician the Rev. Thomas Bayes, who provided an equation that allows new information to update the outcome of a probability calculation. The rule is also called the Bayes-Price rule after the mathematician Richard Price, who recognized the importance of the theorem, made some corrections to Bayes’ work, and put the rule to use.

2. Spam
When dealing with spam, the theorem is used to calculate the probability that a certain message is spam. The probability is based on words in the title and message, derived from messages that were identified as spam and messages that were identified as not being spam (sometimes called ham).
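As a concrete illustration, here is a minimal naive-Bayes word model in Python – a sketch of the idea under simple assumptions (word independence, Laplace smoothing), not a production filter, with invented training messages:

    import math
    from collections import Counter

    class BayesFilter:
        def __init__(self):
            self.word_counts = {"spam": Counter(), "ham": Counter()}
            self.msg_counts = {"spam": 0, "ham": 0}

        def train(self, words, label):
            """Record the words of one message labelled spam or ham."""
            self.word_counts[label].update(set(words))
            self.msg_counts[label] += 1

        def p_spam(self, words):
            """P(spam | words) via Bayes' rule, with Laplace smoothing."""
            log_odds = math.log((self.msg_counts["spam"] + 1) /
                                (self.msg_counts["ham"] + 1))
            for w in set(words):
                p_w_spam = ((self.word_counts["spam"][w] + 1) /
                            (self.msg_counts["spam"] + 2))
                p_w_ham = ((self.word_counts["ham"][w] + 1) /
                           (self.msg_counts["ham"] + 2))
                log_odds += math.log(p_w_spam / p_w_ham)
            return 1 / (1 + math.exp(-log_odds))

    f = BayesFilter()
    f.train("win a free prize now".split(), "spam")
    f.train("meeting agenda for tomorrow".split(), "ham")
    print(f.p_spam("free prize inside".split()))  # ~0.8, leaning spam

With more training data the probabilities sharpen, which is what the learning ability discussed below amounts to.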

3. False positives
The objective of the learning ability is to reduce the number of false positives. As annoying as it might be to receive a spam message, it is worse to miss a message from a customer just because they used a word that triggered the filter.

4. Scoring
Other methods often use simple scoring filters: if a message contains specific words, a few points are added to that message’s score, and when the total exceeds a certain threshold, the message is regarded as spam. Not only is this a very arbitrary method, it’s also a given that it will cause spammers to change their wording. Take for example “Viagra”, a word that will surely earn a high score. As soon as spammers found that out, they switched to variations like “V!agra” and so on. This is a cat-and-mouse game that will keep you busy creating new rules.
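A sketch of such a scoring filter – the word list and threshold are invented – shows how brittle the approach is:

    # Invented scores and threshold, for illustration only.
    BAD_WORDS = {"viagra": 5, "winner": 3, "free": 2}
    SPAM_THRESHOLD = 5

    def is_spam(message):
        score = sum(BAD_WORDS.get(w, 0) for w in message.lower().split())
        return score >= SPAM_THRESHOLD

    print(is_spam("Buy viagra now"))   # True
    print(is_spam("Buy v!agra now"))   # False - trivially evaded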

5. Learning
If the filtering allows for individual input, the precision can be enhanced on a per-user basis. Different users may attract specific forms of spam based on their online activities. In other words, what is spam to one person is a “must-read” newsletter to the next. Every time the user confirms or denies that a message is spam, the filtering process can calculate a more refined probability for the next occasion.

6. Poisoning
A downside of Bayesian filtering, in cases of more-or-less targeted spam, is that spammers will start including words or whole pieces of innocent text that lower the score. With prolonged use, these words can end up associated with spam, which is called poisoning.

7. Bypasses
A few methods spammers use to bypass “bad word” filtering:

  • The use of images to replace words that are known to raise the score.
  • Deliberate misspelling, as mentioned earlier.
  • Using homograph letters, which are characters from other character sets that look similar to letters in the message’s character set. For example, the Greek capital Omicron looks exactly the same as an “O” but has a different character encoding (see the sketch below).
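A rough way to catch that last trick is to look for mixed scripts inside a single word, which Python’s standard unicodedata module makes easy – a heuristic sketch, not a full confusables check:

    import unicodedata

    def scripts_in(word):
        """Crude script check via each character's Unicode name."""
        return {unicodedata.name(ch).split()[0] for ch in word if ch.isalpha()}

    clean = "OFFER"
    spoofed = "\u039fFFER"  # Greek capital Omicron standing in for "O"

    for word in (clean, spoofed):
        scripts = scripts_in(word)
        flag = " <- mixed scripts, suspicious" if len(scripts) > 1 else ""
        print(word, sorted(scripts), flag)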

Bayesian filtering is a method of spam filtering that has a learning ability, albeit a limited one. Knowing how spam filters work clarifies how some messages get through, and how you can make your own mail less prone to getting caught in a spam filter.


Was a Microsoft MVP in consumer security for 12 years running. Can speak four languages. Smells of rich mahogany and leather-bound books.

Article source: http://www.darkreading.com/partner-perspectives/malwarebytes/7-things-you-need-to-know-about-bayesian-spam-filtering/a/d-id/1328370?_mc=RSS_DR_EDT