7 Reasons Why Duplicate Content Hurts Your Local SEO

Written by: Jason Bayless | April 05, 2016

Two main myths surround duplicate content in local SEO:

  • That Google instantly penalizes a website for every instance of duplicate content
  • That duplicate content is a single problem with a single fix

Neither is true. As Google itself acknowledges, there are many kinds of duplicate content, and they are not all treated the same way.

Causes of Duplicate Content

Duplicate content can arise for dozens of reasons, and most of them are technical. It's rare for a person to deliberately publish the same words in two places on a site without pointing to the original source. Technical causes, on the other hand, usually occur because developers think like developers rather than like search engine spiders. For example, a developer will say that page.php exists only once, even though it can be reached at both example.com/page.php and example.com/category/page.php; to a crawler, those are two distinct URLs with identical content.

When is Duplicate Content Bad?

The easiest way to figure out whether your duplicate content will harm your search rankings is to ask why you're duplicating it in the first place. If the only reason is to get the same words in front of more people, to appear higher on search engine results pages, or to manipulate rankings in any way, don't do it.

Seven Ways to Reduce Duplicate Content

Whenever possible, your primary goal is to avoid creating duplicate content at all. Once it exists, there are several ways to limit its effect on search engines and improve how your site is indexed.

  1. Use a permanent redirect. A 301 redirect tells search engines that a page has moved permanently. Without it, the old and new URLs can both be indexed and treated as duplicate content when the search engine finds the new page.
  2. Format links the same way. As the page.php example above shows, there is often more than one way to reach a page. Help search engines by picking one way of writing your links and sticking to it every time.
  3. Use noindex. If your site republishes syndicated content, add a noindex meta tag to those pages so the duplicates stay out of Google's index, and ask anyone who syndicates your content to include the same tag for the same reason.
  4. Don’t repeat boilerplates. If you repeat lengthy boilerplate text, such as a long copyright notice, create a separate page that holds only that content. Each active page can then load that content and display it where it normally appears, so the same block isn't counted as duplicate content on every page.
  5. Expand current pages. Already have plenty of pages with too-similar text? Maybe you run a travel site with a page for flights to each city. Adding unique, specific text to each page differentiates it for search engines while also helping the visitor. If that's not possible, merge the pages.
  6. Add more content. Running an affiliate site? Sure, it's easy to copy and paste product details onto the page, but that won't help your rankings; search engines will simply point to the original site where the product is sold. Adding your own information about the products makes your content unique and gives it a reason to stand out.
  7. Learn rel=canonical. Google uses this link relationship to identify which version of duplicate content is preferred for ranking. One page is the canonical (parent) version; the others merely reference it.
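Step 1 above is set in server configuration rather than in the page itself. A minimal sketch, assuming an Apache server and using hypothetical paths:

```apache
# .htaccess: permanently redirect the old URL to the new one
Redirect 301 /old-page.php /new-page.php
```

Nginx, IIS, and most application frameworks offer equivalent directives; the essential part is the 301 status code, which tells crawlers the move is permanent.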
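Step 3 is a single tag in the head of the syndicated copy. A minimal sketch of the markup:

```html
<head>
  <!-- Keep this syndicated copy out of search engine indexes -->
  <meta name="robots" content="noindex">
</head>
```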
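Step 7 is likewise one line in the head of each duplicate page, pointing at the preferred version (the URL here is a placeholder):

```html
<head>
  <!-- Tell search engines which version of this content should rank -->
  <link rel="canonical" href="https://example.com/page.php">
</head>
```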
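Step 2 can be enforced in code wherever your site generates links. A minimal Python sketch; the function name and the normalization rules chosen here (lowercase scheme and host, no trailing slash except the root) are illustrative assumptions, not a standard:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Rewrite a URL into one consistent form so every internal link
    to the same page is written identically."""
    parts = urlsplit(url)
    # Strip the trailing slash from the path, but keep a bare "/" root.
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, parts.query, parts.fragment))

print(normalize_url("HTTP://Example.com/category/page.php/"))
# → http://example.com/category/page.php
```

Running every outgoing internal link through one function like this keeps crawlers from seeing the same page under several spellings of its URL.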