In recent years, SEO fundamentals such as link building for better rankings have changed, and content marketing has grown in importance. Still, what many people would call "traditional SEO" remains a great way to earn traffic from search engines.
As we've already discussed, keyword research still matters, and many technical SEO problems can keep Google and other search engines from understanding and ranking a site's content.
Technical SEO for larger, more complex sites is really a discipline of its own, but small and medium-sized businesses can still benefit from understanding the common mistakes and issues most sites face.
Search engines are giving more and more weight to fast-loading sites. The good news is that speed helps not only your rankings but also your users and your site's conversion rates. Google provides a free tool, PageSpeed Insights, that will tell you exactly what to change on your site to fix page speed problems.
If your site gets, or could get, significant search engine traffic from mobile searches, how mobile-friendly it is will affect how well it ranks on mobile devices, a fast-growing segment. In some niches, mobile traffic already exceeds desktop traffic.
Google recently announced an algorithm change focused on exactly this. In my most recent post, I cover how to see what kind of mobile search traffic is coming to your site and offer specific suggestions for what to change. Here again, Google offers a very helpful free tool, the Mobile-Friendly Test, for finding out how to make your site more mobile-friendly.
Header response codes are an important technical SEO issue. If you're not very technical, this can be a complicated topic (again, more detailed resources are listed below), but the essentials are simple: working pages should return a 200 (OK) status code to search engines, and pages that can't be found should return a 404 (Not Found) to signal that they are gone.
Get these codes wrong and you can tell Google and other search engines that a "Page Not Found" page is actually a working page, which makes it look like a thin or duplicate page, or worse, you can signal that all of your site's content is 404s, so none of your pages get indexed and become eligible to rank. You can use a server header checker to see what status codes your pages return when search engines crawl them.
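As a rough sketch of such a check (the URL you point it at would be one of your own pages; this uses only Python's standard library), you can fetch the status code a page returns like this:

```python
# Minimal sketch: fetch the HTTP status code a URL returns, which is the
# same header response code search engine crawlers see.
import http.client
from urllib.parse import urlparse

def status_code(url: str) -> int:
    """Return the HTTP status code of a HEAD request to `url`."""
    parts = urlparse(url)
    if parts.scheme == "https":
        conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
    else:
        conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    try:
        conn.request("HEAD", parts.path or "/")
        return conn.getresponse().status
    finally:
        conn.close()
```

A live page should come back as 200 and a genuinely missing one as 404; anything else (a 200 on a "not found" page, or a 404 on real content) is worth investigating.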
Improperly implemented redirects can seriously hurt your search results. If your content lives at example.com/page and that page is earning search engine traffic, you don't want to move it all to example.com/different-url/newpage.html unless there is a very strong business reason that would outweigh a possible short-term, or even long-term, loss in search engine traffic.
If you do need to move content, use permanent (301) redirects for content that is moving permanently. Developers often reach for temporary (302) redirects instead, which tell Google that the move may not be permanent and that it shouldn't pass all of the link equity and ranking power to the new URL. (Changing your URL structure can also break links, hurting your referral traffic and making the site harder to navigate.)
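On an Apache server, for instance, a permanent redirect might look like the following (the paths are the illustrative ones from above, not real URLs):

```apache
# .htaccess: permanently (301) redirect the old URL to its new location.
# Writing "Redirect 302" instead would mark the move as temporary, and
# search engines may not transfer ranking signals to the new URL.
Redirect 301 /page /different-url/newpage.html
```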
With Google's recent Panda updates, a lot of attention has also been paid to thin and duplicate content. Duplicating content (publishing the same or nearly identical content on multiple pages) splits link equity between two pages instead of concentrating it on one. That makes it harder to compete for competitive phrases against sites that consolidate all of their link equity into a single document. Large amounts of duplicate content also make your site look full of lower-quality (and possibly manipulative) content in the eyes of search engines.
Duplicate and thin content have a number of possible causes. These problems can be difficult to diagnose, but you can get a quick sense of them in Google Webmaster Tools (now Search Console) under Search Appearance > HTML Improvements.
Check out Google's own guidance on duplicate content. Many paid SEO tools, such as Moz Analytics and Screaming Frog SEO Spider, also include duplicate-content detection.
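As a toy illustration of what such tools do under the hood (this sketch only catches exact duplicates after normalizing whitespace and case; real tools also detect near-duplicates), you can group pages whose text is identical:

```python
# Rough sketch: group pages with identical text content by hashing a
# normalized version of each page's text.
import hashlib
from collections import defaultdict

def find_duplicates(pages):
    """pages: dict mapping URL -> page text.
    Returns groups of URLs whose normalized text is identical."""
    groups = defaultdict(list)
    for url, text in pages.items():
        normalized = " ".join(text.split()).lower()
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        groups[digest].append(url)
    return [urls for urls in groups.values() if len(urls) > 1]
```

Each group returned is a set of URLs splitting link equity that could instead be consolidated onto one page.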
XML sitemaps help Google and Bing understand your site and find all of its content. Just be sure not to include pages that aren't useful, and remember that submitting a page to a search engine in a sitemap doesn't guarantee that the page will rank for anything. A number of free tools can generate XML sitemaps.
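As a minimal sketch (the URLs passed in are placeholders for your own pages), a basic sitemap can even be built with Python's standard library:

```python
# Minimal sketch: build a tiny XML sitemap with the standard library.
# The URLs supplied by the caller are placeholders for illustration.
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Return an XML sitemap string listing the given page URLs."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for loc in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = loc
    return ET.tostring(urlset, encoding="unicode")
```

A production sitemap file would also carry the `<?xml version="1.0" encoding="UTF-8"?>` declaration at the top and typically per-URL details like `<lastmod>`.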
Robots.txt, meta noindex, & meta nofollow
A robots.txt file lets you tell search engines how to handle certain content on your site, for example, not to crawl a particular section. Your site most likely already has one at yoursite.com/robots.txt.
Make sure this file isn't blocking anything you'd want search engines to find and add to their index. You can also use it to keep search engines away from things like staging servers or large chunks of thin or duplicate content that are useful internally or for customers but shouldn't appear in search results. The meta noindex and meta nofollow tags serve similar purposes but work differently: noindex keeps an otherwise crawlable page out of the index, while nofollow tells crawlers not to follow the links on a page.
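For example, a robots.txt that keeps all crawlers out of a hypothetical staging area (the path is illustrative) looks like this:

```
User-agent: *
Disallow: /staging/
```

By contrast, a page-level `<meta name="robots" content="noindex">` tag in a page's `<head>` lets the page be crawled but keeps it out of the index.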
Site security is also very important for technical SEO. Make sure your site is served over HTTPS rather than plain HTTP, which requires an SSL certificate. You can obtain one in a number of ways, including through HubSpot's SSL certificate.
- How Do Search Engines Work, and What Is the Point of a Search Engine Algorithm?
- Keyword Research and Keyword Targeting Best Practices
- Information Architecture and Internal Linking