Quick Website SEO Fixes Every Webmaster Should Do

To many webmasters and website owners, search engine optimization can seem like a laborious, even risky, effort to remedy issues lingering within a website. They often rely on SEO consultants or members of their digital marketing or IT teams to get those issues fixed.

But there are also things that are simple enough to discover and easy to fix.

Like a car that carries you from origin to destination and gets the job done, your website may seem to be working fine: visitors see no error pages or broken links. But a car can also have internal troubles that ruin its long-term roadworthiness: a leaking radiator, squeaky belts or faulty spark plugs. Likewise, a website might look okay and load fine while internally showing signs of trouble, at least from an SEO perspective: slow loading speed, unintentional blocking of search crawlers in robots.txt or an out-of-sync XML sitemap.

SEO is hardly a discipline defined by quick wins and instant gratification. Many SEO tasks take time, often a significant amount, before results show: migrating to HTTPS, optimizing server performance and on-site scripts to improve loading time, or optimizing for rich snippets.

However, there are SEO fixes that don't take long to implement but can significantly improve your website's performance. These fixes are no substitute for the elements essential to ranking (inbound links, quality content and relevance signals such as RankBrain); without those, the quick wins alone won't bring your website the results you want. But even with a decent number of links and good-quality content, your website might not reach its goals unless you also complement them with these quick fixes.

Step 1: Make sure robots.txt doesn't block important search engine crawlers
If search engine crawlers cannot reach your website, the pages you wish to rank cannot get indexed, so this is where our series of fixes should start. Check whether your website has a robots.txt file (websites work fine even without one) by requesting robots.txt at your website's root. So if your website is mywebsite.com, access it at www.mywebsite.com/robots.txt. If the file exists, examine whether it contains the following code:

User-agent: *
Disallow: /

A disallow directive explicitly tells search engine crawlers (or robots, hence the filename) not to crawl the indicated path, and "Disallow: /" blocks the entire site. This is helpful for sites still in development and not yet ready for launch. Telling robots not to crawl these pages also generally keeps them out of the search engine index and, ultimately, off search results.
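
For comparison, here is a sketch of a robots.txt that blocks only a single folder rather than the whole site; the /staging/ path is a hypothetical example and should be replaced with whatever directory you actually want kept out of search results:

User-agent: *
Disallow: /staging/

Everything outside the disallowed folder remains open to crawlers.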

Although this routine helps you check whether the file blocks crawlers from entering the website, a blocking robots.txt is not the only reason websites fail to appear in search results. It can also be a meta robots directive, a site penalty or other causes.

Look at this Saudi Arabian news website whose pages do not appear in search engine results.

It has a robots.txt file whose content does not block crawlers, yet a search with the site: operator still yields no results:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Again, if your website has no robots.txt file, that's fine: the file is optional and only necessary if you wish to block certain pages from being crawled. Without it, all pages within the website are open for crawlers/robots to scan.
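
If you do keep a robots.txt file, a minimal, permissive version might look like the sketch below; the sitemap location is an assumption based on the mywebsite.com example used earlier, and the empty Disallow line means nothing is blocked:

User-agent: *
Disallow:

Sitemap: https://www.mywebsite.com/sitemap.xml

The Sitemap line is optional but conveniently points crawlers to the XML sitemap we will check in Step 3.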

Step 2: Check if a crawl simulation captures your site’s text content
With crawling access confirmed, we can now check that crawlers can not only reach our pages but also scan their content unimpeded.

Our objective, obviously, is for our websites to gain better search engine visibility, and achieving it starts at the point where search engines discover and read our web pages. A crawl simulation shows a page roughly the way a text-only crawler sees it; if important text is delivered only through images, Flash or heavy client-side scripts, it may not be captured at all.

There are plenty of free crawl simulators out there; Webconfs and SEOChat offer good examples. But for more authority, don't miss Google Search Console's Fetch as Google feature.
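
If you prefer the command line, a rough approximation is to fetch the raw HTML with a Googlebot user-agent string and search it for a phrase that should appear on the page. This is only a sketch: it assumes curl is installed, uses mywebsite.com and the quoted phrase as placeholders, and does not render JavaScript the way Google's own tools do.

curl -s -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://www.mywebsite.com/ | grep -i "a phrase from your page"

If the phrase comes back in the output, the text exists in the server-delivered HTML; if not, the content may be loaded by scripts or embedded in images, which crawlers can miss.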


Step 3: Ensure XML Sitemap is updated
An XML sitemap is a file that contains a list of your website's URLs, created to help search engines find those pages as they crawl for content. If your website has plenty of content, an XML sitemap is an even more desirable tool to have, especially if that content is updated often.

Just like robots.txt, you can access your XML sitemap at the root folder of your website. However, it is also possible to have multiple XML sitemaps across your website.

Check whether the most recent entry in the XML sitemap reflects the latest page update you made. It could be a new blog post, the latest news item published or a newly added static page.
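
For reference, a minimal entry in the standard sitemaps.org format looks like the sketch below; the URL and date are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.mywebsite.com/blog/my-latest-post/</loc>
    <lastmod>2018-06-01</lastmod>
  </url>
</urlset>

If your latest pages are missing or the lastmod dates are stale, regenerate the sitemap (most CMS plugins do this automatically) and resubmit it in Google Search Console.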

Step 4: Ensure the HTTP and HTTPS versions of your URLs resolve to one version
In case you have just switched to HTTPS (some web hosts actually do this for free), check whether typing http:// at the beginning of the URL redirects to the https:// version instead of triggering a "Not Secure" warning in the Chrome browser, which can create an off-putting impression on visitors.

If your website runs on an Apache server, you can update the httpd.conf file, or the file where your virtual host is defined, and add these lines to redirect HTTP to HTTPS (the R=301 flag makes it a permanent redirect, which is what you want for SEO):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}/$1 [R=301,L]

Or update your .htaccess file to fulfill the same purpose; RewriteEngine On must also be enabled there, and the RewriteCond keeps the rule from redirecting pages that are already on HTTPS:

RewriteCond %{HTTPS} off
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R=301,L]

If you are using WordPress, there are also plugins that do the same job.
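
Whichever method you use, you can quickly verify the redirect from the command line (assuming curl is available; the domain is a placeholder):

curl -I http://www.mywebsite.com/

The response should show a 301 status and a Location header pointing to the https:// version of the same URL.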

Step 5: Check if Google Search Console has issued critical errors
With plenty of tools out there, Google Search Console can be taken for granted (I think a design revamp is long overdue), but it contains plenty of useful diagnostic tools that come in handy for webmasters. It categorizes errors such as soft 404s, server errors and page-not-found errors by device type.


Step 6: Ensure there is little or no duplicate content
Duplicate content occurs when the exact same content appears on multiple URLs, forcing search engines to guess which is the 'official' URL to rank for relevant queries. This may result in search engines indexing the URLs they consider the original or most comprehensive version, which may differ from the pages we actually want to rank.

This can be caused by URL structures that use multiple parameters, multi-page search results, inconsistent use of lowercase and uppercase letters, separate versions for mobile devices, redundant category or tag pages, or printer-friendly versions. It's not bad to create those versions for the sake of better user experience, but make sure to establish an "official" URL that search engines can recognize.

Fixing the issue can involve any or all of the following:

a. Establish a rel="canonical" tag to indicate which URL search engines should acknowledge. This can be applied to all pages that display the same or similar content, for example:

<link rel="canonical" href="http://www.sony.com.hk/zh/electronics/playstation">

b. Apply a meta robots tag on duplicate pages, which typically occur on category, tag or search-result listings served across multiple pages:

<meta name="robots" content="noindex,follow">

c. Use Set Preferred Domain under Google Search Console.
This tells Google which version (www or non-www) to display in search results and consolidates signals away from the other version.

d. Implement discipline in internal linking practices, including adopting a standard way of linking. For example, will you link with www or without it? With https rather than http, now that your site runs on HTTPS? A small sketch of a consistent internal link follows this list.
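
As a small illustration of that discipline (mywebsite.com is again a placeholder), every internal link would consistently point to the one canonical form of the URL:

<a href="https://www.mywebsite.com/blog/">Blog</a>

rather than mixing http://, non-www or trailing-slash variants of the same page across templates.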

Step 7: Check if your website passes the mobile-friendly test
If you check your analytics report, you might see more visitors using mobile devices than desktop computers, so it makes sense to take care of the mobile-friendly version of your website. Use Google's Mobile-Friendly Test tool to see whether your pages pass.
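
One of the most common reasons a page fails the test is a missing viewport declaration; a typical fix, assuming your layout is already responsive, is the standard tag below placed in the page's <head>:

<meta name="viewport" content="width=device-width, initial-scale=1">

This alone won't make a non-responsive design mobile-friendly, but without it even responsive pages can render as a zoomed-out desktop view on phones.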

Step 8: Check if your website loading speed is good enough

Just like the car in the analogy I shared earlier, the website might load just fine from our end. But for other users it might take longer, and that's not a good sign to search engines, especially Google, which has been preaching about user experience of late: a long wait for a site to load is not good UX. Test your website's page speed using the PageSpeed Insights tool under Google Developers.
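
For a quick, rough check from your own terminal, the sketch below (assuming curl is installed and using a placeholder domain) reports the total response time; note that it measures network and server response only, not full rendering in a browser:

curl -o /dev/null -s -w "Total time: %{time_total}s\n" https://www.mywebsite.com/

PageSpeed Insights remains the better gauge because it also flags specific fixes such as image compression and render-blocking scripts.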

Step 9: Make sure there are no Meta content issues
In my opinion, page titles remain the most important on-page factor, so let's not waste the opportunity for every page to stand out and be distinguished from the other pages on the website.

In Google Search Console, under Search Appearance > HTML Improvements, you'll find what Googlebot has discovered: duplicate page titles, overly long or short meta descriptions and non-indexable content. Take time to look through these, especially if your website has just undergone a significant design or content makeover.
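
As a quick sketch of what distinct meta content looks like (the title and description below are made up for illustration), each page should carry its own title and description rather than reusing a site-wide default:

<title>Quick SEO Fixes for Webmasters | My Website</title>
<meta name="description" content="Nine quick SEO fixes covering robots.txt, XML sitemaps, HTTPS redirects, duplicate content and page speed.">

Keeping titles roughly under 60 characters and descriptions under 160 helps prevent truncation in search results, though these figures are guidelines rather than hard limits.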

Depending on the severity of the issues you discover and the amount of time it takes to fix them, the entire exercise should be well worth the effort and will take your website in the right direction.