The Top 36 SEO Insights of the Year from Google’s John Mueller

For the uninitiated, Google’s Webmaster Trends Analyst John Mueller generously runs Hangouts every other week where SEO professionals can get their burning questions answered.

In 2018 alone, Mueller has provided us with hundreds of noteworthy answers that continue to inform our understanding of SEO.

Read on for 36 of Mueller’s most important insights of the year, categorized into key SEO topic areas.


1. Sites Still Waiting to Be Switched over to Mobile-First Indexing Aren’t Necessarily Unready

Although a large proportion of sites have now been switched over to mobile-first indexing, sites that have yet to be moved are not necessarily unready for it.


2. After a Site Has Been Switched to Mobile-First Indexing, Content Parity Won’t Be an Issue Because the Desktop Version Won’t Be Used for Indexing

Content parity across mobile and desktop versions of a site is important in order for it to be switched over to mobile-first indexing.

Once the site has been switched over, parity is less of a consideration because it will only be the mobile version that will be used for indexing.

3. Make Sure Mobile Versions Aren’t Oversimplified & Include Relevant Content That Helps Google Understand the Page

The mobile version of a site shouldn’t be oversimplified to the point that it impacts Google’s ability to clearly understand what the site is about.

Make sure to retain relevant content, descriptive elements, and image alt text on the mobile version of a site’s pages.



4. There Are No Differences in How Content in the HTML Is Treated Depending on Whether It Is Visible by Default

Google doesn’t treat content in the HTML differently depending on whether it is visible on the page by default.

It is no problem to include secondary content that is hidden on page load as long as it is in the HTML.


5. Use Rel=Canonical over Noindex for Duplicated Content

When a page is noindexed, all of the signals associated with that page are dropped.

For duplicate content, Mueller instead recommends a rel=canonical pointing to the preferred version of the page, so that the signals from both versions are combined rather than dropped.
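As a minimal sketch, the canonical hint is a single link element in the head of the duplicate page (the domain and path here are placeholders):

```html
<!-- On the duplicate page: consolidate signals at the preferred URL -->
<head>
  <link rel="canonical" href="https://example.com/preferred-page/">
</head>
```

Unlike noindex, this leaves the duplicate eligible for crawling while telling Google which URL should collect the ranking signals.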


6. Google Can Index Duplicate Pages But Only Shows the Most Relevant One

For duplicate pages that exist across different sites with no canonicalization in place, Google will index both versions but will only show one in the search results, chosen on the basis of relevance and personalization factors such as location.



7. Ensure That Scripts in the Page Header Don’t Close It Prematurely

Scripts which insert non-head elements (like divs and iframes) into the page header can cause it to close prematurely.

This could mean that Googlebot isn’t able to crawl hreflang links because it assumes the head has already closed.

You can check for this potential issue by testing pages with Google’s Rich Results Test and inspecting the rendered code.
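A hedged illustration of the failure mode, with placeholder URLs — a script that writes a non-head element such as an iframe into the head can cause parsers to treat everything that follows as body content:

```html
<head>
  <title>Example page</title>
  <!-- A widget script that writes an <iframe> into the head: -->
  <script>
    document.write('<iframe src="https://example.com/widget"></iframe>');
  </script>
  <!-- Parsers may assume the <head> closed at the iframe,
       so this hreflang annotation can be missed: -->
  <link rel="alternate" hreflang="de" href="https://example.com/de/" />
</head>
```

Moving such scripts below the head, or below any critical link and meta elements, avoids the problem.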


8. Googlebot Doesn’t Use Cookies When It Returns to Crawl a Site

Googlebot won’t replay any cookies provided when it returns to crawl a site.

If you use cookies to group users for A/B testing, make sure that Googlebot is put in the same group so that it sees a consistent version of the pages on your site.


9. Crawl Budget Is Only an Issue for Large Sites

Googlebot is able to crawl sites with “a couple hundred thousand pages” just fine.

Only large sites with a higher volume of pages should be concerned about crawl budget issues.



10. Serving a 410 Response Can Remove Pages from Google’s Index Quicker Than a 404

In the long term, 410 and 404 responses have the same impact: both remove a page from the index and cause the URL to be crawled less frequently.

In the short term, however, a 410 error may cause a page to be removed from the index a couple of days faster compared to a 404.
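If you want a removed URL to return a 410 rather than a 404, one way to do it on an Apache server is the mod_alias Redirect directive with the gone status (the path below is a placeholder):

```apache
# .htaccess: return 410 Gone for a permanently removed page
Redirect gone /old-page.html
```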


11. Linked Pages Blocked by Robots.txt Can Still be Indexed

Google can still index pages that are blocked by robots.txt if they have backlinks, because the robots.txt block only prevents crawling, not indexing.

Mueller recommends using the noindex tag on these pages instead.
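As a sketch, the tag is a single meta element — and note that for Google to see it, the page must not also be blocked from crawling:

```html
<!-- In the <head> of pages to keep out of the index.
     The page must NOT be disallowed in robots.txt,
     or Googlebot will never fetch the page and see this tag: -->
<meta name="robots" content="noindex">
```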


12. Indexing Can Be Delayed If Canonical Tag Points to Redirect Instead of the Preferred Version

Make sure that canonical tags point to the preferred version of a page rather than to a URL that redirects to the preferred version.

Canonicalizing to a redirect may mean that it takes longer for Google to index the preferred version of the page.



13. Serving Critical JavaScript in the Head Can Block Rendering

JavaScript that is deemed to be critical is likely to be of a significant file size.

As such, critical JavaScript should not be served in the head because this may delay rendering, meaning the user will have to wait longer to see any content.

Priority content should be served to users as quickly as possible.


14. Show Fully Rendered Pages to Googlebot Using Dynamic Rendering

By implementing dynamic rendering you can serve the fully rendered version of a page to Googlebot.

Doing so will overcome the delay between the initial indexing and rendering of a page.


15. There Will Be a Delay Between Initial Indexing & Rendering for the Foreseeable Future

Rendering JavaScript is a resource-heavy process and can’t happen at the same time as the initial indexing of the page with Google’s current system.

Therefore, there will be a delay between the initial indexing of the HTML and rendering for the foreseeable future.



16. PageRank Is Still Passed for Links That Aren’t Visible by Default

Mueller confirmed that Google processes and passes PageRank for all links in a page’s HTML, regardless of whether the links are visible by default on the page.


17. Bad Internal Linking on Mobile Can Affect How Pages Are Indexed & Ranked

Monitor the internal linking on the mobile version of your site; if done poorly, it may impact how Google indexes and ranks pages after the site has been switched over to mobile-first indexing.

You can use tools to check internal linking on mobile by crawling sites with a mobile user-agent.


18. Google Ignores Links Both Manually & Algorithmically

Google uses algorithms to ignore links as well as manual methods.

Google’s Web Spam Team can take manual action to block sites from passing PageRank.



19. Multiple Options Available to Serve Different Country Versions of Product Pages

Mueller says that there are several options when it comes to serving different country versions of product pages.

Having all versions on the same page and using IP-based redirects to serve the correct version will mean Googlebot, which mostly crawls from U.S. IP addresses, will only ever see the U.S. version.

Using separate landing pages with hreflang will mean that the value of the page is diluted.

Another option is to serve country-dependent elements using JavaScript and to block them from being crawled.


20. Ensure Each Country Version of a Page Has a Clear Canonical & Hreflang Is Implemented Between Equivalent Canonical Versions

Mueller recommends that you don’t specify canonicals between equivalent country versions of a page because Google will probably only index the preferred version.

Instead, he stated that there should be a clear canonical version for each country version and hreflang should be implemented between each equivalent canonical version.


21. Hreflang Is a Minor Canonicalization Signal

When Google chooses the canonical version of a page, hreflang is used as a minor signal.

Hreflang should be reinforced with consistent signals in the form of:

  • Sitemap files.
  • Internal linking.
  • Rel=canonicals.
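A hedged sketch of consistent annotations on one language version (all URLs are placeholders) — each version self-canonicalizes and lists every equivalent, including itself:

```html
<!-- On https://example.com/en/page/ -->
<link rel="canonical" href="https://example.com/en/page/">
<link rel="alternate" hreflang="en" href="https://example.com/en/page/">
<link rel="alternate" hreflang="de" href="https://example.com/de/page/">
<!-- The German page must carry the same hreflang pair pointing back,
     plus its own self-referencing canonical, and the same URLs
     should appear in sitemaps and internal links. -->
```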


Structured Data

22. Structured Data Is Only Used by Google If a Site Is Considered High Quality & Trustworthy

Google needs to consider a site as high quality and trustworthy before it will display structured data as rich snippets in the search results.

In addition to this, structured data needs to be added correctly from a technical perspective and applied to relevant content.


23. Date Published & Modified Structured Data Recommended for Articles

Mueller recommends adding date markup for articles as this enables Google to extract the correct date from the page more easily.
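A minimal sketch of such markup using schema.org Article in JSON-LD (headline and dates are placeholder values):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article headline",
  "datePublished": "2018-03-01T08:00:00+00:00",
  "dateModified": "2018-06-15T09:30:00+00:00"
}
</script>
```

The dates in the markup should match the dates visible on the page itself.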


24. Make Sure Ratings Markup Reflects Actual User Feedback & the Primary Page Topic

For Google to display ratings markup in the search results, it should be reflective of the primary page topic and actual user feedback.

For example, ratings markup on a page about a car model should be about that specific car, and the ratings should be open for anyone to submit, not handpicked testimonials.


Site Architecture

25. Page Importance Is Determined More by Click Depth Than URL Structure

When determining how important a page is, Google gives more weight to how many clicks it takes to get to a page (click depth) rather than what the URL structure looks like.


26. Using Subdirectories over Subdomains Is Recommended Unless Pages Are Markedly Different from Rest of Site

Google sees subdomains and subdirectories in much the same way.

However, Mueller recommends keeping pages together on the same site in most cases and using a subdomain when these pages are substantially different from the rest of the site.


27. New Content Should Be Linked High up in the Site Architecture

It can make a big difference for Google if new content is linked to from the homepage or from high up in the site architecture.

For example, you may want to consider adding a “New Products” or “New Articles” section on a site’s homepage.



28. Google Uses User Experience Metrics to Determine Page Speed

With the Page Speed update, Mueller revealed that Google now uses actual user experience metrics from Chrome to determine page speed, alongside a number of other factors.


29. Google Uses a Scale to Assess Page Speed

With the Page Speed update, Google now assesses speed on a sliding scale: the faster a page is, the more that can be taken into account.

Prior to the update, Google only differentiated between pages that loaded within a normal time range and very slow pages.


30. Improve Page Speed by Loading Scripts After Rendering

Load time can be improved by embedding scripts so that they load after the page has rendered.

This is especially useful if the scripts are only required for functionality.
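One common way to do this, as a sketch (the script path is a placeholder), is the defer attribute, which downloads the script in parallel but runs it only after the document has been parsed:

```html
<!-- defer: fetch in parallel, execute after parsing completes.
     async is an alternative for scripts that don't depend on
     DOM order or on other scripts: -->
<script src="/js/analytics.js" defer></script>
```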


Relevance & Quality

31. Focus on Improving the Relevance of Sites That Suffered After August 1 Update

Mueller recommends that sites hit by the August 1 update should focus on improving their pages so they are more relevant for searchers.

However, it is worth keeping in mind that it can take Google’s algorithms some time to understand the impact of changes in the relevance of pages and to the site as a whole.


32. Consider Noindexing Low-Quality, User-Generated Content

Sites that feature a lot of user-generated content should look into tracking pages which are of low quality, so that rules can be set up to noindex these pages.


33. Google Only Takes Indexed Pages Into Consideration When Assessing Site Quality

Noindexed pages don’t reduce Google’s assessment of the overall quality of a site, as it is only indexed pages that are taken into consideration.



34. Lazy-Loading Images Should Use the Noscript Tag or Markup

In order to help Googlebot process lazy-loading images, Mueller recommends using a noscript tag or marking them up with structured data.
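A hedged sketch of the noscript approach (image URLs and class names are placeholders) — the noscript fallback gives crawlers a plain, crawlable img element:

```html
<!-- Lazy-loading pattern with a crawlable fallback -->
<img data-src="/images/photo.jpg" class="lazyload" alt="Example photo">
<noscript>
  <img src="/images/photo.jpg" alt="Example photo">
</noscript>
```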


35. Don’t Forget to Redirect Old Image URLs When Migrating to a New CDN

Remember to redirect old image URLs to the new URLs when migrating to a new CDN, as well as updating the embedded links in the webpage.

This is especially important for images because they aren’t crawled as regularly as webpages, so all signals should be aligned so that Google can process the URL change as quickly as possible.


36. Images Need an ‘A’ Tag with an ‘Href’ Attribute to Be Seen as Links

An image is only considered a link by Google if it is wrapped in an ‘a’ tag with an ‘href’ attribute.

For example, using an image tag to point to an image on another site won’t be seen as a link by Google.
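The contrast, sketched with placeholder URLs:

```html
<!-- Treated as a link by Google: -->
<a href="https://example.com/page/">
  <img src="https://example.com/image.jpg" alt="Example">
</a>

<!-- NOT treated as a link: an <img> alone just embeds the file -->
<img src="https://other-site.example/image.jpg" alt="Example">
```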


A massive thank you to John Mueller for his dedication and tireless work in answering countless questions about SEO and for keeping us up to date with Google’s latest developments in search.


