I’ve audited many publishing sites over the years - well over 50 at last count - and suffice it to say I’ve seen certain patterns emerge. Many publishing sites tend to suffer from the same types of SEO issues.
A while ago I tweeted a short thread where I listed these common issues. The thread took off and gathered a substantial number of likes and retweets, and I figured it would be worth expanding into a full newsletter edition.
So here are the most common issues I find when analysing news publishing sites for SEO:
Usually, a publishing site has its NewsArticle structured data implemented correctly. At worst, I tend to see a few missing recommended attributes, but generally the structured data for news articles is comprehensive and valid.
One thing the validation tools, like Google’s Rich Results Test, don’t check for is image sizes and aspect ratios. When a news article’s structured data references images that don’t adhere to Google’s recommendations, there is no warning or error in the validation tools.
As a result, many publishers don’t realise that their article images don’t match the guidelines. Referencing images whose size and/or aspect ratio don’t meet Google’s recommendations means that the article in question won’t be able to achieve its full potential in Google’s news ecosystem.
It may not rank well in Discover because Google can’t use a big banner image. The publisher logo may not be usable for Google, so brand recognition could suffer. Or the image may be entirely unusable for Google, in which case the article can’t rank in any news element on Google (as they all have a visual component).
I dedicated one of my earlier newsletters to the intricacies of images and news articles, which you can read here:
You should also read Google’s relevant documentation on their Article structured data page, which contains all the recommendations for correct image sizes and aspect ratios.
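To make this a bit more tangible, here’s a minimal sketch of the kind of image check the validation tools skip. The thresholds in it - a 1200px minimum width and 16x9 / 4x3 / 1x1 aspect ratios - are based on Google’s documented recommendations as I understand them, and the tolerance value is purely illustrative, so verify everything against the Article documentation linked above before relying on it.

```python
# Minimal sketch: flag article images that fall short of the recommended
# dimensions. The thresholds (1200px minimum width; 16x9, 4x3, 1x1 ratios)
# and the 1% tolerance are assumptions for illustration - verify them
# against Google's Article structured data documentation.

RECOMMENDED_RATIOS = {"16x9": 16 / 9, "4x3": 4 / 3, "1x1": 1.0}
MIN_WIDTH_PX = 1200

def check_article_image(width: int, height: int, tolerance: float = 0.01) -> list[str]:
    """Return human-readable warnings; an empty list means the image looks OK."""
    warnings = []
    if width < MIN_WIDTH_PX:
        warnings.append(f"Only {width}px wide; at least {MIN_WIDTH_PX}px is recommended.")
    ratio = width / height
    if not any(abs(ratio - r) <= tolerance * r for r in RECOMMENDED_RATIOS.values()):
        warnings.append(f"Aspect ratio {ratio:.2f}:1 is not close to 16x9, 4x3 or 1x1.")
    return warnings

print(check_article_image(1200, 675))  # 16x9 image at the minimum width -> no warnings
print(check_article_image(800, 800))   # square, but too small -> one warning
```

In practice you’d run every image referenced in your NewsArticle markup through a check like this as part of an audit crawl or your publishing workflow.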
While many publishers use article tags to some degree, there is usually little structure or strategy driving their approach. Tags are usually left to individual journalists’ discretion with little guidance on what constitutes appropriate use.
Moreover, internal linking to tag pages and other category pages is often non-existent. This is a huge missed opportunity.
There will be an entire upcoming newsletter dedicated to tags & topic hubs, as the topic deserves proper exploration. In short, tag pages serve the same purpose for publishers as product category pages serve for ecommerce websites: they show that you have articles/products in that category.
Such signals are important for search engines. When Google sees you have abundant content - be it articles or products - in a specific category, it serves as a strong sign that perhaps your content should rank for relevant searches.
Whereas if you have empty category pages, that sends the opposite signal: Google would be less likely to rank your content, as you lack substance in that area.
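To make the internal linking point a bit more concrete, here’s a minimal sketch that renders plain, crawlable links from an article to its tag pages. The /topic/ URL pattern and the naive slug logic are illustrative assumptions, not a prescription; the point is simply that every article should pass link value to the topic hubs it belongs to.

```python
# Minimal sketch: render plain, crawlable anchor links from an article to its
# tag pages. The /topic/<slug>/ URL pattern and the naive slugify() are
# illustrative assumptions - adapt them to your own CMS and URL structure.

def slugify(tag: str) -> str:
    """Very naive slug generation, for illustration only."""
    return tag.lower().strip().replace(" ", "-")

def tag_links(tags: list[str]) -> str:
    """Return an HTML fragment linking to each tag's topic hub."""
    return "\n".join(f'<a href="/topic/{slugify(tag)}/">{tag}</a>' for tag in tags)

print(tag_links(["Premier League", "Transfer News"]))
# <a href="/topic/premier-league/">Premier League</a>
# <a href="/topic/transfer-news/">Transfer News</a>
```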
So keep an eye out for future editions of my newsletter (subscribe here) where I’ll explore tagging best practices in detail.
I’ve previously covered internal linking, so make sure you read up on that here:
A site’s top navigation links are the clearest signals it can send to users - and, by extension, to Google - about what topics the site covers. Every top nav link is a big neon sign that says “This is what we write about!”.
I like it when websites use contextual subnavigation. This second layer of navigation links varies based on the site section the user is currently in, and adds extra signals about the site’s topical expertise.
Sadly, despite being such important signals, top navigation and subnavigation links aren’t always given proper consideration.
It’s a delight whenever I get to work with publishers who have given their taxonomy the care and attention it deserves. These publishers understand the importance of proper navigation labels.
When the principles of taxonomy and information architecture have been applied to a website, the foundation for SEO excellence is laid. Unfortunately, not every publisher has such a foundation in place.
As an SEO consultant, my remit to advise clients on taxonomies and IA is limited. I know the basics, but I’m not in a position to guide a publisher through a proper taxonomy and site structure project. At best, I can advise clients to engage professionals in that area to help them get these foundations laid.
When an article is published on a publisher’s website, it’ll receive link value from the site’s homepage and section pages when it is listed there. This link value is important as it gives the article a certain amount of authority, which will enable it to rank in Google’s organic search results.
Additionally, the number of articles on a section page is an indicator to Google about your topical expertise on that section’s topic. Simply speaking, more articles = more expertise.
Showing older articles beyond the first page of a section continues the flow of link value to these articles, which gives them a better opportunity to rank in organic search. That can be especially valuable for evergreen articles.
Additionally, older articles beyond the first section page add to the site’s perceived topical expertise on the section’s topic.
On many publishing sites, pagination of article lists on category pages is handled with Load More buttons or infinite scroll.
When crawling and rendering webpages, Google doesn’t perform any user actions - it does not ‘click’ on buttons, nor does it scroll down a webpage - so these Load More / infinite scroll features are never triggered.
This means that once an article drops off the first page of a section, it can become unfindable for Google through internal links. It effectively becomes an orphaned page, and doesn’t contribute to the site’s overall SEO as much anymore.
So pagination needs to be implemented in a way that is discoverable for Google. However, crawlable pagination needs to be implemented carefully as it can also be a source of crawl waste. We don’t want to create endless pages for Google to crawl historic articles.
There is a balance to be found with pagination between creating more work for Google and letting Google see more articles in a section. This is a fine line to tread, and the best implementation varies from publisher to publisher.
Google have provided their own recommendations for pagination. These are geared primarily towards ecommerce sites, but news publishers can take most of these to heart as well.
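Here’s a minimal sketch of what server-rendered, crawlable pagination for a section page can look like. The /page/N/ URL pattern and the window of five links are illustrative assumptions; what matters is that older pages stay reachable through plain anchor links Googlebot can follow without clicking or scrolling (a Load More button can still sit on top of this for users, as long as the links exist in the server-rendered HTML).

```python
# Minimal sketch: server-rendered pagination links for a section page, so that
# older articles stay reachable through plain <a href> links. The /page/<n>/
# URL pattern and the window of 5 links are illustrative assumptions.

def pagination_links(section_path: str, current_page: int, total_pages: int, window: int = 5) -> str:
    """Render a small window of numbered page links around the current page."""
    first = max(1, current_page - window // 2)
    last = min(total_pages, first + window - 1)
    parts = []
    for page in range(first, last + 1):
        href = section_path if page == 1 else f"{section_path}page/{page}/"
        if page == current_page:
            parts.append(f"<span>{page}</span>")  # current page, not linked
        else:
            parts.append(f'<a href="{href}">{page}</a>')
    return "\n".join(parts)

print(pagination_links("/sport/", current_page=3, total_pages=40))
```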
As Google is increasingly trying to determine a website’s quality signals, the so-called E-A-T factors come into play more and more.
E-A-T stands for Expertise, Authoritativeness, and Trustworthiness. These are factors that Google’s human search quality raters are tasked with evaluating websites for. The feedback gathered by these quality raters is used to improve Google’s ranking algorithms which are increasingly based on machine learning (ML) systems.
Due to the nature of ML systems, we are losing visibility into individual ranking factors and are heading towards a true ‘black box’ search engine environment. Basically, the ML systems rank websites without any human truly understanding what factors they use. We just know that it works.
That’s where the quality raters come in. By using humans to double-check and evaluate the rankings that ML systems generate, Google can finetune these systems and ensure the quality of SERPs continues to improve.
Because quality raters look for E-A-T signals and their assessments are fed back into the ML systems, we see E-A-T signals playing an increasingly large role in rankings. Websites that lack proper E-A-T signals tend to suffer whenever Google rolls out a major algorithm update (roughly every six months).
This is why I check for all the known E-A-T factors that quality raters look for. These signals are not secret in any way; Google publishes the guidelines that their quality raters are instructed to use.
The E-A-T factors that many publishers fall short on relate to transparency signals (author pages & bios), editorial policies, and ownership details.
The constant need to balance monetisation (ads, paywalls) with user experience is a tough challenge for almost all publishers.
On the one hand, publishers need to monetise their content or they might cease to exist altogether. On the other hand, on-page ads (the simplest and most scalable form of monetisation) make webpages slower and also cause all kinds of privacy concerns.
There is no easy answer here. Every publisher has to determine for themselves what their best approach to monetisation is.
Personally, I like freemium models where most news is accessible for all but readers are encouraged to sign up for a paid subscription that allows access to more and/or ad-free content.
What I tell my clients is that there’s no such thing as ‘too fast’ when it comes to an article’s ad-free load speed.
When optimising an article’s load speed, before any ad slot is loaded, I encourage clients to aim for half a second or faster. The article itself should load almost instantaneously before any ad tech comes in and starts throwing huge JavaScript payloads at the browser.
The article is what the publisher can control themselves, so that needs to be as fast and user-friendly as possible. Then, when the ad networks crap all over the article with whatever junk they’re serving to users, at least the baseline load speed is fast and allows users to read the content straight away.
Every news site is different. I try to approach every SEO audit with an open mind and (as much as I am able) without preconceived notions. Many sites do the big things well, but lose ground on the details. And those details add up to make a big impact.
Like most things in life, there's not one big change you can make for the better. It's about doing little things marginally better every day.
In every audit report’s introduction, I start with the concept of marginal gains. James Clear explained this concept in his Atomic Habits book, and I love how it can be applied to SEO as well.
This concept helps publishers come to grips with the day to day realities of improving their website. It’s not about making huge complicated changes that suddenly make everything better. Those ‘silver bullet’ solutions don’t exist.
It’s about doing things a little bit better, every single day. Every new improvement builds on every previous improvement, aggregating over time to boost your website in every meaningful metric.
It’s been a while since the last newsletter, and lots has happened in the wonderful world of SEO and publishing. Here are some of the best bits of news and content from the past few months.
Google published and/or updated their documentation in a number of areas: