Is Writing for Robots the Future of Online Content Creation? (Part One)


Have you ever wanted to find information on something but not known exactly what to search for on Google? We’ve all been there, eventually typing (after an almost headache-inducing process) something we’re convinced might fry the processing chip in our device as it tries to parse our gibberish. Yet somehow the search engine is able to present results that relate to, or directly answer, what you intended to find out. Quite amazing, right? Well, you’ve got Google’s investment in natural language processing (NLP), salience, and neural matching to thank for that.

What is Natural Language Processing?

The main thrust of NLP is Google (i.e. its robot) trying its best to understand the relationships and interactions between entities and content. Entities, in this case, refer to things like names, places, food, sports, proper nouns, regular nouns – basically the keywords you type into a search engine. Content doesn’t only refer to the text on a webpage but also the information provided, purpose, context, user intent, semantics, images, and more. The idea is for the robot to serve up search results that match the intent of the user even when the search phrase is ambiguous or difficult to parse. This is more crucial than ever considering how search has evolved to become more conversational, especially with developments like voice search. Natural language processing works to help the robot understand the nuanced and sometimes unclear ways people search. One of the main components of natural language processing is salience.
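Before moving on to salience, here’s what entity extraction looks like in practice. This is a minimal sketch using the open-source spaCy library – Google’s own pipeline is proprietary, so treat it purely as an illustration of the idea, with the example sentence invented for demonstration:

```python
# Illustrative only: spaCy stands in here for Google's (proprietary) entity extraction.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Serena Williams won the US Open in New York last September.")

for ent in doc.ents:
    # Each detected entity carries a label such as PERSON, GPE (a place), or DATE.
    print(ent.text, "->", ent.label_)
```

Names, places, and dates come out tagged as entities – the raw material the robot then relates back to a search.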

Salience is Golden

The best way to describe salience is that it’s how Google’s robot tries to make sure a heptagon peg goes into a heptagon hole rather than an octagon hole. From a certain angle or distance the two shapes look the same, but a closer study reveals the difference. In other words, salience is concerned with matching a search to the right results by looking at how much an entity relates to a possible result (a piece of content).

Salience extracts the entities from a piece of online content and asks, “In its entirety, how much does this relate to what the user has searched?” The “entirety” aspect is important, as it allows the robot to tell the difference between a small part of a piece of content matching the user’s search and the whole (or most) of the content being dedicated to it. I’m sure you’ve had the experience of searching for something on Google and clicking on a result, only to find that a single line on the page relates to your search, or barely relates to it at all.
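Handily, Google exposes a version of this idea through its public Cloud Natural Language API, which returns a salience score between 0 and 1 for every entity it finds in a document. Here’s a minimal sketch, assuming you have a Google Cloud project and credentials set up – and bearing in mind the public API isn’t necessarily the exact signal Search uses:

```python
# Sketch using Google's public Cloud Natural Language API.
# Setup: pip install google-cloud-language, plus Google Cloud credentials.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="Slow-cooked beef brisket with rosemary, thyme and garlic.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_entities(request={"document": document})
for entity in response.entities:
    # salience ranges from 0 to 1: how central the entity is to the text as a whole.
    print(entity.name, round(entity.salience, 3))
```

An entity mentioned once in passing scores far lower than one the whole piece revolves around – exactly the “entirety” test described above.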

As content creators, this is important to understand. It is tempting to create sales-driven content under the guise of evergreen or hygiene content in order to hit sales or lead-generation targets. What often happens in such cases is that only one paragraph is dedicated to the topic while the rest promotes a brand or product (in ways that are clever to us, but not to the robots), and therefore fails to satisfy the user’s intent. Google wants to avoid this and will lower the ranking of content that barely scratches the surface of what the user is searching for.

What is Neural Matching?

Neural matching allows the Google robot to find a connection between those vague search phrases and the content on websites. It makes it easier for Google’s robot to understand concepts and their relationship with keywords (by the way, phrases, keywords, search terms, queries and any combination of these are used interchangeably in this article to mean the words people type into a search engine to find information). According to Danny Sullivan of Google, neural matching is already used in around 30% of searches.

But what does all this mean for content marketing and content creators? Well, it means prioritising writing content for a robot over a human. GASP! Listen as dead poets turn in their graves and English Literature degree holders scream blue murder! Thou dost protest too much. Hear me out first…

Man vs Machine

Over the last decade, Google has grown far more sophisticated at understanding the relationships between what a person searches for, the content on a website, and the person’s intent. Back in the day, if a user searched “Why do I have bad reception?”, the robot would simply scan the internet and return webpages that mentioned that search term dozens of times. Now, the robot considers context and semantics and uses neural matching to provide the best results.

The robot will take into consideration things like the word “reception” being similar to “signal”, and the difference between reception in the context of devices versus a wedding reception or an office reception. It will look at other entities or keywords within the content, like device names, office, marriage, wedding, network providers and so on. With a better understanding of language and how people search, the robot can assume the user is referring to a device, not a bad experience at a wedding reception.
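One common building block behind this kind of disambiguation is word vectors, which place words that are used in similar contexts close together in a numeric space. As a rough illustration – again using spaCy rather than anything Google-specific, and with scores that depend entirely on the model used:

```python
# Illustrative word-vector similarity; the actual scores depend on the model.
# Setup: pip install spacy && python -m spacy download en_core_web_md
import spacy

nlp = spacy.load("en_core_web_md")  # the medium model ships with word vectors

reception = nlp("reception")
for word in ("signal", "wedding", "office"):
    # similarity() returns a cosine-style score between the two texts
    print(word, round(reception.similarity(nlp(word)), 3))
```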

Thanks to the expanded reach of Google’s Knowledge Graph, its robot’s increasingly human-like ability to grasp concepts, and its improved understanding of context, semantics, and natural language, online content creation has moved well past simply stuffing and repeating keywords dozens of times in an article to coax the search engine robot into ranking a webpage highly. Even the practices of using minor variants of a keyword and optimising for TF-IDF (term frequency-inverse document frequency) are no longer the major factors they used to be.
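For the curious, TF-IDF simply weighs how often a term appears in one document against how rare it is across a collection of documents. A minimal sketch with scikit-learn and a made-up three-document corpus:

```python
# Minimal TF-IDF sketch; the three "documents" are invented for illustration.
# Setup: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "beef recipe with herbs and a long cooking time",
    "wedding reception venues and catering ideas",
    "beef mince recipes for a quick weeknight dinner",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# For each document, print its highest-weighted term: one that is frequent
# locally but rare across the corpus.
for i, row in enumerate(tfidf.toarray()):
    top = row.argmax()
    print(f"doc {i}: {terms[top]} ({row[top]:.3f})")
```

A page can score well on this kind of term arithmetic and still fail the salience and intent tests described above, which is why it’s no longer the lever it once was.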

While having the phrases “beef recipe”, “beef recipes”, “meat recipe”, “beef mince recipes” and “how to cook beef” in your Beef Recipe article obviously helps make it salient to the search term “beef recipes”, the robot will also consider keywords or entities like cooking time, oven temperature, herbs, and preparation time when deciding how relevant the page is to the search. In Google’s ideal world, a user should be no more than one click away from having their query answered. That promise is one of Google’s selling points, and it invests heavily in its robot’s understanding of search to accomplish this objective.

What the above means is that content creators have to write with the demands of the robot in mind, which, thanks to improved AI narrowing the gap between human and machine readability, increasingly also means writing for humans.

Two Birds, One Nest

Dead poets, all is not lost! As much as I’ve bigged-up Google’s AI thus far, a naked Arnold Schwarzenegger with a metallic exoskeleton coming back from the future in some dark alleyway to prevent us from stopping the rise of the machines is still a long way away. The robot is still learning to contextualise and connect the relationship between entities and content. Content creators can help both the human user and robot by providing clear, informative content that is dedicated to a topic and written in such a way that there is no ambiguity as to the context and purpose of the piece. 

This development is actually something to celebrate for those in the content marketing and content creation business, because it means clients will look to experts in relevant fields to create their content. A writer with in-depth knowledge of a topic, or experience writing online content, will naturally and subconsciously include terms that contextualise the article. Add some knowledge of SEO, natural language processing, and salience, and you have a recipe for content that is optimised for search engine robots and satisfies user intent. In part two, I’ll go into more detail on exactly what online content creators can do to write content that is optimised for Google’s robot but still more than fit for human consumption.
