Cut Through!
Why is communications so difficult today? Pt 4: AI Transformers, Rise of the Machines

How AI is changing communications

In this, the last of my series on how our media and information environment has changed, I’m looking at the impact of AI. There is too much to say in one article, so I have ended up splitting it in two.

In this article, I look at how AI is changing communications, why, and where we might be going next. Next week, I’ll look at the practical steps communicators should be taking this year to respond.


In 2017, eight Google researchers released a paper titled “Attention Is All You Need”. It was nine pages long and buried among hundreds of technical papers published that year.

It received little attention at the time, but it was the Big Bang moment of modern AI.

It proposed a new way of structuring neural networks in artificial intelligence systems. The authors had solved a problem that had eluded machine learning for decades: how to generate human-like language at scale.

The model it proposed was called the Transformer. It is the foundational technology behind every large language model (LLM) that is reshaping communications today.
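To make "attention" slightly less abstract, here is a toy sketch of scaled dot-product attention, the paper's core operation, in plain Python. The numbers are made up and this is an illustration of the idea, not the authors' code:

```python
import math

def softmax(xs):
    """Turn a list of scores into weights that are positive and sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each position in a sequence takes a
    weighted average of every position's value, weighted by how similar
    its query is to each key."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Three toy two-dimensional "token embeddings"; self-attention uses the
# same vectors as queries, keys and values.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(tokens, tokens, tokens)
print(len(out), len(out[0]))  # 3 positions in, 3 context-aware vectors out
```

Stacking many layers of this operation, trained on internet-scale text, is essentially how modern LLMs turn a sequence of words into a prediction of the next one.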

In this series, I have been looking at how the technological leaps of the last 25 years have created a more D-FACC information environment (Democratised, Fragmented, Abundant, Corroded and Concentrated).

AI puts this on steroids.

Here’s my assessment of what AI means for communications:

1. Democratisation and abundance: Drowning in a sea of synthetic content

While search and social media created a more democratic information system, in which anyone could find or publish content, they didn't mean anyone could create content. But with generative AI, almost anything can now be created instantly by anyone.

Visual content that only a few years ago would have been the preserve of the biggest Hollywood studio budgets can now be produced in seconds by children for next to nothing.

When social media collapsed publishing and distribution costs, the volume of content boomed. Now that creation and production costs are almost zero too, the volume of content is exploding again.

Much content is no longer being produced because someone has something important to say. It is being produced simply because it is cheap to generate.

According to Reuters, some estimates suggest that the majority of content being produced on the internet today is already created by artificial intelligence. Automated news, photo-realistic images, new music, videos of any and every genre are flooding the web.

It is becoming more difficult every day for any communications team to make its message stand out from the background noise.

When even great content is being drowned in a sea of AI slop, how can you avoid your visibility being reduced to virtually zero?

2. Fragmentation: Tailored content at scale

One of the responses from communications teams is going to be ever more tailored and personalised content.

To cut through, people will use the enormous data crunching and production power of AI to develop content which is more tailored for each individual: more useful and more emotionally resonant to them personally.

Here are three examples of where this is happening right now in communications:

First, you can train your AI tools on customer data so that you’re able to apply an ‘audience filter’ to any communications task. For example, select a filter for “retirees struggling with the cost of living” and get a plan tailored for their preferences, platforms and preferred messengers.

Second, you can work with research companies offering “a digital twin” - a real-time simulation of the views of a specific audience segment. Communicators can test two messages instantly and understand which works best, or hold a conversation with an AI focus group to understand how best to position an announcement.

Third, you can bypass your communications and marketing team altogether. If you tell Meta your communications objectives and budget, its AI engines will identify the right audience, create the messages, test variants, optimise them for each user and report the results for you.

This isn’t science fiction: these tools are all available to brands today.

Creating and distributing ever more tailored content is going to become a more important skill for communicators. But it’s also going to raise more challenges because it will mean we will each live in even more fragmented information environments, with shared reference points eroding further.

What your AI assistant says to you will be different to what it says to me. You might get different facts, different framing, different explanations. The reasons why will be driven by our data, but will be totally opaque to us both.

To give you a sense of where we might be heading, here’s one possible future.

The online world we know today is splitting in two - between tasks and entertainment.

Each task or goal we undertake on the internet - doing a tax return, ordering food, planning a holiday - could in future be done by an AI agent. If it’s your turn to arrange ‘date night’ then an AI agent might check your diaries, search your photos to see what restaurants and food you both like, and book a table for when you’re both free. Who knows, maybe it will even contact a solicitor the next morning if it goes badly.

Meanwhile, we will use the time saved doing these online tasks to spend more time being entertained by content produced and curated by AI. That AI will have access to unmatched behavioural data - it will know what you react to, share, or scroll past. And it will know what you might buy or be persuaded of. It will micro-tailor messages and recommendations for every moment and your every mood.

If all this sounds like an episode of Black Mirror, then check out this promo for Visa Intelligent Commerce - “Using AI to Buy!”

We are creating automated personal persuasion engines. And as the saying goes: “if you are not paying for the product, then you are the product”.

3. Corrosion: An AI feature not a bug

AI will inevitably lead to a further corrosion in our information environment. To understand why, it’s necessary to know something about the underlying technology and its intrinsic limitations.

Fundamentally, LLMs are predictive text machines. Because they are trained on enormous amounts of data (pretty much the entirety of the internet) they are extremely good at guessing what the next word in a sequence should be.

But that is all they are doing: predicting the next word in a sequence based on everything they have seen.

If you ask, “Who was the first man on the moon?”, a large language model will almost certainly reply “Neil Armstrong”. But it doesn’t know what the moon is or that Neil Armstrong was a person. It is just calculating the most likely next words.

This is integral to understanding AI risks and hallucinations. AI systems are optimised for believability, not accuracy. They generate very convincing human-sounding text.

But they don’t tell you what the answer is. They tell you what a plausible answer sounds like.
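The "predictive text machine" idea can be sketched with a toy word-counting model. This is a deliberately crude illustration (real LLMs use neural networks trained on vastly more data), but the principle is the same: predict a plausible next word, nothing more.

```python
from collections import Counter, defaultdict

# A toy predictive text machine: count which word follows which in a
# tiny made-up corpus, then always pick the most common continuation.
corpus = ("the first man on the moon was neil armstrong . "
          "the first man on the moon landed in 1969 . "
          "neil armstrong was an astronaut .").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word that most often followed this one in training."""
    return following[word].most_common(1)[0][0]

# "Answer" a prompt one predicted word at a time
sentence = ["moon"]
for _ in range(3):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # moon was neil armstrong
```

The model produces the right answer without knowing anything about the moon or Neil Armstrong; it is simply echoing the statistics of its training text, which is the essence of the hallucination problem at any scale.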

There are two key ways that this will reduce quality and make fake content more likely.

First, AI slop. Because LLMs are trained on what works well at scale, they gravitate towards what an average or standard good answer sounds like. But they struggle to produce original or inventive work, even when prompted to do so.

You can already spot it in half the posts on LinkedIn. Intros that start with a question. Generic ‘rule of three’ lists. Surface-level analysis. (See what I did there?)

AI tools are designed for the clickable content that engagement-driven algorithms favour. And so the clickbait and empty rhetoric rise to the top while the original writing sinks in the slop.

The second corrosive trend is fluent falsehoods. It used to be fairly easy to spot the person emailing you with a scam. Disinformation often looked amateurish. Articulate and coherent language could be a useful proxy for assessing likely knowledge and expertise.

Today, audiences cannot rely on language quality to assess credibility. LLMs are optimised for plausibility and fluency. They will make an answer sound professional and authoritative even if the underlying content is nonsense.

And it’s not just written text. The person who sounds like your Chief Executive on the phone, the politician making the outrageous statement online, or the video call from your grandson who needs an urgent loan - they simply may not exist at all.

In a world where total nonsense can sound convincing, people’s propensity to trust even authentic information will reduce further.

And that matters to communicators: trust is becoming harder to earn precisely because falsehoods are becoming more convincing.

4. Concentration: Power-hungry tech and tech bros

While social media had very strong network effects, the barriers to entry for new social media platforms were really quite low. That is very different with AI.

Building a foundational AI model (and the vast data centres that power its answers) requires enormous amounts of capital: tens if not hundreds of billions of dollars.

The most capital intense race in business history is happening right now.

Perhaps for the first time, this will put more power in the hands of companies than of nation states. Every aspect of our lives is going to be influenced, if not governed, by AI. Yet a medium-sized nation state like the UK, despite being one of the top ten economies in the world, could not afford to develop an AI model of its own. There is little prospect of an AI equivalent of public service broadcasting.

The incredible costs of AI will therefore result in an information environment that is not just more automated, but more centralised. This series began with the story of how Google Search swept away the traditional gatekeepers of information, and it ends full circle with Google crowning itself as a new information gatekeeper.

AI has changed Google from a search engine to an answer engine. People used to search for information, click on the links and find the answer. Now they are presented with the answer in an AI summary. This is already having a real-world impact on news sites. Google search traffic to publishers declined by one-third in the year to November, according to Chartbeat data.

The decisions about which sources are credible, what to summarise, and what to omit altogether are being shaped by opaque AI models with incentives that communicators cannot fully understand.

AI is different from previous technological leaps in communications. It is not a new channel for communicators to master: it will be an almost invisible layer mediating almost every interaction between your organisation and its audiences. In the next episode, I’ll look at how we might navigate this.

As with search and social before it, influence will flow through whoever owns the infrastructure. How will AI investors get their billions back? I suspect part of the unspoken deal will be the same as for every other new medium - by selling your ears and eyeballs.

The consequences for communicators

Across this series, I’ve argued that communications feels harder today because our information environment has been changing more quickly than our communications practice.

Search dismantled the old gatekeepers and democratised access to information. Social media and the smartphone turned information into an endless, habitual scroll. Algorithms fragmented our information environment and concentrated visibility in the hands of a few platforms.

AI is now accelerating all of those trends as well as industrialising content creation itself.

Seen together, these changes explain why today’s environment feels so unforgiving. The qualities that once helped good communicators cut through - relationships, authority, storytelling and credibility - still matter but are being eroded.

The question every communicator should be asking themselves today is ‘how can I be seen and trusted in this new world?’

In the next episode, I’ll look at what communicators should be doing right now in response to AI: how to adapt your strategy and practice for an AI-mediated world.

A final thought

For those who see AI primarily as an opportunity, it can help us operate faster, test ideas more quickly, and tailor messages with unprecedented precision.

For those who see AI as a threat, it can create hyper-realistic deepfakes, take our jobs, and target messages in ways that blur the line between persuasion and manipulation.

At my most optimistic, I think of the invention of AI as like the invention of the washing machine for communications - freeing us up from laborious and repetitive tasks to focus on creativity and judgement.

At my most pessimistic, I wonder if the invention of the power loom is a better parallel - replacing artisan weavers with automation and producing high-quantity but inferior-quality guff.

There may be truth in both positions.

However, I think one lesson is already clear. Communicators who treat AI as a partner, not a rival, will go further. That doesn’t mean relying on it uncritically, but using it to produce higher quality work more quickly.

We also need to recognise (and be grateful for!) AI’s limitations. For now, these systems lack intent, nuance, creativity and understanding. They give us human-sounding language, but that is not the same as human language.

In a world of infinite automated speech, it is our human capacity to innovate, discern truth, build trust and exercise judgement that matters more than ever.

Our role as communications leaders is not disappearing but evolving, as it has constantly throughout the last 25 years.

Simon

Thanks for reading Cut Through! Subscribe for free to receive new posts and support my work.
