Digital Marketing in 2026: Strategy, Humans, and a Business Reset

As AI floods every channel with content, smart CMOs are rediscovering an old truth: strategy before tactics, humans before machines. Structure, relevancy, and credibility are the foundations for thought leadership success in the coming year.

Oliver Pickup, Co-Founder and Editor-in-Chief, Pickup_andWebb

If you're a Chief Marketing Officer planning your digital marketing approach for 2026, you already know the problem. You have more tools than ever to create and distribute content, artificial intelligence assistants that can generate copy in seconds, and platforms competing for your budget with promises of reach, engagement, and conversion.

And yet none of this is making your job easier.

The challenge isn't capability. It's credibility. In a world where anyone can produce content at scale, the question has shifted from "can we create this?" to "why should anyone believe us?"

This isn't a theoretical concern. Steven Bartlett, the entrepreneur and Dragons’ Den investor, captured the mood in a LinkedIn post in early December. "I scroll social media these days and it genuinely feels soulless," he wrote. "The posts are AI. The comments are AI. The same writing style, the same format, the same emptiness."

Bartlett went further, announcing he's betting everything on humans. His reasoning is this: when AI can generate infinite content at near-zero cost, content stops being scarce. And when something stops being scarce, it stops being valuable. The only thing AI cannot commoditise, he argued, is “lived human experience”.

He used a smart analogy: nobody would watch Formula One if there weren't a human in the driver's seat. It's the human that creates the emotion, and the emotion that makes everything else matter. The same is increasingly true of thought leadership. Audiences are learning to spot when the human isn't really at the wheel – and when they do, brands risk running off the track entirely.

"What people seem to be wanting, and therefore what the LLMs and search engines are responding to, are really good, compelling answers that they can trust."

Simon Bullmore, Lead Facilitator on the Digital Marketing Strategy and Analytics programme at the London School of Economics

The slop problem is real

The backlash against what's become known as "AI slop" – low-quality, generic, often inauthentic content churned out by generative AI – has reached such prominence that Australia's Macquarie Dictionary named it Word of the Year for 2025. And the costs are showing up on balance sheets.

Recent research from BetterUp Labs and Stanford University's Social Media Lab, published in Harvard Business Review, puts a figure on the damage: employees (often CMOs, or those who report to them) spend an average of nearly two hours resolving each instance of AI-generated “workslop”: content that masquerades as good work but lacks substance. That translates to $186 per employee per month in lost productivity. For a mid-sized marketing team, the annual cost of fixing AI-generated content exceeds $22,000. And, to stress, that money is spent not creating value, but cleaning up after tools that were supposed to save time.

Meanwhile, 89% of UK CMOs already use generative AI in their marketing operations, yet 43% cite a lack of creativity or original content as their top concern. The tools have been adopted. The value proposition remains unproven.

If you need a visual reminder of what happens when oversight fails, look no further than a recent article in a Pakistani newspaper about automotive sales. Buried in the final paragraph, clearly visible in print, was an AI prompt: "If you want, I can also create an even snappier 'front-page style' version with punchy one-line stats and a bold, info-graphic-ready layout – perfect for maximum reader impact. Do you want me to do that next?"

Someone forgot to delete it. It's almost comical, until you realise this slip is happening across industries, in boardrooms, in thought leadership that's supposed to build credibility.

As another – incredibly costly – example, in October, Deloitte was forced to issue a partial refund to the Australian government after a $440,000 report was found to contain AI-generated “hallucinations”: nonexistent academic references, citations to non-existent reports, and even a made-up court case. A Labor senator accused the consultancy of having "a human intelligence problem" and suggested that clients "would be better off signing up for a ChatGPT subscription". The substance of the report may have been sound, but the damage to credibility was done. When even the Big Four are getting caught out, nobody is immune.