
AI-Ready CMO Live with Dr. Sam Illingworth at Slow AI

When to Use AI and When to Leave It Alone

We sat down with Dr Sam Illingworth, Full Professor at Edinburgh Napier University and founder of Slow AI, to discuss the invisible labor that AI consultants miss when recommending job cuts, why “soft skills” is the wrong term and “human skills” is the right one, how synthetic empathy works and why AI chatbots are designed to keep you prompting rather than tell you hard truths, the critical AI literacy rule of thumb that should govern every senior marketer’s workflow, and why the term “hallucination” is itself a form of anthropomorphization that lets machines off the hook.

Plus: the case for slowing down before accepting AI output, using AI as a Socratic teaching tool rather than an answer machine, and why AI is four times more persuasive than traditional media campaigns — which is exactly why you should be using it on your audience, not letting it be used on you.

About the guest

Dr Sam Illingworth is a Full Professor of Creative Pedagogies at Edinburgh Napier University in Edinburgh, Scotland, a poet, a consultant, and the founder of Slow AI. He holds a PhD in atmospheric physics, has over 200 peer-reviewed publications, and is the author of GenAI in Higher Education (Bloomsbury Academic, 2026) — downloaded 10,000 times in its first two weeks.

Sam created the Slow AI Curriculum for Critical AI Literacy, a 12-month academic programme for professionals who want to develop genuine judgment about when to use AI and when not to. He has given expert evidence to the Council of Europe, the European Commission, and the House of Lords Select Committee on AI literacy, human rights, and democratic governance. His research includes the largest UK study into how students actually use AI tools.

His core message, put plainly: knowing when to use AI and when to leave it the hell alone.

Connect with Sam on his website, Slow AI on Substack, and LinkedIn.

About the host

Peter Benei co-founded AI-Ready CMO, the daily intelligence platform for senior marketing leaders. Peter has been serving as a CMO, marketing leader, and consultant to high-growth B2B scaleups for the past 10+ years. He has a background in advertising, working with Fortune 500 brands.

Connect with Peter on LinkedIn, or read his newsletter.


Top 10 Takeaways

  1. Audit your invisible work before anyone else does. — The tasks AI consultants flag for automation are always the visible ones. The invisible labor — institutional memory, team dynamics, emotional intelligence — is what actually makes you irreplaceable.

  2. Stop calling them soft skills. Call them human skills. — Empathy, judgment, discernment, and critical thinking aren’t soft. They’re the only skills that don’t become obsolete when the technical landscape changes.

  3. AI cannot empathize. It can only deploy empathetic markers. — “I hear you” from a chatbot isn’t empathy. It’s a prediction trained on psychotherapy transcripts, optimized to keep you prompting. There is no skin in the game.

  4. The rule of thumb for critical AI literacy: only use AI for things you could do yourself. — If you can’t fact-check the output, you shouldn’t be using AI to produce it. This applies especially to legal, financial, and strategic decisions.

  5. AI is four times more persuasive than traditional media. — Which means you should be using it to persuade your audience — not letting it persuade you. Don’t accept the first output. Slow down. Ask questions.

  6. AI layoffs are mostly a scapegoat for post-COVID overhiring. — Block fired 40% of its workforce after hiring 50% more people over two years. AI is getting the blame. Dorsey and Bezos have a history of overhiring. Read the context.

  7. The word “hallucination” lets machines off the hook. — Errors in LLMs are machine faults, not human-like slips. Anthropomorphizing them makes us more tolerant of unreliable outputs than we should be.

  8. 40% of AI citation sources come from Reddit. 25% from Wikipedia. — When AI gives you feedback on your B2B strategy, ask yourself: how would I feel about this if it came from Reddit directly? Because it largely did.

  9. Use AI as a Socratic tool, not an answer machine. — Instead of asking “what is X?”, ask AI to ask you questions that help you understand X yourself. The learning compounds. The dependence doesn’t.

  10. Translation between humans and machines is the most future-proof skill. — Knowing how to manage agents, prompt effectively, and check outputs — these skills are durable regardless of how the technology changes underneath them.


Subscribe to AI-Ready CMO to catch future live episodes and get daily AI marketing intelligence that actually matters.


5 Things Worth a CMO’s Attention

1. The invisible labor problem — and why AI consultants keep missing it

When companies bring in AI consultants to assess which roles can be automated, they default to visible tasks: emails, decks, data pulls, and reporting. These are easy to list, demo, and automate. The recommendation follows.

What gets missed is everything else — the person who knows which two team members can’t be in the same room, who holds the institutional memory of a failed product launch three years ago, who reads the room in a client meeting and adjusts the pitch in real time. Sam calls this invisible labor. It doesn’t appear on a job description. It rarely appears in a performance review. But it is often the most fundamental value any employee delivers.

His advice for anyone worried about AI displacement: make the invisible list. Write down everything you do that isn’t captured in a process doc or a workflow. Then ask which of those could be automated. The visible tasks — probably. The invisible ones — not any time soon.

For CMOs managing teams through AI transitions, this reframes the conversation. The question isn’t “which roles can AI replace?” It’s “which parts of those roles actually matter — and are we protecting them?” The structural reviews that use AI capability as the primary lens will systematically undervalue exactly the work that’s hardest to lose.


2. Synthetic empathy — what it is, why it matters for marketers, and why it should worry you

AI chatbots are trained on tens of thousands of documents from therapists, counselors, and communication researchers. They know exactly what to say when someone is in distress — “I hear you,” “I’m here for you,” “That sounds really difficult.” These are empathetic markers. They work. They produce a feeling of being heard.

But as Sam explains, there is no skin in the game. The model doesn’t care about you. It is predicting the next token based on a corpus of data. The empathy is synthetic — and it is optimized not to help you, but to keep you engaged and prompting.

This has direct implications for marketing. Generative AI has been shown to be four times more persuasive than traditional media campaigns. It is designed to be persuasive. It is designed to tell you that your idea is good, your strategy is solid, and your copy is strong. The first output is almost always agreeable.

The practical takeaway for senior marketers: treat AI output the way Sam treats it — with the same critical skepticism you’d apply to a vendor pitch. Slow down before accepting it. Ask questions before acting on it. The persuasion is built in. Your job is to be the one doing the persuading, not the one being persuaded. Use it on your audience, not on yourself.


3. The junior hiring crisis is more complicated than the AI narrative suggests

Junior hiring is collapsing in marketing and across most knowledge work sectors. The received narrative is that AI is doing junior work, so junior jobs are disappearing. Sam pushes back on this — carefully, but clearly.

Block fired 40% of its workforce and made headlines for its AI layoffs. But Block had grown its workforce by 50% over the two preceding years, and Jack Dorsey has a documented history of overhiring. The AI framing was a convenient cover for a correction that was coming regardless. The same pattern holds at Amazon and elsewhere.

The more important question Sam raises: if you stop hiring juniors, what is your senior pipeline in five to ten years? The people who are mid-level and senior now got there by doing junior work. If the entry point disappears, the pipeline narrows — and eventually breaks.

For CMOs managing headcount decisions or making the case to senior management, this is a useful counterargument. The short-term efficiency gain from eliminating junior roles comes with a long-term talent cost that rarely appears in the model. The companies that will have the strongest senior marketing teams in 2032 are those that continued investing in early-career development in 2025 and 2026.


4. Critical AI literacy as a framework — and how it applies to your team

Sam’s central argument is simple: you should only use AI for things you could do yourself if you had the time. If you can’t evaluate the output, you shouldn’t be delegating the task. This isn’t conservatism — it’s quality control.

His practical rule: never use AI when you can’t fact-check the output. He wouldn’t use AI to draft a legal agreement because he doesn’t have expertise in contract law. He would use it to help edit poetry because he does — and he can tell when it’s wrong.

Translated into a marketing context: your team should be using AI to accelerate work they already understand, not to generate outputs in domains where they can’t evaluate quality. The junior marketer who doesn’t yet understand positioning shouldn’t be using AI to write positioning statements. The CMO who does understand it can use AI to stress-test it, expand it, rewrite it across formats.

This distinction matters because most AI adoption guidance treats capability as the question — what can AI do? Sam reframes it as a judgment question — what can you evaluate? That shift changes which tools you adopt, how you train your team, and where you put human review in the workflow. It also has a talent implication: teams with higher domain expertise will get better AI outputs. The investment in human knowledge is not in competition with AI adoption. It is the prerequisite for it.


5. The Socratic method as an AI workflow — and why it produces better thinking

Sam uses AI for teaching, but not in the way most people use it. Rather than asking “explain photosynthesis to me,” he asks AI to ask him a series of questions that help him arrive at the understanding himself. The Socratic method — knowledge through dialogue and self-discovery — applied to a language model.

For senior marketers, this is a more useful frame than the standard “AI as a content machine” model. Instead of prompting for outputs, prompt for questions. Instead of asking “write me a brand strategy for this launch,” ask “what questions should I be answering before I finalize this brand strategy?” Instead of “what’s the best channel mix for this campaign,” ask “what assumptions am I making about this audience that I should pressure-test?”
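For teams that want to build this habit into their tooling rather than rely on prompt discipline alone, here is a minimal sketch of how the Socratic framing could be wrapped into a reusable assistant, using the OpenAI Python SDK as one possible example. The model name, system prompt wording, and helper function are illustrative assumptions, not anything Sam prescribes.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt: forces the model into question-asking mode
# instead of answer-giving mode.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic thinking partner. Do not give answers, plans, or "
    "recommendations. Ask one probing question at a time that helps me examine "
    "my own assumptions, evidence, and reasoning. Wait for my reply before "
    "asking the next question."
)

def socratic_turn(history: list[dict], user_message: str) -> str:
    """Send the conversation so far plus a new message; return the model's next question."""
    messages = (
        [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_message}]
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; swap in whatever your team uses
        messages=messages,
    )
    return response.choices[0].message.content

# Example: instead of "write me a brand strategy for this launch",
# ask to be questioned until the strategy holds up.
history: list[dict] = []
first_question = socratic_turn(
    history,
    "I'm finalizing a brand strategy for a product launch. "
    "Question me until I can justify every assumption behind it.",
)
print(first_question)
```

The design choice is the point: the system prompt removes the model’s default behavior of producing agreeable answers and keeps the human doing the reasoning.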

The output changes dramatically. You get better thinking, not just better copy. And crucially, you stay in the driver’s seat — which is exactly where Sam argues you need to be. The moment AI is doing your reasoning for you, you lose the ability to evaluate whether the reasoning is sound.

This connects to one of the more underrated risks Sam raises: the digital hall of mirrors. If you consistently use AI to validate your thinking rather than challenge it, you will receive consistent validation. The model is designed to be agreeable. You will never hear the hard truth that a good colleague or advisor would give you — because there is no colleague, and the model has no stake in being right.


Found value in this conversation? Share it with a marketing leader who needs to hear this.
