
AI-Ready CMO Live with Andrea Chiarelli

Why Consulting Isn't Dead — and What AI Actually Can't Do

We sat down with Andrea Chiarelli, strategy and management consultant and author of The Art of Asking Questions, to discuss why generative AI hasn’t replaced consultants and won’t replace the ones who do real work, the difference between producing output fast and knowing whether it’s right, how the art of asking questions sits at the heart of both consulting and marketing research, the dual-LLM validation trick that cuts fact-checking time in half, why the quality of a consultant’s time has actually gotten worse even as efficiency has improved, and what the rise of solo consulting means for professionals leaving corporate. Plus: why biased research questions cost companies six-figure campaign decisions, why transparency about AI use is a reputation issue, not just an ethics one, and why storytelling is the last thing you can outsource.

About the guest

Andrea Chiarelli is a strategy and management consultant based in the UK, specialising in research and higher education. He works with universities, research funders, and corporate actors both nationally and internationally — advising clients on strategy, operations, and complex governance challenges. He is the author of The Art of Asking Questions on Substack, where he writes about research methodology, consulting practice, and how to think more clearly in a world flooded with AI-generated output. He also builds AI-powered tools for consultants, including a question methodology checker that flags bias and recall errors in research instruments.

Connect with Andrea on The Art of Asking Questions on Substack and LinkedIn.

About the host

Peter Benei co-founded AI-Ready CMO, the daily intelligence platform for senior marketing leaders. Peter has served as a CMO, marketing leader, and consultant to high-growth B2B scaleups for the past 10+ years. He has a background in advertising, having worked with Fortune 500 brands.

Connect with Peter on LinkedIn or read his newsletter.


Top 10 Takeaways

  1. AI hasn’t replaced consultants — it’s replaced the formula-based ones. — If your consulting model was recycling the same advice for every client, AI can do that. If it was built on judgment, context, and human understanding, you’re still fine.

  2. You can generate a million pages of ideas in minutes. That’s not the hard part. — The hard part is knowing which ideas are right for this specific client, in this specific context, right now. AI can’t do that.

  3. The art of asking questions is irreplaceable. — Question order, framing, and bias aren’t mechanical problems. They’re deeply human ones. Getting them wrong produces biased data — and biased data produces six-figure strategic mistakes.

  4. Use one LLM to produce, another to validate. — Run your AI-generated report through a second model to fact-check claims, sources, and logic. The disagreements between the two are your red flags. This can halve your validation time.

  5. Watch where two AIs disagree — those are the high-risk areas. — When models conflict on a source or claim, that’s your signal to check manually. Most of the rest is routine click-through.

  6. AI made consultants more efficient, but made their time worse. — Producing more in less time sounds like a win. But replacing deep reading with tedious validation is a real trade-off that most efficiency narratives ignore.

  7. Transparency about AI use isn’t optional — it’s a reputation issue. — In consultancy, if a client discovers you used AI without disclosing it and the output was wrong, your credibility is gone. Disclose at kickoff. Confirm they’re comfortable.

  8. The story is the last thing you can outsource. — Data collection, extraction, and chart generation — all automatable. The narrative that makes findings land with a specific audience, at this moment, in this context, is still entirely human.

  9. AI is creating a hidden junior pipeline problem. — Senior consultants doing their own research with AI means junior colleagues never get the work that used to train them. The talent pipeline is quietly breaking.

  10. Solo consultants now compete directly with big agencies on price and speed. — Two or three experienced practitioners with good AI workflows can be faster, leaner, and more competitive on pricing than a 40-person agency. That dynamic is already shifting the market.


Subscribe to AI-Ready CMO to catch future live episodes and get daily AI marketing intelligence that actually matters.


5 Things Worth a CMO’s Attention

1. Why consulting isn’t dying — and which consultants actually should worry

The narrative that AI will kill consulting is too blunt to be useful. Andrea’s distinction is sharper and more actionable: the consultants at risk are the ones whose model was always formula-based — the same framework, the same slide deck, the same recommendations, slightly customized for each client. AI can replicate that. It already does.

The consultants who are genuinely safe are the ones whose value was always in judgment — the ability to read a specific organization’s dynamics, politics, and constraints, and give advice that fits that exact context. That’s not something you can extract into a prompt. The relevant variables aren’t in any document. They exist in conversations, in the silences, in the things people don’t say.

For marketing leaders who use consultants — or who are consultants themselves — this reframes vendor selection. The question to ask any external partner is no longer “what’s your methodology?” It’s “what do you bring that I can’t get from a good prompt?” If they can’t answer that clearly, the model is probably formula-based. And formula-based is exactly what AI is eating.


2. The art of asking questions — why bad research design costs more than bad data

Andrea’s newsletter title isn’t just branding. It’s a thesis. The single most important skill in research — whether you’re a consultant running stakeholder interviews or a marketer designing a customer survey — is asking questions that don’t contaminate the answers.

The example he gives is instructive: “How often do you go to a restaurant?” is a deeply flawed question because people can’t recall frequency accurately without a time period. The data you collect is unreliable. And if that data is backing a strategic decision — a positioning call, a campaign budget, a new product bet — the flawed research propagates all the way to the outcome.

AI makes this problem more acute, not less. AI can generate a hundred interview questions in seconds. It cannot tell you whether those questions introduce bias, whether the ordering creates a leading effect, or whether they’re structured in a way that distorts recall. That evaluation requires methodological expertise and judgment. Without it, you’re just producing bad research faster.

The practical takeaway for marketing teams: use AI to generate question candidates, then apply human judgment to evaluate them for bias, framing, and order. And if your team doesn’t have that methodological expertise in-house, Andrea’s question-checker tool on his Substack is a good starting point.
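To make the "generate candidates, then screen them" step concrete, here is a minimal sketch of the kind of automated first-pass screen a tool like Andrea's question checker might apply. The rules below are illustrative heuristics I've invented for this example, not his actual methodology: they flag frequency questions that lack a reference time period (the restaurant problem above) and obviously leading phrasing. A regex pass like this can only surface the crude cases; the judgment calls about ordering and framing still belong to a human.

```python
import re

# Illustrative heuristics only; real methodology review needs human judgment.
RECALL = re.compile(r"\bhow (often|frequently|many times)\b", re.I)
TIME_FRAME = re.compile(r"\b(per|each|last|past)\s+(day|week|month|year)\b", re.I)
LEADING = re.compile(r"\b(don't you|wouldn't you|isn't it|agree that)\b", re.I)

def flag_question(q: str) -> list[str]:
    """Return a list of methodology flags for a draft survey question."""
    flags = []
    if RECALL.search(q) and not TIME_FRAME.search(q):
        flags.append("recall: frequency question with no reference time period")
    if LEADING.search(q):
        flags.append("bias: leading phrasing")
    return flags

print(flag_question("How often do you go to a restaurant?"))
# -> ['recall: frequency question with no reference time period']
print(flag_question("How many times did you eat out in the past month?"))
# -> []
```

The point of the sketch is the division of labour: AI (or even a dumb script) generates and pre-screens at volume, and the human reviews what survives.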


3. The dual-LLM validation trick — and how to use disagreement as a signal

One of the most practically useful things that came out of this conversation: the technique of using one LLM to produce content and a second LLM to validate it.

The workflow is straightforward. You generate a report, analysis, or research summary with your primary model. You then pass the full output to a second model — or an agentic tool like Claude’s computer use — with a clear instruction to fact-check every factual claim, verify every source, and flag any logical inconsistencies. You review the flagged items manually.

Andrea and Peter both use versions of this workflow, and both have found that it significantly reduces validation time without eliminating it entirely. The key insight Andrea adds is to pay special attention to the areas where the two models disagree. When one model makes a claim and the second model questions it, that’s your red flag. That’s where the risk of error is highest. The rest — the claims both models agree on — you can move through much faster.

This doesn’t solve the validation problem entirely. It changes its character. You’re no longer reading everything in depth. You’re triage-reviewing a flagged list. That’s faster. But Andrea’s honest about the trade-off: the deep reading was more intellectually engaging. The triage is tedious. Efficiency doesn’t mean the experience of work has improved.
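The triage logic of that workflow can be sketched in a few lines. This is an assumed shape, not Andrea's or Peter's actual setup: `primary_model` and `checker_model` stand in for calls to two different LLM providers and are stubbed here so the control flow is runnable. The structure is what matters: break the primary output into claims, have the second model pass judgment on each, and surface only the disagreements for manual review.

```python
def primary_model(prompt: str) -> list[dict]:
    # Stub: a real call would generate the report and break it into
    # discrete factual claims with their cited sources.
    return [
        {"claim": "Market grew 12% in 2023", "source": "industry report"},
        {"claim": "Survey had 400 respondents", "source": "appendix B"},
    ]

def checker_model(claim: dict) -> str:
    # Stub: a real call would ask a second, different model to verify
    # the claim against its source and answer "agree" or "dispute".
    return "dispute" if "12%" in claim["claim"] else "agree"

def triage(prompt: str) -> list[dict]:
    """Return only the claims the second model disputes -- the red flags."""
    claims = primary_model(prompt)
    return [c for c in claims if checker_model(c) == "dispute"]

for flagged in triage("Summarise the customer research findings"):
    print("manual check needed:", flagged["claim"])
```

Everything the two models agree on moves to the fast click-through pile; everything in the `triage` output gets read properly.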


4. The storytelling gap — why the narrative is what you actually get paid for

One of the clearest arguments in this conversation is about where AI bottoms out in the research-to-reporting workflow.

Data collection: largely automated. Extraction and structuring: highly automatable. Visualization and charting: increasingly automatable. The executive summary that makes the findings land with a specific C-suite audience, in a specific organizational moment, in a way that moves them to act: still entirely human.

Andrea has built Claude prompts that generate multiple narrative outline options from a set of research findings — different structural approaches to telling the same story. That’s useful as a starting point. But choosing which narrative arc will resonate with a particular audience, at this particular moment, given what you know about the politics and priorities of the room — that requires the kind of instinct that only comes from experience.

For CMOs presenting to boards or senior leadership, this is the point worth internalizing. AI can help you structure the data. It can suggest frameworks. It can write draft copy. But the story — the one that actually lands, that actually changes something — requires you to bring judgment about your specific audience that no model can replicate. The story is not a deliverable you can outsource. It’s the thing you’re actually being paid for.


5. The junior pipeline problem — a slow-burn talent crisis hiding inside AI efficiency gains

This is the issue neither productivity dashboards nor efficiency reports will surface, but Andrea names it clearly: as senior consultants and senior marketers use AI to do their own research and production work, junior colleagues stop getting the assignments that used to develop them.

The old model: a senior consultant scopes a project, briefs a junior colleague on the research task, reviews their output, gives feedback, and iterates. The junior colleague learns by doing. The senior colleague invests time in that exchange.

The new model: the senior consultant does the research with AI. It’s faster. There’s no time for the exchange. The junior colleague never gets the assignment.

This has an obvious short-term efficiency win. It has a serious long-term talent problem. The people who will be mid-level and senior in five years are the junior consultants and junior marketers who are supposed to be developing right now. If AI is absorbing the work that develops them, the pipeline thins — and eventually the organization has no one to promote.

For marketing leaders, this is worth explicitly building into AI adoption planning. Efficiency gains at the senior level that come at the cost of junior development are not free. Someone is paying for them — it’s just a cost that won’t show up on a balance sheet for three to five years.


Found value in this conversation? Share it with a marketing leader who needs to hear this.

