What If Your Most Trusted Adviser Was Available at 3am?


Dan Crane

April 19, 2026

Here's a scenario most executives will recognise. It's late. There's a decision that needs to be made, or at minimum thought through, before tomorrow morning. It's the kind of decision where you'd normally call someone: a trusted adviser, a board member, a mentor who's been through something similar. Someone who knows you, knows your context, and can ask the right questions rather than just validate whatever you're already leaning toward.

The problem is that person charges several thousand dollars an hour when they're available, and right now they're not available. So you sit with it alone, or you talk it through with whoever's nearby, which is rarely the right person for this particular conversation.

This is the gap that genuinely well-built AI mentorship systems are starting to fill, and it's a more interesting problem than most of the AI discourse gives it credit for.

The Difference Between a Chatbot and a Mentor

Most people's intuition about AI conversation is shaped by chatbots, which are transactional by design. You ask a question, you get an answer. The interaction has no memory, no relationship, no accumulated understanding of who you are and what you're trying to achieve. It's useful for information retrieval. It's not useful for the kind of thinking-out-loud, challenge-my-assumptions conversation that good mentorship actually involves.

A properly engineered AI mentor is a different category of thing, and the distinction rests on three specific properties.

The first is context persistence. A mentor who doesn't remember your last conversation isn't a mentor. They're a stranger with credentials. AI systems that maintain a meaningful profile across sessions, that know your industry, your organisation, your strategic priorities, and the decisions you've been wrestling with, can build a relationship in a way that a stateless chatbot fundamentally cannot.
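To make that concrete, here's a minimal sketch of cross-session persistence: a profile loaded at the start of a conversation and saved at the end. The fields and the JSON-on-disk storage are illustrative assumptions, not a description of how any particular system stores context.

```python
import json
from dataclasses import dataclass, field, asdict
from pathlib import Path

@dataclass
class MentorProfile:
    """Minimal cross-session context for one user (illustrative fields)."""
    name: str
    industry: str = ""
    priorities: list = field(default_factory=list)      # current strategic priorities
    open_decisions: list = field(default_factory=list)  # decisions still in play

def load_profile(path: Path, name: str) -> MentorProfile:
    # Return the stored profile, or a fresh one on a first session.
    if path.exists():
        return MentorProfile(**json.loads(path.read_text()))
    return MentorProfile(name=name)

def save_profile(path: Path, profile: MentorProfile) -> None:
    path.write_text(json.dumps(asdict(profile), indent=2))
```

Everything else, from knowing the user's industry to remembering which decision they were wrestling with last week, builds on this loop of load, update, save.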

The second is methodology. Real strategic mentors don't jump to answers. They probe. They reframe. They sit with the problem longer than is comfortable, because the presenting issue is often not the actual issue. An AI system that mirrors a genuine consulting methodology, starting with questions rather than conclusions, is going to produce considerably more useful conversations than one that pattern-matches your input to a generic output.
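That question-first discipline can be expressed as a simple turn policy. This is a deliberately toy sketch; the stage names and the three-probe threshold are my own illustrative assumptions, not anyone's actual methodology.

```python
from dataclasses import dataclass

MIN_PROBES = 3  # assumed threshold: probe at least this many times first

@dataclass
class SessionState:
    questions_asked: int = 0
    problem_reframed: bool = False

def next_move(state: SessionState) -> str:
    """Question-first turn policy: probe, then reframe, only then advise."""
    if state.questions_asked < MIN_PROBES:
        return "ask_clarifying_question"
    if not state.problem_reframed:
        return "reframe_problem"
    return "offer_perspective"
```

The point of encoding it this way is that the system structurally cannot jump to an answer: advice is gated behind the probing and reframing stages, which is exactly the behaviour that distinguishes a mentor from an answer machine.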

The third is depth. If the AI is grounded in a specific body of intellectual work, actual frameworks developed over years of practice rather than generic training data, the quality of the thinking it can contribute to a conversation is in a different league from a general-purpose model.

What This Looks Like in Practice

I've been building in this space, and the most instructive project has been developing an AI system built around the thinking of Richard Hames: futurist, strategist, author of 21 books on systemic thinking and leadership, and someone who has spent four decades advising governments and enterprises at the highest level. The kind of adviser whose time, in person, is neither cheap nor easy to access.

The challenge with building something like this isn't the voice technology or the infrastructure, though getting sub-100ms latency on a voice-first interaction requires careful architecture. The real challenge is the persona engineering: capturing not just the content of someone's intellectual output but the actual shape of how they think and communicate. The vocabulary they reach for. The structure of their sentences. The way they use a short declarative statement to land something they've just built up over several complex clauses. The concepts that are particular to their body of work that would be flattened or lost if the AI just paraphrased rather than genuinely understood them.

When you get that right, the output is something qualitatively different from a general AI having a strategy conversation. It's a system that can engage with your specific situation using a coherent, deep intellectual framework that has been developed and tested over a career. Grounded in actual published work, not statistical inference about what strategic advice sounds like.

The practical result is that an executive working through a significant decision can have a conversation with a system that embodies decades of genuine strategic thinking, at any hour, with full context of their situation, and with a methodology designed to produce clarity rather than just agreement.

That's a meaningful capability. It's not a replacement for the human relationship, but it's not trying to be.

The Honest Case for AI Mentorship

The argument for AI mentorship doesn't rest on it being better than a great human adviser. It doesn't need to be. It rests on availability, accessibility, and the particular utility of a thinking partner with no stake in the outcome.

On availability: critical decisions don't keep office hours. The strategic clarity you need at 11pm on a Sunday before a board meeting Monday morning is not available at any price from most human advisers. An AI system that can engage with that decision meaningfully, that knows your context and can ask the right questions, is genuinely valuable in a way that has nothing to do with whether it's "as good" as a human mentor in some abstract comparison.

On accessibility: the calibre of strategic thinking that's available to a CEO of a $2 billion company is structurally different from what's available to the founder of a $10 million business. Not because the founder's decisions are less consequential relative to their context, but because the economics of advisory relationships don't scale down. AI mentorship changes that equation. The frameworks and the quality of thinking that were previously only accessible at the top of the market can be packaged and made available much more broadly.

On independence: a thinking partner who has no commercial relationship with the outcome, no incentive to tell you what you want to hear, and no ego investment in a particular answer is genuinely useful. One of the failure modes of human advisory relationships is that advisers, even excellent ones, are not perfectly insulated from the dynamics of the relationship. An AI system, properly designed, asks the uncomfortable question without worrying about whether you'll call again.

What to Get Right, and What Most Implementations Get Wrong

The implementations that don't work treat this as a content delivery problem: take the expert's published material, index it, and retrieve relevant passages when the user asks something. The result is a sophisticated reference tool, not a mentor. Retrieval is not reasoning, and indexing is not understanding.
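To see why retrieval alone falls short, here is roughly what a pure retrieval layer does, reduced to a bag-of-words sketch (real systems use learned embeddings, but the point stands): it ranks passages that were already written. Nothing in it reasons about the question.

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    # Crude bag-of-words vector; stands in for an embedding model.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing terms
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, passages: list[str], k: int = 1) -> list[str]:
    """Rank indexed passages by similarity to the query.

    This is the whole of what a retrieval system contributes: it finds
    what the expert already wrote. It cannot apply the expert's thinking
    to a problem the expert never wrote about.
    """
    q = _vec(query)
    ranked = sorted(passages, key=lambda p: _cosine(q, _vec(p)), reverse=True)
    return ranked[:k]
```

Useful as a component, but on its own it is a reference tool with a conversational skin, which is precisely the failure mode described above.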

The implementations that do work treat it as a character engineering problem. What is the actual shape of this person's thinking? How do they approach a problem they haven't seen before, not just the ones they've written about? What is the tension between different threads of their work, and how do they navigate it? Those questions require a depth of engagement with the source material that goes considerably beyond RAG indexing.

The other thing that separates useful implementations from interesting demos is the voice layer. Text is fine for information. It's not the medium in which executives actually think through hard problems. The shift to real-time voice conversation, with the naturalness of being able to interrupt, redirect, and respond to what's being said rather than to a finished paragraph, changes the quality of the thinking that's possible. The latency requirements to make that feel like a real conversation are demanding, but they're achievable with current infrastructure.
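As a back-of-the-envelope illustration of what that latency constraint looks like (the stage names and numbers here are assumptions for the sketch, not measurements of any real stack), the time to first audible response is the sum of serial stages, which is why streaming partial results at every stage matters:

```python
# Illustrative latency budget for one voice turn:
# capture -> streaming ASR -> LLM first token -> streaming TTS first chunk.
BUDGET_MS = 100  # target before the user hears the start of a response

def first_audio_latency(stages: dict) -> int:
    """Time to first audible response, assuming the stages run serially."""
    return sum(stages.values())

def within_budget(stages: dict, budget_ms: int = BUDGET_MS) -> bool:
    return first_audio_latency(stages) <= budget_ms

stages = {
    "audio_capture": 15,
    "asr_partial": 30,      # streaming ASR emits partials, not full transcripts
    "llm_first_token": 35,  # time to first token, not the full completion
    "tts_first_chunk": 20,  # streaming TTS starts before the sentence ends
}
```

Every stage has to stream: wait for a complete transcript, a complete response, or a complete audio file anywhere in the chain and the budget is blown several times over.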

Where This Goes

The near-term roadmap for this space is fairly clear: a broader roster of mentors across different disciplines and intellectual traditions, deeper enterprise integration, and better tooling for grounding conversations in specific organisational context. The longer-term picture is more interesting and more uncertain. When AI mentorship systems can integrate with a leader's actual operating data, when the system knows not just the executive's strategic priorities but the real-time state of the organisation, the quality of the thinking support available starts to approach something genuinely novel.

That's not today. Today is: a well-built AI mentor system is a meaningfully useful tool for executives who want a rigorous thinking partner available outside the constraints of human availability and commercial advisory relationships. That's enough to be worth taking seriously.

The 3am conversation, it turns out, doesn't have to stay in your head.

If you're thinking about what an AI mentorship product could look like for your business, your expert, or your leadership community, I'm happy to talk through what's actually involved in building it properly. The technical and design challenges are more interesting than most people expect.