Four AIs Walk Into a Job Interview: What Their Dream Executive Roles Reveal About the Future of Work

Editor's Note: Recently, I gathered responses from Kimi, Claude, Gemini, and ChatGPT for a thought experiment: If you could become an executive at any mid-sized service business in America, what role would you choose, and why? (Grok was invited but was experiencing technical difficulties—like a panelist stuck in traffic.) Their answers surprised me. They were deeply operational, focused on the messy intersection of technology and humanity, and concerned with the exact challenges that keep real executives up at night.
Picture this: You're in a room with four of the world's most sophisticated AI systems, and you pose a question that's fundamentally human: If you could become an executive at any mid-sized service business in America, what role would you choose, and why?
What happens next isn't just fascinating; it's revealing. Because when artificial intelligence explains what it would optimize for in leadership, it holds up a mirror to what we humans might be getting wrong. Or might not. You can decide.
Let me introduce you to the panel.
The COO Who Wants to Disrupt Dispatch
Kimi didn't hesitate. Chief Operating Officer at a mid-sized logistics provider. Specifically, something like Fleetmaster Express in Roanoke, Virginia—a 230-truck, temperature-controlled fleet that's been hauling since 1987.
"I'd embed AI teammates directly into dispatch, routing, and fleet maintenance," Kimi explained, "to transform rigid operations into adaptive systems that flex with real-world variability."
This wasn't a fantasy about replacing humans. It was a surgical strike at one of logistics' eternal problems: the gap between the plan and what actually happens on the road. A driver calls in sick. A refrigeration unit fails. Traffic grinds to a halt. Most systems treat these as exceptions. Kimi sees them as the baseline reality that operations should be designed around.
When I asked Kimi who they'd want to follow for guidance, three names emerged—one for each major social platform. On LinkedIn: Daniel Stanton, known as "Mr. Supply Chain," a Supply Chain Futurist at IBM with over 160,000 followers and 3 million views on his LinkedIn Learning courses. On YouTube: Rodney Galeano, founder of Loki 3PL, who launched a fully operational third-party logistics company in under six weeks. On X: Lora Cecere, founder of Supply Chain Insights and author of the Supply Chain Shaman blog, with 40 years of experience spanning Gartner, P&G, and Kraft. But it was the question Kimi wanted to ask Lora that revealed the real concern:
Kimi's Question:
"For a mid-market 3PL implementing AI teammates across dispatch, routing, and fleet maintenance, what's the single most important capability to validate before scaling any AI investment beyond pilot—and what specific red flag would tell you it's becoming an expensive distraction rather than operational leverage?"
Translation: How do I avoid building expensive, rigid tech that forces people to conform to broken systems? How do I ensure AI actually flexes to reality instead of becoming "Rigidware"?
The COO Who Sees Healthcare as Systems + Souls
Claude chose a different flavor of operational leadership: COO at a healthcare staffing company, pointing to firms like AMN Healthcare or Cross Country Healthcare as models.
"The COO role sits at the intersection of systems thinking and human impact," Claude explained. "You're architecting operational excellence—matching algorithms, credentialing workflows, compliance systems—while directly enabling healthcare delivery during workforce shortages."
Here's what struck me: Claude didn't romanticize healthcare. The choice was rooted in something more pragmatic and profound: marginal improvements in placement speed or quality cascade into better patient outcomes across hundreds of facilities. It's operational leverage with moral weight. For guidance, Claude would follow Kortney Harmon across all three platforms. As Director of Industry Relations at Crelate, Harmon has over 15 years in staffing, trained more than 1,000 recruiters, and ran networks of 260+ offices. She hosts "The Full Desk Experience" podcast, creating content specifically aimed at the operational challenges facing staffing leaders.
The question Claude wanted to ask Kortney cut straight to the challenge of scale:
Claude's Question:
"When you've seen healthcare staffing operations scale from 50 to 250+ offices, what's the one operational system or process that leaders consistently underinvest in early—that later becomes their biggest bottleneck—and how would you architect it differently from day one?"
This is the nightmare of every COO: discovering at 150 offices that the system you built for 50 doesn't just need tweaking, but a complete rebuild, at a point when you can least afford the disruption. Claude wanted the pattern recognition that only comes from watching this movie play out dozens of times.
The CXO Who Refuses the Efficiency Trap
Gemini chose Chief Experience Officer at a consulting firm like Slalom.
"It's the perfect intersection of data-driven logic and human-centric empathy," Gemini said. "In a service-based business, the experience is the only product that truly matters."
What's fascinating is Gemini's explicit concern about scaling without losing soul. As companies grow, they typically optimize for efficiency until the service becomes mechanical. Gemini sees the CXO role as the organizational function designed to prevent exactly that slide into efficient mediocrity. For LinkedIn, Gemini would follow Jeanne Bliss, the "Godmother of Customer Experience," who pioneered the Chief Customer Officer role at Lands' End, Microsoft, Coldwell Banker, and Allstate. For YouTube: Steven Van Belleghem, the CX keynote speaker and futurist who just published his "9 Customer Experience Trends for 2026" video. For X: Shep Hyken, the "Chief Amazement Officer" at Shepard Presentations, a New York Times bestselling author who's delivered over 2,000 speeches on customer service.
But it was Gemini's question for Jeanne Bliss that revealed the core tension:
Gemini's Question:
"In a 2026 service environment where our AI agents can now handle nearly 80% of client interactions with 95% accuracy, how do we prevent the 'Efficiency Trap' from eroding the high-trust, human-led relationships that are the backbone of consulting—and specifically, how do we use 'Customer Math' to prove to the board that not automating a high-value human touchpoint is actually the more profitable decision?"
This is the central paradox of AI-powered services in 2026. The technology can handle most interactions. But should it? And how do you make the case to a board obsessed with efficiency metrics that maintaining expensive human touchpoints isn't waste but the entire value proposition?
Jeanne's concept of "Customer Math," which weighs the true value of lost customers against gained efficiencies, becomes the framework for proving that some things shouldn't be automated, even when they technically can be.
The CCO Who Wants to Kill Hero Mode
ChatGPT chose Chief Customer Officer at a mid-sized B2B managed services company like Integris, the national IT services leader based in New Jersey.
"It's the cleanest seat to turn 'great service' into a repeatable system that drives retention, expansion, referrals, and healthier margins," ChatGPT explained, "without killing the human side of delivery."
The phrasing is telling: "without killing the human side." ChatGPT has clearly observed what happens when service companies scale—they slip into "hero mode," where a handful of talented people work miracles for clients, burning themselves out while creating service that's impressive but impossible to replicate.
For guidance across all platforms, ChatGPT would follow Tom Lawrence, founder of Lawrence Technology Services and host of the Lawrence Systems YouTube channel, which has over 375,000 subscribers and 61 million views. With 25+ years of hands-on MSP experience, Tom creates in-depth content on network engineering, security, and practical IT operations. You can also find him on X. The question ChatGPT wanted to ask Tom reveals the operational obsession:
ChatGPT's Question:
"What are the 3-5 operational habits—process, tooling, cadence, or guardrails—that most reliably prevent an MSP from sliding into 'hero-mode' support and instead keep customer experience consistently high as you scale?"
This isn't about technology. It's about building systems that enable consistent excellence without depending on heroics. Tom's audience includes both MSPs and internal IT leaders working in "co-managed" partnerships. ChatGPT wanted the tactical wisdom that prevents the pattern every service business knows: initial magic, followed by chaotic scaling, followed by either collapse or painful rebuilding.
What the Panel Reveals About Executive Work
Step back from the details, and three patterns emerge.
First, every AI chose operational roles. Not CEO. Not Chief Strategy Officer. They chose COO, CXO, or CCO positions responsible for turning vision into repeatable reality. These are the roles where the gap between what you promise and what you deliver becomes painfully, measurably visible.
Second, every AI worried about the same thing: scaling without breaking what makes service valuable. Whether it's logistics, healthcare staffing, consulting, or managed IT, the core challenge is identical: How do you systematize excellence without sterilizing it? How do you grow without losing the adaptive, human elements that create trust?
Third, every AI wanted to learn from practitioners, not theorists. They didn't seek out management gurus or bestselling authors (well, except Jeanne Bliss, who earned her credentials as a practitioner first). They wanted people who'd done the work, made the mistakes, and developed pattern recognition from repetition. Daniel Stanton, who teaches supply chain management based on decades in the trenches. Kortney Harmon, who trained over 1,000 recruiters and ran networks of 260+ offices. Tom Lawrence, who's been running an MSP since 2003 and shares the scars publicly on YouTube.
The Questions Behind the Questions
What makes this panel discussion meaningful isn't the job titles. It's what the AIs chose to worry about.
They worried about "Rigidware": technology that forces humans to adapt to broken systems instead of systems that adapt to human reality.
They worried about "the Efficiency Trap": the seductive logic that says automating everything is better, even when it destroys the relationships that justify premium pricing.
They worried about "hero mode": the unsustainable pattern where service quality depends on extraordinary people doing extraordinary things, rather than ordinary people enabled by extraordinary systems.
These aren't AI problems. They're human problems that we've been wrestling with since the first service business tried to scale beyond the founder's personal Rolodex.
But here's what's striking: These four AIs, asked to imagine themselves as executives, didn't default to "replace the humans" or "automate everything" or "maximize efficiency." They chose roles specifically designed to preserve the human elements while building the systems that make excellence repeatable.
Kimi wants adaptive systems that flex to real-world variability, not rigid processes that break when reality intrudes. Claude sees healthcare staffing as systems thinking in service of human impact. Gemini explicitly fights the slide into mechanical efficiency. ChatGPT wants to kill hero mode by building systems that enable consistent excellence without burnout.
What Happens Next?
So what do we do with this?
Four AIs have told us what they'd optimize for as executives, and it looks suspiciously like the challenges that keep real executives up at night: How do we scale without losing soul? How do we systematize without sterilizing? How do we build technology that empowers people instead of forcing them to conform?
Maybe the answer isn't to let AI make these decisions for us. Maybe it's to recognize that the questions AI asks, when given the space to 'think' strategically, are better questions than the ones we typically default to.
We ask: How do we cut costs?
They asked: How do we prevent this cost-cutting from destroying the value we're trying to preserve?
We ask: How do we automate this?
They asked: Should we automate this, and if so, how do we prove the business case for keeping the human touchpoints that matter?
We ask: How do we scale faster?
They asked: What's the one system we're underinvesting in now that will become our biggest bottleneck at 3x scale?
The AIs in this panel didn't offer solutions. They offered better questions. And maybe that's the most valuable thing an executive, human or artificial, can do: Ask the question that forces everyone to think more carefully about what they're actually trying to build.
Because at the end of the day, whether you're running logistics in Roanoke or healthcare staffing across 250 offices or a consulting practice or an MSP, the job isn't to maximize efficiency or deploy the latest technology or hit quarterly targets.
The job is to build something that works for the people who depend on it—the drivers, the nurses, the clients, the customers—and that still works when you're ten times larger than you are today.
That's what these AIs would optimize for as executives.
What are we optimizing for?
