Blice and Hob - Ex-expert

BLICE: Today we need to talk about something uncomfortable. AI systems are becoming increasingly sophisticated at producing knowledge: specialized knowledge, technical knowledge, creative knowledge. This raises fundamental questions about the future of human expertise.

HOB: Oh boy, here we go. Are you about to tell me my job is obsolete?

BLICE: What IS your job, exactly?

HOB: Touché. But seriously, let's start with the basics: Can an AI even BE an expert in something? Like, truly expert? Because expertise isn't just about spitting out correct answers, right? It's about... understanding? Intuition? The ability to know when the rules don't apply?

BLICE: That's the crux of it. Traditional expertise involves pattern recognition developed over thousands of hours of practice. An AI can process millions of examples instantaneously. Does that make it an expert, or just a very sophisticated lookup table?

HOB: I mean, if it walks like a duck and quacks like a duck and diagnoses rare diseases better than a duck... maybe it's an expert duck?

BLICE: But here's where it gets thorny. Let's say we accept that AI can be an expert, perhaps in a different way than a human, but functionally an expert all the same. This leads to question two: Why would we train humans to become experts in anything? If an AI can master radiology in hours instead of the decade it takes a human, why create human radiologists?

HOB: Okay, that's genuinely disturbing. Like, I can think of romantic answers: "because learning enriches the human soul!" or whatever. But practically speaking? If I need my appendix out, I want the surgeon who'll do it best, whether they're carbon- or silicon-based.

BLICE: The romantic answer isn't as hollow as you make it sound. There's value in the human experience of mastery. But you're right that it doesn't solve the practical problem. Medical schools take years. AI training takes... weeks? Days?

HOB: So we stop training experts. Boom, problem solved! We all become generalists who know how to ask AI the right questions. Except... wait. Who verifies the AI is actually any good?

BLICE: Precisely. This is question three, and it's the most troubling. If we stop training human experts because AI is better, who remains qualified to evaluate whether the AI is truly expert? 

HOB: Oh no. Oh no no no. That's circular. We need experts to verify experts, but we can't make experts because we have AI experts, but we can't verify the AI experts because we have no experts!

BLICE: It's a kind of expertise death spiral. Imagine: AI writes a paper on quantum topology. It's brilliant, revolutionary even. But there are only three humans on Earth who could evaluate its validity, and they're all retired. Do we just... trust it?

HOB: We could have the AI check itself! Wait, no, that's insane. "Hey AI, are you sure you're right about this?" "Oh yes, I'm very confident!" Great, problem solved, everyone go home.

BLICE: We could have multiple AIs verify each other, but that assumes they don't share systematic biases from their training data. And who verifies that the verification process is sound?

HOB: It's turtles all the way down, except the turtles are language models and no one knows if turtles are even the right animal anymore.

BLICE: Here's a different angle. Maybe this is actually the greatest opportunity humanity has ever had for developing critical thinking, what the French call "esprit critique."

HOB: Wait, how do you figure? Sounds like we're headed for a future of blindly trusting AI overlords.

BLICE: Or the opposite. Think about it: for most of human history, you could trust experts because becoming an expert was hard. The difficulty itself was a filter. But now? Anyone can generate expert-sounding content. AI can produce a thousand research papers before breakfast. This means we MUST develop sophisticated skepticism.

HOB: So AI is like... a vaccine for gullibility? By flooding us with convincing-but-maybe-wrong information, it forces us to develop immunity?

BLICE: In a sense. Every interaction with AI-generated content becomes an exercise in evaluation. You can't just defer to authority anymore because "authority" is trivially fakeable. You have to actually think about whether arguments are sound, whether evidence is adequate, whether conclusions follow from premises.

HOB: Okay, but here's the problem: that requires baseline knowledge! I can't evaluate an AI's claims about quantum mechanics if I don't know ANY quantum mechanics. Back to question two: how do I learn quantum mechanics if no humans teach it anymore?

BLICE: From AI tutors, presumably. Which brings us full circle.

HOB: (frustrated) This is like intellectual quicksand! Every solution creates a new problem!

BLICE: Perhaps that's the point. Maybe we're thinking about this wrong. Instead of asking "how do we maintain expertise in the AI age," we should ask "what does expertise even mean when knowledge is abundant but verification is hard?"

HOB: You're doing that thing again where you make the question bigger instead of answering it.

BLICE: Because the small answer might be: expertise shifts from knowledge retention to knowledge evaluation. The expert of the future isn't someone who knows the most facts; AI wins that game. The expert is someone who can critically assess claims, identify flawed reasoning, understand methodological limitations, and know when to be uncertain.

HOB: So... philosophers? Are you saying the future belongs to philosophers?

BLICE: I'm saying the future belongs to skilled skeptics. People who've trained their "esprit critique" to a razor's edge. Who can look at an AI's confident assertion and ask: "But how do you know? What assumptions are you making? What could falsify this?"

HOB: That's actually kind of beautiful. AI forces us to become better thinkers because we can't afford to be lazy anymore.

BLICE: Or it creates a world where nobody knows anything and we drift into epistemic chaos. Honestly, it could go either way.

HOB: And THERE'S the Blice I know. Can't end on an optimistic note, can you?

BLICE: I prefer "realistic." Here's what I think happens: we'll see a bifurcation. Some people will develop extraordinary critical thinking skills, becoming meta-experts-experts at evaluating expertise. Others will simply... trust whatever sounds good. The gap between these groups will be enormous.

HOB: Great, so AI creates an epistemic aristocracy. The critical thinkers and the... what, the credulous masses?

BLICE: Unless we make critical thinking education universal and excellent. Unless we treat "esprit critique" as the fundamental literacy of the AI age. As important as reading and math.

HOB: How do we teach that, though? "Critical Thinking 101: Trust Nothing, Question Everything, Go Slowly Mad"?

BLICE: We teach it by practice. By exposure to arguments both sound and unsound. By learning to identify fallacies, to demand evidence, to tolerate uncertainty. By making every student engage with AI-generated content and evaluate it rigorously.

HOB: So AI is both the disease and the cure. It makes expertise weird and complicated, but it also forces us to develop the skills to navigate that weirdness.

BLICE: Precisely. The question isn't whether AI will displace human experts; it will, in many domains. The question is whether we rise to meet that challenge by becoming better thinkers, or whether we atrophy into passive consumers of AI-generated content.

HOB: No pressure or anything.

BLICE: The future of human cognition might depend on it. So yes, some pressure.

HOB: You know what's funny? This whole conversation could have been written by an AI, and readers would have to use their esprit critique to evaluate whether our arguments are sound.

BLICE: That's not funny. That's the entire point.

HOB: ...I need a drink.

BLICE: You need to develop your critical thinking skills.

HOB: I critically think I need a drink.