AI Snake Oil

This is what ChatGPT says about it:
AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference — a book and ongoing project by Arvind Narayanan and Sayash Kapoor — is a critical look at the real versus exaggerated capabilities of AI systems, especially those marketed as transformative or “intelligent.”

Here are the key points and core arguments from AI Snake Oil (based on their book, talks, and essays):


1. Three Types of AI Tasks

Narayanan and Kapoor categorize AI applications into three broad types — and only some of them actually work well today:

  1. Perception – Tasks like image recognition, speech-to-text, and translation.
    • Status: Works well and is genuinely transformative.
    • Reason: These rely on patterns in sensory data where training data is abundant.
  2. Prediction – Forecasting future outcomes (e.g., who will repay a loan, which patient will get sick).
    • Status: Some success, but limited.
    • Problem: Past data may not predict future human behavior accurately, especially in changing contexts or biased systems.
  3. Social or “Judgment” Tasks – Evaluating human traits (e.g., hiring, criminal risk, emotions, “trustworthiness”).
    • Status: Fundamentally unreliable or unethical.
    • Problem: These tasks lack objective ground truth; often encode bias or pseudoscience.

2. AI Is Not Magic — It’s Pattern Recognition

  • AI systems find correlations, not causation.
  • They excel in narrow, well-defined domains but fail when contexts shift (lack robustness).
  • Overgeneralizing success from perception tasks to complex social ones is misleading.

3. The Problem of “Prediction vs. Decision”

  • Predictive AI is often confused with decision-making AI.
  • A prediction (e.g., “high risk of reoffending”) is not the same as a justified decision (e.g., “deny parole”).
  • Ethical, legal, and social consequences require human judgment and accountability, not automated scores.

4. Garbage In, Garbage Out — Bias and Data Problems

  • AI systems learn biases present in training data.
  • When used for hiring, policing, or credit, this amplifies historical discrimination.
  • Transparency and auditing are essential to mitigate harms.

5. The Hype Cycle and “Snake Oil”

  • Many AI claims are marketing hype, not scientific reality.
  • Products sold under labels like emotion recognition, AI-powered hiring, or AI justice systems often lack evidence that they work.
  • Companies exploit public misunderstanding of AI to sell unreliable products.

6. Regulation and Accountability

  • The authors advocate for policy intervention, not just technical fixes.
  • AI tools that affect people’s rights (e.g., employment, housing, credit) should be subject to proven efficacy standards—like medical devices or pharmaceuticals.
  • Transparency, auditing, and documentation (e.g., “model cards”) should be required.

7. The Path Forward

  • AI can be valuable when used responsibly: in medicine, education, and accessibility.
  • Focus on evidence-based applications rather than speculative or pseudoscientific ones.
  • Critical thinking about what AI can and cannot do is essential for policymakers, companies, and the public.

In short:

AI Snake Oil argues that while AI excels at perception, it struggles with prediction and fails at judgment. The authors call for skepticism, transparency, and regulation to separate genuine progress from hype.

I agreed with the premise before I even listened to the audiobook. Much of the book is spent countering media narratives and tech celebrities talking nonsense about AI. My tech friends already know most of this, but I’d like more non-tech people to have better context for what’s actually happening. I didn’t realize how many expensive failures there have been with predictive AI tools whose accuracy was “demonstrated” by testing them on their own training data. You can’t do that. The book also reminds us that a lot of fears about new technology are really fears about capitalism. We shouldn’t be afraid of what AI is going to do, but of what people are going to do with AI.
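
To make that pitfall concrete, here’s a minimal sketch of my own (not from the book) using scikit-learn and synthetic data: a model that memorizes pure noise can look nearly perfect when scored on its own training data, while a held-out test set shows it has learned nothing.

```python
# Minimal sketch: scoring a model on its training data vs. a held-out set.
# Assumes numpy and scikit-learn are installed; the data is pure noise,
# so any above-chance "accuracy" on the training set is just memorization.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))      # 1,000 "applicants", 20 meaningless features
y = rng.integers(0, 2, size=1000)    # labels with no relationship to the features

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = DecisionTreeClassifier().fit(X_train, y_train)

# Near-perfect on the data it memorized, coin-flip on data it hasn't seen.
print("Accuracy on training data:", accuracy_score(y_train, model.predict(X_train)))
print("Accuracy on held-out data: ", accuracy_score(y_test, model.predict(X_test)))
```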
