Beyond the Hype Cycle: A Skeptic's Guide to AI
Applying Scientific Skepticism to Separate AI Innovation from Pseudoscience
It is hardly a controversial take to say that over the past few years we’ve been living in an era of unprecedented hype around AI. News about new models, techniques, and use cases comes out every week, nearly always proclaimed as a groundbreaking innovation. Even for those of us who are deep in the field, it has become both overwhelming and difficult to distinguish true progress from meaningless hype.
The truth is that the majority of these claims (at least the ones appearing in my news feeds and social media) are exaggerated, misleading, thinly disguised advertisements, or straight-up untrue. That is not to say that there has not been plenty of real innovation in the AI space in recent years, nor is it to say that these developments are not having substantial impacts in the world. Rather, it is to suggest that the hype itself has gotten so out of control that it is undermining AI research, supplanting valuable pragmatic use cases with false promises, and overshadowing the hard work of the people who build and support real solutions with AI.
For readers navigating this landscape, whether you're making business decisions, selecting technical approaches for AI implementations, or simply trying to understand what AI can actually do, adopting a skeptical mindset provides practical value. It helps you filter signal from noise, avoid costly missteps based on exaggerated claims, and identify genuinely valuable applications worth your time and resources.
AI Pseudoscience
To me, the current state of the AI market brings to mind the sort of magical thinking that has long been used to hawk miracle cures, Scientology, and similar forms of pseudoscience. Peddlers of pseudoscience bolster their dubious claims with appeals to real scientific disciplines and technologies, using misrepresentations of science and deceptive reasoning. Similarly, in the AI space we are seeing claims of tech miracles, proclamations of new tools that will solve all your problems and run your business for you (often with a hefty price tag), and allusions to an imminent AI utopia (or dystopia), all with references to white papers and supposed “AI experts”.
In reality, the science of intelligence (including the artificial variety) is complex, interdisciplinary, and still in its infancy. Indeed, experts are still wrestling with how to define and measure “intelligence” in the first place. The field of AI has existed for over 80 years and spans a vast range of approaches beyond the current vogue of generative language and image models. Yet the hype-driven fixation on a narrow slice of AI models distracts from and disincentivizes broader research that could benefit the field.
With all this in mind, some humility around the claims we make about AI seems warranted. I would encourage readers to practice this humility in their own engagements and discourse on the topic: take a moment to consider and evaluate before reposting the too-good-to-be-true proclamations popping up on your news feed. It’s certainly easier said than done, especially for those of us who are excited about AI and enjoy engaging in these discussions.
To help us with this endeavor, we can once again look to the parallels with other forms of pseudoscience. Over the course of decades, skeptics have developed something of a toolkit of critical thinking approaches to evaluate all manner of fantastical claims, separating the science from the pseudoscience. We can use these same techniques to improve the quality of our discourse around AI.
The Principles of Skepticism
Let’s start by diving into some key principles of scientific skepticism and how we might apply them to AI claims.
Evidence Matters
Key points:
Claims of fact should be accompanied by qualified sources, and claims without evidence should not be accepted as true.
The burden of evidence is always on the person making the claim, not on the person questioning it.
Use the Sagan Standard: “Extraordinary claims require extraordinary evidence”. This means that we should hold claims far outside the realm of current understanding to a much higher standard of evidence than claims that are more mundane. Conversely, the fact that a claim is extraordinary does not in and of itself mean that it is untrue.
Scientific methodologies, evidence, and consensus are reliable, though not infallible, approaches to building knowledge. As such, peer-reviewed papers based on sound scientific methodologies are often the strongest sources of evidence available.
Other sources of evidence may include perspectives from qualified experts in the field in question; individuals, organizations, and communities with direct, reliable experience; and surveys or publications in reputable news sources.
Excitement over new AI capabilities often spreads on social media platforms like X and LinkedIn, and more often than not, these posts come without links to sources. Be especially cautious around social media posts about AI that make extraordinary claims and don’t share any sources to back up their assertions. Consider assertions like this claim to have access to ASI (Artificial Superintelligence) and AGI (Artificial General Intelligence) from a “mystery company”.
The terms AGI and ASI refer to hypothesized artificial systems that achieve broad human-level cognitive capabilities and that far exceed human-level capabilities, respectively. This certainly qualifies as an extraordinary claim, since neither AGI nor ASI is currently known to exist. By comparison, if I were to come across a claim of access to an AI system that can write a decent technical white paper, I wouldn’t need as much evidence to be convinced, since I know that AIs with such capabilities exist. For me to be convinced of this claim, I would expect Ms. Reddy to provide substantial, scientifically rigorous evidence that these technologies exist and that she has access to them.
Logical Reasoning
Key points:
Be on the lookout for logical fallacies when evaluating arguments and claims. Logical fallacies are patterns of flawed reasoning, typically used as methods of persuasion. Philosophers, skeptics, and logicians have catalogued many logical fallacies, since they come up frequently in all sorts of discussions (we’re all susceptible to them at times). It takes some practice, but once you’re familiar with common logical fallacies, it becomes much easier to spot them in an argument or claim.
Much like other skeptical tools, be thoughtful in how you apply this. The use of a logical fallacy in an argument doesn’t mean that the claim is false per se. It just means that it is not supported by the argument. It’s entirely possible for someone to make a bad argument for a claim that is true.
Here are a few logical fallacies that I come across frequently in the AI space:
Appeal to fear: This is an argument that tries to scare the audience into agreement rather than persuading them with reasoned evidence. You may have encountered common claims like: “If you don’t start using AI at work, you’ll lose your job to someone who does.” Even if it is true that it’s worthwhile to develop skills around AI tools, the person making the claim does not justify their reasoning beyond the appeal to fear.
Magical thinking: This involves treating two unrelated events as connected in the absence of any plausible causal relationship between them. This one is a bit more subtle to spot in the wild. As an example, take former Google CEO Eric Schmidt’s claim that AI development is more likely to achieve climate goals than energy conservation. Schmidt acknowledges the climate crisis as a problem and vaguely states “I'd rather bet on AI solving the problem than constraining it and having the problem." However, he does not specify any way that AI might be used to achieve climate goals.
Hasty Generalization Fallacy: The hasty generalization fallacy involves making sweeping assumptions based on limited data. In this example, we see the assumption that because one company (Shopify) is requiring AI usage for all employees in their work, all companies will soon be doing the same.
A Skeptic’s Mindset
Key points:
Skepticism is not the same as cynicism or denialism. To apply skepticism effectively, we must try to stay open-minded and curious. Avoid rejecting a claim outright without first examining the evidence, even if it seems absurd.
Engage your curiosity and ask probing questions. Don’t be afraid to question widely accepted beliefs or longstanding theories and make a genuine effort to understand the argument that is being made.
Consider the possible motivations and biases that may be at play when evaluating a claim. If the person making the claim or the source they provide has a financial interest or other motivations, their perspective may be biased, intentionally or not.
Be aware of your own biases and influences. We all approach the world with a set of beliefs and the perspectives we form will be influenced by our beliefs. That’s human and unavoidable, but it helps to be conscious of your biases. One useful trick is to list out any strong biases you have on a particular topic and think through what it would take to make you change your mind.
We are all subject to cognitive biases, but understanding them can help us approach things a little more objectively. In particular, watch out for confirmation bias, the tendency to pay attention to evidence that supports our pre-existing beliefs and ignore evidence that goes against our beliefs.
People making big claims about AI on social media and in the news tend to be involved with the AI industry, whether they are employed by a company selling an AI product, work as consultants selling AI-related services, or are simply excited customers using a cool new AI tool. That’s completely reasonable and doesn’t invalidate their claims. Nonetheless, it is important to take into account the motivations and biases that may be influencing them. Everyone is subject to biases, so it’s important to stay humble and check ourselves and one another.
Putting It into Practice
Now that we’ve covered the basics, the next time you encounter an AI claim that catches your attention, try asking these three questions:
What's the evidence?
Look for citations to peer-reviewed research, demonstrations with clear methodology, or verification from independent experts who don't have a financial stake in the claim.
What's the reasoning?
Consider whether any unfounded assumptions are being made by the author and look out for logical fallacies in their claim. Are important limitations, constraints, or assumptions being omitted? Is this a controlled lab demonstration being presented as a market-ready solution?
Who benefits?
Ask yourself who stands to gain from widespread acceptance of this claim. Is someone selling a product, attracting investors, or building their personal brand? This doesn't invalidate the claim, but it does warrant additional scrutiny.
These simple questions can help you quickly separate substantive innovations from hyperbole without requiring deep technical expertise.
Final Thoughts
My goal here is not to pour cold water on the excitement about AI, or to deny the substantial impacts that AI may have on the world in the coming years. Instead, I hope to empower readers who, like me, have become overwhelmed by the deluge of AI-related headlines. There is real work happening in this space, which is all too often lost in the sea of less substantive hype. Those of us with knowledge or interest in this space have a responsibility to promote an accurate understanding of AI to the broader public. False or exaggerated AI claims aren't just misleading. They pose financial risks to investors and businesses, social risks through misapplied solutions, and ethical risks through biased systems, ultimately imperiling the credibility of the AI field as a whole.
What’s Next?
In next week’s post, I’ll discuss what we’re even talking about when we say “Artificial Intelligence” and how interdisciplinary perspectives can help us understand intelligence. Here’s what you have to look forward to:
Why we call it “Artificial Intelligence” in the first place (old-timey scientist drama)
The AI approaches you haven’t heard of
How “Intelligence” is understood across scientific disciplines
Today’s spicy AI terminology (modern-day science drama)
If the terms “Artificial Superintelligence”, “AI Agents”, and “General Intelligence” are making your head spin, this one’s for you!
Further Reading
The Skeptic’s Field Guide - The Principles of Skepticism: This website hasn’t been updated in nearly a decade, but this article is still an excellent resource for learning the basics of skepticism.
AI Snake Oil: A book and newsletter offering practical content on understanding and debunking AI hype.
404 Media: A technology-focused digital media company, which regularly covers news on AI and its impact on society from a balanced, grounded perspective.
AI: A Guide for Thinking Humans: A newsletter and book authored by AI researcher Melanie Mitchell, offering scientific context on AI developments.
Philosophy Terms - Logical Fallacies: An excellent source offering explanations of logical fallacies with examples.