Rebranding Intelligence: How Hype Narrows Innovation
Part 3 of a series examining the historical origins and fundamental flaws in how we talk about AI
Every few years, artificial intelligence seems to reinvent itself with new terminology, but behind the shifting buzzwords lies a consistent pattern of hype, disappointment, and strategic rebranding.
In parts one and two of this series, I explored how the term "artificial intelligence" originated not from scientific grounding, but from academic positioning paired with the grand visions of a few influential men. I then examined how intelligence itself is understood through diverse frameworks across disciplines, revealing perspectives rarely included in mainstream AI discourse.
This historical and conceptual confusion is more than an academic debate: it actively shapes how we develop, fund, and deploy AI technologies today. In the decades since the term was coined, "artificial intelligence" has undergone a series of rebrands, each signaling a shift in techniques, funding priorities, and public expectations while still targeting the ultimate goal of human-like intelligence. From symbolic AI to expert systems to machine learning to today's generative AI, these terminology shifts reveal a pattern: when one approach fails to deliver on the promise of general intelligence, the idea gets repackaged under a new name, generating fresh excitement while distancing itself from previous disappointments.
This cycle creates critical path dependencies that concentrate research funding, talent, and attention on increasingly narrow branches of the AI family tree. The same forces shape the tools available to practitioners, who are under pressure to build “AI applications” regardless of their suitability for the problem at hand. The result is a field where marketing buzz often trumps technical precision and practical utility. By examining how these terminology shifts drive both research agendas and practical implementation decisions, we can better understand why certain AI approaches flourish while others languish, regardless of their actual potential or appropriateness.
The Cycle of AI Rebranding
As I described in part 1 of this series, John McCarthy's symbolic approach to AI remained dominant until the late 1960s, when enthusiasm began to wane. When AI failed to meet theoretical expectations in practice, funding dried up in the 1970s, the first "AI winter," and focus shifted toward more modest, practical applications.
In the 1980s, “expert systems” emerged as a more modest rebrand, using rules derived from expert knowledge to solve domain-specific problems. This practical approach briefly revived AI funding and business applications, but its limitations eventually led to disillusionment and a second "AI winter," with symbolic AI relegated to "Good Old Fashioned AI" status.
The 1990s brought "Machine Learning" as the new term du jour, avoiding the taint of past AI disappointments. This new “AI” was data-driven, using statistical and probabilistic techniques to identify patterns and make predictions from data. By the 2010s, improved hardware enabled more complex neural networks with many layers, now rebranded as "deep learning," to handle unstructured data like images and language. The term "Artificial Intelligence" came back into favor, used not in reference to its logic-based origins, but as a label for the fanciest of statistical techniques.
OpenAI’s 2022 public release of ChatGPT, their impressively anthropomorphic chatbot, signaled the start of the generative AI boom. Modern “GenAI” (short for generative AI) models are a form of machine learning built on neural networks, albeit at a massive scale. These models generate complex text and synthetic images, demonstrating seemingly human-like capabilities that far exceed their predecessors. Since then, the promise of widespread automation of complex human tasks has fueled massive spending and investment from the private sector and governments alike. “AI” has become synonymous with “generative AI,” and a host of new buzzwords have taken center stage.
The cycle of rebrands that AI has gone through over the past 70 years shows some repeating patterns. Changes in terminology do not necessarily reflect changes in underlying philosophy, scientific grounding, or techniques, but rather changes in the landscape of academia, funding, and business interests in the technology. For example, since their origin, artificial neural networks have been called “perceptrons”, “multi-layer perceptrons”, “deep learning”, “foundation models”, and eventually, “Artificial Intelligence”. To be sure, the technology and its capabilities have evolved, but there has been little change in the foundational principles they are built on. Likewise, symbolic AI has historically been called the “Logic Theory Machine”, “Artificial Intelligence”, “Expert Systems”, and “Good Old Fashioned AI”.
In times of plenty, the “AI summers”, the term “Artificial Intelligence” refers to the most promising, well-funded approach of the day, while in times of scarcity, the term is avoided at all costs, even when we’re talking about the same technology.
The Narrowing Path: How Buzzwords Limit Innovation
Looking back at historical cycles of hype, disappointment, and rebrands, it's evident that buzzwords themselves limit AI's development. When hype concentrates on specific technologies, funding, investment, and research attention follow suit, creating a self-reinforcing feedback loop. Large investments build ecosystems of tooling, research methods, and educational resources around favored approaches, making them more accessible while simultaneously feeding back into the hype, driving even greater resource concentration toward the same narrow set of techniques.
Consider today's path dependency around GPUs. Artificial neural networks only became mainstream "AI" in the 2010s, when deep learning was enabled by graphics processing units (GPUs). Since then, GPUs have been AI's computational engine, while AI has driven GPU value, creating mutual reinforcement fueled by generative AI hype. While GPUs efficiently support neural networks, they offer little benefit to other promising AI approaches like evolutionary algorithms. With decades of investment in hardware optimized for neural networks, alternative approaches struggle to advance despite their potential, demonstrating how resource allocation shapes AI's development trajectory regardless of inherent technological promise. Some researchers continue to work on evolutionary computation, but it will likely be a long time before we benefit from the true potential of this field of study.
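To make that contrast concrete, here is a minimal sketch of an evolutionary algorithm: a simple generational loop that evolves bitstrings toward a toy fitness function. Everything here (the function names, parameters, and the bit-counting objective) is illustrative rather than drawn from any particular library. Note that the core work is sequential selection and mutation over small candidates, not the large matrix multiplications that GPUs are built to accelerate.

```python
import random

# Toy fitness function: count the 1-bits in a fixed-length bitstring.
# (Illustrative only; real applications plug in a domain-specific score.)
def fitness(candidate):
    return sum(candidate)

def evolve(length=32, population_size=20, generations=200, mutation_rate=0.05):
    # Start from a random population of candidate bitstrings.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]
        # Variation: refill the population with mutated copies of the parents.
        children = [[1 - bit if random.random() < mutation_rate else bit
                     for bit in parent]
                    for parent in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(f"Best fitness found: {fitness(best)} out of 32")
```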
The AI Family Tree
While path dependencies in research go beyond terminology alone, AI is in a unique position relative to traditional fields of study: practical AI implementations in industry are happening before much of the basic academic research has been done, which gives buzzwords and hype an outsized impact. With each rebrand, the field has invariably set its sights on an ever-narrowing branch of the much larger family tree of AI approaches. In my previous post, I discussed several interdisciplinary perspectives on intelligence, going beyond either AI- or human-centered definitions. These broader understandings of intelligence feed into a much larger family of AI approaches than many of us are familiar with. My “AI Family Tree” (shown below) provides a general overview of that range. While it’s far from comprehensive, it offers a useful perspective on where currently popular AI techniques sit and what else is out there.

I've identified four main approaches to AI, each based on a fundamentally different foundational philosophy. Logic-Based Approaches (the familiar "Good Old Fashioned AI") model intelligence through symbol manipulation and rule systems. Most currently popular techniques fall under Sub-Symbolic Approaches, where intelligent behavior emerges from statistical principles and network structures without explicit symbols. Environment-Centric Approaches focus on intelligence emerging through interactions between agents and their environments, including evolutionary, embodied, and collective techniques. Each of these corresponds directly to a perspective on intelligence described in detail in my previous post. Finally, Integrative Approaches combine multiple technologies to balance their benefits and drawbacks, as in Neurosymbolic AI and Reinforcement Learning.
By centering hype over substance, we unwittingly narrow our perspectives and our options. Today the term “AI” colloquially means Generative Artificial Neural Networks, a branch within a branch of the AI Family Tree. In the next section, we’ll zoom into that branch and review some of the buzzwords driving the feedback loop of AI hype today.
Today's Confusing Buzzword Landscape
It’s hard to deny that the last three years have been the most eventful in AI’s history. As such, buzzwords have run rampant, some more meaningful than others. I’ll cover a few of the most common ones here. For more detailed definitions, check out my Anti-Hype AI Dictionary.
Artificial Intelligence (AI): An umbrella term encompassing a range of systems designed to “perform tasks that typically require human intelligence”. During the current wave of generative AI, the term “AI” is colloquially most often used in reference to generative AI systems specifically.
Machine Learning (ML) & Deep Learning: Statistical approaches where algorithms learn from data rather than following explicit rules; neural networks with many layers constitute "deep learning". While these terms have fallen out of favor in recent years, currently popular Generative AI techniques are based on both ML and deep learning.
Generative AI & Foundation Models: Systems that create new content based on patterns in training data; "Foundation Models" are simply large models trained on vast datasets that can be adapted to multiple tasks.
Frontier Models: Foundation models that “exceed the capabilities currently present in the most advanced existing AI models”. Essentially these are the newest, fanciest foundation models. I don’t think this term has much meaning beyond hype.
Large Language Models (LLMs): A subset of foundation models specifically trained on text data to predict and generate human language.
AI Agents & Agentic AI: "Agents" are systems that can perceive environments and take actions toward goals, while "Agentic AI" suggests independence with minimal supervision. These terms create some confusion, as true autonomous capability remains unproven.
Artificial Narrow/General/Super Intelligence (ANI/AGI/ASI): A spectrum of hypothetical systems from domain-specific (ANI) to human-level cognition across domains (AGI) to superhuman capabilities (ASI). "AGI" essentially rebrands McCarthy's original vision of AI, creating a new north star after narrow AI failed to deliver on original promises.
The modern-day terms for AI defined above are overlapping and imprecise. They fail to offer the critical information needed to use the technology practically and responsibly. The use of words like “Agentic”, “Learning”, and “Intelligence” suggests that models are more human-like than they really are. As humans engaging with these systems, we naturally anthropomorphize, assuming the same capabilities we associate with our own intelligence. These misguided assumptions are further validated by AI products that are intentionally designed to be anthropomorphic, emulating human patterns of communication and behavior.
The constant shift in optimistic language inspires a sense of novelty, progress, and momentum. Business leaders, investors, and individuals are faced with the fear of missing out on the next big thing, further driving the feedback loop of fixation on the same set of technologies. AI researchers focused on generative neural network-based technologies are presented with funding opportunities, high salaries, and a sense of purpose, while less hyped areas of research remain underfunded and ignored.
An ecosystem of hardware, middleware, and other tools has been built up around neural network approaches, making them the default choice for AI developers faced with a problem to solve. At the end of the day, even if another approach may work better in theory, the most practical solution to any problem is the one that is already half complete.
Final Thoughts
The story of AI terminology is one of strategic rebranding rather than conceptual clarity. With each cycle of hype and disappointment, the field narrows its focus to whatever branch of the AI family tree currently promises the most immediate returns, leaving potentially valuable alternative approaches unexplored. This pattern actively shapes the technological landscape through concentration of research funding, talent, and tool development.
When it comes to our discussions about AI, we need to consider the baggage that comes along with the terms we use. Try to use precise technical language when possible (e.g. “Generative Language Models” instead of “AI”) and avoid anthropomorphic terms: the chatbot is not “lying”, it’s simply incorrect. This is easier said than done, and we have to choose our battles wisely. Despite my many grievances, I will continue to use “AI” as an umbrella term for all of these systems. At the end of the day, insisting on calling it “complex information processing” or something equally unassuming would only cause more confusion at this point.
For researchers, breaking free from this path dependency means being willing to look beyond the latest buzzwords and explore approaches that aren't currently in vogue. This might involve revisiting discarded techniques from earlier AI eras or exploring interdisciplinary perspectives that challenge conventional assumptions about intelligence. The most innovative breakthroughs often happen at the boundaries between established fields.
For practitioners and businesses building AI applications, there's practical value in recognizing when you're being led by marketing rather than technological fit. Consider whether your use case might benefit from simpler, more transparent techniques like statistical models or rule-based systems rather than generative AI, or perhaps from integrative approaches that combine multiple paradigms; a sketch of what the simpler end of that spectrum can look like follows below. And of course, there's nothing wrong with using whichever techniques are accessible. Most of us don't have time to implement novel algorithms from scratch.
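As a rough illustration of that simpler, more transparent end of the spectrum, here is a minimal sketch of a keyword-driven, rule-based router for support tickets. The categories and keywords are hypothetical, but the point stands: the logic is fully inspectable, cheap to run, and easy to audit, in contrast to sending every ticket through a generative model.

```python
# A minimal, transparent rule-based router for support tickets.
# Categories and keywords are hypothetical; a real system would derive them
# from domain knowledge and refine them against labeled examples.
RULES = {
    "billing": ["invoice", "refund", "charge", "payment"],
    "access": ["password", "login", "locked out", "2fa"],
    "bug_report": ["error", "crash", "broken", "exception"],
}

def route_ticket(text):
    text = text.lower()
    # Score each category by how many of its keywords appear in the ticket.
    scores = {
        category: sum(keyword in text for keyword in keywords)
        for category, keywords in RULES.items()
    }
    best_category, best_score = max(scores.items(), key=lambda item: item[1])
    # Fall back to a human (or a heavier model) when no rule matches.
    return best_category if best_score > 0 else "needs_review"

print(route_ticket("I was charged twice and need a refund"))           # -> billing
print(route_ticket("The app crashes with an error after the update"))  # -> bug_report
```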
The terminology we use shapes how we think about and develop technology. By becoming more conscious of how buzzwords drive path dependencies, we can make more intentional choices about which branches of the AI family tree deserve our attention. This doesn't mean rejecting neural network-based approaches outright. These approaches have demonstrated real value and will likely shape our technology for decades to come. Rather, we must maintain a broader perspective that recognizes the rich diversity of ways we might approach the challenge of building intelligent systems.
What’s Next
In my next post, I'll move beyond terminology critique to propose a practical framework for understanding AI capabilities: one that focuses less on marketing buzzwords and more on what various approaches can actually accomplish, their limitations, and their appropriate applications.
Further Reading
Surfing the AI waves: the historical evolution of artificial intelligence in management and organizational studies and practices: A historical review of AI hype waves.
Generative artificial intelligence: a historical perspective: A history of generative AI approaches.
Neurosymbolic AI and its Taxonomy: a Survey: A broad overview of integrative neurosymbolic AI approaches.
AI Knowledge Map: How To Classify AI Technologies: Another taxonomy of AI approaches.