We keep hearing this: “AGI is coming.” But what exactly is AGI? Let’s explore this critical question.

Three Different Lenses

Let’s look at AGI through three different lenses.

The Economics Lens

OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Their focus is entirely economic—it’s about jobs and economic productivity.

Framed this way, the definition is straightforward: the moment AI surpasses humans at most work that people are paid to perform, we’ve crossed the AGI threshold.

This represents the economic perspective on artificial general intelligence.

The Cognition Lens

Google DeepMind takes a different approach—they don’t establish a single AGI threshold. Instead, they conceptualize AGI across three dimensions: performance, generality, and autonomy. This creates a levels-based framework rather than a binary definition.

Let me focus on autonomy specifically. According to Google DeepMind, autonomy operates on six levels, from level zero to level five. Currently, no AI system has achieved level five—which represents a fully autonomous agent capable of independent operation.

The key insight here is that there’s no single AGI threshold. Instead, AI systems can progress and regress across these three dimensions, creating a more nuanced understanding of artificial intelligence capabilities.
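
To make the contrast with a binary definition concrete, here’s a minimal Python sketch. It’s my own illustration, not DeepMind’s official rubric: it treats a system as a point in a three-dimensional capability space, with illustrative numeric scales and example values.

```python
from dataclasses import dataclass

# Toy model of the multidimensional framing: a system is a point in
# (performance, generality, autonomy) space rather than "AGI: yes or no".
# The scales and the example values below are illustrative assumptions.

@dataclass
class CapabilityProfile:
    performance: int   # 0 (no AI) .. 5 (superhuman)
    generality: str    # "narrow" or "general"
    autonomy: int      # 0 (no autonomy) .. 5 (fully autonomous agent)

def describe(profile: CapabilityProfile) -> str:
    # No single AGI threshold: report the position on each axis
    # instead of returning a binary verdict.
    return (f"performance {profile.performance}/5, "
            f"{profile.generality} scope, "
            f"autonomy {profile.autonomy}/5")

# A hypothetical present-day assistant: broadly capable,
# but far from level-five autonomy.
today = CapabilityProfile(performance=2, generality="general", autonomy=2)
print(describe(today))
```

A system can advance on one axis while standing still, or slipping, on another. That’s exactly the progress-and-regress picture a single threshold can’t express.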

The Skeptic Lens

Yann LeCun at Meta argues that there is no single, definable general intelligence. He advocates for the term “human-level AI” instead of AGI, emphasizing that intelligence isn’t a monolithic concept. This is what I call the skeptic lens: questioning whether AGI, as a unified concept, even exists.

My Personal Take

I favor the levels approach advocated by Google DeepMind because it avoids the binary trap of a single threshold. I don’t believe we’ll wake up one day and declare, “AGI has arrived.” The transition will be gradual and nuanced.

Moreover, our understanding of what constitutes AGI will likely evolve over time. The levels framework accommodates this reality.

Progress can occur across multiple dimensions simultaneously. AI systems can improve or decline in performance, specialize in specific domains or become more generalized, and operate with varying degrees of autonomy, from requiring constant human oversight to functioning independently. This multidimensional view captures the complexity of artificial intelligence development far better than a simple binary definition.

Why Definitions Matter

These aren’t merely academic debates. AGI definitions have real-world consequences that affect laws, contracts, and safety mechanisms.

The EU AI Act

The EU AI Act is now in effect, and notably, it doesn’t explicitly define AGI. Instead, it focuses on “AI systems and general-purpose AI models.”

The Act establishes a computational threshold: if the cumulative compute used to train a model exceeds 10^25 floating-point operations (FLOPs), the model is presumed to pose systemic risk. This threshold acts as a regulatory trigger.
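
As a rough illustration of how a provider might check where a training run falls, here’s a short Python sketch. It uses the common back-of-the-envelope estimate that training compute for a dense transformer is roughly 6 × parameters × training tokens; that heuristic, and the example model size, are my assumptions, not the Act’s prescribed accounting method.

```python
# The EU AI Act's systemic-risk presumption applies above 10^25 FLOPs
# of cumulative training compute.
EU_SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Common heuristic for dense transformers: compute ~ 6 * N * D,
    # where N is parameter count and D is training tokens.
    # An estimate only, not the Act's official methodology.
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 400B-parameter model trained on 15T tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # 3.60e+25
print("Presumed systemic risk:", flops > EU_SYSTEMIC_RISK_FLOPS)  # True
```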

Once a model crosses this boundary, the regulatory burden changes sharply. Providers must evaluate the model (including adversarial testing), assess and mitigate systemic risks, ensure adequate cybersecurity, track and report serious incidents, and stay in close contact with the European Commission’s AI Office.

While this isn’t technically an AGI definition, it fundamentally changes a provider’s obligations once a specific compute threshold is crossed, and the implications are immediate.

Contractual Implications

Consider the OpenAI-Microsoft partnership as a prime example. OpenAI’s board has the authority to determine when AGI is achieved, using their economics-focused definition as the benchmark.

The moment the board declares AGI, the partnership dynamic shifts: Microsoft’s access to new models in the post-AGI era becomes restricted, fundamentally altering the terms of the collaboration.

This single determination transforms agreements, responsibilities, and business relationships worth billions of dollars.

The Bottom Line

AGI definitions aren’t philosophical abstractions: they move real money, determine access permissions, and reshape entire industries. These definitions change the game.

The stakes couldn’t be higher. How we define artificial general intelligence today determines who gets access, who faces regulation, and who controls tomorrow’s most powerful technologies.

Understanding these competing definitions isn’t just intellectually interesting; it’s essential for anyone working in or investing in the AI ecosystem.

Watch the Video

I also shared this perspective on AGI definitions in video format. You can watch it here: