Agency Beats Intelligence in AI Development
Action Beats Planning (In Almost Everything)
Intelligence without action is knowledge untapped. When Andrej Karpathy, former Director of AI at Tesla and a founding member of OpenAI, stated "Agency > Intelligence" (Karpathy, 2025), he highlighted a fundamental truth about artificial intelligence that warrants further exploration.
I've been thinking about Karpathy's insight, and I'd like to offer an important qualification.
It's a qualification that reflects a broader shift from intelligence toward agency that is now underway (and not just in AI).
The Threshold Principle
While Karpathy establishes that agency trumps intelligence, my extension of his concept introduces a critical nuance: Agency may consistently outperform intelligence, but only after crossing a minimum intelligence threshold.
This threshold principle matters because it connects the idea of minimum viable intelligence to the point where "agency" actually starts adding value.

This diagram ties value creation to the point where minimum intelligence is reached. Unfortunately, below a certain intelligence threshold, increasing agency becomes problematic (later, we will examine the issues that arise from "High Agency, Low Intelligence" systems).
I bring this up because too many people (in every meeting I'm in) still admit they aren't paying for the premium models. In my opinion, the free models do not meet the minimum intelligence threshold.
Go ahead and pay $20 a month to access the level of intelligence that will absolutely scale your ability to leverage AI. Heck, the Peloton membership you don't use costs almost twice as much.
Once sufficient intelligence is achieved, agency becomes the differentiating factor that creates tremendous value.
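The threshold principle can be captured in a toy model. This is an illustrative sketch, not anything from Karpathy's post: the function name, the threshold value, and the value formula are all invented for demonstration. The only claim it encodes is the one above, that agency below the threshold destroys value and agency above it multiplies value.

```python
# Toy model of the threshold principle. All numbers are arbitrary
# assumptions chosen to illustrate the shape of the argument.

INTELLIGENCE_THRESHOLD = 0.6  # hypothetical minimum viable intelligence


def expected_value(intelligence: float, agency: float) -> float:
    """Below the threshold, more agency hurts (confident action on bad
    reasoning); above it, value scales with both agency and intelligence."""
    if intelligence < INTELLIGENCE_THRESHOLD:
        return -agency  # high agency + low intelligence is a liability
    return agency * intelligence


print(expected_value(0.4, 0.9))  # below threshold: agency makes things worse
print(expected_value(0.9, 0.9))  # above threshold: agency drives the value
```

The sign flip at the threshold is the whole point: the same increase in agency that creates value on one side of the line destroys it on the other.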
This threshold principle applies to artificial intelligence systems at scale.
But it also applies to humans (as you’ll read more about below).
Assuming you are using a model that is beyond the minimum threshold, let’s add a conceptual framework to divide AI capabilities into four distinct quadrants based on intelligence and agency levels. Each quadrant reveals something crucial about development trajectories and potential impact.
The Four Quadrants of AI Capability
The four quadrants are divided according to "low" or "high" levels of intelligence and agency. To orient yourself, consider that the last five years were predominantly spent trapped in the "low intelligence, low agency" quadrant of annoying chatbots. These ridiculous "if-then-else" loops ruined a lot of consumer and corporate trust, as they were only slightly more intelligent than IVR phone-tree systems.
Current AI models predominantly occupy the "high intelligence, low agency" quadrant. They possess extensive knowledge but limited ability to act independently on that knowledge.
They can draft compelling content, solve complex problems, and generate innovative ideas, yet remain confined to their text interfaces.
These systems wait for human prompts before taking any action.

The truly transformative quadrant, "high intelligence, high agency," remains largely theoretical. People have claimed to reach it, but such systems are not yet in widespread use.
This represents AI systems that can both think critically and execute complex tasks across multiple platforms without constant human direction.
Such systems could initiate actions, adapt to changing circumstances, and pursue goals with minimal oversight.
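The quadrant framework above can be sketched as a tiny lookup. This is a hedged illustration: the function and the one-line descriptions are my paraphrase of the quadrants discussed in this post, not an established taxonomy.

```python
# The four quadrants of AI capability as a simple lookup table.
# Descriptions paraphrase the discussion in the post.
def quadrant(intelligence: str, agency: str) -> str:
    labels = {
        ("low", "low"): "rule-based chatbots: if-then-else loops, eroded trust",
        ("high", "low"): "today's frontier models: knowledge without action",
        ("low", "high"): "problematic: autonomous action on weak reasoning",
        ("high", "high"): "transformative: thinks critically and executes",
    }
    return labels[(intelligence, agency)]


print(quadrant("high", "low"))   # where current models sit
print(quadrant("high", "high"))  # the largely theoretical target
```

Mapping your own tools onto these four keys is a quick way to see how much of your stack is still waiting for a prompt before it does anything.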
The Business Implications
Agency Theory, established by Jensen and Meckling in 1976, describes the principal-agent relationship where authority is delegated from one party to another. This economic framework has traditionally been applied to human relationships, where employers delegate to employees and shareholders to executives.
AI agents introduce an entirely new dimension to this dynamic.
As we increasingly delegate complex tasks to systems with both knowledge and action capabilities, the traditional principal-agent tensions will manifest in different ways. Issues of alignment, monitoring, and incentives become exponentially more important.
Organizations that understand this shift gain a strategic advantage.
Those focusing exclusively on intelligence metrics while neglecting agency development may find themselves with impressive but ultimately ineffective AI tools.
Redirecting Development Focus
The current AI development landscape skews heavily toward intelligence enhancements. Companies compete fiercely to release models with higher parameters, better reasoning, and more extensive knowledge. And that’s not a bad thing.
However, among the premium models, the focus now needs to shift toward platforms that increase agency.
Practical steps for organizations navigating this shift include:
- Audit your AI tools for both intelligence and agency capabilities
- Identify specific processes where increased AI agency would create tangible business value
- Establish clear boundaries and oversight mechanisms before expanding AI agency
- Develop internal expertise in principal-agent dynamics as applied to AI systems
To some extent, automation is the beginning of the agency expansion.
Consider where your organization's AI initiatives fall within the four quadrants. Are you maximizing intelligence while neglecting agency? Or perhaps pursuing agency without sufficient intelligence thresholds?
What specific business functions could benefit most from increased AI agency rather than just more intelligence?
The Risk Factors
Increased agency without corresponding safety mechanisms introduces significant risks. Autonomous systems making consequential decisions require robust safeguards and oversight, just as we require of humans who exercise agency.
Not every business function benefits from increased AI agency.
Some processes require human judgment, ethical considerations, and contextual understanding that even advanced AI lacks. The appropriate balance varies by industry, organization, and specific application.
Regulatory frameworks remain underdeveloped for highly agentic AI systems. Organizations that pioneer these capabilities may face unexpected compliance challenges as regulations evolve to address emerging technologies.
The investment required to develop truly agentic systems may exceed the realistic return on investment (ROI) for many applications (but I don’t think so in the long run). Intelligence enhancements often deliver more immediate value at lower development costs.
Strategic Implementation
Forward-thinking organizations should develop a capabilities matrix mapping their AI needs along both intelligence and agency dimensions. This provides clarity about development priorities and potential gaps.
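A capabilities matrix like the one described above might look something like this in practice. The tool names, scores, and threshold are all invented for illustration; the point is the shape of the audit, not the numbers.

```python
# Illustrative capabilities matrix: map each AI tool along the
# intelligence and agency dimensions, then flag gaps. All entries
# and scores are hypothetical.
tools = [
    {"name": "free-chatbot", "intelligence": 0.4, "agency": 0.9},
    {"name": "drafting-assistant", "intelligence": 0.8, "agency": 0.2},
    {"name": "workflow-agent", "intelligence": 0.7, "agency": 0.6},
]


def gaps(tools, min_intelligence=0.6):
    """Flag tools below the intelligence threshold; for the rest,
    report how much agency headroom remains."""
    report = {}
    for t in tools:
        if t["intelligence"] < min_intelligence:
            report[t["name"]] = "below intelligence threshold"
        else:
            report[t["name"]] = f"agency gap: {1 - t['agency']:.1f}"
    return report


print(gaps(tools))
```

Notice how the audit surfaces two different problems: the free chatbot fails the threshold outright, while the drafting assistant is smart enough but barely acts, which is exactly the "high intelligence, low agency" gap this post argues is now the differentiator.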
Start with small-scale agency experiments in low-risk environments.
Establish clear metrics for measuring the success of agency implementation beyond traditional intelligence benchmarks. What constitutes effective action varies significantly by context and objective.
Partner with AI developers, prioritizing both dimensions rather than focusing exclusively on intelligence improvements. The competitive landscape is shifting toward more balanced capability development.
The Broader Transformation
I know this post is primarily focused on the growing emphasis on AI agency. But something much bigger is happening, to a large degree because of AI.
Agency is becoming the most talked-about and theorized topic in a decade.
George Mack, one of today's most insightful essayists, has also recognized agency as a transformative concept. In his recent essay, he argues that agency might be the most important idea of the 21st century (Mack, 2025).
I agree.
This perspective extends beyond AI applications. But it is also spreading because of AI!
After seven months of research and writing, Mack released a comprehensive exploration of agency as a transformative concept. His work examines how increased agency, the capacity to act independently and make meaningful choices, fundamentally reshapes individuals, organizations, and technological systems.
The convergence between my threshold principle, Karpathy's original insight, and Mack's broader agency thesis highlights the multidisciplinary importance of this concept.

All recognize that knowledge without action creates limited value, though my threshold model explicitly addresses when and how agency becomes truly valuable. Recent research from McKinsey suggests that agentic AI represents the next frontier for AI innovation (McKinsey, 2024), with implementations expected to increase dramatically by 2028.
Karpathy's post on this topic is definitely worth reading.
Mack's essay is a must-read.
The coming years will reveal which organizations recognized the intelligence threshold principle early enough to redirect their AI strategy accordingly.
Lastly - if you are attending SEO Week in NYC this week, ping me on LinkedIn.