The Uncomfortable Economics of AGI Nobody's Talking About


We're all excited about AGI. The promise is transformative: artificial general intelligence that can do any cognitive task a human can do, available to everyone, democratizing access to expertise and capability.

But here's what I realized today looking at current AI pricing: AGI won't be cheap.

Right now, the most capable AI models cost anywhere from $3 to $15 per million input tokens and $15 to $75 per million output tokens. For complex reasoning tasks, you can burn through thousands of tokens in minutes. Scale that to AGI-level capabilities running continuously, and you're looking at costs that put it firmly out of reach for most individuals.
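To make that concrete, here's a back-of-the-envelope sketch using the quoted output-token price range. The sustained throughput figure (roughly 50 tokens per second for a reasoning-heavy agent) is my assumption for illustration, not a number from any provider, and a true AGI workload would likely multiply it across many parallel instances.

```python
# Back-of-the-envelope: cost of running one model instance
# continuously at the quoted $15-$75 per million output tokens.
# Throughput (50 tokens/sec sustained) is an illustrative assumption.

def hourly_cost(price_per_million_tokens, tokens_per_second):
    """Dollars per hour of continuous generation at a given price."""
    tokens_per_hour = tokens_per_second * 3600
    return price_per_million_tokens * tokens_per_hour / 1_000_000

low = hourly_cost(15, 50)    # cheap end of output pricing
high = hourly_cost(75, 50)   # expensive end

print(f"${low:.2f}-${high:.2f} per hour")        # $2.70-$13.50 per hour
print(f"${low*24*30:.0f}-${high*24*30:.0f} per month, running 24/7")
```

Even a single instance at today's prices runs to thousands of dollars a month around the clock; an AGI-level system orchestrating dozens of such instances scales that accordingly.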

The Two Futures We're Not Discussing

Scenario 1: AGI Stays Expensive

If compute costs don't collapse dramatically, we end up with superhuman intelligence that costs hundreds or thousands of dollars per hour to run. Only corporations, governments, and the wealthy can afford sustained access. The productivity gains? They accrue to whoever can pay the premium.

Think about what this means in practice:

  • A Fortune 500 company can afford to run AGI systems 24/7, automating complex decision-making and strategic planning
  • A startup gets a few hours per day if they're well-funded
  • An individual? Maybe occasional access for specific tasks, if they can afford it at all

Scenario 2: Costs Collapse... Eventually

Maybe competition and efficiency gains drive prices down to pennies. But even then, there's a transition period - possibly years - where access is stratified. The top 1% get AGI assistants. Everyone else gets budget models or nothing.
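How long that transition lasts depends entirely on how fast costs fall. A toy model makes the point; the starting price, target price, and decline rates below are illustrative assumptions, not forecasts:

```python
import math

def years_until(price_start, price_target, annual_decline):
    """Years for a price to fall from price_start to price_target,
    assuming it shrinks by a fixed fraction each year."""
    ratio = price_start / price_target
    return math.log(ratio) / math.log(1 / (1 - annual_decline))

# Illustrative: AGI launches at $500/hour; "pennies" = $0.05/hour.
for decline in (0.30, 0.50, 0.70):
    print(f"{decline:.0%} cheaper each year -> "
          f"{years_until(500, 0.05, decline):.1f} years")
```

Under these assumptions, even costs falling 50% every year leave more than a decade between "only corporations can afford it" and "pennies" - and that window is exactly where the stratification happens.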

This isn't speculation - we've seen this pattern before with every major technology. The question is how long the transition takes and what happens during that window.

What This Actually Means

We keep talking about AI "leveling the playing field" and "democratizing expertise." But if the most capable systems remain expensive, what we're actually building is a tool that amplifies existing advantages.

The optimistic vision: A teenager in a developing country with AGI access can compete with Harvard grads. They have the same cognitive augmentation, the same access to expertise, the same ability to execute on ambitious ideas.

The realistic concern: Harvard grads get AGI, the teenager gets a budget model with 10% of the capability, and the gap widens further. The Harvard grad with AGI assistance can accomplish in hours what would take the teenager weeks with lesser tools - or months without them.

And for those whose jobs get automated? We're not talking about retraining as "AI supervisors." We're talking about gig work, service jobs, the same undignified pennies we've always offered people when their economic value disappears.

The Dignity Question

There's something we don't talk about enough: what happens to human dignity when cognitive labor is devalued?

For centuries, societies have struggled with how to provide dignity to people whose physical labor became redundant. We created social safety nets, unemployment insurance, job retraining programs - imperfect solutions, but acknowledgments of the problem.

Now we're racing toward a world where cognitive labor might be redundant. The knowledge worker, the professional, the expert - anyone whose value comes from what they know and how they think. What does dignity look like for them?

The optimistic AI community says "everyone becomes an AI supervisor" or "we'll all focus on creative work." But supervision requires the AI systems to supervise, and creative work at scale still requires capability amplification. If that amplification is expensive, we're back to the same stratification.

We Need to Talk About This

I'm not saying AI won't be transformative. I'm saying we need to be honest about who it transforms life for.

The questions we should be asking:

  1. What's the plan for universal access if compute stays expensive?

    • Are we counting on Moore's Law to save us?
    • What if it doesn't?
    • Do we need public infrastructure for AGI access, like we have for education and healthcare?
  2. How do we prevent a decade-long transition where capability gaps widen?

    • What policies ensure that during the transition, access isn't determined purely by wealth?
    • How do we prevent a two-tier system where the elite have AGI and everyone else has chatbots?
  3. What does dignity look like in an economy where cognitive labor has been devalued?

    • Universal basic income?
    • New forms of meaningful work we haven't imagined yet?
    • A fundamental rethinking of how we value human contribution?
  4. Who gets to decide how AGI systems are allocated?

    • If AGI time becomes a scarce resource, who makes the rationing decisions?
    • Do we let the market decide?
    • What role should governments and institutions play?

The Path Forward

I don't have all the answers, but I know we need to start having these conversations now, not after AGI arrives.

The AI community loves to talk about capabilities and safety. We need to talk just as much about economics and access. Because building AGI that only the top 1% can afford isn't solving humanity's problems - it's compounding them.

Some potential approaches worth exploring:

  • Public AGI infrastructure: Like public libraries or education systems, could we create public access to capable AI systems?
  • Progressive pricing models: Could AI providers tier access so that individuals and small organizations have affordable entry points?
  • Open source alternatives: Can the open source community provide viable alternatives that run on consumer hardware?
  • Regulatory frameworks: Should governments treat AGI access as a public good requiring certain guarantees?

Final Thoughts

The uncomfortable truth is that technology alone doesn't determine outcomes - economics and policy do. We built the internet, but the digital divide persists. We created smartphones, but billions still lack access. We developed advanced healthcare, but it remains unaffordable for many.

AGI will be no different unless we make it different. And that requires honest conversations about cost, access, and who benefits - conversations we need to start having today.

The future isn't written yet. But if we don't actively shape it toward equity and access, market forces will shape it toward whoever can pay the most. And that's a future where AGI's transformative potential benefits a tiny fraction of humanity while leaving everyone else further behind.

Let's build something better than that.


What do you think? Are these concerns overblown, or are we not taking them seriously enough? I'd love to hear your perspective in the comments or on Twitter.