20260423 - Claude Code rate limits: Anthropic AI squeezes customers
8 min • Apr 23, 2026
Summary
Host David Gerrard critiques Anthropic's unsustainable business model, revealing the company spends $8-13 for every dollar in revenue while losing money on Claude Code. The episode details Anthropic's recent pricing changes, rate limiting tactics, and account suspensions, positioning these moves as typical enterprise SaaS cost-cutting strategies amid massive operational losses.
Insights
- Major AI vendors are operating at severe losses with no clear path to profitability, forcing aggressive pricing changes and usage restrictions on customers
- Enterprise SaaS playbook of obscuring pricing and gradually increasing costs is being deployed by AI companies facing unsustainable burn rates
- Automated abuse detection and account suspension systems lack adequate human review, creating business continuity risks for customers relying on AI services
- Shift from seat-based to usage-based pricing reflects genuine changes in how customers use AI (agentic workflows), but is being used to justify revenue extraction
- Venture capital funding is masking the fundamental unprofitability of current AI business models, delaying necessary innovation in efficiency
Trends
- AI vendors transitioning from flat-rate to token-based and usage-based pricing models to control costs
- Increased rate limiting and account suspension policies as a cost management strategy across the AI industry
- Growing tension between organic product adoption and unsustainable unit economics in generative AI services
- Enterprise SaaS playbook (obscure pricing, gradual cost increases) being applied to AI products
- Shift from productivity tools to agentic workflows driving higher usage and forcing vendor cost controls
- Lack of transparency in pricing changes and policy enforcement creating customer trust issues
- Venture capital masking unprofitable business models, delaying market correction and efficiency innovation
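The flat-rate-to-usage-based pricing shift described above can be illustrated with a toy cost model. All rates and token volumes below are hypothetical assumptions for illustration, not Anthropic's or GitHub's actual pricing:

```python
# Toy sketch of seat-based vs. token-based pricing.
# All numbers are hypothetical, not any vendor's real rates.

def monthly_cost_flat(seats: int, price_per_seat: float) -> float:
    """Seat-based pricing: revenue is fixed regardless of usage."""
    return seats * price_per_seat

def monthly_cost_tokens(tokens_used: int, price_per_million: float) -> float:
    """Token-based pricing: cost scales linearly with usage."""
    return tokens_used / 1_000_000 * price_per_million

# A hypothetical 10-person team on a $20/seat plan.
flat = monthly_cost_flat(10, 20.0)             # 200.0

# Light interactive chat: ~2M tokens/month at a hypothetical $3/M tokens.
light = monthly_cost_tokens(2_000_000, 3.0)    # 6.0

# Agentic workflows: long-running agents might burn 500M tokens/month.
heavy = monthly_cost_tokens(500_000_000, 3.0)  # 1500.0
```

Under a flat-rate plan, the vendor's revenue stays at the seat price while its serving cost tracks the usage curve, so heavy agentic usage turns every subscription into a loss. That asymmetry is the episode's explanation for why vendors are moving customers onto token-based billing.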
Topics
- Claude Code pricing and availability changes
- AI vendor unit economics and profitability crisis
- Usage-based vs. flat-rate pricing models
- Rate limiting and account suspension policies
- Enterprise SaaS cost-cutting strategies
- Agentic AI workflows and usage patterns
- Automated abuse detection systems
- GitHub Copilot token-based charging
- Venture capital funding sustainability
- AI product pricing transparency
- Customer appeal processes for account suspensions
- Revenue vs. actual profitability discrepancies
- Anthropic's financial performance
- Microsoft's AI service cost management
Companies
Anthropic
Primary subject; spending $8-13 per dollar earned, removed Claude Code from the Pro plan, and implemented rate limiting and account suspensions
Microsoft
Clamping down on GitHub Copilot usage and moving to token-based charging to control losses from AI services
GitHub
Moving all Copilot customers to token-based charging as of June to address money-burning business model
Bello
Argentine finance app that had its account suspended by Anthropic due to a false positive abuse detection, affecting 60 employees
People
David Gerrard
Podcast host analyzing Anthropic's business model and pricing strategy changes
Amol Avasari
Defended the Claude Code pricing test as a small experiment affecting only 2% of new signups; tweeted about agentic workflows
Pato Molina
Tweeted about Anthropic suspending company account due to false positive abuse detection, affecting 60 employees
Ed Zitron
Obtained and reported on leaked internal Microsoft documents about GitHub Copilot token-based charging transition
Valerie Veitch
Director of anti-AI documentary 'Ghost in the Machine' featuring host David Gerrard in panel discussion
Quotes
"Anthropic's spending $8 to $13 for each dollar that comes in, which is a vastly greater loss rate than even I thought they were running up"
David Gerrard • Early in episode
"For clarity, we're running a small test on 2% of new prosumer signups. Existing pro and max subscribers aren't affected."
Amol Avasari • Mid-episode
"Long-running async agents are now everyday workflows. The way people actually use a Claude subscription has changed fundamentally."
Amol Avasari • Mid-episode
"Our automated systems detected a high volume of signals associated with your account which violate our usage policy"
Anthropic (automated system) • Late episode
"All the AI vendors are just setting money on fire. Their biggest problem is that people keep using their services when there was never a path to profit"
David Gerrard • Closing segment
Full Transcript