Claude Code and Weekly Limits
Warning: This post discusses AI and AI tools. If you think AI tools are immoral, I agree! I'm working on a separate, longer essay that will cover that, and my struggle with it, in excruciating detail. However, this is not that discussion.
If you're interested in the other essay, subscribe to this blog via RSS for updates. Do you know what RSS is?
I use Claude Code regularly to help me explore ideas and bring them to life faster. I like to think that I have more technical chops than your average designer, but I realize that I am no engineer. Putting aside how using LLMs makes me feel as someone who makes products for a living, it's hard to argue against what LLMs allow me to accomplish. This post is about that.
Anthropic recently introduced weekly limits to Claude Code to curb excessive usage. In their communication on Twitter, they estimated that fewer than 5% of users would be affected by the change. Great! I've been on r/ClaudeCode, so I've seen the arcane machinations that people there construct around Claude Code. It seems perfectly reasonable to think that someone out there runs the tool as close to 24/7 as possible to maximize the value they get out of their subscription.
That's not me though. I have a full time job designing software. When I come home, a lot of the time, I want to play video games, tinker with my servers, and hang out with my cats and my friends. I have plentiful side projects, and LLMs let me work on those faster than I normally would be able to, but that amounts to 3-5 hours a day of concentrated time using Claude Code. Not even daily—we're talking 2-3 days on and 2-3 days off. Some weeks I don't do anything at all!
All that to say: I am a maker, not a hustler. Claude Code is a tool for me to accelerate what I was already going to do even if LLMs didn't exist.
This week, Anthropic released its newest model, Claude Sonnet 4.5. It is apparently a pretty good model! Unfortunately, with its release, Anthropic seems to have seriously nerfed the number of tokens allotted to Claude Code subscribers, even on the most expensive $200/mo plan. Users report chewing through their weekly allotment of Anthropic's previous (and arguably better) model, Opus 4.1, in as little as a few hours.
This was new. I pay for the $200/mo plan and have never seen the weekly limit message since it was implemented, and I've only seen the session limit message a handful of times. It's worrisome that users are hitting these limits so quickly—and with no communication of changes—on a plan that costs such an astronomical amount of money.
And that's just it: communication is the primary thing that the Anthropic team is getting wrong. Just weeks ago, Anthropic confirmed degradations in the Claude Code service that users had speculated about for weeks, if not a full month prior. This time, they managed to get out a statement (if you can even call tweets from personal accounts "statements") 24 hours after launch, but the gaslighting tone does not befit a company communicating with people paying $200 every month for its service. It's condescending that their solution is to give people who are already paying $200 every month a way to pay more to keep using the service. Why was addressing this not in the launch plan? In what world is "haha cough up, paypig" an acceptable solution? Every product team knows the fervor of its users; did Anthropic think their paying customers just wouldn't notice? Or did they simply not care?
Again, I make products for a living. I don't like accusing teams of not caring about their customers, because that's very rarely the case. But we know that AI subscription services are the twisted 2025 version of a loss leader, as Ed Zitron has written about extensively. From that perspective, these changes make sense. Claude Code is an overwhelming success for a company that, just last year, was struggling to find product-market fit against the aggressive and unyielding march of OpenAI. (I interviewed there in 2024; I would know.) However, if every customer loses you more money than they make you, eventually you need to stanch the bleeding.
Despite all of this, I still generally like Anthropic as a company (at least, as much as one can and should allow oneself to like an AI company). I respect their approach to building products a hell of a lot more than OpenAI's, whose driving product philosophy seems to be "consume as much as you possibly can." But I expect more respect and communication from a company that takes $200 from me every month for a service that sometimes doesn't even work. If Netflix charged $50/mo for a subscription, then suddenly announced that you could only watch 5 hours of TV a week before losing all access, people wouldn't just roll over and wait until next week to binge the next season of Stranger Things; they would cancel their subscriptions and sail the high seas.
I'm cancelling my Claude Code subscription—at least until Anthropic can clearly and respectfully communicate why they're making the decisions that they're making. If I'm realistically using Claude Code for 15 days a month because I am a normal human being with a life and a job and other interests, I shouldn't be punished as if I'm multiboxing eight Terminal windows simultaneously, sucking all the LLM juice that Anthropic's got. At $200/mo, I shouldn't even have to worry about the possibility. And yet, now I do.
How will I replace something that has actually increased my productivity tenfold? I don't know yet, but there's a good chance I won't. Maybe I'll go back to battling my ADHD-riddled brain to build things the slow, old-fashioned way. As my other essay will dive into, it's much more rewarding.
Or I'll buy an RTX Pro 6000 and pray that China keeps releasing good-enough models that can get me most of the way there. If I don't pay for Claude Code, the GPU will pay for itself in 24 months and I get to use it as much as I want!*
* I have solar panels.
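The footnote's payback math can be sketched as a quick back-of-envelope calculation. (The GPU price below is a hypothetical figure chosen so the numbers line up with the 24-month claim; it is not a quoted price, and it ignores electricity, which the solar panels conveniently handle.)

```python
# Break-even sketch: buying a GPU outright vs. a $200/mo subscription.
# gpu_price is an assumed figure for illustration, not a real quote.
subscription_per_month = 200   # USD, Claude Code's most expensive plan
gpu_price = 4_800              # USD, assumed all-in cost of the card

# Months until the foregone subscription fees cover the hardware
break_even_months = gpu_price / subscription_per_month
print(f"Pays for itself in {break_even_months:.0f} months")
```

Of course, a local card only "pays for itself" if the models you can run on it actually get the work done, which is where the praying comes in.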
There's a separate conversation to be had here, beyond the realities of Anthropic's pricing today, about subscription services, price hikes, and the incongruent vision of AI that is being sold to us, or perhaps more accurately force-fed to us. Tech companies desperately want to realize a future where anyone can use these tools to build complete products, cure cancer, and fulfill other unimaginable, far-off promises.
Given that we now have evidence suggesting that the use of AI assistants contributes to cognitive decline, a scary picture emerges. With students relying on AI more and more and not developing the skills (or attention span) to do things the slow, old-fashioned way, we're setting ourselves up as a society for a dystopian future where humans have to pay tech CEOs whatever toll they demand so that we might effectively use our very own brains.