good question. the difference with AI tools is the interface isn't stable in the same way photoshop or excel is. with traditional software you learn it once and muscle memory carries you. with LLM tools the model itself changes, the optimal prompting style shifts, features interact with model behavior in unpredictable ways. so the cognitive load compounds differently. not saying features are bad, just that the tradeoffs are different
should have been clearer here. by "loading the project" I meant the initial context claude builds like CLAUDE.md, directory structure, etc... not literally putting every line of code into context. 7M tokens would obviously not fit in a 200k window
yeah, this is a fair concern and I should have been clearer. I wouldn't do this on anything with real data or production traffic. that hetzner instance was a side project with nothing sensitive on it. the point was more about Claude's ability to reason through infrastructure problems, not that everyone should hand over ssh keys. you're right to be cautious
I definitely relate to your sentiment, and I like your term "configuration bankruptcy"
on MCP, the mental model that clicked for me is "giving claude access to tools it can call" so that instead of copy pasting from your database or API, claude can just... query it
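to make that concrete, here's a minimal sketch of the kind of function an MCP server would expose as a tool (this leaves out the actual MCP wiring, which in practice goes through something like Anthropic's Python SDK, and the table/function names are just made up for illustration):

```python
import sqlite3

# Toy in-memory database standing in for "your database".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "grace")])

def query_users(sql: str) -> list:
    """The function an MCP server would register as a tool.

    Instead of you running the query yourself and copy-pasting the
    results into the chat, the model calls this with the SQL it wants."""
    return conn.execute(sql).fetchall()

# A tool call from the model effectively boils down to this:
print(query_users("SELECT name FROM users ORDER BY id"))
```

the point being: once the tool is registered, "go check what's in the users table" stops being a copy-paste loop and becomes something claude just does.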
Nice way to put it. Skills feel great for shaping how Claude works inside a repo, while MCP really shines when you want it to talk to “live” systems: databases, test runs, CI, all that external state.
I've not explicitly used skills or MCP, but have had zero issues with Claude calling APIs via curl, for example. I'm not sure what the MCP server or skill is actually enabling at this point. If I wanted CC to talk to SQL Server, I'd have it open a Nix environment with the tools needed to talk to the database. One of my primary initial claude.md entries has to do with us running on NixOS and that temporarily installing tools is trivial and it should do things in the NixOS way whenever possible. Since then it has just worked with practically everything I've thrown at it. Very rarely do I see it trying to use a tool that isn't installed anymore. CC even uses my local vaultwarden where I have a collection of credentials shared with it. All driven through claude.md.
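As a rough illustration, the kind of entry described above could look something like this (a paraphrase for the sake of example, not anyone's actual file):

```markdown
## Environment

- This machine runs NixOS. Do things the NixOS way whenever possible.
- Temporarily installing tools is trivial: prefer
  `nix-shell -p <tool> --run '<command>'` over assuming a tool is
  globally installed.
- Shared credentials live in the local vaultwarden collection; fetch
  them from there rather than asking for secrets to be pasted.
```

The `nix-shell -p` pattern is what makes "just install what you need for this one command" cheap enough that the agent can rely on it.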
Author here. I wrote this because everyone is talking about Claude Code right now and it's all over my timeline. Claude Code has this effect where you KNOW it's good but can't quite say WHY.
So I spent the weekend digging into the DX decisions that make Claude Code delightful.
How much AI did you use to write up this article? It tripped up my "fake AI-written article" detector a few times despite being interesting enough to read to the end
used claude to polish the draft and tighten sentences. the thinking, analysis, and examples are all mine and based on personal experiences. spent the weekend reflecting on my past experiences with claude code and actually digging into why claude code feels the way it does. curious to know what tripped your detector.
Adding to this: too many negatives before making a point, which AI text is prone to do in order to give surface level emphasis to random points in an argument. For example: "I sat there for a second. It didn't lose the thread. It didn't panic. It prioritized like a real engineer would." Then there is the fact that the paragraph ends in just about the same way, which also activates one's AI-voice-detector, so to speak: "This wasn't autocomplete. This was collaboration."
In my opinion, to write is to think. And to write is also to express oneself, not only to create a "communication object," let's put it that way. I would rather read an imperfect human voice than a machine's attempts to fix it. I think it's worth facing the frustration that comes with writing, because the end goal of refining your own argument and your delivery is that much sweeter. Let your human voice shine through.
Lots of things: the typical LLM em-dash pattern (though here with plain dashes), lists of three after a colon where the three items aren't great, and short sentences for "impact" that sound kind of like a high school essay, e.g. "God level engineer...Zero ego."
I cannot at all understand writing an essay and then having an llm "tighten up the sentences" which instead just makes it sound like slop generated from a list of bullets
Jokes aside, my English is passable and I'm fine with it when writing comments, but I'm very aware that some of it doesn't sound native due to me, well, not being a native speaker.
I use AI to make it sound more fluent when writing for my blog.
As long as your bullet points+prompt are shorter than the output, couldn't you post that instead? The only time I think an LLM might be ethically acceptable for something a human has to read is if you ask it to make it shorter.
Well, actually, what if my own words make me come across as a raging pedantic asshole, you feckless moron!? I don't actually think you're a feckless moron, but sometimes I'll get emotional about this or that, and run my words through an LLM to reword it so that "it's not assholey, it's nice". I may know better than to use the phrase "well actually" seriously these days, but when the point is effective communication, yeah I don't want my readers to be put off by AI-isms, but I also don't want them to get put off by my words being assholey or condescending or too snarky or smug or any number of things that detract from my point. And fwiw, I didn't run this comment through an LLM.
> I wrote this because everyone is talking about Claude Code right now and it's all over my timeline.
Feels more like a peer-pressure-induced post than a critical evaluation of the tool's pros and cons.
> Claude Code has this effect where you KNOW it's good but can't quite say WHY.
Definitely gives the "vibe" of social media's infinite scroll induced dopamine rush.
Overall, this post just seems to be reinforcing the idea that "fuzzy understanding of business domain will be enough to get a mature product using some AI, and the AI will somehow magically figure out most non-functional requirements and missing details of business domain". The thing is, figuring out "most non-functional requirements and missing details of business domain" is where most of the blood and sweat goes.
Anthropic has some docs at docs.anthropic.com, but honestly most of what I learned came from just using it and poking around. the slash commands have help text built in. Shrivu Shankar has a good breakdown of the features too if you're looking for a more structured overview
I wrote this after following the ongoing back-and-forth between Vercel and Cloudflare on Twitter: the benchmarks, the pricing arguments, the memes.
It struck me that the fight isn’t really about performance or cost; it’s about philosophy.
Cloudflare believes good developer experience starts with visibility and control. Vercel believes it starts with empathy and flow.
Curious how others here think about this:
- Do you prefer abstraction or transparency when building?
- Has either platform changed how you think about deploying or designing apps?
Vercel's integration is really good. It reduces setup time, e.g. setting a CNAME and connecting a GitHub repository.
However, setup only happens once. Once set up, there isn't much difference between CF and Vercel.
As someone who usually sets up a VPC for deployment, Vercel's value offering isn't much for me. But I do see its value for product-focused deployment where operating cost is still negligible or covered by VC funding.
That’s fair, they do operate at very different layers of the stack.
But I think what’s interesting is how their goals are starting to overlap, even if their architectures don’t. Even their recent product launches are alike.
Cloudflare’s building physical reach and reliability: real infra, like you said.
Vercel’s building emotional reach: developer trust, design, workflow integration.
Both are trying to own the default path developers take from idea to deploy.
So even if they’re not in the same market today, they’re converging toward the same developer mindshare.