Bespoke works to a certain extent, just like having various spreadsheets with macros in them, but after a while having a standard business process becomes quite vital, especially when other people have already made the same mistakes and come up with good solutions. You will also need talent that is already trained for these procedures; otherwise, you take on the burden of constantly training people on something they won't be able to transfer anywhere else.
You're absolutely right on that one. It usually starts with querying data from complex systems and slowly morphs into a dedicated solution in its own right. I've started seeing a lot of OEMs integrating projects like Grafana and Metabase into their products, and LLMs are making it a lot easier for everyone to start other bespoke apps as well.
Fair point, but because I spent a year building and refining my custom tool, this is now the reality for all of my AI requests.
I prompt, press run, and then I get this flow:
- dev setup (dev-chat or plan)
- code-map (~0s incremental, ~2m for the initial build)
- auto-context (~20s to 40s)
- final AI query (~30s to 2m)
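Conceptually, that flow is just four stages run in sequence, each feeding the next. A minimal sketch in Rust, purely illustrative (the function names mirror the stages above; the bodies are hypothetical stand-ins, not the actual tool):

```rust
// Illustrative only: the four-stage flow as plain sequential functions,
// each consuming the previous stage's output.

fn dev_setup(mode: &str) -> String {
    // pick a session type: "dev-chat" or "plan"
    format!("session[{mode}]")
}

fn code_map(session: &str) -> String {
    // build or incrementally update a map of the codebase
    // (~0s incremental, ~2m for the initial build)
    format!("{session} -> map")
}

fn auto_context(mapped: &str, prompt: &str) -> String {
    // narrow hundreds of candidate files down to the relevant few (~20s to 40s)
    format!("{mapped} -> ctx({prompt})")
}

fn final_query(ctx: &str) -> String {
    // the actual model call (~30s to 2m)
    format!("{ctx} -> answer")
}

fn run(mode: &str, prompt: &str) -> String {
    final_query(&auto_context(&code_map(&dev_setup(mode)), prompt))
}

fn main() {
    println!("{}", run("plan", "change the data model"));
}
```

The point of the structure is that everything before the final query is deterministic bookkeeping; only the last stage spends model tokens.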
For example, just now, in my Rust code (about 60k LOC), I wanted to change the data model and brainstorm with the AI to find the right design, and here is the auto-context it gave me:
- Reducing 381 context files (1.62 MB)
- Now 5 context files (27.90 KB)
- Reducing 11 knowledge files (30.16 KB)
- Now 3 knowledge files (5.62 KB)
The knowledge files are my "rust10x" best practices, and the context files are the source files.
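For a sense of what a reduction step like that could look like, here is a deliberately naive sketch: score each candidate file against the query and keep the top few. The scoring here (term overlap) is a hypothetical stand-in; a real tool would use something far more sophisticated:

```rust
// Hypothetical sketch of an "auto-context" reduction: rank candidate
// files by a crude relevance score and keep only the top `keep` paths.

fn score(query: &str, contents: &str) -> usize {
    // naive relevance: count how many query terms appear in the file
    query
        .split_whitespace()
        .filter(|term| contents.contains(term))
        .count()
}

fn select_context(query: &str, files: &[(String, String)], keep: usize) -> Vec<String> {
    let mut scored: Vec<(usize, &String)> = files
        .iter()
        .map(|(path, body)| (score(query, body), path))
        .collect();
    scored.sort_by(|a, b| b.0.cmp(&a.0)); // highest score first; stable sort
    scored.into_iter().take(keep).map(|(_, p)| p.clone()).collect()
}

fn main() {
    let files = vec![
        ("src/model.rs".to_string(), "data model struct field".to_string()),
        ("src/ui.rs".to_string(), "button render click".to_string()),
        ("src/store.rs".to_string(), "data model persistence".to_string()),
    ];
    let picked = select_context("change the data model", &files, 2);
    println!("{picked:?}"); // → ["src/model.rs", "src/store.rs"]
}
```

Going from 381 files (1.62 MB) to 5 files (27.90 KB) is exactly this kind of top-k selection, just with a much better ranking signal.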
How do you re-evaluate your approach? I'm asking because the landscape, at least from my lens, was completely different a year ago. So I fear that as the foundation shifts, whatever learnings, approaches, and mental models I have risk becoming obsolete and starting to work against me.
The problem of evaluating is hard enough as it is without layers of indirection built on top of it.
While the Lenire device may appear similar at first glance because it uses comparable stimulation protocols, I believe Susan Shore’s device is superior. Shore’s approach targets the root neurological cause of tinnitus and aims for measurable reductions in loudness, whereas Lenire primarily focuses on reducing how bothersome the ringing feels. Shore’s research also follows a more first‑principles, neuroscience‑driven path—from basic lab work to carefully controlled clinical trials. Additionally, her studies were more rigorous, incorporating proper control groups, something the Lenire trials lacked.
This is why people generally prefer Lenovos and Frameworks: you can upgrade when you need to and don't have to buy a whole new laptop. Despite all of Apple's environmental claims, they are not environmentally friendly; they keep pushing products that become paperweights whenever Apple deems them to be.
Skills are only loaded when you need them, so you'll probably use fewer tokens overall compared to MCP servers or to including the instructions manually in your main AGENTS.md/CLAUDE.md file, which is always loaded into the system prompt.
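As a sketch of why that saves tokens: a skill lives in its own file with a short frontmatter manifest, and only that name/description metadata sits in context until the skill is actually invoked. The field names below follow Anthropic's published Agent Skills format; the skill itself is invented for illustration:

```markdown
---
name: db-migrations
description: Use when writing or reviewing database migration scripts.
---

# DB migrations skill (body loaded only when invoked)

Full instructions, checklists, and examples go here. Until the agent
decides this skill applies, only the name/description above occupy
context, unlike AGENTS.md/CLAUDE.md content, which is always present.
```

So the always-on cost is a couple of lines per skill, rather than the whole instruction body.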