Hacker News | jwr's comments

> OrcaSlicer supports Bambu printers already

No, it doesn't. It used to, but then Bambu Lab, "for security reasons" (as always), removed access to their "network plugin".

There is a lot of confusion around this, so to be clear: you lose access to Bambu Cloud, which means no quick upload, no remote printing, no remote monitoring of prints, no synchronization of filament data, and none of many other useful features.

You get a half-baked "throw it over the wall" way of sending files to your printer using a standalone Bambu executable (largely neglected). Note that this does not provide a way to synchronize the filament list to your slicer before slicing, which is useful and important.

You also get a "developer/LAN mode", which is an either/or proposition. If you turn it on, you lose cloud features. No more remote monitoring of your prints using your phone.

I find it very annoying that Bambu managed to implant this shallow take of "you can use LAN mode so things are fine" in people's minds.


I re-read that book every 10 years and try to think carefully about whether what Brooks wrote still holds.

The last three times I read the book, everything held.

This time, I'm not so sure: AI does change things significantly. Perhaps not for all teams and not all scales of software, but in my case (solo developer, complex software system) I did measure a 12x productivity increase [1].

Also, some of the problems Brooks describes became much easier, if not borderline trivial with AI. For example, maintaining design documentation that stays consistent with the software being built. I do this and it is no longer a problem.

I still think most of what Brooks wrote is applicable today. I think the biggest difference is that AI enables smaller teams to work on larger systems, and the biggest benefit is for single-person teams (ahem) like me. I see it as another step that allows me to tackle larger systems: the previous one was Clojure which reduced incidental complexity so significantly that I was able to develop the system to the size it is today. AI is the next step: it allows me to build features that would have taken me years in a span of months. Not because of "vibe coding", but primarily because I can work on a set of design documents and turn my ideas into a coherent design.

[1] For the nitpickers: yes, measured, not guessed. Yes, the metric was reasonable. No, it wasn't "lines of code" or something equally silly, in fact one of my main goals is reducing code size as much as possible. Yes, I compared larger time periods: 2 months with AI to an average of 12 months of the previous year. No, the metric wasn't gamed: this is a solo business and I have no interest in gaming my own metrics. I earn a living from this work, so this is as objective as it gets.


Dario Amodei said in his most recent interview with Dwarkesh that Anthropic currently achieves a coding-productivity increase of around 20-30%, which tracks with my experience. What do you do to reap orders of magnitude more?

Also, how much more money do you make? Or are you working less?


I can clearly see, and feel, Dario's associates' "increased productivity" in their Claude Code/Chat/Cowork desktop product...

So many updates, pretty much daily, so many tweaks to the interface. Sometimes the tweaks are a bit dumb, sometimes completely trivial, and other times they just undo what they did previously.

My favorite examples of their newfound velocity:

1. When a nice, easily discovered feature suddenly disappears...

2. When the "customize" section moves around to random places in settings, or entirely out of it

Special mention goes to their scatterbrained keyboard-shortcut strategy.


> What do you do to reap orders of magnitude more?

I don't know what Dario Amodei says, does, how Anthropic is run or structured, or what kind of people work there, so I can't comment on that.

I do know about myself, though. The increase is very real, measured by the number of (Linear) issues resolved. No, I haven't changed how I open or close those issues, I've been using the system for years. During the first three months of 2026 I went through 12x more issues per month than during past years.

But I guess I am not an "average programmer": 35 years of experience means I can work with AI as I would with a small (but very skilled) team. I can architect systems, notice unnecessary complexity, and intuitively choose solutions that are more maintainable. And I am a single-person company, with no managers to report to, KPIs to achieve, presentations to make, etc.

I do not make "more money". It's a mature SaaS. Changes in revenue are over the long term, on a scale of years more than months, and implementing features is no longer enough: marketing is needed for more growth.

But, to be honest, I am tired of defending myself this way. It's not the first time I post this metric, I thought people would find it an interesting data point. Instead, I get downvoted (see my comment above which currently sits at -1 in spite of being objective and factual), and then get plenty of responses asking me to defend my statements.

Come to think of it, I'd rather not convince people about AI increasing productivity so much. I'm not really sure why I bother to post here anymore. I'd rather have everyone (including my competition) believe whatever they want to believe and not use AI.


I think if you build out simple sites it is 10x. That number tends toward 1x as the project gets more complex.

For side projects where you try out an idea, are you not finding that 1h now does what used to be 10h of work?


Simple sites :-)

https://partsbox.com/ — it's an ERP/MRP for companies building electronics. Around 170k lines of Clojure and ClojureScript.


Sounds like a blog post on your experience would be very interesting.

Like a sibling comment - I'm also curious about what that 12x means for you and your business - same revenue at fewer hours? More revenue, fewer hours? Etc.


Mostly faster progress with the same working hours, as I'm trying to improve my app and beat the competitors.

In a mature SaaS features do not map into more revenue quickly, so the effect on revenue can't be measured easily.


What did you measure? It’s a famously difficult problem, so I’m genuinely curious.

Maybe I’m misunderstanding, but that sounds like a 6x improvement, not 12x.

Or, you could do what I did, and process your markdown using Typst. The result is consistent, beautifully formatted documents, generated from your markdown, with mermaid diagrams.

Are you using that cmarker commonmark Typst package? Is it good?

https://typst.app/universe/package/cmarker/


No, I built my own pipeline using pandoc+Typst. Works great.

> has no inherent error model

I'll pitch in here, as I've been doing a lot of thinking about this issue and ended up writing my own (tiny) tools for handling anomalies, modeled on the very well thought-out https://github.com/cognitect-labs/anomalies categorization.

This is actually a much wider problem and not specific to core.async. Handling anomalies is difficult. It used to be that you would have exceptions and errors which would be thrown, unwinding the stack. This pattern no longer works in asynchronous code, or code that needs to pass anomalies between the server and the client. In practical applications, an anomaly might need to be returned from a function, passed through a `core.async` channel, then thrown, unwinding the stack on the server side, then caught and passed to the client side over a WebSocket, and then displayed to the user there.
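To make that journey concrete, here is a minimal Clojure sketch of one hop of it: an anomaly travels through a `core.async` channel as a plain value and is only converted into a thrown exception at the edge. This is an illustrative sketch, not my actual toolkit; `fetch-user` and the ids are hypothetical.

```clojure
(require '[clojure.core.async :as a])

(defn anomaly? [x]
  (and (map? x) (contains? x :cognitect.anomalies/category)))

(defn fetch-user [id]
  ;; Returns either a result map or an anomaly map; never throws,
  ;; so the value can safely cross a core.async channel.
  (if (= id 42)
    {:user/id 42 :user/name "Ada"}
    {:cognitect.anomalies/category :cognitect.anomalies/not-found
     :cognitect.anomalies/message  (str "no user with id " id)}))

(defn run-at-edge [ch]
  ;; Only at the boundary is an anomaly turned back into a thrown
  ;; exception that unwinds the stack.
  (let [v (a/<!! ch)]
    (if (anomaly? v)
      (throw (ex-info (:cognitect.anomalies/message v) v))
      v)))

(run-at-edge (a/go (fetch-user 42))) ; => {:user/id 42 :user/name "Ada"}
```

The same anomaly map can later be serialized over a WebSocket, since it is just data.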

Solving this well is not easy. I think my toolkit, iterated and improved over the years, is close to what I need. But I'm pretty sure it wouldn't handle all the real-world use cases yet.

But again, this is not specific to core.async in any way.


What is your opinion on farolero[0]?

[0]: https://github.com/IGJoshua/farolero


At first glance, it does much more than I would want it to.

My status toolkit just extends the Cognitect anomalies into statuses, adding ::failed (parameters correct, but could not perform request), ::ok, ::accepted and ::in-progress. It also adds a bunch of utility functions like status/?! (throws the parameter if it's an anomaly, returns it otherwise) and macros like status/-?> (threads only while an expression is not an anomaly). That's it.
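For a rough idea, utilities like those could be sketched as below. This is a hypothetical reconstruction from the descriptions above, not the actual toolkit code:

```clojure
(defn anomaly? [x]
  (and (map? x) (contains? x :cognitect.anomalies/category)))

(defn ?!
  "Throw the parameter if it is an anomaly, return it otherwise."
  [x]
  (if (anomaly? x)
    (throw (ex-info (or (:cognitect.anomalies/message x) "anomaly") x))
    x))

(defmacro -?>
  "Like ->, but stops threading as soon as the value is an anomaly."
  [x & forms]
  (reduce (fn [acc form]
            `(let [v# ~acc]
               (if (anomaly? v#) v# (-> v# ~form))))
          x forms))

;; Example: threading stops at the first anomaly in the pipeline.
(-?> {:cognitect.anomalies/category :cognitect.anomalies/not-found}
     :user/name)
;; => the anomaly map itself, untouched
```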

I deliberately avoid trying to do too much here.


Don't be afraid, it's great! I certainly wouldn't call it "obscure", I've been using it for 10 years now to compile a complex app into highly-optimized client-side code. And the community is very welcoming and mature.

> Anyone building software can start using a harness with a modern model to find bugs and harden their code today. We recommend getting started now.

From what I understand, that is a recipe for getting quickly banned by commercial LLM providers?


Not everybody might realize this, but this is a truly excellent and very impressive result. Most models on my M4 Max run at 150W consumption.

Power consumption numbers aren't useful for efficiency calculations without also considering the tokens per second for the same model and quantization.

I could write an engine that only uses 10W on your machine, but it wouldn't be meaningful if it was also 10X slower.

More power consumption is usually an indicator that the hardware is being fully utilized, all things being equal (comparing GPU to GPU or CPU to CPU, not apples to oranges).


What are "bots"?

If I use Claude to gather and summarize information for me, is that a "bot"? Because I recently hit that wall and it wasn't great. Turns out in our quest to fight "bots" we also force humans to do the manual labor of copy/pasting information.

Why would bots "overwhelm" a site is another discussion — I find it really hard to create a website that would be "overwhelmed" by traffic these days, computers are stupidly fast.


Do you think the introduction of Anubis on a lot of open source websites was a coincidence? The AI companies' crawling bots don't play by the regular crawling rules, are not good citizens, and are causing a lot of issues. If your Claude session uses the same user agent as their data-crawling bot (most of the time a site will just check for "claude" in the user agent), then yes, you will be classified as a bot as well.

> Why would bots "overwhelm" a site is another discussion — I find it really hard to create a website that would be "overwhelmed" by traffic these days, computers are stupidly fast.

Are the Cloudflare walls really about reducing load? I thought it's because bots are not profitable. They don't click on ads, don't buy, etc.


I find the "em dashes mean AI" trope annoying — I've been typing em dashes since I learned how to do this on a Mac, which was around 2007 I think. Shift-Option-hyphen became second nature, just like Option-; for an ellipsis (…). It's just how I write. Two hyphens now seem outright barbaric.

It's just a classic noise problem. For better or worse people are flooding the internet with LLM output and the vast majority is not worth reading. People will focus on cheap "tells" to judge what's worth spending their time reading.

The 555 timer is still the most popular chip that hobbyists add to their parts inventory (see rankings at https://partsbox.com/ecdb.html). I find this both interesting and curious — I'd say it has mostly nostalgic value at this point. Almost every practical problem today is better solved by something else. And yet it persists, I guess mostly because of beginner tutorials and first LED blinky circuits.

One nice thing about the 555 is that it at least aged well and is still very usable in those beginner tutorials. Unlike, for example, the uA741, which no one should use.


> Almost every practical problem today is better solved by something else.

I'm curious about this claim. It's certainly easier to just wire up a modern microcontroller, but is there a better option that involves no software and is likely to still work the same today as it did 50 years ago?


I find it much easier to write a ten line program for an 8 pin CH32V003 (or ATTiny85 in past times) to do exactly the timing or SDC comparisons I want than to figure out the circuit and component values for a 555 or op-amp.

For that matter, a 16 pin CH32V003 can emulate a vast array of 7400 series devices as long as you don't need ns timing — no problem for µs. It's also cheaper.


Using a cpu running software to emulate a handful of gates is just the furthest thing from interesting. It's the inverse of elegant.

Until you go to lay out your circuit board. There's a reason microcontrollers are used for tasks like debouncing switches.

I said uninteresting and inelegant. No one disputes that brute force is functional.

"There's a reason microcontrollers are used for tasks like debouncing switches."

Because people are too cheap (or fail that hard at basic analog electronic control) to get a proper single-pole single-throw switch with a pair of MOSFETs in a monostable mode, or use an S-R flip-flop latch to debounce, or even a very simple R-C filter circuit.

"Throw a microcontroller on it and call it a day" is the surest sign of someone not properly educated in electronic engineering.


I think it's like living under a waterfall.

If you live under a waterfall you'll use 1000 gallons of fresh water pumped at blasting high speed to wash a cup.

We live under a waterfall of cpus and gates in general, and organisms don't care if their environment is perverse. A thoughtless organism will happily consume 1000 units of a free resource just to get 1 unit of some other non-free resource.

And a lot of humans are the worst. Thinking beings who elect not to care about anything like that. Like spammers that operate simply because sending email is free for the sender. They get almost nothing from it, and it costs everyone else a lot, but it costs them even less than the tiny bit they gain, and the external costs don't matter to them the tiniest bit.

But the environment is perverse, created by economies of scale and Asian slave labor and the push for advancement for its own sake, which makes existing useful things artificially low value by being "obsolete".

A software version of that might be making apps with Electron. It doesn't matter how much CPU and RAM and disk and general mass of tech stack it takes to make some trivial app. The developer's precious time outweighs all other considerations. If they can make the app in a few minutes with no effort instead of a few hours, it doesn't matter how much of everyone else's resources they consume, since their time is valuable and a million other people's CPUs are free.


All of that stuff is more expensive and uses more board space.

While it incurs a programming issue, the microcontroller will generally be more stable, less temperature sensitive, and consume less power.

For other posters saying 'just wire up a microcontroller': please self-reflect on your disregard for the concepts of simplicity & elegance. Never mind robustness, or educational aspects.

'Grab laptop, fire up IDE & plug in programmer cable' vs. 'configure the circuit using a soldering iron'. Both have their place.


Why is no software so important? If you design your board well enough, you can route the programming ports somewhere you can program the chip in situ, possibly together with other components that also need programming.

But in terms of cost, a simple microcontroller is usually cheaper than a 555 nowadays, often doesn't require external components, and so even if all you wanted was a single function like an edge-triggered pulse, or generate a single frequency, it probably still makes sense to use a microcontroller from a board design perspective. As soon as you want anything slightly more complicated, odds are you can replace a ton of other circuitry on the board with that single chip and a small program.


"Why is no software so important?"

Because nothing is faster and more responsive than direct hardware logic.

"a simple microcontroller is usually cheaper than a 555 nowadays, often doesn't require external components,"

Often? Every UC I've ever used has required a whole slew of caps and resistors just to get the thing to take in operative firmware through a programming port. Even the simple light flashers for vehicles that I've made using a UC and accelerometer need at least two caps and two resistors to make a proper circuit that allows for flashing info to the controller.

"so even if all you wanted was a single function like an edge-triggered pulse, or generate a single frequency, it probably still makes sense to use a microcontroller from a board design perspective."

Frequency generation? Inductor, capacitor, input voltage. Zero UC required and guaranteed to be cheaper.

"As soon as you want anything slightly more complicated, odds are you can replace a ton of other circuitry on the board with that single chip and a small program."

And accomplish at a glacial speed things that a basic hardware-only solution would've handled instantly. As an example: BOSS pedals have basically zero latency because they are all analog. All these newer Line 6 and POD and other digital FX pedal makers have horrible latency; some I've measured at past 50ms (almost as bad as trying to live-monitor a Windows audio device). It has been this way for the over 30 years I've been playing guitar.

Most times, raw hardware with zero software is THE way to go. Anything else is just a performance loss.


> "a simple microcontroller is usually cheaper than a 555 nowadays, often doesn't require external components,"

> Often? Every UC I've ever used has required a whole slew of caps and resistors just to get the thing to take in operative firmware through a programming port.

ATtiny, for example. Many others only require an external capacitor, and complaining about a decoupling cap on a chip replacing a 555 that also needs an RC network to function seems rather petty.

> And accomplish things at a glacial speed that a basic hardware-only solution would've solved.

Most of these uCs operate at 1 MHz or higher. The ATtiny85 can run at 8 MHz from the internal oscillator and has an interrupt latency of 4-6 cycles. To achieve anything that's replacing something you'd do with a 555, you'd have to try incredibly hard to get latency as bad as you're describing. Perhaps they're actually doing something significantly more complicated than just replacing a 555?


Yes. Look at the old National app notes for the LM339 family of comparators.
