The dynamism exists to support the object model. That's the actual dependency. Monkey-patching, runtime class mutation, vtable dispatch. These aren't language features people asked for. They're consequences of building everything on mutable objects with identity.
Strip the object model. Keep Python.
You get most of the speed back without touching a compiler, and your code gets easier to read as a side effect.
I built a demo: Dishonest code mutates state behind your back; Honest code takes data in and returns data out. Classes vs pure functions in 11 languages, same calculation. Honest Python beats compiled C++ and Swift on the same problem. Not because Python is fast, but because the object model's pointer-chasing costs more than the Python VM overhead.
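To make the distinction concrete, here's the shape of the two styles (a minimal sketch in my own naming, not the actual benchmark code from the repo):

```python
# "Dishonest": identity plus hidden mutation. Every call touches object state.
class Accumulator:
    def __init__(self):
        self.total = 0.0

    def add(self, x):
        self.total += x      # mutates in place; invisible at the call site
        return self.total

# "Honest": data in, data out. No identity, nothing changed behind your back.
def accumulate(values):
    total = 0.0
    result = []
    for x in values:
        total += x
        result.append(total)
    return result

print(accumulate([1.0, 2.0, 3.0]))   # [1.0, 3.0, 6.0]
```

The honest version is also the one the VM can run with the least pointer-chasing: the hot loop works on locals instead of instance attributes.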
Don't take my word for it. It's dockerized and on GitHub. Run it yourself: honestcode.software, hit the Surprise! button.
The reason this is impressive has less to do with the tolerances themselves and more to do with backward compatibility across decades at scale. That's the genuinely hard part.
The history here is deeper than most people realize. The United States spent fifty years (roughly 1800 to 1853) at the Springfield and Harper's Ferry armories trying to achieve what LEGO now does routinely: parts manufactured to tight enough tolerances that they are truly interchangeable without fitting. In 1853, a visiting British inspector randomly selected ten muskets made in ten different years, disassembled them, mixed the parts, and reassembled ten functional muskets using only a screwdriver. Tolerances of a thousandth of an inch. It was considered impossible by most of the engineering establishment of the time.
The way they got there was by building machines, then using the parts those machines made to build better machines, then using those improved parts to build even better machines. A virtuous circle of transferring skill from human hands to tooling. This is the actual origin story of what historians call the American System of Manufacture, and it's the foundation the entire modern automotive supply chain sits on.
So yes, any competent injection molder holds tight tolerances today. But that's precisely the point: the reason it seems unremarkable now is that two centuries of compounding precision made it so.
My experience is you are indeed correct. Modern phone/tablet users are so divorced from the file system that those brought up on these devices have no idea what a file is or where their data is saved... The data (video, music, shopping list) simply exists "in the app".
(And don't get me started on their concept of data portability!)
...I mean, we've spent the last decade building products with the explicit intention of hiding or otherwise papering over the File abstraction. What the hell did anyone expect to happen as a result? People coming in never learn it, people aging out take it with them, and the knowledge degrades. There was value in keeping the file as a first-order abstraction.
What did you expect from people who go around thinking "big balls" is a cool and impressive nickname?
The problem is that, as idiotic as the people of DOGE were, questionable motives are the only explanation left for their actions, given what a monumental failure DOGE has been at meeting its announced goals.
There is literally no fraud/abuse that has been discovered, let alone prosecuted, and they redefined "waste" as "programs I don't like" to save a few hundred million, maybe a billion... from an original target between 1 and 2 TRILLION (according to Musk).
I unfortunately don't think we can (or should) apply Hanlon's razor; they will abuse the collected data one way or another.
>There is literally no fraud/abuse that has been discovered, let alone prosecuted, and they redefined "waste" as "programs I don't like" to save a few hundred million, maybe a billion... from an original target between 1 and 2 TRILLION (according to Musk).
What do you expect from an Admin engaging in concerted destruction of the Institutional framework we've been dependent on all these years? Nothing can be discovered if you don't look, and Donnie boy replaces anyone who'd be tasked with looking with a goon. Nothing to see here, moving right along.
This is the strongest point in the thread. The article treats poverty, climate, and markets as though the obstacle is insufficient model capacity. But these systems contain agents with values and motivations who actively resist interventions. A billion-parameter model of a system whose components are trying to game the model will never be a theory of that system. The agents will simply route around it.
More broadly, the article assumes that scaling model capacity will eventually bridge the gap between prediction and understanding. I have pre-registered experiments on OSF.io that falsify the strong scaling hypothesis for LLMs: past a certain point, additional parameters buy you better interpolation within the training distribution without improving generalization to novel structure. This shouldn't surprise anyone. If the entire body of science has taught us anything at all, it is that regularity is only ever achieved at the price of generality. A model that fits everything predicts nothing.
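You can watch the same pattern in miniature with plain curve fitting (a toy analogy of mine, not a summary of the OSF experiments): extra parameters keep improving the fit inside the training range while prediction on inputs outside it gets worse, not better.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 2 * np.pi, 40)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)
x_novel = np.linspace(2 * np.pi, 3 * np.pi, 40)   # outside the training range

for degree in (3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    novel_mse = np.mean((np.polyval(coeffs, x_novel) - np.sin(x_novel)) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, novel-range MSE {novel_mse:.3g}")
```

Training error falls with every added degree; error on the novel range explodes. More capacity, better interpolation, no new understanding.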
The author gestures at mechanistic interpretability as the path from oracle to science. But interpretability research keeps finding that what these models learn are statistical regularities in training data, not causal structure. Exactly what you'd expect from a compression algorithm. The conflation of compression with explanation is doing a lot of quiet work in this essay.
The interesting version of the argument isn't about substrate: it's about motivation.
Present the trolley problem to GPT-4 and it gives you a philosophy survey answer.
Present it to a human and their palms sweat. The gap isn't computation, it's that humans are value-making machines shaped by millions of years of selection pressure.
Pollan lands on the wrong argument (biology vs. silicon) when the real one is: where do the values come from, and can they emerge without a reproductive lineage that stakes survival on getting them right?
I'm not sure I would call it a requirement for consciousness, but knowing that most beings with general intelligence (humans) have a form of it similar to my own does make it easier to sleep at night.
To clarify: I'm not talking about morals specifically. I mean value in the broader sense of spontaneously assigning relative importance to things, producing a hierarchy that drives action.
You're thirsty. There's pond water in the forest and clean well water in the town square, but you're an escaped prisoner. Suddenly the value hierarchy flips: safety trumps water quality. You do this instantly, with incomplete information, integrating survival, context, and preference in a way that no one programmed into you.
Morality is one expression of this capacity, but so is aesthetic judgment, risk assessment, curiosity, and the decision to walk down a dark alley or not. The trolley problem is just a dramatic example. The mundane examples are actually more telling, because we do them thousands of times a day without noticing.
No current AI has any form of this. It has no mechanism for deciding that anything matters more than anything else except through weightings that were derived from human-generated training data. It borrows our value hierarchies statistically. It doesn't have its own.
The substrate argument is the wrong hill for Pollan to die on. The stronger version isn't "meat vs. silicon" — it's that brains are value-making machines operating under evolutionary pressure, and no current AI architecture has anything analogous to that. You can simulate the outputs of valuation without having the mechanism. The question isn't whether consciousness can exist in another substrate, it's whether you can get there without the thing that actually drives human cognition: spontaneous assignment of moral and survival value with no prior programming.
AI is an extension and acceleration of so-called "evolutionary pressure". But so far AI models lack both agency and consciousness, and do not "experience" this pressure, though they are entirely defined by it. They can also explain this relationship to you.
That's true in the same sense that agriculture and nuclear weapons are extensions of evolutionary pressure. Everything humans produce is, by definition. But it empties the term of any useful meaning.
The distinction that matters: evolutionary pressure operates through differential survival across generations, where the organism has skin in the game. AI models are optimized via gradient descent on loss functions that humans define. That's artificial selection toward human objectives, not evolutionary pressure in any meaningful sense. The model has no stake in the outcome. Nothing is at risk for it.
You actually make this point yourself in your second sentence: they "lack both agency and consciousness, and do not experience this pressure." I agree completely. But that's precisely why the first sentence doesn't do any work. If they don't experience it, then calling it evolutionary pressure is metaphorical at best. And the metaphor obscures the exact gap we should be paying attention to: the absence of anything at stake.
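A deliberately crude sketch of the two regimes (toy code, my naming, not anyone's actual training loop):

```python
import random

# Optimization: a human chooses the loss; the parameter just descends it.
def gradient_step(w, x, y, lr=0.1):
    pred = w * x
    grad = 2 * (pred - y) * x        # derivative of (pred - y)**2, an objective we picked
    return w - lr * grad

# Evolutionary pressure: no objective function, only differential survival.
def generation(population, fitness, keep=0.2):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: max(1, int(len(population) * keep))]
    # Everyone else is simply gone; the lineage continues only through survivors.
    return [s + random.gauss(0.0, 0.1)
            for s in survivors
            for _ in range(len(population) // len(survivors))]
```

In the first function, nothing is at stake for w; it gets nudged wherever the chosen loss points. In the second, failing the fitness test removes you from the lineage entirely. That removal is the skin in the game that the word "evolutionary" smuggles in.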
The commonality breaks down at value assignment. You hear an unexpected sound and have a threat/delight assessment in 170ms. Faster than Google serves a first byte. You do this with virtually no data.
An LLM doesn't assign value to anything; it predicts tokens. The interesting question isn't whether we share a process with LLMs, it's whether the thing that makes your decisions matter to you, moral weight, spontaneous motivation, can emerge from a system that has no survival stake in its own outputs. I wrote about this a few years ago as "the consciousness gap": https://hackernoon.com/ai-and-the-consciousness-gap-lr4k3yg8
You don't own him.