I don't know anything about this ONTO Standards group, and the piece gives me weird pseudoscience vibes, but if their idea really leads to useful probabilistic and epistemological responses from LLMs, then that could be a game changer.
"Law is about justice" is one of those things a good professor gets every 1L to raise their hands in agreement to before spending the next semester proving why that's 100% not the case.
Justice is part of a moral framework. Law is part of a procedural framework. You can structure the law to try to optimize for justice, but the law has never been about morality, the law is about keeping society operating on top of whatever structure is dominant.
Example: the Supreme Court ruled in Ozawa v. United States in 1922 that a Japanese-descended person could not naturalize as a US citizen despite having white skin because he was not technically Caucasian. The next year, in 1923, they ruled in United States v. Bhagat Singh Thind that an Indian-descended man could not naturalize despite being Caucasian because his skin was not white.
Why did the court give two contradictory reasons for the rulings which would each be negated if the reasoning were swapped? I wouldn't say it was for justice. It was because America at that time did not want non-white immigrants, and what 'white' is, is a fiction that means something completely different than what it claims to mean, and the justices were upholding that structure.
>For the owner of a five-year-old EV, this results in a repair bill ranging from $3,000 to $4,500.
>This is the EV equivalent of a blown engine caused by a faulty spark plug.
It's an interesting problem. Many people focus on batteries and motors, but the fact that capacitors turn out to be so critical AND not replaceable seems to change the economics of EVs after 5 years.
Now that I look at it that article has AI tells like:
In 2026, we are seeing a rising trend of "insulation fatigue."
But I dunno, I think there are always going to be enterprising techs who will figure out how to replace the $25 fuse inside a $2500 module. I've bought back more than one ICE car from the insurance company after it was totaled and managed to get it back on the road at a reasonable expense.
And that brings us back to square one - if everyone is a 100x engineer, then everyone's again a 1x engineer. Lewis Carroll nailed it with the Red Queen's Race.
This mythical class of developer doesn't exist. Are you trying to tell me that there is a class of developers out there doing three months' worth of work every single day at the office?
Litigation can be one reason, but I think the more likely reason is that people want to avoid confrontation.
Could you tell the person taking the pizza that that is inappropriate behavior? Sure. But that is confrontational. The people who might set the boundary are worried both about how they will appear to others (am I being a bully?) AND about the possible repercussions (is the guy I'm telling off going to yell at me or threaten me?)
That's crazy to me. At this point, I don't even know if the git commit log would be useful to me as a human.
Maybe it's just me, but I like to be able to do both incremental testing and integration testing as I develop. This means I would start with the lexer and parser and get them tested (separately and together) before moving on to generating and validating IR.
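To make the workflow concrete: the idea is that the lexer gets its own tests before a parser even exists. A toy sketch in Python (the lexer, token names, and grammar here are all hypothetical, just illustrating the incremental step):

```python
import re

# Token patterns for a tiny arithmetic language (illustrative only).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("PLUS", r"\+"),
    ("STAR", r"\*"),
    ("SKIP", r"\s+"),  # whitespace, discarded
]

def lex(source):
    """Turn a source string into a list of (token_name, text) pairs."""
    tokens = []
    pos = 0
    while pos < len(source):
        for name, pattern in TOKEN_SPEC:
            m = re.match(pattern, source[pos:])
            if m:
                if name != "SKIP":
                    tokens.append((name, m.group()))
                pos += len(m.group())
                break
        else:
            raise ValueError(f"unexpected character at position {pos}")
    return tokens

# The lexer can be tested in isolation, before any parser is written:
assert lex("1 + 2") == [("NUMBER", "1"), ("PLUS", "+"), ("NUMBER", "2")]
```

Once the lexer is trusted, the parser gets the same treatment, and only then do the two get tested together.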
It looks like the AI is dumping an entire compiler in one commit. I'm not even sure where I would begin to look if I were doing a bug hunt.
YMMV. I've been a solo developer for too many years. Not that I avoided working on a team, but my teams have been so small that everything gets siloed pretty quickly. Maybe life is different when more than one person works on the same application.
The problem is that convenience trumps everything.
- It is convenient to use Facebook to chat with family
- It is convenient to use credit cards to pay the local shop
- It is convenient to use Netflix to watch movies
- It is convenient to pay a (lower) monthly fee than a (higher) purchase price for MS products
- It is convenient to have Apple / Google take care of email
- It is convenient to use Uber instead of a taxi
The golden cage of convenience is why nothing will change in the US -- we prize convenience above all else.
Sorry to be blunt, but it is extremely inconvenient to be force-exposed to the internal politics of some religious shithole country which has twice voted against its own interests. Where people don't believe in healthcare but accept school shootings. Where society cares about body positivity until Ozempic arrives. A country which talks bigly about geopolitics and ignores agreements it has signed.
It is inconvenient to buy a Tesla to help save the planet and then see emerald nepo baby Elon Musk doing Hitler salutes, and US citizens downplaying it due to their special understanding of freedom of speech.
Or a sweaty Peter Thiel morphing from startup evangelist to religious nut babbling about the antichrist.
Or a Jeff Bezos who ships stuff from China to Europe being so unhappy with his life that he needs to marry the wife of his neighbor.
On top of this there's the still unresolved child sexual abuse scandal that basically implicates all of the US upper class, including senior leadership of US tech companies, some of whom, like Sergey Brin, suddenly come out of retirement because they keep being mentioned in the Epstein files.
For more and more non-US people the inconvenience of seeing all this outweighs the benefit of being able to use some sort of web application. We have survived before on Nokia phones and TomTom navigation systems, and we'll be able to do so again.
US tech companies had US government support and a helpful non-US regulatory environment to capture value from our countries. At their core, they are rent-seeking middlemen, parasitic to our economies.
The parasite needs a host, but the host can always find a new parasite.
It's pretty clear that the security models designed into operating systems never considered networked systems. Given that most operating systems were designed and deployed before the internet, this should not be a surprise.
Although one might consider it surprising that OS developers have not updated security models for this new reality, I would argue that no one wants to throw away their models due to 1) backward compatibility; and 2) the amount of work it would take to develop and market an entirely new operating system that is fully network aware.
Yes we have containers and VMs, but these are just kludges on top of existing systems to handle networks and tainted (in the Perl sense) data.
> It's pretty clear that the security models designed into operating systems never considered networked systems. Given that most operating systems were designed and deployed before the internet, this should not be a surprise.
I think Active Directory comes pretty close. I remember the days where we had an ASP.NET application where we signed in with our Kerberos credentials, which flowed to the application, and the ASP.NET app connected to MSSQL using my delegated credentials.
When the app then uploaded my file to a drive, it was done with my credentials, if I didn't have permission it would fail.
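For readers who haven't seen this pattern, the classic setup looked roughly like this (a sketch from memory; exact behavior depends on IIS and Active Directory configuration, including constrained delegation being enabled for the service account):

```xml
<!-- web.config: run requests under the caller's Windows identity -->
<system.web>
  <authentication mode="Windows" />
  <identity impersonate="true" />
</system.web>
```

The SQL connection string then uses `Integrated Security=SSPI` instead of a username/password, so MSSQL sees the delegated Kerberos identity of the end user rather than a shared service account.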
> It's pretty clear that the security models that were designed into operating systems never truly considered networked systems
Andrew Tanenbaum developed the Amoeba operating system with those requirements in mind almost 40 years ago. There were plenty of others in the systems research community that proposed similar systems. It's not that we don't know how to do it; it's just that the OSes that became mainstream didn't want to / need to / consider those requirements necessary / <insert any other potential reason I forgot>.
Yes, Tanenbaum was right. But it is a hard sell, even today, people just don't seem to get it.
Bluntly: if it isn't secure and correct it shouldn't be used. But companies seem to prefer insecure, incorrect but fast software because they are in competition with other parties and the ones that want to do things right get killed in the market.
Developers will militate against anything that they perceive to make their life difficult, e.g. anything that stops them blindly running 'npm install' and executing arbitrary code off the internet.
Well yeah, we had to fix an LLM setup that broke things at a client. We asked why they didn't sandbox it, and the devs said they tried nsjail, couldn't get their software to work with it, gave up, and just let it rip without any constraints because the project had to go live.
There is a lot to blame on the OS side, but Docker/OCI are also to blame for not allowing permission bounds and forcing every decision onto the end user.
Open desktop is also problematic, but the issue is more about user land passing the buck, across multiple projects that can easily justify local decisions.
As an example, if crun set reasonable defaults and restricted namespace incompatible features by default we would be in a better position.
But Docker refused to even allow you to disable the --privileged flag a decade ago.
There are a bunch of *2() system calls that decided to use caller-sized structs, which are problematic, and AppArmor is trivial to bypass with LD_PRELOAD, etc.
But when you have major projects like llama.cpp running as container UID 0, there is a lot of hardening that could happen with projects just accepting some shared responsibility.
Containers are just frameworks to call kernel primitives, they could be made more secure by dropping more.
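As one hedged illustration of what "dropping more" can look like with stock Docker flags today (the image name is hypothetical; whether an application survives these restrictions depends entirely on what it does):

```shell
# Run unprivileged, drop all capabilities, forbid privilege escalation,
# and mount the root filesystem read-only.
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  myimage:latest
```

The point of the thread stands: none of this is the default, so in practice most containers run with far more privilege than they need.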
But OCI wants to stay simple and just stamps out a couple of SELinux/AppArmor/seccomp profiles, and D-Bus does similar.
Berkeley sockets do force unsharing of the netns, etc., but Unix is, at its core, about dropping privileges.
Network awareness is actually the easier part, and I guess if the kernel implemented POSIX socket authorization it would help, but when userland isn't even using basic features like uid/gid, no OS would work, IMHO.
We need some force that incentivizes security by design and sensible defaults; right now we have whack-a-mole security theater. Strong or frozen caveman opinions win out right now.
> It's pretty clear that the security models designed into operating systems never considered networked systems.
Having flashbacks to Windows 95/98 which was the reverse: The "login" was solely for networked credentials, and some people misunderstood it as separating local users.
This was especially problematic for any school computer lab of the 90s, where it was trivial to either find data from the previous user or leave malware for the next one.
Later on, software was used to try to force a full wipe to a known-good state in-between users.
> the security models designed into operating systems never considered networked systems
The security model was aimed at putting the user in control of the software they run. That's what general-purpose computing is: allowing the user to use the machine's resources for whatever general purpose they intend. The only protection required was to make sure the user couldn't interfere with other users on the same system.
What was never considered before is adversarial software. The model we're now operating under is that users are no longer in control of the software they run. That is the primary thing that has changed; not the users, not the network, but the provenance and accountability of software.
Excuse me? Unix has been multiuser since the beginning. And networked for almost all of that time. Dozens or hundreds of users shared those early systems and user/group permissions kept all their data separate unless deliberately shared.
AI agents should be thought of as another person sharing your computer. They should operate as a separate user identity. If you don't want them to see something, don't give them permission.
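On a stock Linux box, the "agent as another user" idea is a few commands (a minimal sketch; `agent` and `some-agent-cli` are hypothetical names, and the chmod assumes your distro doesn't already restrict home directories):

```shell
# Create a dedicated account for the agent, with its own home directory.
sudo useradd --create-home --shell /bin/bash agent

# Keep your own home directory off-limits to it.
chmod 750 "$HOME"

# Run the agent under that identity; it can only touch what 'agent' can.
sudo -u agent -i some-agent-cli
```

Anything you want the agent to see gets shared explicitly, exactly as you would with a human colleague on the same machine.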
We focus on negative outcomes because that relates directly to survival. Our brains are wired for it. Talking about negative outcomes means we learn about them and have a better chance of avoiding them. Plus, the fear response is much stronger and lasts longer than the happy / joy response.
Note that for humans and other social animals "survival" doesn't always mean life or death -- it can mean being included or excluded from a social group which indirectly affects survival chances.