> One engineer captured this shift perfectly in a widely shared essay, describing how AI transformed the engineering role from builder to reviewer.
I stopped here. Was this written by an LLM? This sentence in particular reads exactly like the author supplied said essay as context and this sentence is the LLM's summarization of it. Nowhere is the original article linked, either, further decreasing trust. Moreover, there's an ad at the bottom for some BS "talent" platform to hire the author. This article is probably an LLM-generated ad.
My trust is vacated.
This makes me feel that the SWE work/identity crisis is less important than the digital trust crisis.
Yeah, agreed. I probably wasn't going to delete my OpenAI account (à la the link that is also being upvoted on HN); it just seemed like a hassle vs. simply ceasing to use OpenAI. But when the staff at OpenAI employ mental gymnastics, selective hearing, willful ignorance, or plain ignorance to justify compliance with man-made horrors, I think it's probably important to vote with our feet.
I use agentic LLMs (for side projects, properly sandboxed) as much as the next guy, but collectively the normalization of deviance is pretty apparent and shocking (https://en.wikipedia.org/wiki/Normalization_of_deviance).
Backdooring your own machine, sending your .env files, normalizing slop in code review, leaking IP (which is trained on and sent to RLHF), YOLO mode: these things would have been unconscionable two years ago.
Tangentially related: "slop" really isn't a negative enough term for unwanted LLM garbage. "Slop," which is fed to pigs, has utility. "Slop" as a verb doesn't necessarily have a (strong) negative association ("It was slopped on the plate, but it was tasty").
I use the term "barf" more often. Barf has no utility*. Barf is always seen in a negative context. Barf is forcibly ejected from an unwilling participant (the LLM), and barf's foulness is coerced upon everyone that witnesses it. I think it's a better metaphor.
I know that this is just semantics, but still.
* even though LLM output __can__, and often does, have utility, we are specifically referring to unwanted LLM output that does not have utility. I'm not trying to argue that LLMs are objectively useless here, only that they are sometimes misused to the users' detriment.
This is an interesting observation. One could argue that some AI-generated or AI-driven things do have utility, and thus qualify as "slop" (although not for those on the receiving end). For example, when used to drive clicks and generate revenue, to troll, or to spread propaganda. You get the idea.
In this instance however, I agree, barf is more accurate.
Seems like a catch-22. For codebases that I'm highly familiar with and regularly perform code review in, I'd say "thanks LLM, but I don't trust you, I'm more familiar with this codebase than you, and I don't need your help." For codebases that I'm not familiar with, I'm not really performing code review (at least not approving MR/PRs or doing the merging).
But still, this is very creative and a nice application of LLMs that isn't strictly barf.
At 60%, it highlights significantly more test code than the material changes that need review. Strike one.
At no threshold (0-100) does it highlight the deleted code in UniqueBroadcastEvent.php, which seems highly important to review. The maintainer even comments about the removal in the actual PR! Strike two.
The only line that gets highlighted at > 50% in the material code diffs is one that hasn't changed. Strike three.
So, honest attempt, but it didn't work out for me.
From a quick Google query, it says that ~90% of Americans have health insurance (which is higher than I'd have expected). I'd be very interested in knowing the number of uninsured, negligent/nefarious, and exorbitant invoices that are issued as a percentage of all invoices, for the purpose of determining the scale of criminality with respect to your description.
I glanced at Ubuntu Touch, but its device compatibility looked severely lacking (https://devices.ubuntu-touch.io/). I have old Pixel phones I could potentially try it out on, but the last Pixel phone that is officially supported is the 3a. So that is a bummer.
There are decent Linux phones you can buy now, such as the FuriPhone FLX1 (Debian), Volla Quintus (Ubuntu Touch), Jolla C2 (SailfishOS) etc. The best part is that all of them also support running Android apps (via Waydroid or similar compatibility layer), so you get the best of both worlds.
It seems as though all of the AI Agents (I'm still not sure what that even means) require 3rd parties? Or more specifically: are you aware of any Ollama compatible AI Agents?
Thanks. I was able to get up and running with AnythingLLM, which offers agency with Ollama (provided a model that supports tools). Pretty neat, I'm excited to try it out.
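For anyone curious what "agency with Ollama" looks like at the API level, here's a minimal sketch of tool calling against a local Ollama server. Everything here is an assumption for illustration: the model name (`llama3.1`), the hypothetical `get_weather` tool, and the city in the prompt; the request shape follows Ollama's `/api/chat` endpoint with its OpenAI-style `tools` field.

```python
# Sketch: hand a local Ollama model a tool definition via /api/chat.
# Assumes Ollama is running on its default port with a tool-capable
# model pulled (model name "llama3.1" is an assumption here).
import json
import urllib.request

def get_weather(city: str) -> str:
    """Hypothetical local tool the model may ask us to invoke."""
    return f"Sunny in {city}"

# Tool schema in the OpenAI-style function format Ollama's chat API accepts.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "llama3.1",  # assumption: any tool-capable model works
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": tools,
    "stream": False,
}

def chat(url: str = "http://localhost:11434/api/chat") -> dict:
    """POST the payload to a local Ollama server and return its JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# If the model decides to use the tool, the reply's message contains
# "tool_calls"; the agent loop then runs get_weather(...) locally and
# sends the result back as a "tool" role message. AnythingLLM and
# similar frontends automate exactly this loop.
```

The key point is that the whole loop stays on your machine: the "agent" is just this request/execute/respond cycle, with no 3rd-party service required.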