creating isolated staging & prod environments -- good idea
allowing an AI agent to get hold of creds that let it execute destructive changes against production -- not a great idea
allowing prod database changes from the machine where the AI agent is running at all -- not a great idea
choosing a backup approach that fails completely if there's an accidental volume wipe API call -- not a great idea
choosing to outsource key dependencies to a vendor, where you want a recovery SLA, without negotiating & paying for a recovery SLA -- you get what you get, and you don't get upset
> choosing to outsource key dependencies to a vendor
This is the entire thing. The author is basically slinging blame at a bunch of different vendors, and while some of the criticisms might be valid product feedback, it absolutely does not achieve what they're trying to do, which is to absolve themselves of responsibility. This is a largely unregulated industry, which means that when you stand up a service and sell it to customers, you are responsible for the outcome. Not anyone else. It doesn't matter if one of your vendors does something unexpected. You don't get to hide behind that. It was your one and only job to not be taken by surprise. Letting the hipster ipsum parrot loose with API credentials is a choice. Trusting vendors without verifying their claims is a choice. Failing to read and understand documentation is a choice.
> creating isolated staging & prod environments -- good idea
Would have been a good idea but he didn’t do this either. The volume in question was used in both staging and production apparently, per the “confession”. The agent was deleting the volume because it was used for staging, not realizing it was also used for prod.
Hindenburg Research is great. They also did the Nikola exposé (that bunch of shysters who claimed to have electric truck technology when their truck couldn't even move under its own power, so they filmed it rolling down a gentle slope).
For anyone wanting to get into the weeds about detecting accounting fraud, the book "Financial Shenanigans" has lots of historical examples of ways company executives have cooked the books to make their public company financial statements appear more appealing to investors than they actually are.
If I follow, this isn't a compile-time inline directive; it's a source transformation, applied at `go fix` time, of client code that calls the annotated function.
Per the post, it sounds like this is most effective in closed-ecosystem internal monorepo-like contexts where an organisation has control over every instance of client code & can `go fix` all of the call sites to completely eradicate all usage of a deprecated API:
> For many years now, our Google colleagues on the teams supporting Java, Kotlin, and C++ have been using source-level inliner tools like this. To date, these tools have eliminated millions of calls to deprecated functions in Google’s code base. Users simply add the directives, and wait. During the night, robots quietly prepare, test, and submit batches of code changes across a monorepo of billions of lines of code. If all goes well, by the morning the old code is no longer in use and can be safely deleted. Go’s inliner is a relative newcomer, but it has already been used to prepare more than 18,000 changelists to Google’s monorepo.
It could still have some incremental benefit for public APIs where client code is not under centralised control, but would not allow deprecated APIs to be removed without breakage.
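A minimal sketch of what the annotation might look like, assuming the `//go:fix inline` directive form and a hypothetical wrapper function (names invented for illustration):

```go
package strutil

import "strings"

// Clean trims leading and trailing whitespace from s.
//
// Deprecated: call strings.TrimSpace directly.
//
//go:fix inline
func Clean(s string) string {
	return strings.TrimSpace(s)
}
```

Once clients run `go fix`, each call site like `strutil.Clean(x)` is rewritten to `strings.TrimSpace(x)`; in a closed ecosystem, after the robots have swept every caller, the wrapper itself can be deleted.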
yeah this is the part that got me excited honestly. we're not google-scale by any stretch but we have ~8 internal Go modules and deprecating old helper functions is always this awkward dance of "please update your imports" in slack for weeks. even if it doesn't let you delete the function immediately for external consumers, having the tooling nudge internal callers toward the replacement automatically is huge. way better than grep + manual PRs
it could be better than a nudge -- if you could get a mandatory `go fix` call into internal teams' CI pipelines that either fixes in place (perhaps risky) or fails the build if the code isn't already identical to the fixed code.
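a minimal sketch of the fail-the-build variant as a Go test (hypothetical name; assumes the CI job runs from a clean git checkout):

```go
package ci_test

import (
	"os/exec"
	"testing"
)

// TestGoFixClean fails if `go fix` would still rewrite any checked-in
// file, i.e. if some caller hasn't applied the pending fixes yet.
func TestGoFixClean(t *testing.T) {
	// Apply all pending fixes to the working tree.
	if out, err := exec.Command("go", "fix", "./...").CombinedOutput(); err != nil {
		t.Fatalf("go fix failed: %v\n%s", err, out)
	}
	// A dirty checkout afterwards means the committed code wasn't go fix-clean.
	if out, err := exec.Command("git", "diff", "--exit-code").CombinedOutput(); err != nil {
		t.Errorf("not go fix-clean; run `go fix ./...` and commit:\n%s", out)
	}
}
```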
> It could still have some incremental benefit for public APIs where client code is not under centralised control, but would not allow deprecated APIs to be removed without breakage.
It makes those breakages less painful. A project can eventually remove a deprecated API after notifying other projects to run `go fix`. And when projects ignore that advice (some always will), they can revert to a previous working version, run `go fix`, and then upgrade, without spending time in the code identifying how to replace each removed API.
And projects that routinely update and run `go fix` will never notice the removal of deprecated code. Given the other benefits of `go fix` (switching to more readable methods, leveraging more efficient ones) on top of the security fixes that come with regular updates, this should be the workflow for most maintained projects.
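Concretely, the recovery path for a project that ignored the deprecation notice might look like this (module path and versions are hypothetical):

```
go get example.com/dep@v1.8.0   # roll back to the last version that still has the deprecated API
go fix ./...                    # the inline directives rewrite every call site to the replacement
go get example.com/dep@latest   # upgrade; nothing references the removed API any more
```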
I'm not sure what all of the hazards are, but I could imagine a language (or a policy) where public APIs ship with all of the inline fix directives packaged as robust transactions (some kind of "API-version usage diffs"). When the client pulls the new API version they are required to run the update transaction against their usage as part of the validation process. The catch being that this will only work if the fix is entirely semantically equivalent, which is sometimes hard to guarantee. The benefits would be huge in terms of allowing projects to refine APIs and fix bad design decisions early rather than waiting or never fixing things "because too many people already depend on the current interface".
True — and for a lot of people spreadsheets are perfectly fine. GoldFrame is for freelancers who want something that looks client-ready out of the box, without the formatting work.
there's also a bunch of dedicated constraint programming solvers / high level modelling languages for these kinds of constraint-y combinatorial optimisation problems
e.g. https://www.minizinc.org/ offers a high level modelling language that can target a few different solver backends
you might get pretty good results by skipping the custom algorithm entirely: drop in an existing industrial-grade constraint programming solver, model your procgen problem in a high level language, and let the solver find you random solutions (or exhaustively enumerate them). that leaves more time to iterate on the problem definition to produce more interesting maps, rather than getting bogged down writing a solver.
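for flavour, a toy MiniZinc model with a made-up adjacency rule (terrain names and the rule are invented for illustration, not from the article):

```minizinc
% assign a terrain to each cell of an n x n grid such that
% water never borders mountain (a made-up procgen constraint)
int: n = 8;
set of int: CELL = 1..n;
enum TERRAIN = {Grass, Water, Mountain};
array[CELL, CELL] of var TERRAIN: grid;

predicate compatible(var TERRAIN: a, var TERRAIN: b) =
  not (a = Water /\ b = Mountain) /\ not (a = Mountain /\ b = Water);

% orthogonally adjacent cells must be compatible
constraint forall(i in CELL, j in 1..n-1)(compatible(grid[i, j], grid[i, j+1]));
constraint forall(i in 1..n-1, j in CELL)(compatible(grid[i, j], grid[i+1, j]));

solve satisfy;
```

any backend MiniZinc supports (Gecode, Chuffed, OR-Tools, ...) can then satisfy or enumerate solutions without you writing any solver code.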
Yeah, you can also use Clingo [0] which is pretty popular and people have tried it specifically with WFC content generation [1]. You can even run it in the browser easily [2].