This is a good exercise but IMHO, when you really start using a workflow for production usecases, you need a proper, Turing-complete programming language as a DSL.
There used to be a project called Benthos (acquired and rebranded by Redpanda in 2024) that was amazing - you might want to draw some inspiration from it.
However, durable workflows have also gained popular acceptance as functional design reaches a wider audience.
While Temporal is the most popular choice when it comes to durable workflows, DBOS (cofounded by the father of PostgreSQL) is my personal favorite.
At the moment, orchestration in DBOS has certain gaps - you might very well consider spending your effort on closing those gaps. The value there would be phenomenal!
Hi Felipe! Just point your agent at https://docs.dbos.dev/python/prompting and give it a go - you can really play around with it as much as you want and solve real problems you care about, rather than me lecturing you about it :)
That said, DBOS really makes durable workflows accessible and approachable. Having already used Temporal, I think you'll really appreciate how quickly you can get started with DBOS. I forget if they support SQLite, but if you have a PostgreSQL server set up, you really don't need anything else to write your first few DBOS durable workflows (vs. needing a Temporal server or cluster).
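The core replay idea behind durable workflows can be sketched in a few lines. This is a toy model, NOT the DBOS API - in DBOS the journal lives in PostgreSQL and the decorators do this for you, but the mechanism is the same: each step's result is persisted, so a crashed workflow can be re-run and already-completed steps are replayed from the journal instead of re-executed.

```python
# Toy sketch of durable-workflow replay (not the DBOS API).
journal = {}  # in a real system this lives in durable storage (e.g. PostgreSQL)

def durable_step(name, fn):
    """Run fn once; on any later run, replay the journaled result."""
    if name in journal:
        return journal[name]      # step already completed before a "crash"
    result = fn()                 # first execution: run and persist
    journal[name] = result
    return result

def workflow():
    a = durable_step("fetch", lambda: 40)       # e.g. an API call
    b = durable_step("compute", lambda: a + 2)  # e.g. a transformation
    return b

print(workflow())  # 42
print(workflow())  # 42 again - both steps replay from the journal, neither re-runs
```

If the process dies between "fetch" and "compute", re-running `workflow()` skips "fetch" and resumes at "compute" - that resumability is the whole value proposition.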
Let me know if I got you interested to try it out. I first learned about Temporal from Mitchell Hashimoto as they were using it for Hashicorp Cloud. Eventually I discovered DBOS and now all my personal projects are on DBOS.
I'm not a fan of regulation in general but over the last decade it has been extremely frustrating with the removal of replaceable SD cards and batteries from Androids.
I never put my phones in my back pocket nor do I wear butt hugging leggings, so having a thick phone stick out my ass and make it look bad isn't on my list of worries. I end up purchasing thick waterproof cases for these slim phones anyways.
What's most confusing is the premium phones lack replaceable SD cards and batteries - it's like they are trying to take the worst ideas from the Apple ecosystem and simply don't understand why some people use Androids.
Surprisingly, it's the cheaper models that carry replaceable SD cards and batteries - I would have imagined the opposite!
I often go on trips and hikes with poor cellular coverage and having some SD cards with useful information or being able to swap them out as the camera gets full is really helpful. Attaching drives over the USB port isn't really practical.
When I do have cellular coverage, I might have to rapidly download a LOT of data, which overheats the phone and discharges the battery. With a replaceable battery, this isn't even an issue.
The benefits of replaceable batteries cannot be overstated when you're off the grid, or when you take good care of your phone and it lasts more than a few years. I can charge a few batteries during the day using solar and just swap them in as evening sets in, instead of having to plug the phone into a powerbank and pray it doesn't shut off as I keep using it.
I think in general not being able to replace the battery without tools is quite an acceptable compromise nowadays. The needed mechanism and the protective shell a replaceable battery requires definitely take up space which could be used for more capacity instead. You have (sometimes quite insane) fast charging and also powerbanks which support it. Also, quality batteries can be quite durable.
The real problem I think is the hostility towards repair, glue everywhere, no spare parts, etc.
Good points, but from a chemistry perspective, fast charging is detrimental to the battery. It would be more efficient to have two or three batteries standard charged to 70% that you can swap in as you go than have one that you need to repeatedly fast charge.
I'd argue that the easier they make it for users to swap batteries themselves, the higher the demand for batteries will be, and thus the lower their price.
> The needed mechanism and the protective shell the replaceable battery needs definitely takes up space
This is true
> The real problem I think is the hostility towards repair, glue everywhere, no spare parts, etc.
I think when a manufacturer isn't designing to allow a regular customer (the owner) to be able to replace the battery themselves, using glue and restricting spare parts is a natural consequence of financial realities: Most people are not going to take a $500 phone that has been used a few years to a shop that will need to charge $100+ in just labor to swap out a battery. So there's no incentive to have a bunch of spare batteries.
I'm a huge fan of user replaceable batteries because in addition to the obvious benefits, you can also just remove the battery and power the phone off USB-C when running something heavy on it for extended periods of time. A battery in that scenario would not only overheat itself but also keep the phone from cooling off.
If you're a software engineer who wants to set up and maintain infrastructure, give PyInfra and Pulumi a go!
Huge fan of PyInfra. For my homelab, I use Pulumi with Python and PyInfra to build fully declarative, intent-based infrastructure. You can use actual software engineering principles like composition, inheritance, and DI to set up and wire your infrastructure and services. One of the benefits is that your infrastructure and services are now self-documenting (have them write out a mermaid diagram!) and easily testable using pytest (from cheap unit tests to extensive integration tests; I use Incus).
Instead of Pulumi, I originally used Terraform CDK with Python before CDK got IBM'd. The migration to Pulumi was refreshingly painless. My original reason for not choosing Pulumi was the crippled state of the open source, self hosted backend support a decade ago but it looks like that is now way more mature and less crippled.
PyInfra is a breath of fresh air compared to Ansible - it's not just fast, it's more Pythonic (so IDE features actually work), readable, maintainable, and debuggable. I call it infrastructure for software engineers.
If anyone wants to use an AI agent to try out PyInfra - One issue I've faced is that PyInfra was rearchitected in v2 (and some more in v3?) but what belongs in v1 vs v2 vs v3 isn't very clear, so an AI agent could spend a lot of time writing v1 code, having it fail and iterate to v2 and then to v3.
The official site uses the version in the URL as the namespace but it seems like the SOTA AI agents don't pay much attention to that.
Maybe writing an llms.txt for PyInfra v2 or v3 would be an extremely useful task to help with onboarding newcomers?
Disclosure: PyInfra core contributor here.
We just shipped 3.8.0.
PyInfra is an agentless infrastructure automation tool. Same job description as Ansible, Salt, Chef. SSH into hosts, describe desired state, it diffs and converges. No agent, no central server, no daemon.
The difference: your "playbook" is just Python. Not Python cosplaying as YAML. Not Jinja smuggled inside YAML inside a Helm chart inside a Kustomize overlay. Actual Python:
from pyinfra.operations import apt, files, server
apt.packages(packages=["nginx"], update=True)
files.template(src="nginx.conf.j2", dest="/etc/nginx/nginx.conf")
server.service(service="nginx", running=True, enabled=True)
Idempotent operations. Facts gathered from hosts, branched on with normal `if` statements. Real loops, real imports, a real debugger, real type hints. Your editor autocompletes arguments because, brace yourself, they are just function signatures.
About YAML. Wonderful format. For about eleven minutes. Then someone needs an `if`, and you have `{% if %}` inside a string inside a list inside a map. Then someone types `no` as a country code for Norway and it ships to prod as `False`. Then someone indents with a tab and the parser dies without saying where. Congratulations, you reinvented a programming language. Badly. The honest move is to admit you wanted code, then write code.
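The Norway footgun looks like this (YAML 1.1 semantics, which many tools still use; YAML 1.2 parsers only treat `true`/`false` as booleans):

```yaml
# Under YAML 1.1, unquoted `no` is a boolean, so this ships as `country: false`:
country: no
# The fix is quoting - which you usually only discover after the incident:
# country: "no"
```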
PyInfra skips the eleven good minutes and goes straight to code.
Release notes in the link. Happy to answer questions.
Infrastructure as Code, not infrastructure as YAML.
TBH, I was worried a few years ago that there was basically just one (original) contributor. This now gives me added trust that I'm making the right decision to lean heavily into it.
Indeed! (I am that original contributor :)), lots of work ongoing to address this, we now have a small maintainers group and are sharing out review and release loads.
Vaultwarden is a very lean implementation of Bitwarden but if you want to look into an alternative to the Bitwarden ecosystem, I recommend - AliasVault https://github.com/aliasvault/aliasvault - check it out!
> As a Waymo (and other driverless car) supporter, this seems like an obviously good thing, right? I’m a little surprised this wasn’t possible before given the amount of regulatory scrutiny (correctly) applied to these companies
Not necessarily. I went into a bit more detail in my own comment, but it might be useful to think about what regulations written with multibillion dollar automobile companies in mind would do to a person maintaining their own vehicle.
Consider that your Waymo got ticketed, but you had flashed it with a "no customer telemetry" firmware. Once Waymo gets the ticket, they flag your car as having "unauthorized" software, and now the ball's in your court to prove that the reason your Waymo got ticketed has nothing to do with the telemetry feature that tells Waymo what radio stations you were listening to.
Also, when regulations are written keeping in mind multibillion dollar automobile companies, the ticket isn't going to cost $500.
I'm of the opinion that if one owns an autonomous vehicle, regardless of whether the software is modified or not (which should be allowed), then one is fully responsible for its actions. If one doesn't trust the software provided by the manufacturer, don't buy/use it. Once one chooses to buy it and operate it, then it's that person's responsibility.
Possible exceptions would be in the case that, after purchase, the manufacturer pushes a software update that meaningfully changes the behavior in such a way that it causes issues. In that case, both A) the manufacturer should be responsible and B) the owner should have the option to get some kind of compensation.
UPDATE (can't respond to the two subcomments below due to post throttling, so I'm updating this comment instead)
> the car is basically a taxi and the taxi service is to blame for any mistakes
@skybrian - Agreed! but if you read the article, the CA DMV is ticketing the manufacturer, not the operator.
None of my concerns hold if the operator was ticketed - in fact, existing regulations are set up exactly that way, so no new regulation was even necessary. Something's not adding up.
> Right now, no one can independently own and operate an AV the way Waymo or Tesla does
@ourspacetabs - Sure but the regulation seems to be specifically addressed at the manufacturer, not the operator.
I would have no concern if the regulation was addressed to the operator. The article, at least, doesn't imply that's the case.
---
> The state's Department of Motor Vehicles (DMV) has announced new regulations on autonomous vehicles (AVs), including a process for police to issue a "notice of AV noncompliance" directly to the car's manufacturer.
> Under the new rules, police can cite AV companies when their vehicles commit moving violations. The rules will also require the companies to respond to calls from police and other emergency officials within 30 seconds, and will issue penalties if their vehicles enter active emergency zones.
These are new frontiers in automotive regulation. Typically, if a car failed because of a manufacturer issue, the driver would be ticketed. For example: if Hyundai sold vehicles where the engine would explode around 50k miles and that caused an accident, the driver of the vehicle would be ticketed for it.
Now if we take the human out of it, it is Hyundai that would be ticketed for it. Insurance companies are certainly going to take notice and adjust their risk models accordingly.
I imagine there will be a lot of fingerpointing by the manufacturer towards customers.
In the worst case, this is the end of customers servicing their own autonomous vehicles.
If we imagine that most vehicles in the next 15 years will be autonomous, customers who service their own autonomous vehicles would have to handle regulation aimed at multibillion dollar companies - or give up on servicing their own vehicles entirely and just rent them instead.
Not sure I agree. The clear boundary here to me is who owns and is operating the vehicle. Waymo both owns and operates their vehicles, it’s a taxi service, you wouldn’t say a Waymo rider is operating a vehicle and therefore deserves the ticket. Right now, no one can independently own and operate an AV the way Waymo or Tesla does.
When that happens someday, then the ticket would go to the owner/operator of the vehicle - whoever bought the car. If you get a ticket due to something dumb your personally owned Waymo did, wouldn’t you pursue that case against Waymo separately, the same way you’d pursue Hyundai for selling you a car whose engine blew up after 50k miles?
It seems pretty reasonable to me that when you're not driving, the car is basically a taxi and the taxi service is to blame for any mistakes. The car manufacturer isn't just making cars anymore. It's providing a service.
Perhaps they could sell the car to a different taxi service, though?
There's no monolithic "Job Market", so specific details matter. I have not been tracking details too closely but here are two things I am tracking:
- CRUD generation by running through JIRA tickets and clearing backlogs seems to be getting replaced by agentic workflows. So if you were an extremely productive dev who would machete your way through CRUD and API integrations, agentic workflows do it better, faster and cheaper. I can point CC or Codex (Cursor in progress) at design specifications and it can turn those into perfect Django apps with well written test cases like there's no tomorrow. It might not make sense for such a business to continue to hire humans to do the same work.
- Tokens for frontier models over the API are really expensive. I am personally aware of some companies that have monthly high five figure token expenses and one company that has a monthly six figure token expense.
It's still worth it because, if you put the right workflows and guardrails in place, they are churning out code 24x7 vs a typical human's 8x5 - that's roughly a 4x productivity gain.
You're getting done in a month what would take humans a full quarter. However, the company still has to pay for that, and unless they are signing up 4x more paying net new customers every month with 0 churn, engineers have to be let go to pay for those tokens.
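The "4x" above is just wall-clock arithmetic (toy numbers, and it ignores quality and review overhead):

```python
# Back-of-envelope: an always-on agent vs. one engineer's work week.
agent_hours_per_week = 24 * 7   # 168 hours, churning 24x7
human_hours_per_week = 8 * 5    # 40 hours, a typical 8x5 week
speedup = agent_hours_per_week / human_hours_per_week
print(round(speedup, 1))        # 4.2 - roughly the "4x" in the comment
```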
But how do they scale the reviewing of the agentic output? Or they just blindly trust it and worst case scenario they get to write a sob story on HN about how Claude has deleted the production db?
A company can operate aimlessly for a long time and carry along due to inertia and/or monopoly position. So chances are nobody (competent) is reviewing it.
> WAL limits need to be set carefully or you just end up filling WAL volumes and the database becoming unavailable.
This is true. For anyone getting alarmed that this is due to a bug in PostgreSQL, it's not - it's PostgreSQL protecting the customer from attempting to write data that it cannot durably commit: "I am going to go unavailable because I don't have enough space to save more data".
There are multiple ways to handle this. The easiest, most hands-on way is a monitor that watches the WAL size like a hawk and alerts ops the moment it breaches a threshold.
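For that kind of monitor, a query along these lines works on PostgreSQL 10+ (threshold and alerting wiring are up to you):

```sql
-- Approximate on-disk size of the WAL directory, for a size-threshold alert.
SELECT pg_size_pretty(sum(size)) AS wal_size
FROM pg_ls_waldir();
```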
The way I read that issue and the linked discussion was that pgBackRest handles a lot of details itself that are otherwise handled by Kubernetes. Hence, a lot of functionality in pgBackRest is not only redundant but incompatible with how Kubernetes CSI could be used to provide incremental and differential backups. So Barman and the `barman-cloud` plugins are a better, more natural fit for a Kubernetes environment than pgBackRest.
This is a fantastic project that a lot of self-hosters using PostgreSQL use. Especially with pgBackRest archived by the owner on Apr 27, 2026, this is likely the leading option that has been around the block for a while.
Anyone here considered Barman in the past, used it for a while and then switched to pgBackRest? Are you revisiting that decision now?
One interesting thing about Barman is that it just uses PG's own backup utilities. It doesn't implement custom parsers and things like that. So, there's less maintenance work needed for Barman when PostgreSQL changes data-file internals. Tradeoff is that there's less custom optimization than pgBackRest/pg_probackup/WAL-G-local.
Databasus seems to be taking a somewhat similar approach to Barman, but (at this time) does not appear to use pg_receivewal, which makes it less efficient than Barman.
For PG v17+, Barman seems to be the most efficient backup solution based on PG native tools that is capable of low-RPO or even zero-RPO operation (if configured as a synchronous receiver).
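For context, a streaming setup in Barman looks roughly like this (hostnames and users are placeholders; `backup_method = postgres` drives pg_basebackup and `streaming_archiver = on` drives pg_receivewal under the hood):

```ini
; barman.conf server section - illustrative values, not a drop-in config
[pg-primary]
description = "Primary database"
conninfo = host=pg-primary user=barman dbname=postgres
streaming_conninfo = host=pg-primary user=streaming_barman
backup_method = postgres      ; base backups via pg_basebackup
streaming_archiver = on       ; continuous WAL via pg_receivewal
slot_name = barman            ; replication slot so WAL isn't recycled early
```

Zero-RPO additionally requires adding the Barman receiver to `synchronous_standby_names` on the primary.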
It looks like pgBackRest will likely continue, multiple companies are stepping up with sponsorships. Mentioning this just in case anyone is making plans to move away, it's probably worth waiting a bit for things to settle.
YES! I have never told anyone this (because it feels so random) but it's great to know I'm not the only one. I pretty much expect this when I am in the depths of an issue I can't somehow consciously resolve, so much so that I keep a pen and writing pad next to me before I sleep, because the code I saw in my dream often gets lost by the time I get to my office and resume the laptop.
I even figured out a hack - I just force myself to go to sleep if I can't consciously resolve it for more than an hour. It's as if my brain gets an otherwise untapped firepower.
That said, this absolutely destroys my sleep cycle for the next day or two and spikes my BP for the rest of the day to the point where I feel sick.
Although in theory I'm sleeping more than the 8h, I feel horribly mentally exhausted. I can work out physically just fine, but my brain is on empty - because of this, I limit this to critical blockers.